WorldWideScience

Sample records for gibbs sampling strategy

  1. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions arise naturally when latent Gaussian fields are associated with category fields obtained by discrete simulation methods such as multipoint, sequential indicator and object-based simulation. The latent Gaussians are often used in data assimilation and history-matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov random fields (GMRFs), which make it possible to compute the conditional distribution at any point without computing and inverting the full covariance matrix. Because a GMRF is locally defined, all points that do not share neighbors (coding sets) can be updated simultaneously. We propose a new simultaneous Gibbs updating strategy on coding sets that can be computed efficiently by convolution and applied with an acceptance/rejection step in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range and the GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than in the finite case studied in the literature. In the truncated Gaussian case, however, short-scale correlation is quickly restored, and the conditioning categories at each lattice point imprint the long-scale correlation. Our approach therefore makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
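
As an illustration of the coding-set idea, the following sketch runs checkerboard Gibbs sweeps for a first-order GMRF on a 2D lattice in the untruncated Gaussian case (the acceptance/rejection step for truncation is omitted). The stencil, the value `beta = 0.24` and the free boundary are our illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_gmrf_sweep(x, beta=0.24, n_sweeps=50):
    """Checkerboard ("coding set") Gibbs sweeps for a first-order GMRF.

    Full conditional at site s: N(beta * sum of 4 neighbours, 1),
    i.e. a precision matrix with unit diagonal and -beta on the
    four nearest-neighbour off-diagonals (valid for |beta| < 0.25).
    Sites of the same colour share no neighbours, so each colour
    can be updated simultaneously.
    """
    n, m = x.shape
    ii, jj = np.indices((n, m))
    colors = (ii + jj) % 2
    for _ in range(n_sweeps):
        for c in (0, 1):
            # sum of the 4 neighbours with a free (zero) boundary, via shifts
            nb = np.zeros_like(x)
            nb[1:, :] += x[:-1, :]
            nb[:-1, :] += x[1:, :]
            nb[:, 1:] += x[:, :-1]
            nb[:, :-1] += x[:, 1:]
            mask = colors == c
            x[mask] = beta * nb[mask] + rng.standard_normal(mask.sum())
    return x

x = gibbs_gmrf_sweep(np.zeros((32, 32)))
```

Because same-colour sites share no neighbours, each half of the lattice is updated in one vectorised step; this is what makes convolution-style implementations of the simultaneous update efficient.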

  2. Geometric and Texture Inpainting by Gibbs Sampling

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads

    2007-01-01

    [...] In this paper we use the well-known FRAME (Filters, Random Fields and Maximum Entropy) model for inpainting. We introduce a temperature term in the learned FRAME Gibbs distribution. By sampling at different temperatures from the FRAME Gibbs distribution, different contents of the image are reconstructed. We propose a two-step method for inpainting using FRAME: first the geometric structure of the image is reconstructed by sampling from a cooled Gibbs distribution, then the stochastic component is reconstructed by sampling from a heated Gibbs distribution. Both steps in the reconstruction process are necessary [...]
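
The temperature term enters in the standard tempered-Gibbs way; schematically, writing U for the learned FRAME energy (the notation here is ours, not the paper's):

```latex
p_T(\mathbf{x}) \;=\; \frac{1}{Z(T)}\,
\exp\!\left(-\frac{U(\mathbf{x})}{T}\right),
\qquad
Z(T) \;=\; \sum_{\mathbf{x}} \exp\!\left(-\frac{U(\mathbf{x})}{T}\right).
```

Cooling (T < 1) concentrates mass on low-energy, structure-dominated reconstructions; heating (T > 1) flattens the distribution and brings out the stochastic texture component.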

  3. Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width.

    Science.gov (United States)

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree-structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds that of human volunteers.
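
As a minimal concrete setting, here is a random-scan Gibbs sampler for a chain-structured factor graph of binary variables with agreement factors. The model and weights are our illustrative assumptions, not the paper's templates.

```python
import math
import random

def gibbs_factor_chain(weights, n_steps=10000, seed=1):
    """Random-scan Gibbs sampling on a chain factor graph.

    Variables x_0..x_n are binary; factor i contributes
    exp(weights[i] * [x_i == x_{i+1}]).  Each step resamples one
    variable from its exact conditional given its neighbours.
    """
    rng = random.Random(seed)
    n = len(weights) + 1
    x = [rng.randrange(2) for _ in range(n)]
    for _ in range(n_steps):
        i = rng.randrange(n)                      # random scan
        score = [0.0, 0.0]                        # log-potentials for x_i = 0, 1
        for v in (0, 1):
            if i > 0:
                score[v] += weights[i - 1] * (x[i - 1] == v)
            if i < n - 1:
                score[v] += weights[i] * (x[i + 1] == v)
        p1 = 1.0 / (1.0 + math.exp(score[0] - score[1]))
        x[i] = 1 if rng.random() < p1 else 0
    return x

sample = gibbs_factor_chain([2.0, 2.0, 2.0])      # one joint sample after 10000 steps
```

On trees like this chain the conditionals are trivial to compute; the paper's point is that mixing speed, not per-step cost, is the bottleneck, and that it depends on structural properties such as hierarchy width.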

  4. Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten

    2013-01-01

    [...] We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different network sizes, we find that its mixing ability decreases drastically with the network size, clearly indicating a need [...]

  5. Sampling informative/complex a priori probability distributions using Gibbs sampling assisted by sequential simulation

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou

    2010-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle, these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple [...] this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems.

  6. Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus

    2012-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed-form description of the prior is available, as is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be taken into account. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed-form expression of the prior can be given. [...] We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed-form description of the prior exists. First, we lay out the theoretical background [...]

  7. Simultaneous alignment and clustering of peptide data using a Gibbs sampling approach

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Lund, Ole; Nielsen, Morten

    2013-01-01

    Motivation: Proteins recognizing short peptide fragments play a central role in cellular signaling. As a result of high-throughput technologies, peptide-binding protein specificities can be studied using large peptide libraries at dramatically lower cost and time. Interpretation of such large peptide datasets, however, is a complex task, especially when the data contain multiple receptor binding motifs and/or the motifs are found at different locations within distinct peptides. Results: The algorithm presented in this article, based on Gibbs sampling, identifies multiple specificities [...] of unaligned peptide datasets of variable length. Example applications described in this article include mixtures of binders to different MHC class I and class II alleles, distinct classes of ligands for SH3 domains and sub-specificities of the HLA-A*02:01 molecule. Availability: The Gibbs clustering method [...]

  8. Gibbs-non-Gibbs transitions and vector-valued integration

    NARCIS (Netherlands)

    Zuijlen, van W.B.

    2016-01-01

    This thesis consists of two distinct topics. The first part of the thesis con- siders Gibbs-non-Gibbs transitions. Gibbs measures describe the macro- scopic state of a system of a large number of components that is in equilib- rium. It may happen that when the system is transformed, for example, by

  9. Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach

    DEFF Research Database (Denmark)

    Nielsen, Morten; Lundegaard, Claus; Worning, Peder

    2004-01-01

    Prediction of which peptides will bind a specific major histocompatibility complex (MHC) constitutes an important step in identifying potential T-cell epitopes suitable as vaccine candidates. MHC class II binding peptides have a broad length distribution, complicating such predictions. [...] Identifying the correct alignment is thus a crucial part of identifying the core of an MHC class II binding motif. In this context, we describe a novel Gibbs motif sampler method ideally suited for recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates [...]
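
The underlying Gibbs motif-sampling loop (leave one sequence out, rebuild a position-weight matrix from the rest, resample the held-out sequence's alignment start) can be sketched as follows. This is the textbook scheme, not the authors' MHC-specific implementation, and the add-one pseudocounts are a simplifying assumption.

```python
import math
import random

def gibbs_motif_sampler(seqs, w, n_iter=200, seed=0):
    """Minimal Gibbs motif sampler: locate a shared motif of width w."""
    rng = random.Random(seed)
    alphabet = sorted(set("".join(seqs)))
    idx = {a: i for i, a in enumerate(alphabet)}
    starts = [rng.randrange(len(s) - w + 1) for s in seqs]
    for _ in range(n_iter):
        h = rng.randrange(len(seqs))              # hold one sequence out
        # position-weight matrix with add-one pseudocounts from the others
        counts = [[1.0] * len(alphabet) for _ in range(w)]
        for j, s in enumerate(seqs):
            if j == h:
                continue
            for p in range(w):
                counts[p][idx[s[starts[j] + p]]] += 1.0
        pwm = [[c / sum(row) for c in row] for row in counts]
        # resample the held-out start position by window likelihood
        s = seqs[h]
        wts = []
        for st in range(len(s) - w + 1):
            logp = sum(math.log(pwm[p][idx[s[st + p]]]) for p in range(w))
            wts.append(math.exp(logp))
        r, acc = rng.random() * sum(wts), 0.0
        for st, wt in enumerate(wts):
            acc += wt
            if r <= acc:
                starts[h] = st
                break
    return starts

seqs = ["TTTTACGTTT", "ACGTTTTTTT", "TTACGTTTTT", "TTTTTTACGT"]
starts = gibbs_motif_sampler(seqs, 4)             # one start index per sequence
```

Because each sequence's start is resampled against a model built from the current alignment of the others, a weak shared motif can be recovered even when it sits at different offsets in each peptide, which is exactly the class II alignment problem described above.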

  10. Screening disrupted molecular functions and pathways associated with clear cell renal cell carcinoma using Gibbs sampling.

    Science.gov (United States)

    Nan, Ning; Chen, Qi; Wang, Yu; Zhai, Xu; Yang, Chuan-Ce; Cao, Bin; Chong, Tie

    2017-10-01

    To explore the disturbed molecular functions and pathways in clear cell renal cell carcinoma (ccRCC) using Gibbs sampling. Gene expression data of ccRCC samples and adjacent non-tumor renal tissues were retrieved from a publicly available database. Molecular functions of the genes with changed expression in ccRCC were mapped to the Gene Ontology (GO) project, and these molecular functions were converted into Markov chains. A Markov chain Monte Carlo (MCMC) algorithm was implemented to perform posterior inference and identify the probability distributions of molecular functions in Gibbs sampling. Differentially expressed molecular functions were selected at a posterior value greater than 0.95, and genes appearing in at least five differentially expressed molecular functions were defined as pivotal genes. Functional analysis was employed to explore the pathways of the pivotal genes and their strongly co-regulated genes. In this work, we obtained 396 molecular functions, 13 of which were differentially expressed. Oxidoreductase activity showed the highest posterior value. Gene composition analysis identified 79 pivotal genes, and survival analysis indicated that these pivotal genes could be used as a strong independent predictor of poor prognosis in patients with ccRCC. Pathway analysis identified one pivotal pathway, oxidative phosphorylation. We identified the differentially expressed molecular functions and a pivotal pathway in ccRCC using Gibbs sampling. The results could be considered as potential signatures for early detection and therapy of ccRCC.

  11. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  12. Josiah Willard Gibbs

    Indian Academy of Sciences (India)

    The younger Gibbs grew up in the liberal and academic atmosphere at Yale, where [...] research in the premier European universities at the time when a similar culture [...] in obscure journals, Gibbs' work did not receive wide recognition in [...]

  13. Quantum Gibbs Samplers: The Commuting Case

    Science.gov (United States)

    Kastoryano, Michael J.; Brandão, Fernando G. S. L.

    2016-06-01

    We analyze the problem of preparing quantum Gibbs states of lattice spin Hamiltonians with local and commuting terms on a quantum computer and in nature. Our central result is an equivalence between the behavior of correlations in the Gibbs state and the mixing time of the semigroup which drives the system to thermal equilibrium (the Gibbs sampler). We introduce a framework for analyzing the correlation and mixing properties of quantum Gibbs states and quantum Gibbs samplers, which is rooted in the theory of non-commutative L_p spaces. We consider two distinct classes of Gibbs samplers, one of them being the well-studied Davies generator modelling the dynamics of a system due to weak coupling with a large Markovian environment. We show that their spectral gap is independent of system size if, and only if, a certain strong form of clustering of correlations holds in the Gibbs state. Therefore every Gibbs state of a commuting Hamiltonian that satisfies clustering of correlations in this strong sense can be prepared efficiently on a quantum computer. As concrete applications of our formalism, we show that for every one-dimensional lattice system, or for systems in lattices of any dimension at temperatures above a certain threshold, the Gibbs samplers of commuting Hamiltonians are always gapped, giving an efficient way of preparing the associated Gibbs states on a quantum computer.

  14. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the [...]

  15. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much.

    Science.gov (United States)

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance.
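
The two scan orders are easy to compare on a toy target; the bivariate Gaussian and its conditionals below are our illustrative example, not one of the paper's counterexample models.

```python
import math
import random

def gibbs_bivariate_gaussian(rho, n_steps, scan="systematic", seed=0):
    """Gibbs sampling for a standard bivariate normal with correlation rho.

    Conditionals: x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y.
    'systematic' cycles x, y, x, y, ...; 'random' picks a coordinate
    uniformly at each step.
    """
    rng = random.Random(seed)
    x = y = 0.0
    sd = math.sqrt(1.0 - rho * rho)
    xs = []
    for t in range(n_steps):
        pick = t % 2 if scan == "systematic" else rng.randrange(2)
        if pick == 0:
            x = rng.gauss(rho * y, sd)
        else:
            y = rng.gauss(rho * x, sd)
        xs.append(x)
    return xs

xs = gibbs_bivariate_gaussian(0.9, 20000, scan="systematic")
```

On this well-behaved target both scans converge to the same marginal; the paper's contribution is exhibiting models where their mixing times differ far more than logarithmically, and bounding how large that gap can be.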

  16. Near-Optimal Detection in MIMO Systems using Gibbs Sampling

    DEFF Research Database (Denmark)

    Hansen, Morten; Hassibi, Babak; Dimakis, Georgios Alexandros

    2009-01-01

    In this paper we study a Markov chain Monte Carlo (MCMC) Gibbs sampler for solving the integer least-squares problem. In digital communication the problem is equivalent to performing Maximum Likelihood (ML) detection in Multiple-Input Multiple-Output (MIMO) systems. While the use of MCMC methods [...] sampler provides a computationally efficient way of achieving approximate ML detection in MIMO systems having a huge number of transmit and receive dimensions. In fact, the results further suggest that the Markov chain is rapidly mixing. Thus, it has been observed that even in cases where ML detection using, e.g., [...]
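
A minimal version of such a detector for BPSK symbols can be sketched as follows; the temperature choice and the best-visited bookkeeping are our assumptions for illustration, not details from the paper.

```python
import math
import random

def mcmc_detector(H, y, n_iter=2000, temp=0.5, seed=0):
    """Gibbs-sampler sketch for integer least squares with BPSK symbols.

    Target: p(x) ∝ exp(-||y - Hx||^2 / (2 temp^2)) over x in {-1,+1}^n.
    Coordinates are resampled in a systematic scan; the best
    configuration visited is returned as the approximate ML estimate.
    """
    rng = random.Random(seed)
    m, n = len(H), len(H[0])
    x = [rng.choice((-1, 1)) for _ in range(n)]

    def cost(x):
        return sum((y[i] - sum(H[i][j] * x[j] for j in range(n))) ** 2
                   for i in range(m))

    best, best_cost = list(x), cost(x)
    for t in range(n_iter):
        i = t % n                                  # systematic scan
        c = {}
        for v in (-1, 1):
            x[i] = v
            c[v] = cost(x)
        d = (c[-1] - c[1]) / (2.0 * temp * temp)   # log-odds for x_i = +1
        d = max(min(d, 50.0), -50.0)               # guard against exp overflow
        x[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-d)) else -1
        if c[x[i]] < best_cost:
            best, best_cost = list(x), c[x[i]]
    return best

# Toy 2x2 system: the least-squares solution over {-1,+1}^2 is [+1, -1]
x_hat = mcmc_detector([[1.0, 0.0], [0.0, 1.0]], [0.9, -1.1])
```

Each coordinate update costs far less than an exhaustive search over 2^n symbol vectors, which is what makes this attractive for high-dimensional MIMO detection.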

  17. Notes on the development of the Gibbs potential; Sur le developpement du potentiel de Gibbs

    Energy Technology Data Exchange (ETDEWEB)

    Bloch, C; Dominicis, C de [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires

    1959-07-01

    A short account is given of some recent work on the perturbation expansion of the Gibbs potential of quantum statistical mechanics. (author)

  18. A Bayesian approach to PET reconstruction using image-modeling Gibbs priors: Implementation and comparison

    International Nuclear Information System (INIS)

    Chan, M.T.; Herman, G.T.; Levitan, E.

    1996-01-01

    We demonstrate that (i) classical methods of image reconstruction from projections can be improved upon by considering the output of such a method as a distorted version of the original image and applying a Bayesian approach to estimate the original image from it (based on a model of distortion and on a Gibbs distribution as the prior), and (ii) by selecting an "image-modeling" prior distribution (i.e., one such that a random sample from it is likely to share important characteristics of the images of the application area) one can improve over another Gibbs prior formulated using only pairwise interactions. We illustrate our approach using simulated Positron Emission Tomography (PET) data from realistic brain phantoms. Since algorithm performance ultimately depends on the diagnostic task being performed, we examine a number of different medically relevant figures of merit to give a fair comparison. Based on a training-and-testing evaluation strategy, we demonstrate that statistically significant improvements can be obtained using the proposed approach.

  19. An Introduction to the DA-T Gibbs Sampler for the Two-Parameter Logistic (2PL) Model and Beyond

    Directory of Open Access Journals (Sweden)

    Gunter Maris

    2005-01-01

    The DA-T Gibbs sampler was proposed by Maris and Maris (2002) as a Bayesian estimation method for a wide variety of Item Response Theory (IRT) models. The present paper provides an expository account of the DA-T Gibbs sampler for the 2PL model. However, the scope is not limited to the 2PL model: it is demonstrated how the DA-T Gibbs sampler for the 2PL may be used to build, quite easily, Gibbs samplers for other IRT models. Furthermore, the paper contains a novel, intuitive derivation of the Gibbs sampler and could serve as reading for a graduate course on sampling.
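
For reference, the 2PL model gives person p's probability of a correct response to item i as (standard IRT notation):

```latex
P(X_{pi} = 1 \mid \theta_p)
  \;=\; \frac{\exp\{a_i(\theta_p - b_i)\}}{1 + \exp\{a_i(\theta_p - b_i)\}},
```

where \theta_p is the ability, a_i the discrimination and b_i the difficulty parameter; the DA-T sampler is a data-augmentation Gibbs scheme for estimating these parameters in a Bayesian framework.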

  20. Enzyme Catalysis and the Gibbs Energy

    Science.gov (United States)

    Ault, Addison

    2009-01-01

    Gibbs-energy profiles are often introduced during the first semester of organic chemistry, but are less often presented in connection with enzyme-catalyzed reactions. In this article I show how the Gibbs-energy profile corresponds to the characteristic kinetics of a simple enzyme-catalyzed reaction. (Contains 1 figure and 1 note.)

  1. Inferring the Gibbs state of a small quantum system

    International Nuclear Information System (INIS)

    Rau, Jochen

    2011-01-01

    Gibbs states are familiar from statistical mechanics, yet their use is not limited to that domain. For instance, they also feature in the maximum entropy reconstruction of quantum states from incomplete measurement data. Outside the macroscopic realm, however, estimating a Gibbs state is a nontrivial inference task, due to two complicating factors: the proper set of relevant observables might not be evident a priori; and whenever data are gathered from a small sample only, the best estimate for the Lagrange parameters is invariably affected by the experimenter's prior bias. I show how the two issues can be tackled with the help of Bayesian model selection and Bayesian interpolation, respectively, and illustrate the use of these Bayesian techniques with a number of simple examples.

  2. Evolution algebras generated by Gibbs measures

    International Nuclear Information System (INIS)

    Rozikov, Utkir A.; Tian, Jianjun Paul

    2009-03-01

    In this article we study algebraic structures of function spaces defined by graphs and state spaces equipped with Gibbs measures, by associating evolution algebras. We give a constructive description of associating evolution algebras to the function spaces (cell spaces) defined by graphs and state spaces and a Gibbs measure μ. For finite graphs we find some evolution subalgebras and other useful properties of the algebras. We obtain a structure theorem for evolution algebras when graphs are finite and connected. We prove that, for a fixed finite graph, the function spaces have a unique algebraic structure, since all evolution algebras are isomorphic to each other whichever Gibbs measures are assigned. When graphs are infinite, our construction allows a natural introduction of thermodynamics into the study of several systems of biology, physics and mathematics through the theory of evolution algebras. (author)

  3. Finite Cycle Gibbs Measures on Permutations of

    Science.gov (United States)

    Armendáriz, Inés; Ferrari, Pablo A.; Groisman, Pablo; Leonardi, Florencia

    2015-03-01

    We consider Gibbs distributions on the set of permutations of [...] associated to the Hamiltonian [...], where [...] is a permutation and [...] is a strictly convex potential. Call finite-cycle those permutations composed of finite cycles only. We give conditions on [...] ensuring that for large enough temperature there exists a unique infinite-volume ergodic Gibbs measure concentrating mass on finite-cycle permutations; this measure is equal to the thermodynamic limit of the specifications with identity boundary conditions. We construct [...] as the unique invariant measure of a Markov process on the set of finite-cycle permutations that can be seen as a loss network, a continuous-time birth-and-death process of cycles interacting by exclusion, an approach proposed by Fernández, Ferrari and Garcia. Define [...] as the shift permutation [...]. In the Gaussian case [...], we show that for each [...], the measure given by [...] is an ergodic Gibbs measure equal to the thermodynamic limit of the specifications with [...] boundary conditions. For a general potential [...], we prove the existence of Gibbs measures when [...] is bigger than some [...]-dependent value.

  4. Gibbs perturbations of a two-dimensional gauge field

    International Nuclear Information System (INIS)

    Petrova, E.N.

    1981-01-01

    Small Gibbs perturbations of random fields have until now been investigated only for a few initial fields, among them independent fields, Gaussian fields and some others. Whether the Gibbs modifications of a random field can be investigated depends essentially on the existence of good estimates for the semi-invariants of the field. For this reason, the class of random fields for which Gibbs perturbations with an arbitrary potential of bounded support can be investigated is rather small. As the initial field, the author takes a well-known model: a two-dimensional gauge field. (Auth.)

  5. Reflections on Gibbs: From Critical Phenomena to the Amistad

    Science.gov (United States)

    Kadanoff, Leo P.

    2003-03-01

    J. Willard Gibbs, the younger, was the first American theorist. He was one of the inventors of statistical physics. His introduction and development of the concepts of phase space, phase transitions, and thermodynamic surfaces was remarkably correct and elegant. These three concepts form the basis of different but related areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. I shall talk about these connections by using concepts suggested by the work of Michael Berry and explicitly put forward by the philosopher Robert Batterman. This viewpoint relates theory-connection to the applied mathematics concepts of asymptotic analysis and singular perturbations. J. Willard Gibbs, the younger, had all his achievements concentrated in science. His father, also J. Willard Gibbs, also a Professor at Yale, had one great achievement that remains unmatched in our day. I shall describe it.

  6. A brief critique of the Adam-Gibbs entropy model

    DEFF Research Database (Denmark)

    Dyre, J. C.; Hecksher, Tina; Niss, Kristine

    2009-01-01

    This paper critically discusses the entropy model proposed by Adam and Gibbs in 1965 for the dramatic temperature dependence of glass-forming liquids' average relaxation time, which is one of the most influential models of the last four decades. We discuss the Adam-Gibbs model's theoretical [...]

  7. Thermodynamic fluctuations within the Gibbs and Einstein approaches

    International Nuclear Information System (INIS)

    Rudoi, Yurii G; Sukhanov, Alexander D

    2000-01-01

    A comparative analysis of the descriptions of fluctuations in statistical mechanics (the Gibbs approach) and in statistical thermodynamics (the Einstein approach) is given. On this basis solutions are obtained for the Gibbs and Einstein problems that arise in pressure fluctuation calculations for a spatially limited equilibrium (or slightly nonequilibrium) macroscopic system. A modern formulation of the Gibbs approach which allows one to calculate equilibrium pressure fluctuations without making any additional assumptions is presented; to this end the generalized Bogolyubov - Zubarev and Hellmann - Feynman theorems are proved for the classical and quantum descriptions of a macrosystem. A statistical version of the Einstein approach is developed which shows a fundamental difference in pressure fluctuation results obtained within the context of two approaches. Both the 'genetic' relation between the Gibbs and Einstein approaches and the conceptual distinction between their physical grounds are demonstrated. To illustrate the results, which are valid for any thermodynamic system, an ideal nondegenerate gas of microparticles is considered, both classically and quantum mechanically. Based on the results obtained, the correspondence between the micro- and macroscopic descriptions is considered and the prospects of statistical thermodynamics are discussed. (reviews of topical problems)

  8. Reflections on Gibbs: From Statistical Physics to the Amistad V3.0

    Science.gov (United States)

    Kadanoff, Leo P.

    2014-07-01

    This note is based upon a talk given at an APS meeting in celebration of the achievements of J. Willard Gibbs. J. Willard Gibbs, the younger, was the first American physical sciences theorist. He was one of the inventors of statistical physics. He introduced and developed the concepts of phase space, phase transitions, and thermodynamic surfaces in a remarkably correct and elegant manner. These three concepts form the basis of different areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. This talk therefore celebrated Gibbs by describing modern ideas about how different parts of physics fit together. I finished with a more personal note. Our own J. Willard Gibbs had all his many achievements concentrated in science. His father, also J. Willard Gibbs, also a Professor at Yale, had one great non-academic achievement that remains unmatched in our day. I describe it.

  9. Gibbs phenomenon for dispersive PDEs on the line

    OpenAIRE

    Biondini, Gino; Trogdon, Thomas

    2014-01-01

    We investigate the Cauchy problem for linear, constant-coefficient evolution PDEs on the real line with discontinuous initial conditions (ICs) in the small-time limit. The small-time behavior of the solution near discontinuities is expressed in terms of universal, computable special functions. We show that the leading-order behavior of the solution of dispersive PDEs near a discontinuity of the ICs is characterized by Gibbs-type oscillations and gives exactly the Wilbraham-Gibbs constant.

  10. A comparison of various Gibbs energy dissipation correlations for predicting microbial growth yields

    Energy Technology Data Exchange (ETDEWEB)

    Liu, J.-S. [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland); Vojinovic, V. [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland); Patino, R. [Cinvestav-Merida, Departamento de Fisica Aplicada, Km. 6 carretera antigua a Progreso, AP 73 Cordemex, 97310 Merida, Yucatan (Mexico); Maskow, Th. [UFZ Centre for Environmental Research, Department of Environmental Microbiology, Permoserstrasse 15, D-04318 Leipzig (Germany); Stockar, U. von [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland)]. E-mail: urs.vonStockar@epfl.ch

    2007-06-25

    Thermodynamic analysis may be applied in order to predict microbial growth yields roughly, based on an empirical correlation of the Gibbs energy of the overall growth reaction or Gibbs energy dissipation. Due to the well-known trade-off between high biomass yield and high Gibbs energy dissipation necessary for fast growth, an optimal range of Gibbs energy dissipation exists and it can be correlated to physical characteristics of the growth substrates. A database previously available in the literature has been extended significantly in order to test such correlations. An analysis of the relationship between biomass yield and Gibbs energy dissipation reveals that one does not need a very precise estimation of the latter to predict the former roughly. Approximating the Gibbs energy dissipation with a constant universal value of -500 kJ C-mol⁻¹ of dry biomass grown predicts many experimental growth yields nearly as well as a carefully designed, complex correlation available from the literature, even though a number of predictions are grossly out of range. A new correlation for Gibbs energy dissipation is proposed which is just as accurate as the complex literature correlation despite its dramatically simpler structure.
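
To illustrate the scale of the constant-dissipation approximation, here is a back-of-envelope yield estimate. The energy-balance simplification below, and the ~480 kJ per C-mol figure for aerobic glucose catabolism, are our rounded assumptions for illustration, not the paper's correlation.

```python
def yield_from_dissipation(dG_cat, D=500.0):
    """Rough biomass yield (C-mol biomass per C-mol substrate) from an
    assumed constant dissipation D (kJ per C-mol biomass).

    Simplification: per C-mol of biomass formed, 1 C-mol of substrate
    supplies the carbon, plus enough extra substrate is catabolized to
    dissipate D; the Gibbs energy stored in biomass is taken as roughly
    that of the substrate, so it cancels from the balance.
    """
    extra = D / dG_cat          # C-mol substrate catabolized per C-mol biomass
    return 1.0 / (1.0 + extra)

# Aerobic growth on glucose: catabolic Gibbs energy ~ 480 kJ per C-mol glucose
y = yield_from_dissipation(480.0)
```

The resulting estimate of roughly 0.5 C-mol biomass per C-mol glucose is in the range typically reported for aerobic growth on glucose, which is the point of the rough constant-D correlation.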

  11. A comparison of various Gibbs energy dissipation correlations for predicting microbial growth yields

    International Nuclear Information System (INIS)

    Liu, J.-S.; Vojinovic, V.; Patino, R.; Maskow, Th.; Stockar, U. von

    2007-01-01

    Thermodynamic analysis may be applied in order to predict microbial growth yields roughly, based on an empirical correlation of the Gibbs energy of the overall growth reaction or Gibbs energy dissipation. Due to the well-known trade-off between high biomass yield and high Gibbs energy dissipation necessary for fast growth, an optimal range of Gibbs energy dissipation exists and it can be correlated to physical characteristics of the growth substrates. A database previously available in the literature has been extended significantly in order to test such correlations. An analysis of the relationship between biomass yield and Gibbs energy dissipation reveals that one does not need a very precise estimation of the latter to predict the former roughly. Approximating the Gibbs energy dissipation with a constant universal value of -500 kJ C-mol⁻¹ of dry biomass grown predicts many experimental growth yields nearly as well as a carefully designed, complex correlation available from the literature, even though a number of predictions are grossly out of range. A new correlation for Gibbs energy dissipation is proposed which is just as accurate as the complex literature correlation despite its dramatically simpler structure.

  12. Extension of Gibbs-Duhem equation including influences of external fields

    Science.gov (United States)

    Guangze, Han; Jianjia, Meng

    2018-03-01

    The Gibbs-Duhem equation is one of the fundamental equations of thermodynamics, describing the relation among changes in temperature, pressure and chemical potential. A thermodynamic system can be affected by external fields, and this effect should be captured by the thermodynamic equations. Based on the energy postulate and the first law of thermodynamics, the differential equation for the internal energy is extended to include the properties of external fields. Then, with the homogeneous function theorem and a redefinition of the Gibbs energy, a generalized Gibbs-Duhem equation including the influences of external fields is derived. As a demonstration of the application of this generalized equation, the influences of temperature and an external electric field on surface tension, surface adsorption controlled by an external electric field, and the derivation of a generalized chemical potential expression are discussed. These examples show that the extended Gibbs-Duhem equation developed in this paper can capture the influences of external fields on a thermodynamic system.
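
For orientation, here is the classical relation that the paper generalizes, together with a schematic extended form in which an external field enters the fundamental relation through a generic work term Y dX (our notation; the paper's treatment proceeds via a redefined Gibbs energy rather than by simply appending a term):

```latex
S\,\mathrm{d}T - V\,\mathrm{d}p + \sum_i N_i\,\mathrm{d}\mu_i = 0
\quad\longrightarrow\quad
S\,\mathrm{d}T - V\,\mathrm{d}p + X\,\mathrm{d}Y + \sum_i N_i\,\mathrm{d}\mu_i = 0,
```

the second form following from the Euler relation U = TS - pV + YX + \sum_i \mu_i N_i when the field-conjugate variable X is extensive.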

  13. Concentration inequalities for functions of Gibbs fields with application to diffraction and random Gibbs measures

    CERN Document Server

    Külske, C

    2003-01-01

    We derive useful general concentration inequalities for functions of Gibbs fields in the uniqueness regime. We also consider expectations of random Gibbs measures that depend on an additional disorder field, and prove concentration w.r.t. the disorder field. Both fields are assumed to be in the uniqueness regime, allowing in particular for a non-independent disorder field. The modification of the bounds compared to the case of an independent field can be expressed in terms of constants that resemble the Dobrushin contraction coefficient, and are explicitly computable. On the basis of these inequalities, we obtain bounds on the deviation of a diffraction pattern created by random scatterers located on a general discrete point set in the Euclidean space, restricted to a finite volume. Here we also allow for thermal dislocations of the scatterers around their equilibrium positions. Extending recent results for independent scatterers, we give a universal upper bound on the probability of a deviation of the random sc...

  14. Dynamical predictive power of the generalized Gibbs ensemble revealed in a second quench.

    Science.gov (United States)

    Zhang, J M; Cui, F C; Hu, Jiangping

    2012-04-01

    We show that a quenched and relaxed completely integrable system is hardly distinguishable from the corresponding generalized Gibbs ensemble in a dynamical sense. To be specific, the response of the quenched and relaxed system to a second quench can be accurately reproduced by using the generalized Gibbs ensemble as a substitute. Remarkably, as demonstrated with the transverse Ising model and the hard-core bosons in one dimension, not only the steady values but even the transient, relaxation dynamics of the physical variables can be accurately reproduced by using the generalized Gibbs ensemble as a pseudoinitial state. This result is an important complement to the previously established result that a quenched and relaxed system is hardly distinguishable from the generalized Gibbs ensemble in a static sense. The relevance of the generalized Gibbs ensemble in the nonequilibrium dynamics of completely integrable systems is then greatly strengthened.

  15. Just Another Gibbs Additive Modeler: Interfacing JAGS and mgcv

    Directory of Open Access Journals (Sweden)

    Simon N. Wood

    2016-12-01

    Full Text Available The BUGS language offers a very flexible way of specifying complex statistical models for the purposes of Gibbs sampling, while its JAGS variant offers very convenient R integration via the rjags package. However, including smoothers in JAGS models can involve some quite tedious coding, especially for multivariate or adaptive smoothers. Further, if an additive smooth structure is required then some care is needed, in order to centre smooths appropriately, and to find appropriate starting values. R package mgcv implements a wide range of smoothers, all in a manner appropriate for inclusion in JAGS code, and automates centring and other smooth setup tasks. The purpose of this note is to describe an interface between mgcv and JAGS, based around an R function, jagam, which takes a generalized additive model (GAM) as specified in mgcv and automatically generates the JAGS model code and data required for inference about the model via Gibbs sampling. Although the auto-generated JAGS code can be run as is, the expectation is that the user would wish to modify it in order to add complex stochastic model components readily specified in JAGS. A simple interface is also provided for visualisation and further inference about the estimated smooth components using standard mgcv functionality. The methods described here will be unnecessarily inefficient if all that is required is fully Bayesian inference about a standard GAM, rather than the full flexibility of JAGS. In that case the BayesX package would be more efficient.

  16. Time-dependent generalized Gibbs ensembles in open quantum systems

    Science.gov (United States)

    Lange, Florian; Lenarčič, Zala; Rosch, Achim

    2018-04-01

    Generalized Gibbs ensembles have been used as powerful tools to describe the steady state of integrable many-particle quantum systems after a sudden change of the Hamiltonian. Here, we demonstrate numerically that they can be used for a much broader class of problems. We consider integrable systems in the presence of weak perturbations which break both integrability and drive the system to a state far from equilibrium. Under these conditions, we show that the steady state and the time evolution on long timescales can be accurately described by a (truncated) generalized Gibbs ensemble with time-dependent Lagrange parameters, determined from simple rate equations. We compare the numerically exact time evolutions of density matrices for small systems with a theory based on block-diagonal density matrices (diagonal ensemble) and a time-dependent generalized Gibbs ensemble containing only a small number of approximately conserved quantities, using the one-dimensional Heisenberg model with perturbations described by Lindblad operators as an example.

  17. Psychoanalytic Interpretation of Blueberries by Susan Gibb

    Directory of Open Access Journals (Sweden)

    Maya Zalbidea Paniagua

    2014-06-01

    Full Text Available Blueberries (2009) by Susan Gibb, published in the ELO (Electronic Literature Organization), invites the reader to travel inside the protagonist’s mind to discover real and imaginary experiences examining notions of gender, sex, body and identity of a traumatised woman. This article explores the verbal and visual modes in this digital short fiction following semiotic patterns as well as interpreting the psychological states that are expressed through poetical and technological components. A comparative study of the consequences of trauma in the protagonist will be developed, including psychoanalytic theories by Sigmund Freud, Jacques Lacan and the feminist psychoanalysts Melanie Klein and Bracha Ettinger. The reactions of the protagonist will be studied: loss of reality, hallucinations and Electra Complex, as well as the rise of defence mechanisms and her use of artistic creativity as a healing therapy. The interactivity of the hypermedia, multiple paths and endings will be analyzed as a literary strategy that increases the reader’s capacity of empathizing with the speaker.

  18. Determination of Gibbs energies of formation in aqueous solution using chemical engineering tools.

    Science.gov (United States)

    Toure, Oumar; Dussap, Claude-Gilles

    2016-08-01

    Standard Gibbs energies of formation are of primary importance in the field of biothermodynamics. In the absence of any directly measured values, thermodynamic calculations are required to determine the missing data. For several biochemical species, this study shows that knowledge of the standard Gibbs energy of formation of the pure compounds (in the gaseous, solid or liquid states) makes it possible to determine the corresponding standard Gibbs energies of formation in aqueous solutions. To do so, using chemical engineering tools (thermodynamic tables and a model for predicting activity coefficients, solvation Gibbs energies and pKa data), it becomes possible to determine the partial chemical potential of neutral and charged components in real metabolic conditions, even in concentrated mixtures.

  19. The ecosystem of the Mid-Atlantic Ridge at the sub-polar front and Charlie-Gibbs Fracture Zone; ECO-MAR project strategy and description of the sampling programme 2007-2010

    Science.gov (United States)

    Priede, Imants G.; Billett, David S. M.; Brierley, Andrew S.; Hoelzel, A. Rus; Inall, Mark; Miller, Peter I.; Cousins, Nicola J.; Shields, Mark A.; Fujii, Toyonobu

    2013-12-01

    The ECOMAR project investigated photosynthetically-supported life on the North Mid-Atlantic Ridge (MAR) between the Azores and Iceland, focussing on the Charlie-Gibbs Fracture Zone area in the vicinity of the sub-polar front where the North Atlantic Current crosses the MAR. Repeat visits were made to four stations at 2500 m depth on the flanks of the MAR in the years 2007-2010; a pair of northern stations at 54°N in cold water north of the sub-polar front and southern stations at 49°N in warmer water influenced by eddies from the North Atlantic Current. At each station an instrumented mooring was deployed with current meters and sediment traps (100 and 1000 m above the sea floor) to sample the downward flux of particulate matter. The patterns of water flow, fronts, primary production and export flux in the region were studied by a combination of remote sensing and in situ measurements. Sonar, tow nets and profilers sampled pelagic fauna over the MAR. Swath bathymetry surveys across the ridge revealed sediment-covered flat terraces parallel to the axis of the MAR with intervening steep rocky slopes. Otter trawls, megacores, baited traps and a suite of tools carried by the R.O.V. Isis including push cores, grabs and a suction device collected benthic fauna. Video and photo surveys were also conducted using the SHRIMP towed vehicle and the R.O.V. Isis. Additional surveying and sampling by landers and R.O.V. focussed on the summit of a seamount (48°44′N, 28°10′W) on the western crest of the MAR between the two southern stations.

  20. Consistent estimation of Gibbs energy using component contributions.

    Directory of Open Access Journals (Sweden)

    Elad Noor

    Full Text Available Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism.
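
    The consistency guarantee described above — reaction estimates that cannot violate the first law — comes from deriving all reaction energies from a single vector of per-compound values. A minimal numpy sketch of the reactant-contribution step, with an invented stoichiometric matrix and measurements:

```python
import numpy as np

# Rows = compounds, columns = reactions; S[i, j] is the stoichiometric
# coefficient of compound i in reaction j (negative for substrates).
# Matrix and measured energies are invented for illustration.
S = np.array([[-1.0,  0.0],
              [ 1.0, -1.0],
              [ 0.0,  1.0]])
dG_measured = np.array([-20.0, -5.0])  # kJ/mol for the two measured reactions

# Least-squares "reactant contributions": solve S^T g ~ dG_measured for
# per-compound energies g (minimum-norm solution; absolute offsets are
# unobservable from reaction data alone).
g, *_ = np.linalg.lstsq(S.T, dG_measured, rcond=None)

# Any reaction energy derived from g is first-law consistent by construction:
# the overall reaction (sum of both columns) gets the sum of the energies.
dG_overall = S.sum(axis=1) @ g
print(f"{dG_overall:.1f}")  # -25.0
```

    The component-contribution method of the paper layers group contributions on top of this step for compounds with no direct measurements, while keeping the same consistency property.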

  1. Generalization of Gibbs Entropy and Thermodynamic Relation

    OpenAIRE

    Park, Jun Chul

    2010-01-01

    In this paper, we extend Gibbs's approach of quasi-equilibrium thermodynamic processes, and calculate the microscopic expression of entropy for general non-equilibrium thermodynamic processes. Also, we analyze the formal structure of thermodynamic relation in non-equilibrium thermodynamic processes.

  2. Gibbs Energy Modeling of Digenite and Adjacent Solid-State Phases

    Science.gov (United States)

    Waldner, Peter

    2017-08-01

    All sulfur potential and phase diagram data available in the literature for solid-state equilibria related to digenite have been assessed. A thorough thermodynamic analysis at 1 bar total pressure has been performed. A three-sublattice approach has been developed to model the Gibbs energy of digenite as a function of composition and temperature using the compound energy formalism. The Gibbs energies of the adjacent solid-state phases covellite and high-temperature chalcocite are also modeled, treating both sulfides as stoichiometric compounds. The novel model for digenite offers a new interpretation of experimental data, may contribute from a thermodynamic point of view to elucidating the role of copper species within the crystal structure, and allows extrapolation to composition regimes richer in copper than stoichiometric digenite Cu2S. Preliminary predictions into the ternary Cu-Fe-S system at 1273 K (1000 °C) using the Gibbs energy model of digenite to calculate its iron solubility are promising.

  3. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  4. Gibbs free energy of formation of liquid lanthanide-bismuth alloys

    International Nuclear Information System (INIS)

    Sheng Jiawei; Yamana, Hajimu; Moriyama, Hirotake

    2001-01-01

    The linear free energy relationship developed by Sverjensky and Molling provides a way to predict the Gibbs free energies of formation of liquid Ln-Bi alloys from the known thermodynamic properties of aqueous trivalent lanthanides (Ln3+). The Ln-Bi alloys are divided into two isostructural families, named LnBi2 (Ln=La, Ce, Pr, Nd and Pm) and LnBi (Ln=Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm and Yb). The calculated Gibbs free energy values agree well with experimental data.

  5. Boltzmann, Gibbs and Darwin-Fowler approaches in parastatistics

    International Nuclear Information System (INIS)

    Ponczek, R.L.; Yan, C.C.

    1976-01-01

    Derivations of the equilibrium values of occupation numbers are made using three approaches, namely, the 'elementary' one of Boltzmann, the ensemble method of Gibbs, and that of Darwin and Fowler.

  6. Continuous spin mean-field models : Limiting kernels and Gibbs properties of local transforms

    NARCIS (Netherlands)

    Kulske, Christof; Opoku, Alex A.

    2008-01-01

    We extend the notion of Gibbsianness for mean-field systems to the setup of general (possibly continuous) local state spaces. We investigate the Gibbs properties of systems arising from an initial mean-field Gibbs measure by application of given local transition kernels. This generalizes previous results.

  7. Numerical implementation and oceanographic application of the Gibbs potential of ice

    Directory of Open Access Journals (Sweden)

    R. Feistel

    2005-01-01

    Full Text Available The 2004 Gibbs thermodynamic potential function of naturally abundant water ice is based on much more experimental data than its predecessors, is therefore significantly more accurate and reliable, and for the first time describes the entire temperature and pressure range of existence of this ice phase. It is expressed in the ITS-90 temperature scale and is consistent with the current scientific pure water standard, IAPWS-95, and the 2003 Gibbs potential of seawater. The combination of these formulations provides sublimation pressures, freezing points, and sea ice properties covering the parameter ranges of oceanographic interest. This paper provides source code examples in Visual Basic, Fortran and C++ for the computation of the Gibbs function of ice and its partial derivatives. It reports the most important related thermodynamic equations for ice and sea ice properties.

  8. Efficiency of alternative MCMC strategies illustrated using the reaction norm model

    DEFF Research Database (Denmark)

    Shariati, Mohammad Mahdi; Sørensen, D.

    2008-01-01

    The Markov chain Monte Carlo (MCMC) strategy provides remarkable flexibility for fitting complex hierarchical models. However, when parameters are highly correlated in their posterior distributions and their number is large, a particular MCMC algorithm may perform poorly and the resulting...... in the low correlation scenario where SG was the best strategy. The two LH proposals could not compete with any of the Gibbs sampling algorithms. In this study it was not possible to find an MCMC strategy that performs optimally across the range of target distributions and across all possible values...

  9. Variación de la energía libre de Gibbs de la caolinita en función de la cristalinidad y tamaño de partícula

    OpenAIRE

    La Iglesia, A.

    1989-01-01

    The effect of grinding on crystallinity, particle size and solubility of two samples of kaolinite was studied. The standard Gibbs free energies of formation of different ground samples were calculated from solubility measurements, and show a direct relationship between Gibbs free energy and particle size-crystallinity variation. Values of -3752.2 and -3776.4 kJ/mol were determined for ΔGºl (am) and ΔGºl (crys) of kaolinite, respectively. A new th...

  10. Illustrating Enzyme Inhibition Using Gibbs Energy Profiles

    Science.gov (United States)

    Bearne, Stephen L.

    2012-01-01

    Gibbs energy profiles have great utility as teaching and learning tools because they present students with a visual representation of the energy changes that occur during enzyme catalysis. Unfortunately, most textbooks divorce discussions of traditional kinetic topics, such as enzyme inhibition, from discussions of these same topics in terms of…

  11. On P-Adic Quasi Gibbs Measures for Q + 1-State Potts Model on the Cayley Tree

    International Nuclear Information System (INIS)

    Mukhamedov, Farrukh

    2010-06-01

    In the present paper we introduce a new class of p-adic measures, associated with the q+1-state Potts model, called p-adic quasi Gibbs measures, which are totally different from the p-adic Gibbs measures. We establish the existence of p-adic quasi Gibbs measures for the model on a Cayley tree. If q is divisible by p, then we prove the occurrence of a strong phase transition. If q and p are relatively prime, then there is a quasi phase transition. These results are totally different from the results of [F.M. Mukhamedov, U.A. Rozikov, Indag. Math. N.S. 15(2005) 85-100], since q is divisible by p, which means that q+1 is not divisible by p, so according to a main result of the mentioned paper, there is a unique and bounded p-adic Gibbs measure (different from the p-adic quasi Gibbs measure). (author)

  12. Oxidation potentials, Gibbs energies, enthalpies and entropies of actinide ions in aqueous solutions

    International Nuclear Information System (INIS)

    1977-01-01

    The values of the Gibbs energy, enthalpy, and entropy of different actinide ions, the thermodynamic characteristics of the hydration processes of these ions, and the presently known ionization potentials of actinides are given. The enthalpy and entropy components of the oxidation potentials of actinide elements are considered. The curves of the dependence of the Gibbs energy of ion formation on the atomic number of the element and the Frost diagrams are analyzed. The diagram proposed by Frost represents the graphical dependence of the Gibbs energy of hydrated ions on the degree of oxidation of the element. Using the Frost diagram it is easy to establish whether a given ion is stable against disproportionation.
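
    The disproportionation test read off a Frost diagram can be stated as a small computation: a species is unstable to disproportionation when its point (oxidation state, Gibbs energy of the hydrated ion) lies above the chord joining its two neighbours. The values below are invented for illustration, not actinide data:

```python
# Frost-diagram disproportionation check (invented numbers, not real data).
# Each point is (oxidation state n, Gibbs energy of the hydrated ion in nE0 units).
points = [(0, 0.0), (3, -1.8), (4, -1.2), (6, -2.4)]

def disproportionates(points, i):
    """True if species i lies above the chord between its two neighbours,
    i.e. the appropriately weighted mixture of the neighbouring oxidation
    states has lower Gibbs energy than species i itself."""
    (n0, g0), (n1, g1), (n2, g2) = points[i - 1], points[i], points[i + 1]
    chord = g0 + (g2 - g0) * (n1 - n0) / (n2 - n0)  # chord value at state n1
    return g1 > chord

for i in range(1, len(points) - 1):
    n = points[i][0]
    print(f"state +{n}: disproportionates = {disproportionates(points, i)}")
```

    With these invented values, the +3 state sits below its chord (stable) while the +4 state sits above it (unstable to disproportionation), which is exactly how the diagram is read by eye.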

  13. Numerical implementation and oceanographic application of the Gibbs thermodynamic potential of seawater

    Directory of Open Access Journals (Sweden)

    R. Feistel

    2005-01-01

    Full Text Available The 2003 Gibbs thermodynamic potential function represents a very accurate, compact, consistent and comprehensive formulation of equilibrium properties of seawater. It is expressed in the International Temperature Scale ITS-90 and is fully consistent with the current scientific pure water standard, IAPWS-95. Source code examples in FORTRAN, C++ and Visual Basic are presented for the numerical implementation of the potential function and its partial derivatives, as well as for potential temperature. A collection of thermodynamic formulas and relations is given for possible applications in oceanography, from density and chemical potential over entropy and potential density to mixing heat and entropy production. For colligative properties like vapour pressure, freezing points, and for a Gibbs potential of sea ice, the equations relating the Gibbs function of seawater to those of vapour and ice are presented.
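
    The pattern behind such source-code examples — one Gibbs potential g(T, p), with every equilibrium property obtained as a partial derivative — can be illustrated with a toy polynomial potential and finite differences. The coefficients below are invented (loosely water-like) and are not the 2003 seawater formulation:

```python
# Toy Gibbs potential g(T, p) in J/kg: properties fall out as partial derivatives.
# Coefficients are invented (loosely water-like), not the Feistel 2003 polynomial.

def g(T, p):  # T in K, p in Pa
    return -9000.0 - 300.0 * (T - 273.15) ** 2 / 273.15 + p / 1000.0

def deriv(f, x, h=1e-3):
    """Central finite difference, standing in for the analytic derivatives."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

T, p = 283.15, 101325.0                    # 10 degC, 1 atm
rho = 1.0 / deriv(lambda pp: g(T, pp), p)  # density: rho = 1/(dg/dp)
s = -deriv(lambda TT: g(TT, p), T)         # specific entropy: s = -dg/dT
print(f"rho = {rho:.1f} kg/m^3, s = {s:.1f} J/(kg K)")  # rho = 1000.0 here
```

    The published implementations use analytic partial derivatives of the full polynomial rather than finite differences, but the calling pattern is the same.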

  14. Soil sampling strategies: Evaluation of different approaches

    Energy Technology Data Exchange (ETDEWEB)

    De Zorzi, Paolo [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy)], E-mail: paolo.dezorzi@apat.it; Barbizzi, Sabrina; Belli, Maria [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy); Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia [Agenzia Regionale per la Prevenzione e Protezione dell' Ambiente del Veneto, ARPA Veneto, U.O. Centro Qualita Dati, Via Spalato, 14-36045 Vicenza (Italy)

    2008-11-15

    The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.

  15. Soil sampling strategies: Evaluation of different approaches

    International Nuclear Information System (INIS)

    De Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia

    2008-01-01

    The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies

  16. Soil sampling strategies: evaluation of different approaches.

    Science.gov (United States)

    de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia

    2008-11-01

    The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.

  17. An efficient estimator for Gibbs random fields

    Czech Academy of Sciences Publication Activity Database

    Janžura, Martin

    2014-01-01

    Roč. 50, č. 6 (2014), s. 883-895 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : Gibbs random field * efficient estimator * empirical estimator Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/SI/janzura-0441325.pdf

  18. Periodic p-adic Gibbs Measures of q-State Potts Model on Cayley Trees I: The Chaos Implies the Vastness of the Set of p-Adic Gibbs Measures

    Science.gov (United States)

    Ahmad, Mohd Ali Khameini; Liao, Lingmin; Saburov, Mansoor

    2018-06-01

    We study the set of p-adic Gibbs measures of the q-state Potts model on the Cayley tree of order three. We prove the vastness of the set of the periodic p-adic Gibbs measures for such model by showing the chaotic behavior of the corresponding Potts-Bethe mapping over Q_p for the prime numbers p ≡ 1 (mod 3). In fact, for 0 < |θ-1|_p < |q|_p^2 < 1 where θ = exp_p(J) and J is a coupling constant, there exists a subsystem that is isometrically conjugate to the full shift on three symbols. Meanwhile, for 0 < |q|_p^2 ≤ |θ-1|_p < |q|_p < 1, there exists a subsystem that is isometrically conjugate to a subshift of finite type on r symbols where r ≥ 4. However, these subshifts on r symbols are all topologically conjugate to the full shift on three symbols. The p-adic Gibbs measures of the same model for the prime numbers p = 2, 3 and the corresponding Potts-Bethe mapping are also discussed. On the other hand, for 0 < |θ-1|_p < |q|_p < 1, we remark that the Potts-Bethe mapping is not chaotic when p = 3 and p ≡ 2 (mod 3) and we could not conclude the vastness of the set of the periodic p-adic Gibbs measures. In a forthcoming paper with the same title, we will treat the case 0 < |q|_p ≤ |θ-1|_p < 1 for all prime numbers p.

  19. Modeling adsorption of cationic surfactants at air/water interface without using the Gibbs equation.

    Science.gov (United States)

    Phan, Chi M; Le, Thu N; Nguyen, Cuong V; Yusa, Shin-ichi

    2013-04-16

    The Gibbs adsorption equation has been indispensable in predicting the surfactant adsorption at the interfaces, with many applications in industrial and natural processes. This study uses a new theoretical framework to model surfactant adsorption at the air/water interface without the Gibbs equation. The model was applied to two surfactants, C14TAB and C16TAB, to determine the maximum surface excesses. The obtained values demonstrated a fundamental change, which was verified by simulations, in the molecular arrangement at the interface. The new insights, in combination with recent discoveries in the field, expose the limitations of applying the Gibbs adsorption equation to cationic surfactants at the air/water interface.

  20. The MaxEnt extension of a quantum Gibbs family, convex geometry and geodesics

    International Nuclear Information System (INIS)

    Weis, Stephan

    2015-01-01

    We discuss methods to analyze a quantum Gibbs family in the ultra-cold regime where the norm closure of the Gibbs family fails due to discontinuities of the maximum-entropy inference. The current discussion of maximum-entropy inference and irreducible correlation in the area of quantum phase transitions is a major motivation for this research. We extend a representation of the irreducible correlation from finite temperatures to absolute zero

  1. Prediction of Gibbs energies of formation and stability constants of some secondary uranium minerals containing the uranyl group

    International Nuclear Information System (INIS)

    Genderen, A.C.G. van; Weijden, C.H. van der

    1984-01-01

    For a group of minerals containing a common anion there exists a linear relationship between two parameters called ΔO and ΔF. ΔO is defined as the difference between the Gibbs energy of formation of a solid oxide and the Gibbs energy of formation of its aqueous cation, while ΔF is defined as the Gibbs energy of reaction of the formation of a mineral from the constituting oxide(s) and the acid. Using the Gibbs energies of formation of a number of known minerals the corresponding ΔO's and ΔF's were calculated and with the resulting regression equation it is possible to predict values for the Gibbs energies of formation of other minerals containing the same anion. This was done for 29 minerals containing the uranyl-ion together with phosphate, vanadate, arsenate or carbonate. (orig.)

  2. Spent nuclear fuel sampling strategy

    International Nuclear Information System (INIS)

    Bergmann, D.W.

    1995-01-01

    This report proposes a strategy for sampling the spent nuclear fuel (SNF) stored in the 105-K Basins (105-K East and 105-K West). This strategy will support path-forward decisions for SNF disposition efforts in the following areas: (1) SNF isolation activities, such as repackaging/overpacking to a newly constructed staging facility; (2) conditioning processes for fuel stabilization; and (3) interim storage options. This strategy was developed without following the Data Quality Objective (DQO) methodology. It is, however, intended to augment the SNF project DQOs. The SNF sampling is derived by evaluating the current storage condition of the SNF and the factors that affected SNF corrosion/degradation.

  3. Existence and uniqueness of Gibbs states for a statistical mechanical polyacetylene model

    International Nuclear Information System (INIS)

    Park, Y.M.

    1987-01-01

    One-dimensional polyacetylene is studied as a model of statistical mechanics. In a semiclassical approximation the system is equivalent to a quantum XY model interacting with unbounded classical spins in one-dimensional lattice space Z. By establishing uniform estimates, an infinite-volume-limit Hilbert space, a strongly continuous time evolution group of unitary operators, and an invariant vector are constructed. Moreover, it is proven that any infinite-limit state satisfies Gibbs conditions. Finally, a modification of Araki's relative entropy method is used to establish the uniqueness of Gibbs states

  4. Exploring Fourier Series and Gibbs Phenomenon Using Mathematica

    Science.gov (United States)

    Ghosh, Jonaki B.

    2011-01-01

    This article describes a laboratory module on Fourier series and Gibbs phenomenon which was undertaken by 32 Year 12 students. It shows how the use of CAS played the role of an "amplifier" by making higher level mathematical concepts accessible to students of year 12. Using Mathematica students were able to visualise Fourier series of…
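
    The overshoot at the heart of the Gibbs phenomenon is easy to reproduce numerically: partial Fourier sums of a square wave overshoot the jump by about 9% of the jump height, no matter how many terms are kept. A short numpy sketch:

```python
import numpy as np

def square_partial_sum(x, n_terms):
    """Partial Fourier sum of the odd square wave of amplitude 1:
    (4/pi) * sum over odd k of sin(k x) / k."""
    k = np.arange(1, 2 * n_terms, 2)  # odd harmonics 1, 3, ..., 2*n_terms - 1
    return (4.0 / np.pi) * np.sin(np.outer(x, k)).dot(1.0 / k)

x = np.linspace(1e-4, 0.5, 20000)  # fine grid near the jump at x = 0
for n in (10, 100):
    overshoot = square_partial_sum(x, n).max() - 1.0
    # stays near 0.18, i.e. about 9% of the full jump of size 2
    print(f"{n:4d} terms: overshoot = {overshoot:.3f}")
```

    Adding more terms narrows the overshooting spike and moves it toward the discontinuity, but never shrinks its height — which is the point such a laboratory module makes visually.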

  5. User-driven sampling strategies in image exploitation

    Science.gov (United States)

    Harvey, Neal; Porter, Reid

    2013-12-01

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  6. HERITABILITY AND BREEDING VALUE OF SHEEP FERTILITY ESTIMATED BY MEANS OF THE GIBBS SAMPLING METHOD USING THE LINEAR AND THRESHOLD MODELS

    Directory of Open Access Journals (Sweden)

    DARIUSZ Piwczynski

    2013-03-01

Full Text Available The research was carried out on 4,030 Polish Merino ewes born in the years 1991-2001, kept in 15 flocks from the Pomorze and Kujawy region. Fertility of ewes in subsequent reproduction seasons was analysed with the use of multiple logistic regression. The research showed a statistically significant influence of flock, year of birth, age of dam, and the flock × year of birth interaction on ewe fertility. In order to estimate the genetic parameters, the Gibbs sampling method was applied, using univariate animal models, both linear and threshold. Heritability estimates of fertility, depending on the model, ranged from 0.067 to 0.104, whereas the repeatability estimates were 0.076 and 0.139, respectively. The obtained genetic parameters were then used to estimate the breeding values of the animals for the controlled trait (Best Linear Unbiased Prediction method) using linear and threshold models. The animal breeding value rankings obtained for the same trait with the linear and threshold models were strongly correlated with each other (rs = 0.972). Negative genetic trends of fertility (0.01-0.08% per year) were found.
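Gibbs sampling, as used above for genetic parameter estimation, draws each unknown in turn from its full conditional distribution. A minimal illustrative sketch in Python (a toy bivariate normal target, not the authors' animal model; all names are hypothetical):

```python
import random

def gibbs_bivariate_normal(rho, n_iter, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    The full conditionals are x | y ~ N(rho * y, 1 - rho^2) and
    y | x ~ N(rho * x, 1 - rho^2), so each sweep alternates two
    one-dimensional draws, the defining pattern of Gibbs sampling.
    """
    rng = random.Random(seed)
    x = y = 0.0
    sd = (1.0 - rho ** 2) ** 0.5
    samples = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)  # draw x from p(x | y)
        y = rng.gauss(rho * x, sd)  # draw y from p(y | x)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_iter=20000)
kept = samples[2000:]  # discard the first 2000 sweeps as burn-in
mean_x = sum(x for x, _ in kept) / len(kept)
```

After burn-in, the retained draws reproduce the target's moments, which is the same mechanism that lets the retained draws estimate heritability and repeatability in the animal-model setting.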

  7. On the Tsallis Entropy for Gibbs Random Fields

    Czech Academy of Sciences Publication Activity Database

    Janžura, Martin

    2014-01-01

    Roč. 21, č. 33 (2014), s. 59-69 ISSN 1212-074X R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z1075907 Keywords : Tsallis entropy * Gibbs random fields * phase transitions * Tsallis entropy rate Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/SI/janzura-0441885.pdf

  8. Discrete tomographic reconstruction of 2D polycrystal orientation maps from X-ray diffraction projections using Gibbs priors

    DEFF Research Database (Denmark)

    Rodek, L.; Knudsen, E.; Poulsen, H.F.

    2005-01-01

    discrete tomographic algorithm, applying image-modelling Gibbs priors and a homogeneity condition. The optimization of the objective function is accomplished via the Gibbs Sampler in conjunction with simulated annealing. In order to express the structure of the orientation map, the similarity...

  9. A Simple Method to Calculate the Temperature Dependence of the Gibbs Energy and Chemical Equilibrium Constants

    Science.gov (United States)

    Vargas, Francisco M.

    2014-01-01

The temperature dependence of the Gibbs energy and of important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although this is a well-known approach, traditionally covered in any physical chemistry course, the required…
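The Gibbs-Helmholtz route mentioned above can be written out explicitly (these are the standard textbook relations, not the article's proposed simplification):

```latex
% Gibbs-Helmholtz equation at constant pressure
\left(\frac{\partial (G/T)}{\partial T}\right)_{P} = -\frac{H}{T^{2}}
% Applied to a reaction, it yields the van 't Hoff equation for K(T)
\frac{\mathrm{d}\ln K}{\mathrm{d}T} = \frac{\Delta_{r}H^{\circ}}{RT^{2}}
% Integrated assuming \Delta_{r}H^{\circ} is constant over [T_{1}, T_{2}]:
\ln\frac{K_{2}}{K_{1}} = -\frac{\Delta_{r}H^{\circ}}{R}
    \left(\frac{1}{T_{2}} - \frac{1}{T_{1}}\right)
```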

  10. Evaluation of sampling strategies to estimate crown biomass

    Directory of Open Access Journals (Sweden)

    Krishna P Poudel

    2015-01-01

Full Text Available Background Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and the effect of sample size on those estimates. Methods Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but RMSE decreased further when information on branch diameter was used in the design and estimation phases. Conclusions Use of
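One of the designs evaluated above, probability-proportional-to-size (PPS) selection, can be sketched with a Hansen-Hurwitz estimator on synthetic branch data (the data and the size proxy are assumptions for illustration, not the study's dataset):

```python
import random

def pps_estimate_total(sizes, values, n, seed=0):
    """Hansen-Hurwitz estimate of a population total under PPS sampling.

    Branches are drawn with replacement, with probability proportional to a
    size proxy (e.g. branch diameter); averaging value/probability over the
    n draws gives an unbiased estimate of the population total.
    """
    rng = random.Random(seed)
    total_size = sum(sizes)
    probs = [s / total_size for s in sizes]
    draws = rng.choices(range(len(sizes)), weights=sizes, k=n)
    return sum(values[i] / probs[i] for i in draws) / n

# Hypothetical tree: branch biomass roughly proportional to the size proxy.
data_rng = random.Random(42)
sizes = [data_rng.uniform(1.0, 10.0) for _ in range(40)]
biomass = [2.5 * s + data_rng.gauss(0.0, 1.0) for s in sizes]
true_total = sum(biomass)
estimate = pps_estimate_total(sizes, biomass, n=12, seed=7)
```

Because biomass correlates strongly with the size proxy here, even a dozen PPS draws estimate the crown total with low variance, which mirrors why size-informed designs can be competitive at small branch counts.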

  11. First-Year University Chemistry Textbooks' Misrepresentation of Gibbs Energy

    Science.gov (United States)

    Quilez, Juan

    2012-01-01

This study analyzes the misrepresentation of Gibbs energy by college chemistry textbooks. The article reports the way first-year university chemistry textbooks handle the concepts of spontaneity and equilibrium. Problems with terminology are found; confusion arises in the meaning given to ΔG, ΔrG, ΔG°, and…

  12. Virial theorem and Gibbs thermodynamic potential for Coulomb systems

    International Nuclear Information System (INIS)

    Bobrov, V. B.; Trigger, S. A.

    2014-01-01

    Using the grand canonical ensemble and the virial theorem, we show that the Gibbs thermodynamic potential of the non-relativistic system of charged particles is uniquely defined by single-particle Green functions of electrons and nuclei. This result is valid beyond the perturbation theory with respect to the interparticle interaction

  13. Virial theorem and Gibbs thermodynamic potential for Coulomb systems

    OpenAIRE

    Bobrov, V. B.; Trigger, S. A.

    2013-01-01

    Using the grand canonical ensemble and the virial theorem, we show that the Gibbs thermodynamic potential of the non-relativistic system of charged particles is uniquely defined by single-particle Green functions of electrons and nuclei. This result is valid beyond the perturbation theory with respect to the interparticle interaction.

  14. One of Gibbs's ideas that has gone unnoticed (comment on chapter IX of his classic book)

    International Nuclear Information System (INIS)

    Sukhanov, Alexander D; Rudoi, Yurii G

    2006-01-01

    We show that contrary to the commonly accepted view, Chapter IX of Gibbs's book [1] contains the prolegomena to a macroscopic statistical theory that is qualitatively different from his own microscopic statistical mechanics. The formulas obtained by Gibbs were the first results in the history of physics related to the theory of fluctuations in any macroparameters, including temperature. (from the history of physics)

  15. Gibbs free energy of formation of lanthanum rhodate by quadrupole mass spectrometer

    International Nuclear Information System (INIS)

    Prasad, R.; Banerjee, Aparna; Venugopal, V.

    2003-01-01

The ternary oxide in the system La-Rh-O is of considerable importance because of its application in catalysis. Phase equilibria in the pseudo-binary system La2O3-Rh2O3 have been investigated by Shevyakov et al. The Gibbs free energy of LaRhO3(s) was determined by Jacob et al. using a solid-state galvanic cell in the temperature range 890 to 1310 K. No other thermodynamic data were available in the literature. Hence it was decided to determine the Gibbs free energy of formation of LaRhO3(s) by an independent technique, viz. a quadrupole mass spectrometer (QMS) coupled with a Knudsen effusion cell, and the results are presented

  16. Validated sampling strategy for assessing contaminants in soil stockpiles

    International Nuclear Information System (INIS)

    Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel

    2005-01-01

    Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result, representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. At the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years. This resulted in a database containing the analytical results of 2570 soil stockpiles. At the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed resulted in a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
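The effect described above, that compositing many random increments yields a single analytical result representative of the stockpile mean, can be illustrated with a small simulation (hypothetical lognormal contaminant concentrations, not the validated Dutch protocol itself):

```python
import random

def composite_sample_mean(population, n_increments, rng):
    """Mean concentration of one composite built from random increments."""
    increments = [rng.choice(population) for _ in range(n_increments)]
    return sum(increments) / n_increments

rng = random.Random(3)
# Hypothetical stockpile: strongly right-skewed concentrations (mg/kg).
stockpile = [rng.lognormvariate(3.0, 1.0) for _ in range(5000)]
true_mean = sum(stockpile) / len(stockpile)

# Spread of single-increment results vs. 50-increment composites.
singles = [composite_sample_mean(stockpile, 1, rng) for _ in range(200)]
composites = [composite_sample_mean(stockpile, 50, rng) for _ in range(200)]

def spread(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

reduction = spread(singles) / spread(composites)
```

The standard deviation of composite results shrinks roughly with the square root of the number of increments, so a large number of random increments lets one analysis represent the heterogeneous stockpile mean.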

  17. Robust identification of transcriptional regulatory networks using a Gibbs sampler on outlier sum statistic.

    Science.gov (United States)

    Gu, Jinghua; Xuan, Jianhua; Riggins, Rebecca B; Chen, Li; Wang, Yue; Clarke, Robert

    2012-08-01

    Identification of transcriptional regulatory networks (TRNs) is of significant importance in computational biology for cancer research, providing a critical building block to unravel disease pathways. However, existing methods for TRN identification suffer from the inclusion of excessive 'noise' in microarray data and false-positives in binding data, especially when applied to human tumor-derived cell line studies. More robust methods that can counteract the imperfection of data sources are therefore needed for reliable identification of TRNs in this context. In this article, we propose to establish a link between the quality of one target gene to represent its regulator and the uncertainty of its expression to represent other target genes. Specifically, an outlier sum statistic was used to measure the aggregated evidence for regulation events between target genes and their corresponding transcription factors. A Gibbs sampling method was then developed to estimate the marginal distribution of the outlier sum statistic, hence, to uncover underlying regulatory relationships. To evaluate the effectiveness of our proposed method, we compared its performance with that of an existing sampling-based method using both simulation data and yeast cell cycle data. The experimental results show that our method consistently outperforms the competing method in different settings of signal-to-noise ratio and network topology, indicating its robustness for biological applications. Finally, we applied our method to breast cancer cell line data and demonstrated its ability to extract biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. The Gibbs sampler MATLAB package is freely available at http://www.cbil.ece.vt.edu/software.htm. xuan@vt.edu Supplementary data are available at Bioinformatics online.
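The outlier sum statistic at the heart of the method aggregates robustly standardized values that exceed an outlier fence, in the spirit of Tibshirani and Hastie's original definition; the paper's exact variant may differ, so the sketch below is an assumption:

```python
def outlier_sum(values):
    """Outlier sum statistic: total of standardized values beyond Q3 + 1.5*IQR.

    Values are median-centred and scaled by the interquartile range; those
    exceeding the upper outlier fence are summed, measuring aggregated
    high-end outlier evidence in, e.g., a gene's expression profile.
    """
    xs = sorted(values)
    n = len(xs)
    q1, q3, median = xs[n // 4], xs[(3 * n) // 4], xs[n // 2]
    iqr = (q3 - q1) or 1.0
    standardized = [(v - median) / iqr for v in values]
    # Fence Q3 + 1.5*IQR expressed on the standardized scale.
    fence = (q3 - median) / iqr + 1.5
    return sum(z for z in standardized if z > fence)

background = [0.1 * i for i in range(-10, 11)]   # unremarkable profile
with_outliers = background + [25.0, 30.0]        # two strongly up-regulated samples
```

A flat profile scores zero, while a profile with a few extreme samples scores highly; the paper's Gibbs sampler then estimates the marginal distribution of this statistic to score candidate regulatory relationships.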

  18. The Gibbs-Thomson equation for a spherical coherent precipitate with applications to nucleation

    International Nuclear Information System (INIS)

    Rottman, C.; Voorhees, P.W.; Johnson, W.C.

    1988-01-01

    The conditions for interfacial thermodynamic equilibrium form the basis for the derivation of a number of basic equations in materials science, including the various forms of the Gibbs-Thomson equation. The equilibrium conditions pertaining to a curved interface in a two-phase fluid system are well-known. In contrast, the conditions for thermodynamic equilibrium at a curved interface in nonhydrostatically stressed solids have only recently been examined. These conditions can be much different from those at a fluid interface and, as a result, the Gibbs-Thomson equation appropriate to coherent solids is likely to be considerably different from that for fluids. In this paper, the authors first derive the conditions necessary for thermodynamic equilibrium at the precipitate-matrix interface of a coherent spherical precipitate. The authors' derivation of these equilibrium conditions includes a correction to the equilibrium conditions of Johnson and Alexander for a spherical precipitate in an isotropic matrix. They then use these conditions to derive the dependence of the interfacial precipitate and matrix concentrations on precipitate radius (Gibbs-Thomson equation) for a such a precipitate. In addition, these relationships are then used to calculate the critical radius for the nucleation of a coherent misfitting precipitate
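For comparison, the classical fluid-system form of the Gibbs-Thomson relation, which the coherent-solid derivation above modifies, can be stated as follows (standard capillarity result; the symbols are not taken from the paper):

```latex
% Classical Gibbs-Thomson relation for the matrix concentration c(r)
% in equilibrium with a spherical precipitate of radius r:
c(r) = c_{\infty}\exp\!\left(\frac{2\gamma V_{m}}{RTr}\right)
     \approx c_{\infty}\left(1 + \frac{2\gamma V_{m}}{RTr}\right)
% c_{\infty}: solubility at a flat interface, \gamma: interfacial energy,
% V_{m}: molar volume of the precipitate, R: gas constant, T: temperature.
```

In the coherent-solid case the authors show the equilibrium conditions acquire elastic contributions, so the radius dependence differs from this purely capillary form.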

  19. LA CASA GIBBS Y EL MONOPOLIO SALITRERO PERUANO: 1876-1878

    Directory of Open Access Journals (Sweden)

    Manuel Ravest Mora

    2008-06-01

Full Text Available The aim of this brief study is to show the willingness of Anthony Gibbs & Sons, and of its subsidiaries, to support Peru's nitrate monopoly project with monetary resources, and the manoeuvres of its directors in the only company that, given its production capacity, could make the project fail: the Compañía de Salitres y Ferrocarril de Antofagasta, of which Gibbs was the second-largest shareholder. For the Chilean government, the primary cause of the war of 1879 was Peru's attempt to monopolize nitrate production. Bolivia, its secret ally since 1873, collaborated by leasing and selling Peru its nitrate deposits, and by imposing on nitrate exports a tax that breached the condition, stipulated in a Boundary Treaty, under which Chile had ceded territory. Its recovery manu militari began the conflict. From the second half of the last century onwards, this economic-legalistic thesis was questioned in Chile and abroad, shifting the causal emphasis to the reordering of the raw-materials markets, of which the belligerents were exporters, in the wake of the world crisis of the 1870s. This brief study aims at showing Anthony Gibbs & Sons' disposition to support the Peruvian monopolistic nitrate project with monetary resources, and its directors' influence in the only company which, due to its production capacity, could make the project fail: the Chilean Antofagasta Nitrate and Railway Co., in which Gibbs was the second most important stockholder. According to the Chilean government, the primary cause of the 1879 war was Peru's attempt to monopolize nitrate production. Bolivia, its secret ally since 1873, helped by renting and selling her nitrate fields and imposing a tax on the nitrate exports of the Chilean company in Antofagasta, thus violating the condition stated in a Border Treaty by which Chile had ceded territory. Its recovery through the use of military force was the first act

  20. Estimates of Gibbs free energies of formation of chlorinated aliphatic compounds

    NARCIS (Netherlands)

    Dolfing, Jan; Janssen, Dick B.

    1994-01-01

    The Gibbs free energy of formation of chlorinated aliphatic compounds was estimated with Mavrovouniotis' group contribution method. The group contribution of chlorine was estimated from the scarce data available on chlorinated aliphatics in the literature, and found to vary somewhat according to the

  1. GibbsCluster: unsupervised clustering and alignment of peptide sequences

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Alvarez, Bruno; Nielsen, Morten

    2017-01-01

motif characterizing each cluster. Several parameters are available to customize cluster analysis, including adjustable penalties for small clusters and overlapping groups and a trash cluster to remove outliers. As an example application, we used the server to deconvolute multiple specificities in large-scale peptidome data generated by mass spectrometry. The server is available at http://www.cbs.dtu.dk/services/GibbsCluster-2.0.

  2. Determination of standard molar Gibbs energy of formation of Sm6UO12(s)

    International Nuclear Information System (INIS)

    Sahu, Manjulata; Dash, Smruti

    2015-01-01

The standard molar Gibbs energies of formation of Sm6UO12(s) have been measured using an oxygen concentration cell with yttria-stabilized zirconia as the solid electrolyte. ΔfG°m(T) for Sm6UO12(s) has been calculated using the measured values and the required thermodynamic data from the literature. The calculated Gibbs energy expression in the temperature range 899 to 1127 K can be given as: ΔfG°m(Sm6UO12, s, T)/kJ·mol⁻¹ (±2.3) = −6681 + 1.099 (T/K). (author)
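The reported linear fit (compound as in the record title, Sm6UO12) can be evaluated directly; a small helper with the range check implied by the measured interval:

```python
def gibbs_formation_sm6uo12(T_kelvin):
    """Delta_f G_m of Sm6UO12(s) in kJ/mol from the reported linear fit.

    The fit is only valid over the measured range 899-1127 K; the quoted
    +/- 2.3 kJ/mol uncertainty of the fit is not propagated here.
    """
    if not 899.0 <= T_kelvin <= 1127.0:
        raise ValueError("fit valid only for 899 K <= T <= 1127 K")
    return -6681.0 + 1.099 * T_kelvin

dg_1000 = gibbs_formation_sm6uo12(1000.0)  # kJ/mol at 1000 K
```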

  3. Gibbs' theorem for open systems with incomplete statistics

    International Nuclear Information System (INIS)

    Bagci, G.B.

    2009-01-01

Gibbs' theorem, originally intended for canonical ensembles with complete statistics, has been generalized to open systems with incomplete statistics. As a result of this generalization, it is shown that the stationary equilibrium distribution of inverse power-law form associated with the incomplete statistics has maximum entropy even for open systems with energy or matter influx. The renormalized entropy definition given in this paper can also serve as a measure of self-organization in open systems described by incomplete statistics.

  4. Standard Gibbs free energies for transfer of actinyl ions at the aqueous/organic solution interface

    International Nuclear Information System (INIS)

    Kitatsuji, Yoshihiro; Okugaki, Tomohiko; Kasuno, Megumi; Kubota, Hiroki; Maeda, Kohji; Kimura, Takaumi; Yoshida, Zenko; Kihara, Sorin

    2011-01-01

Research highlights: → Standard Gibbs free energies of ion transfer of tri- to hexavalent actinide ions. → Determination is based on a distribution method combined with ion-transfer voltammetry. → Organic solvents examined are nitrobenzene, DCE, benzonitrile, acetophenone and NPOE. → Gibbs free energies of U(VI), Np(VI) and Pu(VI) are similar to each other. → Gibbs free energy of Np(V) is very large compared with ordinary monovalent cations. - Abstract: Standard Gibbs free energies of transfer (ΔG°tr) of actinyl ions (AnO₂ᶻ⁺; z = 2 or 1; An = U, Np, or Pu) between an aqueous solution and an organic solution were determined based on a distribution method combined with voltammetry for ion transfer at the interface of two immiscible electrolyte solutions. The organic solutions examined were nitrobenzene, 1,2-dichloroethane, benzonitrile, acetophenone, and 2-nitrophenyl octyl ether. Irrespective of the type of organic solution, the ΔG°tr of UO₂²⁺, NpO₂²⁺, and PuO₂²⁺ were nearly equal to each other and slightly larger than that of Mg²⁺. The ΔG°tr of NpO₂⁺ was extraordinarily large compared with those of ordinary monovalent cations. The dependence of ΔG°tr of AnO₂ᶻ⁺ on the type of organic solution was similar to that of H⁺ or Mg²⁺. The ΔG°tr of An³⁺ and An⁴⁺ were also discussed briefly.

  5. Uniqueness of Gibbs Measure for Models with Uncountable Set of Spin Values on a Cayley Tree

    International Nuclear Information System (INIS)

    Eshkabilov, Yu. Kh.; Haydarov, F. H.; Rozikov, U. A.

    2013-01-01

We consider models with nearest-neighbor interactions and with the set [0, 1] of spin values, on a Cayley tree of order k ≥ 1. It is known that the 'splitting Gibbs measures' of the model can be described by solutions of a nonlinear integral equation. For arbitrary k ≥ 2 we find a sufficient condition under which the integral equation has a unique solution; hence, under this condition, the corresponding model has a unique splitting Gibbs measure.

  6. Gibbs Free Energy of Formation for Selected Platinum Group Minerals (PGM)

    Directory of Open Access Journals (Sweden)

    Spiros Olivotos

    2016-01-01

Full Text Available Thermodynamic data for platinum-group (Os, Ir, Ru, Rh, Pd and Pt) minerals are very limited. The present study is focused on the calculation of the Gibbs free energy of formation (ΔfG°) for selected PGM occurring in layered intrusions and ophiolite complexes worldwide, applying available experimental data for their constituent elements at their standard state (ΔfG° = G(species) − ΣG(elements)), using the HSC Chemistry software 6.0. The accuracy of the calculation method was evaluated by calculating the ΔfG° of rhodium sulfide phases; the calculated values were found to be in good agreement with those measured in the binary system (Rh + S) as a function of temperature by previous authors (Jacob and Gupta (2014)). The calculated Gibbs free energies (ΔfG°) followed the order RuS2 < (Ir,Os)S2 < (Pt,Pd)S < (Pd,Pt)Te2, increasing from compatible to incompatible noble metals and from sulfides to tellurides.

  7. Combining the AFLOW GIBBS and elastic libraries to efficiently and robustly screen thermomechanical properties of solids

    Science.gov (United States)

    Toher, Cormac; Oses, Corey; Plata, Jose J.; Hicks, David; Rose, Frisco; Levy, Ohad; de Jong, Maarten; Asta, Mark; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    2017-06-01

    Thorough characterization of the thermomechanical properties of materials requires difficult and time-consuming experiments. This severely limits the availability of data and is one of the main obstacles for the development of effective accelerated materials design strategies. The rapid screening of new potential materials requires highly integrated, sophisticated, and robust computational approaches. We tackled the challenge by developing an automated, integrated workflow with robust error-correction within the AFLOW framework which combines the newly developed "Automatic Elasticity Library" with the previously implemented GIBBS method. The first extracts the mechanical properties from automatic self-consistent stress-strain calculations, while the latter employs those mechanical properties to evaluate the thermodynamics within the Debye model. This new thermoelastic workflow is benchmarked against a set of 74 experimentally characterized systems to pinpoint a robust computational methodology for the evaluation of bulk and shear moduli, Poisson ratios, Debye temperatures, Grüneisen parameters, and thermal conductivities of a wide variety of materials. The effect of different choices of equations of state and exchange-correlation functionals is examined and the optimum combination of properties for the Leibfried-Schlömann prediction of thermal conductivity is identified, leading to improved agreement with experimental results than the GIBBS-only approach. The framework has been applied to the AFLOW.org data repositories to compute the thermoelastic properties of over 3500 unique materials. The results are now available online by using an expanded version of the REST-API described in the Appendix.

  8. Uniqueness of Gibbs states and global Markov property for Euclidean fields

    International Nuclear Information System (INIS)

    Albeverio, S.; Hoeegh-Krohn, R.

    1981-01-01

    The authors briefly discuss the proof of the uniqueness of solutions of the DLR equations (uniqueness of Gibbs states) in the class of regular generalized random fields (in the sense of having second moments bounded by those of some Euclidean field), for the Euclidean fields with trigonometric interaction. (Auth.)

  9. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  10. Proposed Empirical Entropy and Gibbs Energy Based on Observations of Scale Invariance in Open Nonequilibrium Systems.

    Science.gov (United States)

    Tuck, Adrian F

    2017-09-07

There is no widely agreed definition of entropy, and consequently of Gibbs energy, in open systems far from equilibrium. One recent approach has sought to formulate an entropy and Gibbs energy based on observed scale invariances in geophysical variables, particularly in atmospheric quantities, including the molecules constituting stratospheric chemistry. The Hamiltonian flux dynamics of energy in macroscopic open nonequilibrium systems maps to energy in equilibrium statistical thermodynamics, and corresponding equivalences of scale-invariant variables with other relevant statistical mechanical variables, such as entropy, Gibbs energy, and 1/(k_B T), are not just formally analogous but are also mappings. Three proof-of-concept representative examples from available adequate stratospheric chemistry observations (temperature, wind speed and ozone) are calculated, with the aim of applying these mappings and equivalences. Potential applications of the approach to scale-invariant observations from the literature, involving scales from molecular through laboratory to astronomical, are considered. Theoretical support for the approach from the literature is discussed.

  11. Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality

    Science.gov (United States)

    Ayala, Mario; Carinci, Gioia; Redig, Frank

    2018-06-01

    We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, including also fluctuation fields in non-stationary context (local equilibrium). For other interacting particle systems with duality such as the symmetric exclusion process, similar results can be obtained, under precise conditions on the n particle dynamics.

  12. Experimental Determination of Third Derivative of the Gibbs Free Energy, G II

    DEFF Research Database (Denmark)

    Koga, Yoshikata; Westh, Peter; Inaba, Akira

    2010-01-01

    We have been evaluating third derivative quantities of the Gibbs free energy, G, by graphically differentiating the second derivatives that are accessible experimentally, and demonstrated their power in elucidating the mixing schemes in aqueous solutions. Here we determine directly one of the third...

  13. Inference with minimal Gibbs free energy in information field theory

    International Nuclear Information System (INIS)

    Ensslin, Torsten A.; Weig, Cornelius

    2010-01-01

    Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the Gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.

  14. The importance of the photosynthetic Gibbs effect in the elucidation of the Calvin-Benson-Bassham cycle.

    Science.gov (United States)

    Ebenhöh, Oliver; Spelberg, Stephanie

    2018-02-19

The photosynthetic carbon reduction cycle, or Calvin-Benson-Bassham (CBB) cycle, is now contained in every standard biochemistry textbook. Although the cycle was already proposed in 1954, it is still the subject of intense research, and even the structure of the cycle, i.e. the exact series of reactions, is still under debate. The controversy about the cycle's structure was fuelled by the findings of Gibbs and Kandler in 1956 and 1957, when they observed that radioactive ¹⁴CO₂ was dynamically incorporated into hexoses in a very atypical and asymmetrical way, a phenomenon later termed the 'photosynthetic Gibbs effect'. Now, it is widely accepted that the photosynthetic Gibbs effect is not in contradiction to the reaction scheme proposed by CBB, but the arguments given have been largely qualitative and hand-waving. To fully appreciate the controversy and to understand the difficulties in interpreting the Gibbs effect, it is illustrative to illuminate the history of the discovery of the CBB cycle. We here give an account of central scientific advances and discoveries, which were essential prerequisites for the elucidation of the cycle. Placing the historic discoveries in the context of the modern textbook pathway scheme illustrates the complexity of the cycle and demonstrates why especially dynamic labelling experiments are far from easy to interpret. We conclude by arguing that it requires sound theoretical approaches to resolve conflicting interpretations and to provide consistent quantitative explanations. © 2018 The Author(s).

  15. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  16. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  17. Gibbs paradox of entropy of mixing: experimental facts, its rejection, and the theoretical consequences

    International Nuclear Information System (INIS)

    Lin, Shu-Kun

    1996-01-01

    The Gibbs paradox statement of the entropy of mixing has been regarded as a theoretical foundation of statistical mechanics, quantum theory and biophysics. However, all the relevant chemical experimental observations and logical analyses indicate that the Gibbs paradox statement is false. I prove that this statement is wrong: the Gibbs paradox statement implies that entropy decreases with the increase in symmetry (as represented by a symmetry number σ; see any statistical mechanics textbook). From group theory, any system has at least the symmetry number σ=1, the identity operation, for a strictly asymmetric system. It follows that the entropy of a system would be equal to, or less than, zero. However, from either the von Neumann-Shannon entropy formula (S = -Σi pi ln pi) or the Boltzmann entropy formula (S = k ln W) and the original definition, entropy is non-negative. Therefore, this statement is false. It should not be a surprise that, for the first time, many outstanding problems such as the validity of Pauling's resonance theory, the explanation of second-order phase transition phenomena, the biophysical problem of protein folding and the related hydrophobic effect, etc., can be solved. Empirical principles such as the Pauli principle (and Hund's rule) and the HSAB principle, etc., can also be given a theoretical explanation
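
    The non-negativity of entropy invoked in this argument is easy to verify numerically for the Shannon formula. A minimal sketch (not from the paper; natural-log units assumed):

```python
import math

def shannon_entropy(p):
    """Shannon entropy S = -sum_i p_i ln p_i (natural-log units).
    Terms with p_i = 0 contribute nothing (lim p ln p = 0)."""
    assert abs(sum(p) - 1.0) < 1e-12, "probabilities must sum to 1"
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

# Every distribution gives S >= 0; a point mass gives 0, the uniform
# distribution on n outcomes gives the maximum, ln n.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # ln 4 ≈ 1.3863
print(shannon_entropy([0.5, 0.3, 0.2]))           # positive, below ln 3
```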

  18. Gibb's energy and intermolecular free length of 'Borassus Flabellifier' (BF) and Adansonia digitata (AnD) aqueous binary mixture

    International Nuclear Information System (INIS)

    Phadke, Sushil; Shrivastava, Bhakt Darshan; Ujle, S K; Mishra, Ashutosh; Dagaonkar, N

    2014-01-01

    Whether a chemical reaction is favourable is governed by a quantity known as the Gibbs free energy (G) of the system, which reflects the balance between the potential driving forces. Ultrasonic velocity and absorption measurements in liquids and liquid mixtures find extensive application in studying the nature of intermolecular forces. Ultrasonic velocity measurements have been successfully employed to detect weak and strong molecular interactions present in binary and ternary liquid mixtures. After measuring the density and ultrasonic velocity of aqueous solutions of 'Borassus Flabellifier' (BF) and Adansonia digitata (AnD), we calculated the Gibbs energy and the intermolecular free length. The velocity of ultrasonic waves was measured using a multi-frequency ultrasonic interferometer (Model M-84, M/s Mittal Enterprises, New Delhi) with a high degree of accuracy, at a fixed frequency of 2 MHz. Natural samples of 'Borassus Flabellifier' (BF) fruit pulp and Adansonia digitata (AnD) powder were collected from the Dhar district of MP, India for this study.

  19. Thermodynamics, Gibbs Method and Statistical Physics of Electron Gases

    CERN Document Server

    Askerov, Bahram M

    2010-01-01

    This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and non-ideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi-Dirac and Bose-Einstein quantum statistics and their application to different quantum gases, and the electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.

  20. Extensitivity of entropy and modern form of Gibbs paradox

    International Nuclear Information System (INIS)

    Home, D.; Sengupta, S.

    1981-01-01

    The extensivity property of entropy is clarified in the light of a critical examination of the entropy formula based on quantum statistics and the relevant thermodynamic requirement. The modern form of the Gibbs paradox, related to the discontinuous jump in entropy due to identity or non-identity of particles, is critically investigated. Qualitative framework of a new resolution of this paradox, which analyses the general effect of distinction mark on the Hamiltonian of a system of identical particles, is outlined. (author)

  1. Empirical Statistical Power for Testing Multilocus Genotypic Effects under Unbalanced Designs Using a Gibbs Sampler

    Directory of Open Access Journals (Sweden)

    Chaeyoung Lee

    2012-11-01

    Full Text Available Epistasis that may explain a large portion of the phenotypic variation for complex economic traits of animals has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs by using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example for obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
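
    The record above does not give the sampler's conditionals, but the general mechanics of a Gibbs sampler, drawing each coordinate in turn from its full conditional distribution, can be sketched on a toy bivariate normal target (a standard textbook illustration, not the paper's genetic model):

```python
import random

def gibbs_bivariate_normal(rho, n_iter=5000, seed=1):
    """Gibbs-sample a standard bivariate normal with correlation rho.
    Full conditionals: x | y ~ N(rho*y, 1 - rho^2), and symmetrically
    y | x ~ N(rho*x, 1 - rho^2); each sweep updates both coordinates."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0
    draws = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(0.8)
mx = sum(x for x, _ in draws) / len(draws)
print(round(mx, 3))  # sample mean of x; should be close to 0
```

    Marginal posterior summaries (as used in the paper for genotype effects) are obtained the same way: by averaging functions of the draws after discarding an initial burn-in.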

  2. Excess Gibbs energy for six binary solid solutions of molecularly simple substances

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, L. J.; Staveley, L. A. K.

    1985-01-01

    In this paper we apply the method developed in a previous study of Ar + CH4 to the evaluation of the excess Gibbs energy G^{E,S} for solid solutions of two molecularly simple components. The method depends on combining information on the excess Gibbs energy G^{E,L} for the liquid mixture of the two components with a knowledge of the (T, x) solid-liquid phase diagram. Certain thermal properties of the pure substances are also needed. G^{E,S} has been calculated for binary mixtures of Ar + Kr, Kr + CH4, CO + N2, Kr + Xe, Ar + N2, and Ar + CO. In general, but not always, the solid mixtures are more non-ideal than the liquid mixtures of the same composition at the same temperature. Except for the Kr + CH4 system, the ratio r = G^{E,S}/G^{E,L} is larger the richer the solution in the component with the smaller molecules.

  3. The Concentration Dependence of the ΔS Term in the Gibbs Free Energy Function: Application to Reversible Reactions in Biochemistry

    Science.gov (United States)

    Gary, Ronald K.

    2004-01-01

    The concentration dependence of the ΔS term in the Gibbs free energy function is described in relation to its application to reversible reactions in biochemistry. An intuitive and non-mathematical argument for the concentration dependence of the ΔS term in the Gibbs free energy equation is derived and the applicability of the equation to…
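
    For context, the concentration dependence discussed here enters the free energy through the reaction quotient Q; in standard textbook notation (not quoted from the article itself):

```latex
\Delta G = \Delta H - T\,\Delta S, \qquad
\Delta G = \Delta G^{\circ} + RT \ln Q, \qquad
\Delta G^{\circ} = -RT \ln K
```

    The RT ln Q term is the concentration-dependent (entropy-of-mixing) contribution: at equilibrium Q = K and ΔG = 0, which is why a reaction with positive ΔG° can still run forward when product concentrations are kept low.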

  4. Fast covariance estimation for innovations computed from a spatial Gibbs point process

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Rubak, Ege

    In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo...

  5. GPU-accelerated Gibbs ensemble Monte Carlo simulations of Lennard-Jonesium

    Science.gov (United States)

    Mick, Jason; Hailat, Eyad; Russo, Vincent; Rushaidat, Kamel; Schwiebert, Loren; Potoff, Jeffrey

    2013-12-01

    This work describes an implementation of canonical and Gibbs ensemble Monte Carlo simulations on graphics processing units (GPUs). The pair-wise energy calculations, which consume the majority of the computational effort, are parallelized using the energetic decomposition algorithm. While energetic decomposition is relatively inefficient for traditional CPU-bound codes, the algorithm is ideally suited to the architecture of the GPU. The performance of the CPU and GPU codes are assessed for a variety of CPU and GPU combinations for systems containing between 512 and 131,072 particles. For a system of 131,072 particles, the GPU-enabled canonical and Gibbs ensemble codes were 10.3 and 29.1 times faster (GTX 480 GPU vs. i5-2500K CPU), respectively, than an optimized serial CPU-bound code. Due to overhead from memory transfers from system RAM to the GPU, the CPU code was slightly faster than the GPU code for simulations containing less than 600 particles. The critical temperature Tc∗=1.312(2) and density ρc∗=0.316(3) were determined for the tail corrected Lennard-Jones potential from simulations of 10,000 particle systems, and found to be in exact agreement with prior mixed field finite-size scaling calculations [J.J. Potoff, A.Z. Panagiotopoulos, J. Chem. Phys. 109 (1998) 10914].

  6. The use of computational thermodynamics for the determination of surface tension and Gibbs-Thomson coefficient of multicomponent alloys

    Science.gov (United States)

    Ferreira, D. J. S.; Bezerra, B. N.; Collyer, M. N.; Garcia, A.; Ferreira, I. L.

    2018-04-01

    The simulation of casting processes demands accurate information on the thermophysical properties of the alloy; however, such information is scarce in the literature for multicomponent alloys. Generally, metallic alloys applied in industry have more than three solute components. In the present study, a general solution of Butler's formulation for surface tension is presented for multicomponent alloys and is applied to quaternary Al-Cu-Si-Fe alloys, thus permitting the Gibbs-Thomson coefficient to be determined. This coefficient is a determining factor in the reliability of predictions furnished by microstructure growth models and by numerical computations of solidification thermal parameters, which depend on the thermophysical properties assumed in the calculations. The Gibbs-Thomson coefficient for ternary and quaternary alloys is seldom reported in the literature. A numerical model based on Powell's hybrid algorithm and a finite difference Jacobian approximation has been coupled to a Thermo-Calc TCAPI interface to assess the excess Gibbs energy of the liquid phase, permitting the liquidus temperature, latent heat, alloy density, surface tension and Gibbs-Thomson coefficient of Al-Cu-Si-Fe hypoeutectic alloys to be calculated, as an example of the calculation capabilities of the proposed method for multicomponent alloys. The computed results are compared with thermophysical properties of binary Al-Cu and ternary Al-Cu-Si alloys found in the literature and presented as a function of the Cu solute composition.

  7. Work and entropy production in generalised Gibbs ensembles

    International Nuclear Information System (INIS)

    Perarnau-Llobet, Martí; Riera, Arnau; Gallego, Rodrigo; Wilming, Henrik; Eisert, Jens

    2016-01-01

    Recent years have seen an enormously revived interest in the study of thermodynamic notions in the quantum regime. This applies both to the study of notions of work extraction in thermal machines in the quantum regime, as well as to questions of equilibration and thermalisation of interacting quantum many-body systems as such. In this work we bring together these two lines of research by studying work extraction in a closed system that undergoes a sequence of quenches and equilibration steps concomitant with free evolutions. In this way, we incorporate an important insight from the study of the dynamics of quantum many body systems: the evolution of closed systems is expected to be well described, for relevant observables and most times, by a suitable equilibrium state. We will consider three kinds of equilibration, namely to (i) the time averaged state, (ii) the Gibbs ensemble and (iii) the generalised Gibbs ensemble, reflecting further constants of motion in integrable models. For each effective description, we investigate notions of entropy production, the validity of the minimal work principle and properties of optimal work extraction protocols. While we keep the discussion general, much room is dedicated to the discussion of paradigmatic non-interacting fermionic quantum many-body systems, for which we identify significant differences with respect to the role of the minimal work principle. Our work not only has implications for experiments with cold atoms, but also can be viewed as suggesting a mindset for quantum thermodynamics where the role of the external heat baths is instead played by the system itself, with its internal degrees of freedom bringing coarse-grained observables to equilibrium. (paper)

  8. The application of computational thermodynamics and a numerical model for the determination of surface tension and Gibbs-Thomson coefficient of aluminum based alloys

    International Nuclear Information System (INIS)

    Jacome, Paulo A.D.; Landim, Mariana C.; Garcia, Amauri; Furtado, Alexandre F.; Ferreira, Ivaldo L.

    2011-01-01

    Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.

  9. Experimental Pragmatics and What Is Said: A Response to Gibbs and Moise.

    Science.gov (United States)

    Nicolle, Steve; Clark, Billy

    1999-01-01

    Attempted replication of Gibbs and Moise (1997) experiments regarding the recognition of a distinction between what is said and what is implicated. Results showed that, under certain conditions, subject selected implicatures when asked to select the paraphrase best reflecting what a speaker has said. Suggests that results can be explained with the…

  10. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies, the choosing seed node (CSN) random walk and the no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, respectively. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions are reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. And thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, characteristic features such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
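
    A no-retracing walk of the kind described, forbidding an immediate step back along the edge just traversed, can be sketched as follows (a generic illustration on a hypothetical toy graph, not the authors' code; when the current node's only neighbour is the previous node, retracing is allowed as the sole escape):

```python
import random

def nr_random_walk(adj, start, steps, seed=0):
    """No-retracing (NR) random walk: at each step choose uniformly among
    the current node's neighbours, excluding the node just visited
    (unless it is the only option, e.g. at a dead end)."""
    rng = random.Random(seed)
    path = [start]
    prev, cur = None, start
    for _ in range(steps):
        choices = [v for v in adj[cur] if v != prev] or adj[cur]
        nxt = rng.choice(choices)
        path.append(nxt)
        prev, cur = cur, nxt
    return path

# Hypothetical 5-node cycle graph as adjacency lists.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
walk = nr_random_walk(adj, 0, 20)
print(walk)  # 21 nodes; never reverses an edge on this graph
```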

  11. Uniqueness of Gibbs measure for Potts model with countable set of spin values

    International Nuclear Information System (INIS)

    Ganikhodjaev, N.N.; Rozikov, U.A.

    2004-11-01

    We consider a nearest-neighbor Potts model with countable spin values 0,1,..., and nonzero external field, on a Cayley tree of order k (with k+1 neighbors). We study translation-invariant 'splitting' Gibbs measures. We reduce the problem to the description of the solutions of some infinite system of equations. For any k≥1 and any fixed probability measure ν with ν(i)>0 on the set of all non-negative integers Φ={0,1,...}, we show that the set of translation-invariant splitting Gibbs measures contains at most one point, independently of the parameters of the Potts model with countable set of spin values on the Cayley tree. We also give a full description of the class of measures ν on Φ such that, with respect to each element of this class, our infinite system of equations has a unique solution {a_i, i=1,2,...}, where each a_i is an element of (0,1). (author)

  12. The application of computational thermodynamics and a numerical model for the determination of surface tension and Gibbs-Thomson coefficient of aluminum based alloys

    Energy Technology Data Exchange (ETDEWEB)

    Jacome, Paulo A.D.; Landim, Mariana C. [Department of Mechanical Engineering, Fluminense Federal University, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil); Garcia, Amauri, E-mail: amaurig@fem.unicamp.br [Department of Materials Engineering, University of Campinas, UNICAMP, PO Box 6122, 13083-970 Campinas, SP (Brazil); Furtado, Alexandre F.; Ferreira, Ivaldo L. [Department of Mechanical Engineering, Fluminense Federal University, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil)

    2011-08-20

    Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.

  13. A Bayesian sampling strategy for hazardous waste site characterization

    International Nuclear Information System (INIS)

    Skalski, J.R.

    1987-12-01

    Prior knowledge based on historical records or physical evidence often suggests the existence of a hazardous waste site. Initial surveys may provide additional or even conflicting evidence of site contamination. This article presents a Bayes sampling strategy that allocates sampling at a site using this prior knowledge. This sampling strategy minimizes the environmental risks of missing chemical or radionuclide hot spots at a waste site. The environmental risk is shown to be proportional to the size of the undetected hot spot or inversely proportional to the probability of hot spot detection. 12 refs., 2 figs
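
    The stated inverse proportionality between environmental risk and the probability of hot spot detection can be illustrated with the elementary independent-sampling formula P(detect) = 1 − (1 − p)^n. This is a simplification for illustration only, not the article's full Bayes allocation model:

```python
def detection_probability(p_single, n_samples):
    """P(detect at least once) = 1 - (1 - p)^n, assuming each of n samples
    independently detects the hot spot with probability p."""
    return 1.0 - (1.0 - p_single) ** n_samples

def relative_risk(p_single, n_samples):
    """Environmental risk taken as inversely proportional to the
    detection probability, as in the abstract's proportionality claim."""
    return 1.0 / detection_probability(p_single, n_samples)

# Diminishing returns: each extra sample raises P(detect) by less.
for n in (1, 5, 20):
    print(n, round(detection_probability(0.1, n), 3))
```

    A Bayesian allocation would additionally weight n per site by the prior probability of contamination, spending samples where prior evidence is strongest.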

  14. Monte Carlo Molecular Simulation with Isobaric-Isothermal and Gibbs-NPT Ensembles

    KAUST Repository

    Du, Shouhong

    2012-01-01

    This thesis presents Monte Carlo methods for simulations of phase behaviors of Lennard-Jones fluids. The isobaric-isothermal (NPT) ensemble and the Gibbs-NPT ensemble are introduced in detail. The NPT ensemble is employed to determine the phase diagram of a pure component. The reduced simulation results are verified by comparison with the equation of state by Johnson et al., and results with L-J parameters of methane agree well with the experimental measurements. We adopt the blocking method for variance estimation and error analysis of the simulation results. The relationship between variance and number of Monte Carlo cycles, error propagation and Random Number Generator performance are also investigated. We review the Gibbs-NPT ensemble employed for the phase equilibrium of a binary mixture. The phase equilibrium is achieved by performing three types of trial move: particle displacement, volume rearrangement and particle transfer. The simulation models and the simulation details are introduced. The simulation results of phase coexistence for methane and ethane are reported with comparison to the experimental data. Good agreement is found for a wide range of pressures. The contribution of this thesis work lies in the study of the error analysis with respect to the Monte Carlo cycles and number of particles in some interesting aspects.

  15. Monte Carlo Molecular Simulation with Isobaric-Isothermal and Gibbs-NPT Ensembles

    KAUST Repository

    Du, Shouhong

    2012-05-01

    This thesis presents Monte Carlo methods for simulations of phase behaviors of Lennard-Jones fluids. The isobaric-isothermal (NPT) ensemble and the Gibbs-NPT ensemble are introduced in detail. The NPT ensemble is employed to determine the phase diagram of a pure component. The reduced simulation results are verified by comparison with the equation of state by Johnson et al., and results with L-J parameters of methane agree well with the experimental measurements. We adopt the blocking method for variance estimation and error analysis of the simulation results. The relationship between variance and number of Monte Carlo cycles, error propagation and Random Number Generator performance are also investigated. We review the Gibbs-NPT ensemble employed for the phase equilibrium of a binary mixture. The phase equilibrium is achieved by performing three types of trial move: particle displacement, volume rearrangement and particle transfer. The simulation models and the simulation details are introduced. The simulation results of phase coexistence for methane and ethane are reported with comparison to the experimental data. Good agreement is found for a wide range of pressures. The contribution of this thesis work lies in the study of the error analysis with respect to the Monte Carlo cycles and number of particles in some interesting aspects.
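
    The blocking method mentioned for variance estimation groups a correlated time series into non-overlapping blocks and treats the block means as approximately independent samples. A generic sketch (assumptions: equal-sized blocks, leftover samples discarded; not the thesis code):

```python
import random

def block_variance(data, block_size):
    """Estimate the variance of the mean of a (possibly correlated) series:
    average non-overlapping blocks, then treat the block means as
    independent samples of the mean. Leftover samples are discarded."""
    n_blocks = len(data) // block_size
    means = [sum(data[i * block_size:(i + 1) * block_size]) / block_size
             for i in range(n_blocks)]
    grand = sum(means) / n_blocks
    var_of_means = sum((m - grand) ** 2 for m in means) / (n_blocks - 1)
    return var_of_means / n_blocks  # estimated variance of the overall mean

# Sanity check on uncorrelated unit-variance data: expect about var/N = 1e-4.
rng = random.Random(42)
series = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
print(block_variance(series, 100))  # of order 1e-4
```

    For correlated Monte Carlo data, the estimate is plotted against increasing block size and read off where it plateaus, which is where blocks become effectively independent.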

  16. Molar Surface Gibbs Energy of the Aqueous Solution of Ionic Liquid [C4mim][OAc]

    Institute of Scientific and Technical Information of China (English)

    TONG Jing; ZHENG Xu; TONG Jian; QU Ye; LIU Lu; LI Hui

    2017-01-01

    The values of density and surface tension for aqueous solutions of the ionic liquid (IL) 1-butyl-3-methylimidazolium acetate ([C4mim][OAc]) with various molalities were measured in the range of 288.15-318.15 K at intervals of 5 K. On the basis of thermodynamics, a semi-empirical model, the molar surface Gibbs energy model of the ionic liquid solution, which can be used to predict the surface tension or molar volume of solutions, was put forward. The predicted values of the surface tension for aqueous [C4mim][OAc] and the corresponding experimental ones were highly correlated and extremely similar. In terms of the concept of the molar surface Gibbs energy, a new Eötvös equation was obtained, and each parameter of the new equation has a clear physical meaning.

  17. How Does the Gibbs Inequality Condition Affect the Stability and Detachment of Floating Spheres from the Free Surface of Water?

    Science.gov (United States)

    Feng, Dong-xia; Nguyen, Anh V

    2016-03-01

    Floating objects on air-water interfaces are central to a number of everyday activities, from walking on water by insects to flotation separation of valuable minerals using air bubbles. The available theories show that a fine sphere can float if the surface tension force and buoyancy forces can support the sphere at the interface with an apical angle subtended by the circle of contact being larger than the contact angle. Here we show that the pinning of the contact line at a sharp edge, known as the Gibbs inequality condition, also plays a significant role in controlling the stability and detachment of floating spheres. Specifically, we truncated the spheres at different angles and used a force sensor device to measure the force needed to push the truncated spheres from the interface into water. We also developed a theoretical model to calculate the pushing force which, in combination with the experimental results, shows the different effects of the Gibbs inequality condition on the stability and detachment of the spheres from the water surface. For small angles of truncation, the Gibbs inequality condition does not affect sphere detachment, and hence the classical theories on the floatability of spheres are valid. For large truncated angles, the Gibbs inequality condition determines the tenacity of the particle-meniscus contact and the stability and detachment of floating spheres. In this case, the classical theories on the floatability of spheres are no longer valid. A critical truncated angle for the transition from the classical to the Gibbs inequality regimes of detachment was also established. The outcomes of this research advance our understanding of the behavior of floating objects, in particular the flotation separation of valuable minerals, which often contain various sharp edges on their crystal faces.

  18. The thermodynamic properties of the upper continental crust: Exergy, Gibbs free energy and enthalpy

    International Nuclear Information System (INIS)

    Valero, Alicia; Valero, Antonio; Vieillard, Philippe

    2012-01-01

    This paper shows a comprehensive database of the thermodynamic properties of the most abundant minerals of the upper continental crust. For those substances whose thermodynamic properties are not listed in the literature, their enthalpy and Gibbs free energy are calculated with 11 different estimation methods described in this study, with associated errors of up to 10% with respect to values published in the literature. Thanks to this procedure we have been able to make a first estimation of the enthalpy, Gibbs free energy and exergy of the bulk upper continental crust and of each of the nearly 300 most abundant minerals contained in it. Finally, the chemical exergy of the continental crust is compared to the exergy of the concentrated mineral resources. The numbers obtained indicate the huge chemical exergy wealth of the crust: 6 × 10^6 Gtoe. However, this study shows that only about 0.01% of that amount can be effectively used by man.

  19. The Gibbs Energy Basis and Construction of Boiling Point Diagrams in Binary Systems

    Science.gov (United States)

    Smith, Norman O.

    2004-01-01

    An illustration of how excess Gibbs energies of the components in binary systems can be used to construct boiling point diagrams is given. The underlying causes of the various types of behavior of the systems in terms of intermolecular forces and the method of calculating the coexisting liquid and vapor compositions in boiling point diagrams with…

  20. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    Directory of Open Access Journals (Sweden)

    Takehisa Yamamoto

    Full Text Available Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
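
    The farm/animal/isolate bootstrap described above can be sketched generically; the data and helper below are hypothetical, for illustration only:

```python
import random

def bootstrap_prevalence(farm_data, n_rep=1000, seed=7):
    """Two-stage bootstrap: resample farms with replacement, then animals
    within each chosen farm, and record the resistance prevalence per
    replication. farm_data: list of farms, each a list of 0/1 results."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_rep):
        isolates = []
        for _ in range(len(farm_data)):
            farm = rng.choice(farm_data)          # resample a farm
            isolates.extend(rng.choice(farm)      # resample its animals
                            for _ in range(len(farm)))
        estimates.append(sum(isolates) / len(isolates))
    return estimates

# Hypothetical design: 10 farms, 1 animal each, observed prevalence 0.3.
one_per_farm = [[1], [0], [0], [1], [0], [0], [0], [1], [0], [0]]
est = bootstrap_prevalence(one_per_farm)
print(round(sum(est) / len(est), 2))  # near the observed prevalence 0.3
```

    Repeating this with the same total sample split as, say, 5 farms of 2 animals and comparing the spread of the estimates reproduces the paper's comparison of precision across sampling strategies.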

  1. inverse gaussian model for small area estimation via gibbs sampling

    African Journals Online (AJOL)

    ADMIN

    For example, MacGibbon and Tomberlin (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...

  2. Martin Gibbs (1922-2006): Pioneer of (14)C research, sugar metabolism & photosynthesis; vigilant Editor-in-Chief of Plant Physiology; sage Educator; and humanistic Mentor.

    Science.gov (United States)

    Black, Clanton C

    2008-01-01

    The very personal touch of Professor Martin Gibbs as a worldwide advocate for photosynthesis and plant physiology was lost with his death in July 2006. Widely known for his engaging humorous personality and his humanitarian lifestyle, Martin Gibbs excelled as a strong international science diplomat, a science-family patriarch encouraging science and plant scientists around the world. Immediately after World War II he was a pioneer at the Brookhaven National Laboratory in the use of (14)C to elucidate carbon flow in metabolism and particularly carbon pathways in photosynthesis. His leadership on carbon metabolism and photosynthesis extended over four decades of working in collaboration with a host of students and colleagues. In 1962, he was selected as the Editor-in-Chief of Plant Physiology. That appointment initiated three decades of strong directional influence by Gibbs on plant research and photosynthesis. Plant Physiology became and remains a premier source of new knowledge about the vital and primary roles of plants in earth's environmental history and the energetics of our green-blue planet. His leadership and charismatic humanitarian character became the quintessence of excellence worldwide. Martin Gibbs was in every sense the personification of a model mentor, not only for scientists but also, as shown in his devotion to family. Here we pay tribute and honor to an exemplary humanistic mentor, Martin Gibbs.

  3. Effective sampling strategy to detect food and feed contamination

    NARCIS (Netherlands)

    Bouzembrak, Yamine; Fels, van der Ine

    2018-01-01

    Sampling plans for food safety hazards are used to determine whether a lot of food is contaminated (with microbiological or chemical hazards) or not. One of the components of a sampling plan is the sampling strategy. The aim of this study was to compare the performance of three

  4. Novel strategies for sample preparation in forensic toxicology.

    Science.gov (United States)

    Samanidou, Victoria; Kovatsi, Leda; Fragou, Domniki; Rentifis, Konstantinos

    2011-09-01

    This paper provides a review of novel strategies for sample preparation in forensic toxicology. The review initially outlines the principle of each technique, followed by sections addressing each class of abused drugs separately. The novel strategies currently reviewed focus on the preparation of various biological samples for the subsequent determination of opiates, benzodiazepines, amphetamines, cocaine, hallucinogens, tricyclic antidepressants, antipsychotics and cannabinoids. According to our experience, these analytes are the most frequently responsible for intoxications in Greece. The applications of techniques such as disposable pipette extraction, microextraction by packed sorbent, matrix solid-phase dispersion, solid-phase microextraction, polymer monolith microextraction, stir bar sorptive extraction and others, which are rapidly gaining acceptance in the field of toxicology, are currently reviewed.

  5. Oxygen concentration cell for the measurements of the standard molar Gibbs energy of formation of Nd6UO12(s)

    International Nuclear Information System (INIS)

    Sahu, Manjulata; Dash, Smruti

    2011-01-01

    The standard molar Gibbs energies of formation of Nd6UO12(s) have been measured using an oxygen concentration cell with yttria-stabilized zirconia as the solid electrolyte. ΔfG°m(T) for Nd6UO12(s) has been calculated using the measured data and required thermodynamic data from the literature. The calculated Gibbs energy expression can be given as: ΔfG°m(Nd6UO12, s, T)/(±2.3) kJ·mol⁻¹ = −6660.1 + 1.0898 (T/K). (author)

  6. Specification and comparative calculation of enthalpies and Gibbs formation energies of anhydrous lanthanide nitrates

    International Nuclear Information System (INIS)

    Del' Pino, Kh.; Chukurov, P.M.; Drakin, S.I.

    1980-01-01

    Analyzed are the results of experimental determination of the formation enthalpies of the anhydrous nitrates of lanthanum, cerium, praseodymium, neodymium and samarium. Using the method of comparative calculation, the enthalpies of formation of the anhydrous lanthanide and yttrium nitrates are computed. Calculated values of the enthalpies and Gibbs energies of formation of the anhydrous lanthanide nitrates are tabulated.

  7. Limited-sampling strategies for anti-infective agents: systematic review.

    Science.gov (United States)

    Sprague, Denise A; Ensom, Mary H H

    2009-09-01

    Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. 
However, most of the included studies did not provide an adequate description of the methods or
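
To see why a limited-sampling schedule must be validated rather than plugged into a naive AUC formula, consider a toy one-compartment model (all parameters invented for illustration): the linear trapezoidal rule that is accurate on a rich schedule overestimates the AUC on a sparse 4-point schedule.

```python
import math

def conc(t, dose=500.0, v=30.0, ke=0.2):
    """One-compartment IV bolus model: C(t) = (dose/V) * exp(-ke*t)."""
    return dose / v * math.exp(-ke * t)

def auc_trapezoid(times):
    """Linear trapezoidal AUC over the sampled time points."""
    cs = [conc(t) for t in times]
    return sum((cs[i] + cs[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(cs) - 1))

rich = [i * 0.25 for i in range(49)]      # sampling every 15 min for 12 h
limited = [0.0, 1.0, 4.0, 12.0]           # a sparse 4-point schedule
exact = 500.0 / 30.0 / 0.2 * (1 - math.exp(-0.2 * 12))  # closed-form AUC 0-12 h
print(exact, auc_trapezoid(rich), auc_trapezoid(limited))
```

The sparse schedule overestimates the true AUC by several percent here, which is why published limited-sampling strategies use regression equations or Bayesian estimation validated on separate index and validation groups rather than direct trapezoidal integration.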

  8. Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms

    International Nuclear Information System (INIS)

    Shangguan Danhua; Bao Jingdong

    2010-01-01

    We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.

  9. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
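
The ℓ1-minimization step can be sketched as a basis-pursuit linear program over an orthonormal Legendre basis sampled from its natural (uniform) distribution; the sizes and the sparse coefficient vector below are invented for illustration.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, P = 20, 31                      # 20 samples, polynomial orders 0..30
x = rng.uniform(-1.0, 1.0, N)     # Legendre's natural (uniform) sampling

# Measurement matrix of Legendre polynomials, orthonormal under the
# uniform probability measure on [-1, 1]: sqrt(2j+1) * P_j(x).
Phi = np.stack([np.sqrt(2 * j + 1) * legendre.legval(x, np.eye(P)[j])
                for j in range(P)], axis=1)

c_true = np.zeros(P)
c_true[[2, 7, 18]] = [1.0, -0.6, 0.4]     # a 3-sparse PC coefficient vector
y = Phi @ c_true

# Basis pursuit (min ||c||_1 s.t. Phi c = y) as an LP in the split (c+, c-).
A_eq = np.hstack([Phi, -Phi])
res = linprog(c=np.ones(2 * P), A_eq=A_eq, b_eq=y,
              bounds=(0, None), method="highs")
c_hat = res.x[:P] - res.x[P:]
print(np.flatnonzero(np.abs(c_hat) > 1e-6))   # recovered support
```

The LP is guaranteed to return a vector that interpolates the 20 samples with ℓ1 norm no larger than that of the true coefficients; whether it recovers the support exactly depends on the coherence of the sampled matrix, which is precisely what the paper's sampling strategies are designed to control.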

  10. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  11. Gibbs Free Energy of Hydrolytic Water Molecule in Acyl-Enzyme Intermediates of a Serine Protease: A Potential Application for Computer-Aided Discovery of Mechanism-Based Reversible Covalent Inhibitors.

    Science.gov (United States)

    Masuda, Yosuke; Yamaotsu, Noriyuki; Hirono, Shuichi

    2017-01-01

    In order to predict the potencies of mechanism-based reversible covalent inhibitors, the relationships between the calculated Gibbs free energy of the hydrolytic water molecule in acyl-trypsin intermediates and experimentally measured catalytic rate constants (kcat) were investigated. After obtaining representative solution structures by molecular dynamics (MD) simulations, hydration thermodynamics analyses using WaterMap™ were conducted. Consequently, we found for the first time that when the Gibbs free energy of the hydrolytic water molecule was lower, the logarithm of kcat was also lower. A hydrolytic water molecule with favorable Gibbs free energy may hydrolyze the acylated serine slowly. The Gibbs free energy of the hydrolytic water molecule might therefore be a useful descriptor for computer-aided discovery of mechanism-based reversible covalent inhibitors of hydrolytic enzymes.

  12. Excess Gibbs Energy for Ternary Lattice Solutions of Nonrandom Mixing

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Hae Young [DukSung Womens University, Seoul (Korea, Republic of)

    2008-12-15

    It is assumed for a three-component lattice solution that the number of ways of arranging particles randomly on the lattice follows a normal distribution of a linear combination of N12, N23 and N13, the numbers of nearest-neighbor interactions between different molecules. It is shown by random-number simulations that this assumption is reasonable. From this distribution, an approximate equation for the excess Gibbs energy of a three-component lattice solution is derived. Using this equation, several liquid-vapor equilibria are calculated and compared with the results from other equations.
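
The normality assumption for a linear combination of N12, N23, N13 can be probed with the kind of random-number simulation the abstract mentions; the ring-lattice geometry and the combination coefficients below are our own illustrative choices.

```python
import random, statistics
random.seed(2)

L = 300                                    # ring lattice of 300 sites
config = [1] * 100 + [2] * 100 + [3] * 100  # equimolar three-component mixture

def unlike_pairs(cfg):
    """Count nearest-neighbor contacts N12, N23, N13 on the ring."""
    n = {(1, 2): 0, (2, 3): 0, (1, 3): 0}
    for i in range(L):
        a, b = sorted((cfg[i], cfg[(i + 1) % L]))
        if a != b:
            n[(a, b)] += 1
    return n

# Distribution of one linear combination over many random arrangements.
vals = []
for _ in range(5000):
    random.shuffle(config)
    n = unlike_pairs(config)
    vals.append(1.0 * n[(1, 2)] + 0.5 * n[(2, 3)] - 0.5 * n[(1, 3)])

m, s = statistics.mean(vals), statistics.pstdev(vals)
within = sum(abs(v - m) <= s for v in vals) / len(vals)
print(round(within, 2))    # close to 0.68 if roughly normal
```

The fraction of arrangements falling within one standard deviation of the mean lands near the Gaussian value 0.683, consistent with the normal-distribution assumption.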

  13. A comparative proteomics method for multiple samples based on an 18O-reference strategy and a quantitation and identification-decoupled strategy.

    Science.gov (United States)

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

    Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy and a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy has greater accuracy and reliability than previously used comparison methods based on transferring comparisons or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins according to retention time and accurate mass. This strategy makes protein identification possible for all samples using a single pooled sample, and therefore gives good reproducibility in protein identification across multiple samples, and allows peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
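
Why a pooled reference makes multi-run comparisons consistent can be shown with invented numbers: a run-specific intensity bias multiplies the light (sample) and heavy (18O reference) signals equally, so it cancels in the ratio.

```python
# Hypothetical protein levels in three samples (arbitrary units).
true_abundance = {"s1": 2.0, "s2": 1.0, "s3": 4.0}
# Pooled 18O-labeled reference: equal amounts of all samples.
reference = sum(true_abundance.values()) / len(true_abundance)

# Per-run intensity bias (instrument drift differs between LC-MS runs).
run_scale = {"s1": 0.7, "s2": 1.3, "s3": 0.9}

# Light and heavy peaks in the same run share the same bias,
# so the light/heavy ratio is bias-free and comparable across runs.
ratios = {s: (run_scale[s] * true_abundance[s]) / (run_scale[s] * reference)
          for s in true_abundance}
print(ratios)   # ratios of ratios recover the true fold changes
```

Comparing any two samples through their common-reference ratios (e.g. s1 vs s2) recovers the true 2-fold difference even though the raw intensities are distorted differently in each run; a direct label-free comparison of the raw intensities would not.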

  14. Gibbs Ensemble Simulation on Polarizable Models: Vapor-liquid Equilibrium in Baranyai-Kiss Models of Water

    Czech Academy of Sciences Publication Activity Database

    Moučka, F.; Nezbeda, Ivo

    2013-01-01

    Roč. 360, DEC 25 (2013), s. 472-476 ISSN 0378-3812 Grant - others:GA MŠMT(CZ) LH12019 Institutional support: RVO:67985858 Keywords : multi-particle move monte carlo * Gibbs ensemble * vapor-liquid-equilibria Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.241, year: 2013

  15. Using Graphs of Gibbs Energy versus Temperature in General Chemistry Discussions of Phase Changes and Colligative Properties

    Science.gov (United States)

    Hanson, Robert M.; Riley, Patrick; Schwinefus, Jeff; Fischer, Paul J.

    2008-01-01

    The use of qualitative graphs of Gibbs energy versus temperature is described in the context of chemical demonstrations involving phase changes and colligative properties at the general chemistry level. (Contains 5 figures and 1 note.)

  16. Some clarifications concerning the thermodynamic functions derived from the Gibbs free energy [Algunas precisiones en torno a las funciones termodinámicas energía libre de Gibbs]

    OpenAIRE

    Solaz Portolés, Joan Josep; Quílez Pardo, Juan

    2001-01-01

    The aim of this study is to elucidate some didactic misunderstandings related to the use and applicability of the delta functions ∆G, ∆rG and ∆rG°, which derive from the thermodynamic potential Gibbs free energy, G.

  17. Entropy Calculation of Reversible Mixing of Ideal Gases Shows Absence of Gibbs Paradox

    OpenAIRE

    Oleg Borodiouk; Vasili Tatarin

    1999-01-01

    Abstract: We consider the work of reversible mixing of ideal gases using a real process. No assumptions were made concerning infinite shifts, an infinite number of cycles or infinite work, allowing an accurate calculation of the entropy resulting from reversible mixing of ideal gases. We derived an equation showing the dependence of this entropy on the difference in potential of the mixed gases, which is evidence for the absence of Gibbs' paradox.

  18. Zoeal morphology of Pachygrapsus transversus (Gibbes) (Decapoda, Grapsidae) reared in the laboratory

    Directory of Open Access Journals (Sweden)

    Ana Luiza Brossi-Garcia

    1997-12-01

    Full Text Available Ovigerous females of Pachygrapsus transversus (Gibbes, 1850) were collected on the Praia Dura and Saco da Ribeira beaches, Ubatuba, São Paulo, Brazil. Larvae were individually reared in a climatic room at 25ºC, at salinities of 28, 32 and 35‰, and under natural photoperiod conditions. The best rearing results were observed at 35‰ salinity. Seven zoeal instars were observed, drawn and described in detail. The data are compared with those obtained for P. gracilis (Saussure, 1858).

  19. Solid oxide galvanic cell for determination of Gibbs energy of formation of Tb6UO12(s)

    International Nuclear Information System (INIS)

    Sahu, Manjulata; Dash, Smruti

    2013-01-01

    The citrate-nitrate combustion method was used to synthesise Tb6UO12(s). The Gibbs energy of formation of Tb6UO12(s) was measured using a solid oxide galvanic cell in the temperature range 957-1175 K. (author)

  20. Measurement of radioactivity in the environment - Soil - Part 2: Guidance for the selection of the sampling strategy, sampling and pre-treatment of samples

    International Nuclear Information System (INIS)

    2007-01-01

    This part of ISO 18589 specifies the general requirements, based on ISO 11074 and ISO/IEC 17025, for all steps in the planning (desk study and area reconnaissance) of the sampling and the preparation of samples for testing. It includes the selection of the sampling strategy, the outline of the sampling plan, the presentation of general sampling methods and equipment, as well as the methodology of the pre-treatment of samples adapted to the measurements of the activity of radionuclides in soil. This part of ISO 18589 is addressed to the people responsible for determining the radioactivity present in soil for the purpose of radiation protection. It is applicable to soil from gardens, farmland, urban or industrial sites, as well as soil not affected by human activities. This part of ISO 18589 is applicable to all laboratories regardless of the number of personnel or the range of the testing performed. When a laboratory does not undertake one or more of the activities covered by this part of ISO 18589, such as planning, sampling or testing, the corresponding requirements do not apply. Information is provided on scope, normative references, terms and definitions and symbols, principle, sampling strategy, sampling plan, sampling process, pre-treatment of samples and recorded information. Five annexes inform about selection of the sampling strategy according to the objectives and the radiological characterization of the site and sampling areas, diagram of the evolution of the sample characteristics from the sampling site to the laboratory, example of sampling plan for a site divided in three sampling areas, example of a sampling record for a single/composite sample and example for a sample record for a soil profile with soil description. A bibliography is provided

  1. Standard Gibbs energies of formation and equilibrium constants from ab-initio calculations: Covalent dimerization of NO2 and synthesis of NH3

    International Nuclear Information System (INIS)

    Awasthi, Neha; Ritschel, Thomas; Lipowsky, Reinhard; Knecht, Volker

    2013-01-01

    Highlights: • ΔG and Keq for NO2 dimerization and NH3 synthesis calculated via ab-initio methods. • Vis-à-vis experiments, W1 and CCSD(T) are accurate and G3B3 also does quite well. • CBS-APNO is most accurate for the NH3 reaction but shows limitations in modeling NO2. • Temperature dependence of ΔG and Keq is calculated for the NH3 reaction. • Good agreement of calculated Keq with experiments and the van't Hoff approximation. -- Abstract: Standard quantum chemical methods are used for accurate calculation of thermochemical properties such as enthalpies of formation, entropies and Gibbs energies of formation. Equilibrium reactions are widely investigated and experimental measurements often lead to a range of reaction Gibbs energies and equilibrium constants. It is useful to calculate these equilibrium properties from quantum chemical methods in order to address the experimental differences. Furthermore, most standard calculation methods differ in accuracy and in the feasible system size. Hence, a systematic comparison of equilibrium properties calculated with different numerical algorithms would provide a useful reference. We select two well-known gas-phase equilibrium reactions with small molecules: covalent dimer formation of NO2 (2NO2 ⇌ N2O4) and the synthesis of NH3 (N2 + 3H2 ⇌ 2NH3). We test four quantum chemical methods, denoted G3B3, CBS-APNO, W1 and CCSD(T) with aug-cc-pVXZ basis sets (X = 2, 3, and 4), to obtain thermochemical data for NO2, N2O4, and NH3. The calculated standard formation Gibbs energies ΔfG° are used to calculate standard reaction Gibbs energies ΔrG° and standard equilibrium constants Keq for the two reactions. Standard formation enthalpies ΔfH° are calculated in a more reliable way using high-level methods such as W1 and CCSD(T). Standard entropies S° for the molecules are calculated well within the range of experiments for all methods; however, the values of standard formation

  2. Calculation of Gibbs energy of Zr-Al-Ni, Zr-Al-Cu, Al-Ni-Cu and Zr-Al-Ni-Cu liquid alloys based on quasiregular solution model

    International Nuclear Information System (INIS)

    Li, H.Q.; Yang, Y.S.; Tong, W.H.; Wang, Z.Y.

    2007-01-01

    With the effects of electronic structure and atomic size introduced, the mixing enthalpy as well as the Gibbs energy of the ternary Zr-Al-Cu, Ni-Al-Cu, Zr-Ni-Al and quaternary Zr-Al-Ni-Cu systems are calculated based on a quasiregular solution model. The computed results agree well with the experimental data. The sequence of Gibbs energies of the different systems is: G(Zr-Al-Ni-Cu) < G(Zr-Al-Ni) < G(Zr-Al-Cu) < G(Cu-Al-Ni). For Zr-Al-Cu, Ni-Al-Cu and Zr-Ni-Al, the lowest Gibbs energy is located in the composition ranges XZr = 0.39-0.61, XAl = 0.38-0.61; XNi = 0.39-0.61, XAl = 0.38-0.60; and XZr = 0.32-0.67, XAl = 0.32-0.66, respectively. For the Zr-Ni-Al-Cu system with 66.67% Zr, the lowest Gibbs energy is obtained in the region XAl = 0.63-0.80, XNi = 0.14-0.24.

  3. Entropy Calculation of Reversible Mixing of Ideal Gases Shows Absence of Gibbs Paradox

    Directory of Open Access Journals (Sweden)

    Oleg Borodiouk

    1999-05-01

    Full Text Available Abstract: We consider the work of reversible mixing of ideal gases using a real process. No assumptions were made concerning infinite shifts, an infinite number of cycles or infinite work, allowing an accurate calculation of the entropy resulting from reversible mixing of ideal gases. We derived an equation showing the dependence of this entropy on the difference in potential of the mixed gases, which is evidence for the absence of Gibbs' paradox.

  4. Influence of Wilbraham-Gibbs Phenomenon on Digital Stochastic Measurement of EEG Signal Over an Interval

    Directory of Open Access Journals (Sweden)

    Sovilj P.

    2014-10-01

    Full Text Available Measurement methods based on the approach named Digital Stochastic Measurement have been introduced, and several prototype and small-series commercial instruments have been developed based on these methods. These methods have been investigated mostly for various types of stationary signals, but also for non-stationary signals. This paper presents, analyzes and discusses digital stochastic measurement of the electroencephalography (EEG) signal in the time domain, emphasizing the influence of the Wilbraham-Gibbs phenomenon. An increase in measurement error related to the Wilbraham-Gibbs phenomenon is found. If the EEG signal is measured with a 20 ms measurement interval, the average maximal error relative to the range of the input signal is 16.84 %. If the measurement interval is extended to 2 s, the average maximal error relative to the range of the input signal is significantly lowered - down to 1.37 %. Absolute errors are compared with the error limit recommended by the Organisation Internationale de Métrologie Légale (OIML) and with the quantization steps of advanced EEG instruments with 24-bit A/D conversion.
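
The Wilbraham-Gibbs overshoot itself is easy to reproduce: partial Fourier sums of a square wave overshoot the jump by about 9 % near the discontinuity, no matter how many terms are kept.

```python
import math

def square_wave_partial_sum(t, n_terms):
    """Fourier partial sum of a unit square wave: 4/pi * sum sin((2k+1)t)/(2k+1)."""
    return 4 / math.pi * sum(math.sin((2 * k + 1) * t) / (2 * k + 1)
                             for k in range(n_terms))

# The overshoot near the discontinuity at t = 0 does not shrink as terms
# are added; its peak converges to ~1.179 (about 8.95 % of the jump of 2).
for n in (10, 100, 1000):
    peak = max(square_wave_partial_sum(i * math.pi / (20 * n), n)
               for i in range(1, 40))
    print(n, round(peak, 4))
```

The peak stays near 1.179 for every n, which is why simply adding harmonics cannot remove the error at a signal discontinuity; widening the measurement interval, as in the paper, reduces the relative contribution of the overshoot instead.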

  5. Ergodic time-reversible chaos for Gibbs' canonical oscillator

    International Nuclear Information System (INIS)

    Hoover, William Graham; Sprott, Julien Clinton; Patra, Puneet Kumar

    2015-01-01

    Nosé's pioneering 1984 work inspired a variety of time-reversible deterministic thermostats. Though several groups have developed successful doubly-thermostated models, single-thermostat models have failed to generate Gibbs' canonical distribution for the one-dimensional harmonic oscillator. A 2001 doubly-thermostated model, claimed to be ergodic, has a singly-thermostated version. Though neither of these models is ergodic, this work suggests a successful route toward singly-thermostated ergodicity. We illustrate both ergodicity and its lack for these models using phase-space cross sections and Lyapunov instability as diagnostic tools. - Highlights: • We develop cross-section and Lyapunov methods for diagnosing ergodicity. • We apply these methods to several thermostatted-oscillator problems. • We demonstrate the nonergodicity of previous work. • We find a novel family of ergodic thermostatted-oscillator problems.

  6. Unifying hydrotropy under Gibbs phase rule.

    Science.gov (United States)

    Shimizu, Seishi; Matubayasi, Nobuyuki

    2017-09-13

    The task of elucidating the mechanism of solubility enhancement using hydrotropes has been hampered by the wide variety of phase behaviour that hydrotropes can exhibit, encompassing near-ideal aqueous solution, self-association, micelle formation, and micro-emulsions. Instead of taking a field guide or encyclopedic approach to classify hydrotropes into different molecular classes, we take a rational approach aiming at constructing a unified theory of hydrotropy based upon the first principles of statistical thermodynamics. Achieving this aim can be facilitated by the two key concepts: (1) the Gibbs phase rule as the basis of classifying the hydrotropes in terms of the degrees of freedom and the number of variables to modulate the solvation free energy; (2) the Kirkwood-Buff integrals to quantify the interactions between the species and their relative contributions to the process of solubilization. We demonstrate that the application of the two key concepts can in principle be used to distinguish the different molecular scenarios at work under apparently similar solubility curves observed from experiments. In addition, a generalization of our previous approach to solutes beyond dilution reveals the unified mechanism of hydrotropy, driven by a strong solute-hydrotrope interaction which overcomes the apparent per-hydrotrope inefficiency due to hydrotrope self-clustering.
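
The first key concept, counting degrees of freedom with the Gibbs phase rule F = C − P + 2, is a one-line computation (the component counts below are illustrative, not taken from the paper):

```python
def degrees_of_freedom(components, phases):
    """Gibbs phase rule: F = C - P + 2."""
    return components - phases + 2

# Water + hydrotrope + solute in a single liquid phase:
# T, p and two independent composition variables remain free.
print(degrees_of_freedom(components=3, phases=1))
# With an excess solid solute phase in equilibrium (a solubility curve),
# one degree of freedom is consumed.
print(degrees_of_freedom(components=3, phases=2))
```

Classifying hydrotrope systems by F in this way tells you how many variables (temperature, pressure, compositions) can be independently tuned to modulate the solvation free energy before the phase behaviour is fully pinned down.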

  7. Estimation of Genetic Parameters for Direct and Maternal Effects in Growth Traits of Sangsari Sheep Using Gibbs Sampling

    Directory of Open Access Journals (Sweden)

    Zohreh Yousefi

    2016-11-01

    Full Text Available Introduction Small ruminants, especially native breeds, play an important socio-economic role in the livelihoods of a considerable part of the human population in the tropics. Therefore, an integrated effort in terms of management and genetic improvement to enhance production is of crucial importance. Knowledge of genetic variation and co-variation among traits is required both for the design of effective sheep breeding programs and for the accurate prediction of genetic progress from these programs. Body weight and growth traits are among the economically important traits in sheep production, especially in Iran, where lamb sales are the main source of income for sheep breeders and other products are of secondary importance. Although mutton is the most important source of protein in Iran, meat production from sheep does not cover the increasing consumer demand, and increasing sheep numbers to raise meat production is limited by the low quality and quantity of forage range. Therefore, enhancing meat production should be achieved by selecting the animals with maximum genetic merit as next-generation parents. To design an efficient improvement program and genetic evaluation system that maximizes the response to selection for economically important traits, accurate estimates of the genetic parameters and the genetic relationships between the traits are necessary. Studies of various sheep breeds have shown that both direct and maternal genetic influences are important for lamb growth. When growth traits are included in the breeding goal, both direct and maternal genetic effects should be taken into account in order to achieve optimum genetic progress. The objective of this study was to estimate variance components and heritability for growth traits by fitting six animal models in Sangsari sheep using Gibbs sampling. Material and Method Sangsari is a fat-tailed and relatively small-sized breed of sheep

  8. Mars Sample Return - Launch and Detection Strategies for Orbital Rendezvous

    Science.gov (United States)

    Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.

    2011-01-01

    This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/cache rover in 2018, an orbiter with an Earth return vehicle in 2022, and a fetch rover and ascent vehicle in 2024. Strategies are presented to launch the sample into a coplanar orbit with the Orbiter which facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which provide multiple launch opportunities with similar geometries for detection and rendezvous.

  10. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics, which assumes that data are independent and identically distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  11. A simple approach to the solvent reorganisation Gibbs free energy in electron transfer reactions of redox metalloproteins

    DEFF Research Database (Denmark)

    Ulstrup, Jens

    1999-01-01

    We discuss a simple model for the environmental reorganisation Gibbs free energy, E-r, in electron transfer between a metalloprotein and a small reaction partner. The protein is represented as a dielectric globule with low dielectric constant, the metal centres as conducting spheres, all embedded...

  12. Solubility and Standard Gibbs energies of transfer of alkali metal perchlorates, tetramethyl- and tetraethylammonium from water to aqua-acetone solvents

    International Nuclear Information System (INIS)

    Kireev, A.A.; Pak, T.G.; Bezuglyj, V.D.

    1996-01-01

    Solubilities of KClO4, RbClO4, CsClO4, (CH3)4NClO4, and (C2H5)4NClO4 in water and water-acetone mixtures are determined by the method of isothermal saturation at 298.15 K. Dissociation constants of the alkali metal perchlorates are found by a conductometric method. Solubility products and standard Gibbs energies of transfer of the corresponding electrolytes from water into water-acetone solvents are calculated. The dependence of the transfer Gibbs energy on solvent composition is explained by preferential solvation of the cations by acetone molecules and of the anions by water molecules. The features of the tetraalkylammonium ions are explained by large changes in the energy of cavity formation for these ions

  13. Adaptive sampling strategies with high-throughput molecular dynamics

    Science.gov (United States)

    Clementi, Cecilia

    Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts, rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the system's configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.

  14. Standard molar Gibbs free energy of formation of URh3(s)

    International Nuclear Information System (INIS)

    Prasad, Rajendra; Sayi, Y.S.; Radhakrishna, J.; Yadav, C.S.; Shankaran, P.S.; Chhapru, G.C.

    1992-01-01

    Equilibrium partial pressures of CO(g) over the system (UO2(s) + C(s) + Rh(s) + URh3(s)) were measured in the temperature range 1327-1438 K. The standard molar Gibbs free energy of formation of URh3 (ΔfG°m) in this temperature range can be expressed as ΔfG°m(URh3, s, T) ± 3.0 (kJ/mol) = -348.165 + 0.03144 T(K). The second- and third-law enthalpies of formation, ΔfH°m(URh3, s, 298.15 K), are (-318.4 ± 3.0) and (298.3 ± 2.5) kJ/mol, respectively. (author). 7 refs., 3 tabs

  15. Coefficients of interphase distribution and Gibbs energy of the transfer of nicotinic acid from water into aqueous solutions of ethanol and dimethylsulfoxide

    Science.gov (United States)

    Grazhdan, K. V.; Gamov, G. A.; Dushina, S. V.; Sharnin, V. A.

    2012-11-01

    Coefficients of the interphase distribution of nicotinic acid are determined in aqueous solution systems of ethanol-hexane and DMSO-hexane at 25.0 ± 0.1°C. They are used to calculate the Gibbs energy of the transfer of nicotinic acid from water into aqueous solutions of ethanol and dimethylsulfoxide. The Gibbs energy values for the transfer of the molecular and zwitterionic forms of nicotinic acid are obtained by means of UV spectroscopy. The diametrically opposite effect of the composition of binary solvents on the transfer of the molecular and zwitterionic forms of nicotinic acid is noted.

  16. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    Science.gov (United States)

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
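
    The Gibbs sampler framework underlying GSLD alternates draws from full conditional distributions. As a minimal, generic illustration (a bivariate Gaussian toy model, not the GSLD method itself; all parameters below are invented), the alternating-conditional update looks like this:

```python
import random

def gibbs_bivariate_normal(rho, n_steps, seed=0):
    """Gibbs sampler for a bivariate standard normal with correlation rho.

    Each full conditional is Gaussian: x | y ~ N(rho*y, 1 - rho^2), and
    symmetrically for y | x. GSLD applies the same alternating-conditional
    idea to the alchemical variable lambda and the atomic coordinates.
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = (1.0 - rho * rho) ** 0.5
    samples = []
    for _ in range(n_steps):
        x = rng.gauss(rho * y, sd)  # draw x from p(x | y)
        y = rng.gauss(rho * x, sd)  # draw y from p(y | x)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_steps=20000)
burn = samples[2000:]                                  # discard burn-in
mean_x = sum(s[0] for s in burn) / len(burn)           # should be near 0
corr = sum(s[0] * s[1] for s in burn) / len(burn)      # E[xy], near rho
```

    The chain reproduces the target moments (zero mean, correlation rho) without ever evaluating the joint density, which is what makes the Gibbs framework attractive when only conditionals are tractable.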

  17. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

    The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments was used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data was used to calculate the Event Mean Concentration (EMC) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. 17 different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method for reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences between taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in investing in collection of more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments are needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated to the 90% confidence interval ratio of the estimated/measured SMCs (R2 = 0.49; P sampling frequency needed to accurately estimate SMCs of pollutants.
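
    The EMC/SMC bookkeeping described above reduces to two weighted averages, sketched below with invented numbers (equal-interval sampling is assumed, so per-sample volumes are proportional to the flows):

```python
def event_mean_concentration(flows, concs):
    """Flow-weighted event mean concentration (EMC).

    flows: discharge at each sample time (e.g. L/s); concs: matching
    pollutant concentrations (e.g. mg/L). With equal time spacing the
    discharged volumes are proportional to the flows.
    """
    return sum(q * c for q, c in zip(flows, concs)) / sum(flows)

def site_mean_concentration(events):
    """Volume-weighted average of per-event EMCs, i.e. the SMC.

    events: list of (event_volume, emc) pairs.
    """
    total_volume = sum(v for v, _ in events)
    return sum(v * emc for v, emc in events) / total_volume

# hypothetical numbers, purely illustrative
emc1 = event_mean_concentration([10, 40, 20], [120, 300, 90])  # 15000/70
events = [(70, emc1), (30, 150.0)]
smc = site_mean_concentration(events)  # (15000 + 4500) / 100 = 195.0
```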

  18. Direct measurements of the Gibbs free energy of OH using a CW tunable laser

    Science.gov (United States)

    Killinger, D. K.; Wang, C. C.

    1979-01-01

    The paper describes an absorption measurement for determining the Gibbs free energy of OH generated in a mixture of water and oxygen vapor. These measurements afford a direct verification of the accuracy of thermochemical data of H2O at high temperatures and pressures. The results indicate that values for the heat capacity of H2O obtained through numerical computations are correct within an experimental uncertainty of 0.15 cal/mole K.

  19. Using self-consistent Gibbs free energy surfaces to calculate size distributions of neutral and charged clusters for the sulfuric acid-water binary system

    Science.gov (United States)

    Smith, J. A.; Froyd, K. D.; Toon, O. B.

    2012-12-01

    We construct tables of reaction enthalpies and entropies for the association reactions involving sulfuric acid vapor, water vapor, and the bisulfate ion. These tables are created from experimental measurements and quantum chemical calculations for molecular clusters and a classical thermodynamic model for larger clusters. These initial tables are not thermodynamically consistent. For example, the Gibbs free energy of associating a cluster consisting of one acid molecule and two water molecules depends on the order in which the cluster was assembled: add two waters and then the acid or add an acid and a water and then the second water. We adjust the values within the tables using the method of Lagrange multipliers to minimize the adjustments and produce self-consistent Gibbs free energy surfaces for the neutral clusters and the charged clusters. With the self-consistent Gibbs free energy surfaces, we calculate size distributions of neutral and charged clusters for a variety of atmospheric conditions. Depending on the conditions, nucleation can be dominated by growth along the neutral channel or growth along the ion channel followed by ion-ion recombination.
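
    The Lagrange-multiplier adjustment can be illustrated on the paper's own example, the (1 acid, 2 water) cluster, whose two assembly routes must yield the same total Gibbs free energy. For a single cycle-closure constraint the minimal-adjustment solution is available in closed form (the step energies below are invented placeholders, not values from the tables):

```python
def make_consistent(g, a):
    """Minimally adjust step free energies g so the cycle-closure
    constraint a . (g + delta) = 0 holds, minimizing sum(delta^2).

    Lagrange-multiplier solution: delta = -a * (a . g) / (a . a).
    g: step free energies; a: +1/-1 coefficients encoding
    'path 1 minus path 2' around the thermodynamic cycle.
    """
    residual = sum(ai * gi for ai, gi in zip(a, g))
    norm = sum(ai * ai for ai in a)
    return [gi - ai * residual / norm for ai, gi in zip(a, g)]

# Two routes to the (1 acid, 2 water) cluster; hypothetical kJ/mol values.
# Path 1: water+water, then +acid.  Path 2: acid+water, then +water.
g = [-8.0, -30.0, -20.0, -19.5]   # steps: w+w, (ww)+a, a+w, (aw)+w
a = [1.0, 1.0, -1.0, -1.0]        # path-1 sum minus path-2 sum must vanish
g_adj = make_consistent(g, a)
closure = sum(ai * gi for ai, gi in zip(a, g_adj))  # exactly zero
```

    With many overlapping cycles, as in the full tables, the same idea becomes a constrained least-squares problem with one multiplier per cycle.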

  20. Mendelian breeding units versus standard sampling strategies: mitochondrial DNA variation in southwest Sardinia

    Directory of Open Access Journals (Sweden)

    Daria Sanna

    2011-01-01

    Full Text Available We report a sampling strategy based on Mendelian Breeding Units (MBUs, representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits.

  1. Gibbs energies of formation of zircon (ZrSiO4), thorite (ThSiO4), and phenacite (Be2SiO4)

    International Nuclear Information System (INIS)

    Schuiling, R.D.; Vergouwen, L.; Rijst, H. van der

    1976-01-01

    Zircon, thorite, and phenacite are very refractory compounds which do not yield to solution calorimetry. In order to obtain approximate Gibbs energies of formation for these minerals, their reactions with a number of silica-undersaturated compounds (silicates or oxides) were studied. Conversely, baddeleyite (ZrO2), thorianite (ThO2), and bromellite (BeO) were reacted with the appropriate silicates. As the Gibbs energies of reaction of the undersaturated compounds with SiO2 are known, the experiments yield the following data: ΔG°(298 K, 1 bar) = -459.02 ± 1.04 kcal for zircon, -489.67 ± 1.04 for thorite, and -480.20 ± 1.01 for phenacite

  2. Generalized Gibbs distribution and energy localization in the semiclassical FPU problem

    Science.gov (United States)

    Hipolito, Rafael; Danshita, Ippei; Oganesyan, Vadim; Polkovnikov, Anatoli

    2011-03-01

    We investigate dynamics of the weakly interacting quantum mechanical Fermi-Pasta-Ulam (qFPU) model in the semiclassical limit below the stochasticity threshold. Within this limit we find that initial quantum fluctuations lead to the damping of FPU oscillations and relaxation of the system to a slowly evolving steady state with energy localized within a few momentum modes. We find that in large systems this state can be described by the generalized Gibbs ensemble (GGE), with the Lagrange multipliers being very weak functions of time. This ensemble gives an accurate description of the instantaneous correlation functions, both quadratic and quartic. Based on these results we conjecture that the GGE generically appears as a prethermalized state in weakly non-integrable systems.

  3. Dynamics of macro-observables and space-time inhomogeneous Gibbs ensembles

    International Nuclear Information System (INIS)

    Lanz, L.; Lupieri, G.

    1978-01-01

    The relationship between the classical description of a macro-system and quantum mechanics of its particles is considered within the framework recently developed by Ludwig. A procedure is given to define probability measures on the trajectory space of a macrosystem which yields a statistical description of the dynamics of a macrosystem. The basic tool in this treatment is a new concept of space-time inhomogeneous Gibbs ensemble, defined in N-body quantum mechanics. In the Gaussian approximation of the probabilities the results of Zubarev's theory based on the ''nonequilibrium statistical operator'' are recovered. The present ''embedding'' of the description of a macrosystem inside the N-body theory allows for a joint description of a macrosystem and a microsubsystem of it, and a ''macroscopical'' calculation of the statistical operator of the microsystem is indicated. (author)

  4. Sampling strategies for indoor radon investigations

    International Nuclear Information System (INIS)

    Prichard, H.M.

    1983-01-01

    Recent investigations prompted by concern about the environmental effects of residential energy conservation have produced many accounts of indoor radon concentrations far above background levels. In many instances time-normalized annual exposures exceeded the 4 WLM per year standard currently used for uranium mining. Further investigations of indoor radon exposures are necessary to judge the extent of the problem and to estimate the practicality of health effects studies. A number of trends can be discerned as more indoor surveys are reported. It is becoming increasingly clear that local geological factors play a major, if not dominant, role in determining the distribution of indoor radon concentrations in a given area. Within a given locale, indoor radon concentrations tend to be log-normally distributed, and sample means differ markedly from one region to another. The appreciation of geological factors and the general log-normality of radon distributions will improve the accuracy of population dose estimates and facilitate the design of preliminary health effects studies. The relative merits of grab samples, short- and long-term integrated samples, and more complicated dose assessment strategies are discussed in the context of several types of epidemiological investigations. A new passive radon sampler with a 24-hour integration time is described and evaluated as a tool for pilot investigations
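
    The log-normality noted here matters for dose estimation: the arithmetic mean of a log-normal population exceeds its median, so simply averaging log-transformed grab samples understates average exposure unless the variance correction exp(mu + sigma^2/2) is applied. A minimal sketch on synthetic data (the numbers are invented, not survey values):

```python
import math
import random

def lognormal_mean_estimate(samples):
    """Estimate the arithmetic mean of a log-normally distributed
    population from its log-space statistics: exp(mu + sigma^2 / 2).
    Using the geometric mean (the median) alone would understate
    the mean whenever sigma > 0.
    """
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / (n - 1)
    return math.exp(mu + var / 2.0)

rng = random.Random(42)
# synthetic radon-like data: median 50 Bq/m^3, log-sd 0.8 (hypothetical)
data = [math.exp(rng.gauss(math.log(50.0), 0.8)) for _ in range(5000)]
est_mean = lognormal_mean_estimate(data)
true_mean = 50.0 * math.exp(0.8 ** 2 / 2.0)  # about 68.9, well above the median
```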

  5. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
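
    The first listed method, simulation of the urn experiment, is straightforward to sketch for Wallenius' distribution: balls are removed one at a time, with the odds of each draw biased by the weight omega of the red balls (a generic illustration; the parameter values below are arbitrary):

```python
import random

def wallenius_urn_sample(m1, m2, n, omega, rng):
    """One variate from Wallenius' noncentral hypergeometric distribution
    by direct simulation of the urn experiment: n sequential draws without
    replacement, red balls having weight omega relative to white balls.
    Returns the number of red balls drawn.
    """
    red, white, drawn_red = m1, m2, 0
    for _ in range(n):
        p_red = red * omega / (red * omega + white)
        if rng.random() < p_red:
            red -= 1
            drawn_red += 1
        else:
            white -= 1
    return drawn_red

rng = random.Random(1)
# 10 red and 10 white balls, 10 draws, red twice as likely per unit ball
draws = [wallenius_urn_sample(10, 10, 10, 2.0, rng) for _ in range(20000)]
mean_red = sum(draws) / len(draws)  # biased above the central value of 5
```

    Direct simulation is exact but slow for large n, which is why the paper develops the faster inversion and rejection methods listed above.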

  6. Gibbs Measures Over Locally Tree-Like Graphs and Percolative Entropy Over Infinite Regular Trees

    Science.gov (United States)

    Austin, Tim; Podder, Moumanti

    2018-03-01

    Consider a statistical physics model on the d-regular infinite tree T_d described by a set of interactions Φ. Let G_n be a sequence of finite graphs with vertex sets V_n that locally converge to T_d. From Φ one can construct a sequence of corresponding models on the graphs G_n. Let μ_n be the resulting Gibbs measures. Here we assume that μ_n converges to some limiting Gibbs measure μ on T_d in the local weak-* sense, and study the consequences of this convergence for the specific entropies |V_n|^{-1} H(μ_n). We show that the limit supremum of |V_n|^{-1} H(μ_n) is bounded above by the percolative entropy H_perc(μ), a function of μ itself, and that |V_n|^{-1} H(μ_n) actually converges to H_perc(μ) in case Φ exhibits strong spatial mixing on T_d. When it is known to exist, the limit of |V_n|^{-1} H(μ_n) is most commonly shown to be given by the Bethe ansatz. Percolative entropy gives a different formula, and we do not know how to connect it to the Bethe ansatz directly. We discuss a few examples of well-known models for which the latter result holds in the high temperature regime.

  7. Chapter 2: Sampling strategies in forest hydrology and biogeochemistry

    Science.gov (United States)

    Roger C. Bales; Martha H. Conklin; Branko Kerkez; Steven Glaser; Jan W. Hopmans; Carolyn T. Hunsaker; Matt Meadows; Peter C. Hartsough

    2011-01-01

    Many aspects of forest hydrology have been based on accurate but not necessarily spatially representative measurements, reflecting the measurement capabilities that were traditionally available. Two developments are bringing about fundamental changes in sampling strategies in forest hydrology and biogeochemistry: (a) technical advances in measurement capability, as is...

  8. Gibbs free energy of formation of UPb(s) compound

    International Nuclear Information System (INIS)

    Samui, Pradeep; Agarwal, Renu; Mishra, Ratikanta

    2012-01-01

    Liquid lead and lead-bismuth eutectic (LBE) are being explored as primary candidates for coolants in accelerator driven systems and in advanced nuclear reactors due to their favorable thermo-physical and chemical properties. They are also proposed to be used as spallation neutron source in ADS Reactor Systems. However, corrosion of structural materials (i.e. steel) presents a critical challenge for the use of liquid lead or LBE in advanced nuclear reactors. The interactions of liquid lead or LBE with clad and fuel is of great scientific and technological importance in the development of advanced nuclear reactors. Clad failure/breach can lead to reaction of coolant elements with fuel components. Thus the study of fuel-coolant interaction of U with Pb/Bi is important. The paper deals with the determination of Gibbs free energy of formation of U-rich phase i.e. UPb in Pb-U system, employing Knudsen effusion mass loss technique

  9. Sampling strategy to develop a primary core collection of apple ...

    African Journals Online (AJOL)

    PRECIOUS

    2010-01-11

    Jan 11, 2010 ... Physiology and Molecular Biology for Fruit, Tree, Beijing 100193, China. ... analyzed on genetic diversity to ensure their represen- .... strategy, cluster and random sampling. .... on isozyme data―A simulation study, Theor.

  10. ASTEM, Evaluation of Gibbs, Helmholtz and Saturation Line Function for Thermodynamics Calculation

    International Nuclear Information System (INIS)

    Moore, K.V.; Burgess, M.P.; Fuller, G.L.; Kaiser, A.H.; Jaeger, D.L.

    1974-01-01

    1 - Description of problem or function: ASTEM is a modular set of FORTRAN IV subroutines to evaluate the Gibbs, Helmholtz, and saturation line functions as published by the American Society of Mechanical Engineers (1967). Any thermodynamic quantity including derivative properties can be obtained from these routines by a user-supplied main program. PROPS is an auxiliary routine available for the IBM360 version which makes it easier to apply the ASTEM routines to power station models. 2 - Restrictions on the complexity of the problem: Unless re-dimensioned by the user, the highest derivative allowed is order 9. All arrays within ASTEM are one-dimensional to save storage area

  11. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software.

    Science.gov (United States)

    Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.

  12. Variation of the Gibbs free energy of kaolinite as a function of crystallinity and particle size

    Directory of Open Access Journals (Sweden)

    La Iglesia, A.

    1989-12-01

    Full Text Available The effect of grinding on the crystallinity, particle size and solubility of two samples of kaolinite was studied. The standard Gibbs free energies of formation of the different ground samples were calculated from solubility measurements, and show a direct relationship between Gibbs free energy and the variation of particle size and crystallinity. Values of -3752.2 and -3776.4 kJ/mol were determined for ΔG°f of amorphous and crystalline kaolinite, respectively. A new thermodynamic equation that relates ΔG°f to particle size is proposed. This equation can probably be extended to other clay minerals.

  13. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
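
    A pilot-then-optimize workflow of this kind can be sketched with textbook Neyman allocation (a generic illustration, not the authors' exact procedure; the category weights and standard deviations below are invented):

```python
def neyman_allocation(strata, target_se):
    """Total sample size and per-stratum allocation for a stratified
    random survey under Neyman allocation, ignoring the finite-population
    correction: n = (sum W_h * S_h / target_se)^2, with n_h proportional
    to W_h * S_h.

    strata: list of (weight W_h, std-dev S_h) pairs from a pilot study;
    target_se: desired standard error of the estimated mean dose rate.
    """
    total_ws = sum(w * s for w, s in strata)
    n = (total_ws / target_se) ** 2
    return n, [(w * s / total_ws) * n for w, s in strata]

# hypothetical pilot results: three house categories
# (population share, std-dev of indoor dose rate within the category)
strata = [(0.5, 20.0), (0.3, 40.0), (0.2, 60.0)]
n, alloc = neyman_allocation(strata, target_se=2.0)  # n = 289 measurements
```

    More variable house categories receive proportionally more measurements, which is exactly why the pilot-study variances are needed before fixing the survey design.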

  14. Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors

    International Nuclear Information System (INIS)

    Lucka, Felix

    2012-01-01

    Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, using similar sparsity constraints in the Bayesian framework for inverse problems by encoding them in the prior distribution has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion. Accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this paper, we develop and examine a new implementation of a single component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This property is contrary to the properties of the most commonly applied Metropolis-Hastings (MH) sampling schemes. We demonstrate that the efficiency of MH schemes for L1-type priors dramatically decreases when the level of sparsity or the dimension of the unknowns is increased. Practically, Bayesian inversion for L1-type priors using MH samplers is not feasible at all. As this is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample-based Bayesian inference. (paper)

  15. Local thermodynamics and the generalized Gibbs-Duhem equation in systems with long-range interactions.

    Science.gov (United States)

    Latella, Ivan; Pérez-Madrid, Agustín

    2013-10-01

    The local thermodynamics of a system with long-range interactions in d dimensions is studied using the mean-field approximation. Long-range interactions are introduced through pair interaction potentials that decay as a power law in the interparticle distance. We compute the local entropy, Helmholtz free energy, and grand potential per particle in the microcanonical, canonical, and grand canonical ensembles, respectively. From the local entropy per particle we obtain the local equation of state of the system by using the condition of local thermodynamic equilibrium. This local equation of state has the form of the ideal gas equation of state, but with the density depending on the potential characterizing long-range interactions. By volume integration of the relation between the different thermodynamic potentials at the local level, we find the corresponding equation satisfied by the potentials at the global level. It is shown that the potential energy enters as a thermodynamic variable that modifies the global thermodynamic potentials. As a result, we find a generalized Gibbs-Duhem equation that relates the potential energy to the temperature, pressure, and chemical potential. For the marginal case where the power of the decaying interaction potential is equal to the dimension of the space, the usual Gibbs-Duhem equation is recovered. As examples of the application of this equation, we consider spatially uniform interaction potentials and the self-gravitating gas. We also point out a close relationship with the thermodynamics of small systems.
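
    For reference, the "usual Gibbs-Duhem equation" recovered in the marginal case is the standard constraint on the intensive variables of a simple one-component system:

```latex
S\,\mathrm{d}T \;-\; V\,\mathrm{d}P \;+\; N\,\mathrm{d}\mu \;=\; 0
```

    Per the abstract, the generalized version adds the potential energy of the long-range interactions as an extra thermodynamic variable entering this balance; the standard form above is recovered when the decay power of the pair potential equals the spatial dimension d.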

  16. Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.

    Science.gov (United States)

    Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith

    2010-09-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.

  17. Sampling strategies for the analysis of reactive low-molecular weight compounds in air

    NARCIS (Netherlands)

    Henneken, H.

    2006-01-01

    Within this thesis, new sampling and analysis strategies for the determination of airborne workplace contaminants have been developed. Special focus has been directed towards the development of air sampling methods that involve diffusive sampling. In an introductory overview, the current

  18. A Geostatistical Approach to Indoor Surface Sampling Strategies

    DEFF Research Database (Denmark)

    Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg

    1990-01-01

    Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical...... framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics is most developed. The theory is summarized and particulate surface...... contamination, sampled from small areas on a table, have been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method...

  19. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    Science.gov (United States)

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

Accurately estimating various solute loads in streams during storms is critical to accurately determine maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so error on solute load calculations can be taken into account by landscape managers, and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
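The effect described above, load estimates degrading as sampling frequency drops, can be illustrated on synthetic data. The sketch below is purely illustrative (the storm shapes, the step-hold interpolation, and all names are our assumptions, not the study's methods): concentration is sampled only every `sample_every` time steps and held constant between samples, while flow is recorded continuously.

```python
import math

def load_from_samples(times, flows, concs, sample_every):
    # Estimate total solute load by sampling concentration every
    # `sample_every` steps and holding it constant between samples,
    # while flow is recorded at every time step.
    total = 0.0
    last_conc = concs[0]
    for i, _t in enumerate(times):
        if i % sample_every == 0:
            last_conc = concs[i]
        total += flows[i] * last_conc  # load increment for one time step
    return total

# Synthetic storm: flow and concentration both peak mid-event.
times = list(range(100))
flows = [1.0 + 4.0 * math.exp(-((t - 50) ** 2) / 200.0) for t in times]
concs = [2.0 + 3.0 * math.exp(-((t - 50) ** 2) / 300.0) for t in times]

true_load = load_from_samples(times, flows, concs, sample_every=1)
coarse = load_from_samples(times, flows, concs, sample_every=20)
error_pct = 100.0 * abs(coarse - true_load) / true_load
```

Sampling every step reproduces the exact flow-times-concentration sum; the coarse schedule misses the chemograph peak and produces a nonzero relative error.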

  20. Perspectives on land snails - sampling strategies for isotopic analyses

    Science.gov (United States)

    Kwiecien, Ola; Kalinowski, Annika; Kamp, Jessica; Pellmann, Anna

    2017-04-01

Since the seminal works of Goodfriend (1992), several substantial studies have confirmed a relation between the isotopic composition of land snail shells (δ18O, δ13C) and environmental parameters like precipitation amount, moisture source, temperature and vegetation type. This relation, however, is not straightforward and is site dependent. The choice of sampling strategy (discrete or bulk sampling) and cleaning procedure (several methods can be used, but a comparison of their effects on an individual shell has not yet been achieved) further complicates the shell analysis. The advantage of using snail shells as an environmental archive lies in the snails' limited mobility, and therefore their intrinsic aptitude for recording local and site-specific conditions. Also, snail shells are often found at dated archaeological sites. An obvious drawback is that shell assemblages rarely make up a continuous record, and a single shell is only a snapshot of the environmental setting at a given time. Shells from archaeological sites might represent a dietary component, and cooking would presumably alter the isotopic signature of the aragonite material. Consequently, a proper sampling strategy is of great importance and should be adjusted to the scientific question. Here, we compare and contrast different sampling approaches using modern shells collected in Morocco, Spain and Germany. The bulk shell approach (finely ground material) yields information on mean environmental parameters within the life span of the analyzed individuals. However, despite homogenization, replicate measurements of bulk shell material returned results with a variability greater than analytical precision (up to 2‰ for δ18O, and up to 1‰ for δ13C), calling for caution when analyzing only single individuals. Horizontal high-resolution sampling (single drill holes along growth lines) provides insights into the amplitude of seasonal variability, while vertical high-resolution sampling (multiple drill holes along the same growth line

  1. Sample preparation strategies for food and biological samples prior to nanoparticle detection and imaging

    DEFF Research Database (Denmark)

    Larsen, Erik Huusfeldt; Löschner, Katrin

    2014-01-01

    microscopy (TEM) proved to be necessary for trouble shooting of results obtained from AFFF-LS-ICP-MS. Aqueous and enzymatic extraction strategies were tested for thorough sample preparation aiming at degrading the sample matrix and to liberate the AgNPs from chicken meat into liquid suspension. The resulting...... AFFF-ICP-MS fractograms, which corresponded to the enzymatic digests, showed a major nano-peak (about 80 % recovery of AgNPs spiked to the meat) plus new smaller peaks that eluted close to the void volume of the fractograms. Small, but significant shifts in retention time of AFFF peaks were observed...... for the meat sample extracts and the corresponding neat AgNP suspension, and rendered sizing by way of calibration with AgNPs as sizing standards inaccurate. In order to gain further insight into the sizes of the separated AgNPs, or their possible dissolved state, fractions of the AFFF eluate were collected...

  2. Sampling strategy for estimating human exposure pathways to consumer chemicals

    NARCIS (Netherlands)

    Papadopoulou, Eleni; Padilla-Sanchez, Juan A.; Collins, Chris D.; Cousins, Ian T.; Covaci, Adrian; de Wit, Cynthia A.; Leonards, Pim E.G.; Voorspoels, Stefan; Thomsen, Cathrine; Harrad, Stuart; Haug, Line S.

    2016-01-01

    Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway.

  3. Analysis of common SHOX gene sequence variants and ∼4.9-kb ...

    Indian Academy of Sciences (India)

    [Solc R., Hirschfeldova K., Kebrdlova V. and Baxova A. 2014 Analysis of common SHOX gene sequence variants ... based on a Gibbs sampling strategy were done using .... SHOX (short stature homeobox) are an important cause of growth.

  4. Dried blood spot measurement: application in tacrolimus monitoring using limited sampling strategy and abbreviated AUC estimation.

    Science.gov (United States)

    Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo

    2008-02-01

Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem mass spectrometry have been developed for monitoring tacrolimus levels. Our center favors the use of a limited sampling strategy and an abbreviated formula to estimate the area under the concentration-time curve (AUC(0-12)). However, it is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of the DBS method in tacrolimus level monitoring using the limited sampling strategy and abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty-six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12), estimated from venous blood samples, and fingerprick blood samples (r(2) = 0.96, P AUC(0-12) strategy as drug monitoring.

  5. Effect of self-interaction on the phase diagram of a Gibbs-like measure derived by a reversible Probabilistic Cellular Automata

    International Nuclear Information System (INIS)

    Cirillo, Emilio N.M.; Louis, Pierre-Yves; Ruszel, Wioletta M.; Spitoni, Cristian

    2014-01-01

Cellular Automata are discrete-time dynamical systems on a spatially extended discrete space which provide paradigmatic examples of nonlinear phenomena. Their stochastic generalizations, i.e., Probabilistic Cellular Automata (PCA), are discrete-time Markov chains on a lattice with finite single-cell states, whose distinguishing feature is the parallel character of the updating rule. We study the ground states of the Hamiltonian and the low-temperature phase diagram of the related Gibbs measure naturally associated with a class of reversible PCA, called the cross PCA. In such a model the updating rule of a cell depends only on the status of the five cells forming a cross centered at the original cell itself. In particular, it depends on the value of the center spin (self-interaction). The goal of the paper is to investigate the role played by the self-interaction parameter in connection with the ground states of the Hamiltonian and the low-temperature phase diagram of the Gibbs measure associated with this particular PCA.

  6. Extrapolation procedures for calculating high-temperature gibbs free energies of aqueous electrolytes

    International Nuclear Information System (INIS)

    Tremaine, P.R.

    1979-01-01

Methods for calculating high-temperature Gibbs free energies of mononuclear cations and anions from room-temperature data are reviewed. Emphasis is given to species required for oxide solubility calculations relevant to mass transport situations in the nuclear industry. Free energies predicted by each method are compared to selected values calculated from recently reported solubility studies and other literature data. Values for monatomic ions estimated using the assumption C̄p°(T) = C̄p°(298) agree best with experiment up to 423 K. From 423 K to 523 K, free energies from an electrostatic model for ion hydration are more accurate. Extrapolations for hydrolyzed species are limited by a lack of room-temperature entropy data, and expressions for estimating these entropies are discussed. (orig.) [de]

  7. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
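Of the three Monte Carlo selection methods named above, Latin hypercube sampling is easy to sketch: each parameter's range is split into n equal strata, exactly one value is drawn per stratum, and the strata are paired at random across parameters. A minimal standard-library illustration (the function names are our own, not from any assessment code):

```python
import random

def simple_random_sample(n_samples, n_params, rng):
    # Each coordinate drawn independently and uniformly on [0, 1).
    return [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]

def latin_hypercube_sample(n_samples, n_params, rng):
    # Divide each parameter range into n_samples equal strata, draw exactly
    # one value per stratum, then shuffle the strata independently per
    # parameter so strata are paired at random across parameters.
    columns = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        columns.append([(s + rng.random()) / n_samples for s in strata])
    # Transpose per-parameter columns into per-sample parameter vectors.
    return [list(point) for point in zip(*columns)]

rng = random.Random(42)
lhs = latin_hypercube_sample(10, 3, rng)
# Every stratum [k/10, (k+1)/10) contains exactly one value per parameter.
for j in range(3):
    assert sorted(int(point[j] * 10) for point in lhs) == list(range(10))
```

Unlike simple random sampling, the Latin hypercube guarantees that each parameter's marginal distribution is evenly covered even for small sample counts.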

  8. Advanced Markov chain Monte Carlo methods learning from past samples

    CERN Document Server

Liang, Faming; Carroll, Raymond J.

    2010-01-01

This book provides comprehensive coverage of the simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms addressing this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables, and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. This book includes the multicanonical algorithm, dynamic weighting, dynamically weight
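As a concrete illustration of the ordinary Gibbs sampler that such books take as their starting point, the sketch below alternates draws from the two full conditionals of a correlated bivariate normal; the target distribution and all names are our choices for illustration, not from the book:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter, rng, burn_in=1000):
    # Gibbs sampling for a standard bivariate normal with correlation rho:
    # the full conditionals are x | y ~ N(rho*y, 1 - rho^2) and symmetrically
    # y | x ~ N(rho*x, 1 - rho^2), so each sweep is two exact Gaussian draws.
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)
    samples = []
    for it in range(n_iter + burn_in):
        x = rng.gauss(rho * y, sd)   # draw x from p(x | y)
        y = rng.gauss(rho * x, sd)   # draw y from p(y | x)
        if it >= burn_in:
            samples.append((x, y))
    return samples

rng = random.Random(0)
draws = gibbs_bivariate_normal(rho=0.8, n_iter=20000, rng=rng)
mean_x = sum(x for x, _ in draws) / len(draws)
corr = sum(x * y for x, y in draws) / len(draws)  # E[xy] estimates rho here
```

With high correlation the chain mixes slowly, which is exactly the local-trap/slow-mixing behavior that the advanced algorithms surveyed in the book are designed to overcome.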

  9. Global, exact cosmic microwave background data analysis using Gibbs sampling

    International Nuclear Information System (INIS)

    Wandelt, Benjamin D.; Larson, David L.; Lakshminarayanan, Arun

    2004-01-01

We describe an efficient and exact method that enables global Bayesian analysis of cosmic microwave background (CMB) data. The method reveals the joint posterior density (or likelihood for flat priors) of the power spectrum C_l and the CMB signal. Foregrounds and instrumental parameters can be simultaneously inferred from the data. The method allows the specification of a wide range of foreground priors. We explicitly show how to propagate the non-Gaussian dependency structure of the C_l posterior through to the posterior density of the parameters. If desired, the analysis can be coupled to theoretical (cosmological) priors and can yield the posterior density of cosmological parameter estimates directly from the time-ordered data. The method does not hinge on special assumptions about the survey geometry or noise properties, etc. It is based on a Monte Carlo approach and hence parallelizes trivially. No trace or determinant evaluations are necessary. The feasibility of this approach rests on the ability to solve the systems of linear equations which arise. These are of the same size and computational complexity as the map-making equations. We describe a preconditioned conjugate gradient technique that solves this problem and demonstrate in a numerical example that the computational time required for each Monte Carlo sample scales as n_p^(3/2) with the number of pixels n_p. We use our method to analyze the data from the Differential Microwave Radiometer on the Cosmic Background Explorer and explore the non-Gaussian joint posterior density of the C_l in several projections.
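The preconditioned conjugate gradient technique mentioned above can be sketched generically. The toy below solves a small symmetric positive-definite system with a Jacobi (diagonal) preconditioner; it is a textbook sketch with names of our choosing, not the authors' map-making solver:

```python
def preconditioned_cg(A, b, precond, tol=1e-20, max_iter=100):
    # Preconditioned conjugate gradient for a symmetric positive-definite
    # matrix A (dense, as a list of rows); `precond` applies M^{-1}.
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    z = precond(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:   # squared residual norm
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]  # diagonal M^{-1}
x = preconditioned_cg(A, b, jacobi)
```

A good preconditioner is what keeps the per-sample cost low when the system is as large as the map-making equations; the Jacobi choice here is only the simplest example.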

  10. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it was originally faced with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, detect anomalies and bursts, guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been faced considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  11. Recruiting hard-to-reach United States population sub-groups via adaptations of snowball sampling strategy

    Science.gov (United States)

    Sadler, Georgia Robins; Lee, Hau-Chen; Seung-Hwan Lim, Rod; Fullerton, Judith

    2011-01-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089

  12. Gibbs free-energy difference between the glass and crystalline phases of a Ni-Zr alloy

    Science.gov (United States)

    Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.

    1993-01-01

The heats of eutectic melting and devitrification, and the specific heats of the crystalline, glass, and liquid phases have been measured for a Ni24Zr76 alloy. The data are used to calculate the Gibbs free-energy difference, ΔG_AC, between the real glass and the crystal under the assumption that the liquid-glass transition is second order. The result shows that ΔG_AC continuously increases as the temperature decreases, in contrast to the ideal glass case where ΔG_AC is assumed to be independent of temperature.
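A common calorimetric route to such a ΔG_AC(T) curve (a sketch of the standard textbook expression, not necessarily the exact formulas of this paper) combines the heat of fusion with the measured specific-heat difference between the undercooled liquid/glass and the crystal:

```latex
% With \Delta H_f the heat of fusion at the melting temperature T_f and
% \Delta C_p(T) = C_p^{\mathrm{liquid/glass}}(T) - C_p^{\mathrm{crystal}}(T),
% the free-energy difference below T_f is
\Delta G(T) = \Delta H_f - T\,\frac{\Delta H_f}{T_f}
  - \int_T^{T_f} \Delta C_p(T')\,\mathrm{d}T'
  + T \int_T^{T_f} \frac{\Delta C_p(T')}{T'}\,\mathrm{d}T'
% which vanishes at T = T_f and reduces to the ideal-glass picture only
% if the integral terms are assumed temperature independent.
```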

  13. Sampling and analyte enrichment strategies for ambient mass spectrometry.

    Science.gov (United States)

    Li, Xianjiang; Ma, Wen; Li, Hongmei; Ai, Wanpeng; Bai, Yu; Liu, Huwei

    2018-01-01

Ambient mass spectrometry provides great convenience for fast screening and has shown promising potential in analytical chemistry. However, its relatively low sensitivity seriously restricts its practical utility in trace compound analysis. In this review, we summarize the sampling and analyte enrichment strategies coupled with nine representative modes of ambient mass spectrometry (desorption electrospray ionization, paper spray ionization, wooden-tip spray ionization, probe electrospray ionization, coated blade spray ionization, direct analysis in real time, desorption corona beam ionization, dielectric barrier discharge ionization, and atmospheric-pressure solids analysis probe) that have dramatically increased the detection sensitivity. We believe that these advances will promote routine use of ambient mass spectrometry. Graphical abstract: Scheme of sampling strategies for ambient mass spectrometry.

  14. Observing System Simulation Experiments for the assessment of temperature sampling strategies in the Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    F. Raicich

    2003-01-01

For the first time in the Mediterranean Sea, various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs), and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied. The data impact is quantified by the error reduction relative to the free run. The most effective sampling strategies determine 25–40% error reduction, depending on the season, the geographic area and the depth range. A qualitative relationship can be recognized, in terms of the spread of information from the data positions, between basin circulation features and spatial patterns of the error-reduction fields, as a function of different spatial and seasonal characteristics of the dynamics. The largest error reductions are observed when samplings are characterized by extensive spatial coverage, as in the cases of AXBTs and the combination of XBTs and surface temperatures. The sampling strategy adopted during the TOP is characterized by little impact, as a consequence of a sampling frequency that is too low. Key words. Oceanography: general (marginal and semi-enclosed seas); numerical modelling

  16. Thermodynamics of Micellar Systems : Comparison of Mass Action and Phase Equilibrium Models for the Calculation of Standard Gibbs Energies of Micelle Formation

    NARCIS (Netherlands)

    Blandamer, Michael J.; Cullis, Paul M.; Soldi, L. Giorgio; Engberts, Jan B.F.N.; Kacperska, Anna; Os, Nico M. van

    1995-01-01

    Micellar colloids are distinguished from other colloids by their association-dissociation equilibrium in solution between monomers, counter-ions and micelles. According to classical thermodynamics, the standard Gibbs energy of formation of micelles at fixed temperature and pressure can be related to

  17. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    Science.gov (United States)

    Xun-Ping, W; An, Z

    2017-07-27

Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored, and an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region was performed. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points for each layer were calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison study among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA was performed. Results The method proposed in this study (SOPA) had the smallest absolute error, 0.2138, while the traditional systematic sampling method had the largest absolute error, 0.9244. Conclusion The snail sampling strategy proposed in this study (SOPA) obtains higher estimation accuracy than the other four methods.
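The stratified-estimation idea behind such a design can be sketched generically: sample within strata defined by an auxiliary variable (here a plant-abundance class) and weight stratum means by stratum size. Everything below, the synthetic field, the equal allocation of 10 points per stratum, and all names, is illustrative, not the SOPA procedure itself:

```python
import random

def stratified_mean(population, strata_of, n_per_stratum, rng):
    # Stratified estimator of the population mean: sample within each
    # stratum and weight stratum sample means by the stratum's share
    # of the population.
    strata = {}
    for unit in population:
        strata.setdefault(strata_of(unit), []).append(unit)
    total = len(population)
    estimate = 0.0
    for units in strata.values():
        sample = rng.sample(units, min(n_per_stratum, len(units)))
        estimate += (len(units) / total) * (sum(sample) / len(sample))
    return estimate

# Synthetic field: snail counts increase with a plant-abundance class (0-4),
# mimicking quadrats stratified by vegetation density.
rng = random.Random(1)
population = [abundance * 3 + rng.random()
              for abundance in range(5) for _ in range(200)]
true_mean = sum(population) / len(population)

est = stratified_mean(population, strata_of=lambda v: int(v // 3),
                      n_per_stratum=10, rng=rng)
```

Because the auxiliary variable explains most of the between-stratum variance, 50 sampled quadrats suffice for a tight estimate of the field mean, which is the economy argument made in the abstract.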

  18. A course on large deviations with an introduction to Gibbs measures

    CERN Document Server

    Rassoul-Agha, Firas

    2015-01-01

This is an introductory course on the methods of computing asymptotics of probabilities of rare events: the theory of large deviations. The book combines large deviation theory with basic statistical mechanics, namely Gibbs measures with their variational characterization and the phase transition of the Ising model, in a text intended for a one-semester or one-quarter course. The book begins with a straightforward approach to the key ideas and results of large deviation theory in the context of independent identically distributed random variables. This includes Cramér's theorem, relative entropy, Sanov's theorem, process level large deviations, convex duality, and change of measure arguments. Dependence is introduced through the interaction potentials of equilibrium statistical mechanics. The phase transition of the Ising model is proved in two different ways: first in the classical way with the Peierls argument, Dobrushin's uniqueness condition, and correlation inequalities, and then a second time through the ...
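The starting point mentioned above, Cramér's theorem for i.i.d. random variables, can be stated compactly:

```latex
% Cramér's theorem: for i.i.d. real random variables X_1, X_2, \ldots with
% finite moment generating function and sample mean \bar{X}_n, upper-tail
% probabilities decay exponentially,
\lim_{n\to\infty} \frac{1}{n}\log \mathbb{P}\big(\bar{X}_n \ge x\big) = -I(x),
\qquad x > \mathbb{E}[X_1],
% with rate function given by the Legendre transform of the log-mgf:
I(x) = \sup_{\theta\in\mathbb{R}}\big\{\theta x - \log \mathbb{E}\,e^{\theta X_1}\big\}.
```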

  19. Gibbs Measures of Nonlinear Schrödinger Equations as Limits of Many-Body Quantum States in Dimensions d ≤ 3

    Science.gov (United States)

    Fröhlich, Jürg; Knowles, Antti; Schlein, Benjamin; Sohinger, Vedran

    2017-12-01

We prove that Gibbs measures of nonlinear Schrödinger equations arise as high-temperature limits of thermal states in many-body quantum mechanics. Our results hold for defocusing interactions in dimensions d = 1, 2, 3. The many-body quantum thermal states that we consider are the grand canonical ensemble for d = 1 and an appropriate modification of the grand canonical ensemble for d = 2, 3. In dimensions d = 2, 3, the Gibbs measures are supported on singular distributions, and a renormalization of the chemical potential is necessary. On the many-body quantum side, the need for renormalization is manifested by a rapid growth of the number of particles. We relate the original many-body quantum problem to a renormalized version obtained by solving a counterterm problem. Our proof is based on ideas from field theory, using a perturbative expansion in the interaction, organized by using a diagrammatic representation, and on Borel resummation of the resulting series.

  20. Estimação de parâmetros genéticos em suínos usando Amostrador de Gibbs / Estimation of genetic parameters for growth and backfat thickness of Large White pigs using the Gibbs Sampler

    Directory of Open Access Journals (Sweden)

    Leandro Barbosa

    2008-07-01

A total of 38,865 records of Large White pigs was used to estimate covariance components and genetic parameters for days to 100 kg of live weight (DAYS) and backfat thickness adjusted to 100 kg (BF) in two-trait analyses. Covariance components were obtained by Gibbs sampling using the MTGSAM program. The mixed model included the fixed effect of contemporary group and the following random effects: direct additive genetic, maternal additive genetic, common litter, and residual. The mean estimates of direct additive heritability were 0.33 and 0.44 for DAYS and BF, respectively, and the mean estimates of the common litter effect were 0.09 and 0.02, respectively. The additive genetic correlation between the traits was close to zero (-0.015). The heritabilities obtained for these performance traits indicate that satisfactory genetic gains can be achieved in Large White breeding programs, and that simultaneous selection for both traits is feasible, since the direct additive genetic correlation is low.

  1. Gibbs free energy of transfer of a methylene group on {UCON + (sodium or potassium) phosphate salts} aqueous two-phase systems: Hydrophobicity effects

    International Nuclear Information System (INIS)

    Silverio, Sara C.; Rodriguez, Oscar; Teixeira, Jose A.; Macedo, Eugenia A.

    2010-01-01

The Gibbs free energy of transfer of a suitable hydrophobic probe can be regarded as a measure of the relative hydrophobicity of the different phases. The methylene group (CH2) can be considered hydrophobic and is thus a suitable probe for hydrophobicity. In this work, the partition coefficients of a series of five dinitrophenylated amino acids were experimentally determined, at 23 °C, in three different tie-lines of the biphasic systems (UCON + K2HPO4), (UCON + potassium phosphate buffer, pH 7), (UCON + KH2PO4), (UCON + Na2HPO4), (UCON + sodium phosphate buffer, pH 7), and (UCON + NaH2PO4). The Gibbs free energies of transfer of CH2 units were calculated from the partition coefficients and used to compare the relative hydrophobicity of the equilibrium phases. The largest relative hydrophobicity was found for the ATPS formed by dihydrogen phosphate salts.
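A standard way to reduce such partition data (a sketch of the usual homologous-series method, with symbols of our choosing, not necessarily the paper's exact notation) is to fit ln K linearly against the number of methylene groups; the slope then gives the transfer free energy per CH2 unit:

```latex
% For a homologous series (here DNP-amino acids) with n methylene groups,
% the partition coefficient K is, to a good approximation, log-linear in n:
\ln K = A + E\, n
% and the slope E yields the Gibbs energy of transfer of one CH2 group
% between the two phases (sign set by the direction of transfer):
\Delta G(\mathrm{CH_2}) = -RT\,E .
```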

  2. Are electrostatic potentials between regions of different chemical composition measurable? The Gibbs-Guggenheim Principle reconsidered, extended and its consequences revisited.

    Science.gov (United States)

    Pethica, Brian A

    2007-12-21

    As indicated by Gibbs and made explicit by Guggenheim, the electrical potential difference between two regions of different chemical composition cannot be measured. The Gibbs-Guggenheim Principle restricts the use of classical electrostatics in electrochemical theories as thermodynamically unsound with some few approximate exceptions, notably for dilute electrolyte solutions and concomitant low potentials where the linear limit for the exponential of the relevant Boltzmann distribution applies. The Principle invalidates the widespread use of forms of the Poisson-Boltzmann equation which do not include the non-electrostatic components of the chemical potentials of the ions. From a thermodynamic analysis of the parallel plate electrical condenser, employing only measurable electrical quantities and taking into account the chemical potentials of the components of the dielectric and their adsorption at the surfaces of the condenser plates, an experimental procedure to provide exceptions to the Principle has been proposed. This procedure is now reconsidered and rejected. No other related experimental procedures circumvent the Principle. Widely-used theoretical descriptions of electrolyte solutions, charged surfaces and colloid dispersions which neglect the Principle are briefly discussed. MD methods avoid the limitations of the Poisson-Boltzmann equation. Theoretical models which include the non-electrostatic components of the inter-ion and ion-surface interactions in solutions and colloid systems assume the additivity of dispersion and electrostatic forces. An experimental procedure to test this assumption is identified from the thermodynamics of condensers at microscopic plate separations. The available experimental data from Kelvin probe studies are preliminary, but tend against additivity.
A corollary to the Gibbs-Guggenheim Principle is enunciated, and the Principle is restated that for any charged species, neither the difference in electrostatic potential nor the

  3. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  4. Sample preparation composite and replicate strategy for assay of solid oral drug products.

    Science.gov (United States)

    Harrington, Brent; Nickerson, Beverly; Guo, Michele Xuemei; Barber, Marc; Giamalva, David; Lee, Carlos; Scrivens, Garry

    2014-12-16

    In pharmaceutical analysis, the results of drug product assay testing are used to make decisions regarding the quality, efficacy, and stability of the drug product. In order to make sound risk-based decisions concerning drug product potency, an understanding of the uncertainty of the reportable assay value is required. Utilizing the most restrictive criteria in current regulatory documentation, a maximum variability attributed to method repeatability is defined for a drug product potency assay. A sampling strategy that reduces the repeatability component of the assay variability below this predefined maximum is demonstrated. The sampling strategy consists of determining the number of dosage units (k) to be prepared in a composite sample, of which there may be a number of equivalent replicate (r) sample preparations. The variability, as measured by the standard error (SE), of a potency assay consists of several sources, such as sample preparation and dosage unit variability. A sampling scheme that increases the number of sample preparations (r) and/or the number of dosage units (k) per sample preparation will reduce the assay variability and thus decrease the uncertainty around decisions made concerning the potency of the drug product. A maximum allowable repeatability component of the standard error (SE) for the potency assay is derived using material in current regulatory documents. A table of solutions for the number of dosage units per sample preparation (k) and the number of replicate sample preparations (r) is presented for any ratio of sample preparation and dosage unit variability.
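The variance algebra behind this composite-and-replicate scheme can be made concrete. The sketch below is not the authors' derivation, just a minimal illustration under the usual assumption that dosage-unit and preparation errors are independent; the variance figures are hypothetical.

```python
import math

def assay_standard_error(sigma_du, sigma_prep, k, r):
    """Standard error of the reportable assay value when each of r
    replicate preparations is a composite of k dosage units.
    Dosage-unit variability averages down by k within a composite,
    and preparation variability averages down by r across replicates."""
    var = (sigma_prep**2 + sigma_du**2 / k) / r
    return math.sqrt(var)

# Illustrative (hypothetical) variabilities, in % of label claim:
sigma_du, sigma_prep = 3.0, 1.0

# Increasing k and/or r reduces the repeatability component:
single = assay_standard_error(sigma_du, sigma_prep, k=1, r=1)
composite = assay_standard_error(sigma_du, sigma_prep, k=10, r=3)
assert composite < single
```

Tabulating this function over a grid of (k, r) pairs reproduces the kind of solution table the abstract describes: any cell whose SE falls below the regulatory maximum is an acceptable scheme.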

  5. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software

    OpenAIRE

    Dols, W. Stuart; Persily, Andrew K.; Morrow, Jayne B.; Matzke, Brett D.; Sego, Landon H.; Nuffer, Lisa L.; Pulsipher, Brent A.

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by vir...

  6. Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.

    Science.gov (United States)

    Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge

    2017-02-22

    Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality. Copyright © 2016 Elsevier B.V. All rights reserved.
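Of the normalization steps listed, probabilistic quotient normalization (PQN) is the most self-contained to sketch. The following is a generic illustration, not the authors' exact pipeline; the matrix layout (samples by features) and the median-spectrum reference are assumptions.

```python
import numpy as np

def pqn_normalize(X, reference=None):
    """Probabilistic quotient normalization (PQN) of a samples-by-features
    intensity matrix X. Each sample is divided by the median ratio
    (quotient) of its features to a reference spectrum, so dilution
    differences are removed without distorting relative feature patterns."""
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = np.median(X, axis=0)     # median spectrum as reference
    quotients = X / reference                # per-feature dilution estimates
    dilution = np.median(quotients, axis=1)  # robust per-sample dilution factor
    return X / dilution[:, None]

# A sample diluted 2-fold collapses back onto the undiluted one:
base = np.array([[10.0, 20.0, 30.0, 40.0]])
diluted = 0.5 * base
Xn = pqn_normalize(np.vstack([base, diluted]), reference=base[0])
assert np.allclose(Xn[0], Xn[1])
```

In practice the reference would come from quality-control samples, and PQN would run after the drift correction, as in the sequential strategy described above.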

  7. Gibbs free energy of reactions involving SiC, Si3N4, H2, and H2O as a function of temperature and pressure

    Science.gov (United States)

    Isham, M. A.

    1992-01-01

    Silicon carbide and silicon nitride are considered for application as structural materials and coatings in advanced propulsion systems, including nuclear thermal propulsion. Three-dimensional Gibbs free energy plots were constructed for reactions involving these materials in H2 and H2/H2O. The free energy plots are functions of temperature and pressure. Calculations used the definition of Gibbs free energy, where the spontaneity of reactions is evaluated as a function of temperature and pressure. Silicon carbide decomposes to Si and CH4 in pure H2 and forms a SiO2 scale in a wet atmosphere. Silicon nitride remains stable under all conditions. There was no apparent difference in reaction thermodynamics between ideal-gas and Van der Waals treatment of the gaseous species.
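The underlying criterion is that a reaction proceeds spontaneously where ΔG = ΔH − TΔS (plus any pressure correction) is negative. A minimal sketch, assuming temperature-independent ΔH and ΔS and an ideal-gas RT ln P correction for the net change in moles of gas; the numbers are illustrative, not the paper's data.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def delta_g(dH, dS, T, P=1.0, dn_gas=0):
    """Gibbs free energy of reaction (J/mol) at temperature T (K) and
    pressure P (atm), assuming temperature-independent dH (J/mol) and
    dS (J/(mol*K)) plus an ideal-gas correction R*T*ln(P) per mole
    change of gas. The reaction is spontaneous where delta_g < 0."""
    return dH - T * dS + dn_gas * R * T * math.log(P)

# Hypothetical decomposition with dH > 0 and dS > 0: non-spontaneous at
# low T, spontaneous above the crossover temperature T = dH/dS.
dH, dS = 150e3, 100.0  # illustrative values only
assert delta_g(dH, dS, T=1000) > 0
assert delta_g(dH, dS, T=2000) < 0
```

Evaluating this over a (T, P) grid gives exactly the kind of three-dimensional spontaneity surface the abstract describes.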

  8. Boiling point determination using adiabatic Gibbs ensemble Monte Carlo simulations: application to metals described by embedded-atom potentials.

    Science.gov (United States)

    Gelb, Lev D; Chakraborty, Somendra Nath

    2011-12-14

    The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics

  9. Demonstration and resolution of the Gibbs paradox of the first kind

    International Nuclear Information System (INIS)

    Peters, Hjalmar

    2014-01-01

    The Gibbs paradox of the first kind (GP1) refers to the false increase in entropy which, in statistical mechanics, is calculated from the process of combining two gas systems S1 and S2 consisting of distinguishable particles. Presented in a somewhat modified form, the GP1 manifests as a contradiction to the second law of thermodynamics. Contrary to popular belief, this contradiction affects not only classical but also quantum statistical mechanics. This paper resolves the GP1 by considering two effects. (i) The uncertainty about which particles are located in S1 and which in S2 contributes to the entropies of S1 and S2. (ii) S1 and S2 are correlated by the fact that if a certain particle is located in one system, it cannot be located in the other. As a consequence, the entropy of the total system consisting of S1 and S2 is not the sum of the entropies of S1 and S2. (paper)
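Effect (i) can be illustrated with a toy counting argument: if it is unknown which of the N = n₁ + n₂ distinguishable particles sit in S1 versus S2, the assignment alone contributes k_B ln C(N, n₁) of entropy, and this cross-system term is what breaks additivity. The sketch below illustrates only that counting step, not the paper's full resolution.

```python
from math import comb, log

def assignment_entropy(n1, n2, kB=1.0):
    """Entropy (in units of kB) from not knowing which of the n1 + n2
    distinguishable particles are located in subsystem S1 versus S2:
    kB * ln C(n1 + n2, n1)."""
    return kB * log(comb(n1 + n2, n1))

# The assignment term is strictly positive whenever both subsystems are
# occupied, so S(total) differs from S(S1) + S(S2) computed naively:
assert assignment_entropy(1, 1) == log(2)
assert assignment_entropy(10, 10) > 0
```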

  10. The osmotic second virial coefficient and the Gibbs-McMillan-Mayer framework

    DEFF Research Database (Denmark)

    Mollerup, J.M.; Breil, Martin Peter

    2009-01-01

    The osmotic second virial coefficient is a key parameter in light scattering, protein crystallisation, self-interaction chromatography, and osmometry. The interpretation of the osmotic second virial coefficient depends on the set of independent variables. This commonly includes the independent variables associated with the Kirkwood-Buff, the McMillan-Mayer, and the Lewis-Randall solution theories. In this paper we analyse the osmotic second virial coefficient using a Gibbs-McMillan-Mayer framework which is similar to the McMillan-Mayer framework with the exception that pressure rather than volume is an independent variable. A Taylor expansion is applied to the osmotic pressure of a solution where one of the solutes is a small molecule, a salt for instance, that equilibrates between the two phases. Other solutes are retained. Solvents are small molecules that equilibrate between the two phases...

  11. Sampling and analysis strategies to support waste form qualification

    International Nuclear Information System (INIS)

    Westsik, J.H. Jr.; Pulsipher, B.A.; Eggett, D.L.; Kuhn, W.L.

    1989-04-01

    As part of the waste acceptance process, waste form producers will be required to (1) demonstrate that their glass waste form will meet minimum specifications, (2) show that the process can be controlled to consistently produce an acceptable waste form, and (3) provide documentation that the waste form produced meets specifications. Key to the success of these endeavors is adequate sampling and chemical and radiochemical analyses of the waste streams from the waste tanks through the process to the final glass product. This paper suggests sampling and analysis strategies for meeting specific statistical objectives of (1) detection of compositions outside specification limits, (2) prediction of final glass product composition, and (3) estimation of composition in process vessels for both reporting and guiding succeeding process steps. 2 refs., 1 fig., 3 tabs

  12. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available The texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g. a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color-blending approach is introduced and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty space skipping techniques in scenarios that need to render large dynamic volumes in a low resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  13. Strategy for thermo-gravimetric analysis of K East fuel samples

    International Nuclear Information System (INIS)

    Lawrence, L.A.

    1997-01-01

    A strategy was developed for Thermo-Gravimetric Analysis (TGA) testing of K East fuel samples for oxidation rate determinations. Tests will first establish whether there are any differences in dry air oxidation between the K West and K East fuel. These tests will be followed by moist inert gas oxidation rate measurements. The final series of tests will consider pure water vapor, i.e., steam.

  14. A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology

    Directory of Open Access Journals (Sweden)

    Slutsker Laurence

    2008-02-01

    Full Text Available Abstract Background: Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods: Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of a child aged … Results: 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion: This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than
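The two-arm selection rule (census structures in urban EAs, a 100 m grid in rural EAs, each sampled at one-tenth the household count) can be sketched as follows. The data schema is hypothetical; the real study drew on census coordinates and handheld GPS units rather than this toy frame.

```python
import random

def select_sampling_points(eas, seed=0):
    """For each enumeration area (EA), draw a sample of size ~1/10 the
    number of households: urban EAs sample from census structure
    coordinates, rural EAs from a regular 100 m grid. `eas` is a list of
    dicts with keys 'urban' (bool), 'households' (int), and 'structures'
    and 'grid' (lists of (x, y) coordinates) -- a hypothetical schema."""
    rng = random.Random(seed)
    points = {}
    for i, ea in enumerate(eas):
        n = max(1, ea["households"] // 10)
        frame = ea["structures"] if ea["urban"] else ea["grid"]
        points[i] = rng.sample(frame, min(n, len(frame)))
    return points

# An urban EA with 50 households yields 5 sampled structures:
ea = {"urban": True, "households": 50,
      "structures": [(x, 0) for x in range(40)], "grid": []}
pts = select_sampling_points([ea])
assert len(pts[0]) == 5
```

Because sample size scales with household counts per EA, the resulting point set is population-weighted while still covering the inhabited environment.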

  15. Use of linear free energy relationship to predict Gibbs free energies of formation of pyrochlore phases (CaMTi2O7)

    International Nuclear Information System (INIS)

    Xu, H.; Wang, Y.

    1999-01-01

    In this letter, a linear free energy relationship is used to predict the Gibbs free energies of formation of crystalline phases of the pyrochlore and zirconolite families with stoichiometry MCaTi₂O₇ (or CaMTi₂O₇) from the known thermodynamic properties of aqueous tetravalent cations (M⁴⁺). The linear free energy relationship for tetravalent cations is expressed as ΔG⁰_f,MvX = a_MvX·ΔG⁰_n,M4+ + b_MvX + β_MvX·r_M4+, where the coefficients a_MvX, b_MvX, and β_MvX characterize a particular structural family MvX, r_M4+ is the ionic radius of the M⁴⁺ cation, ΔG⁰_f,MvX is the standard Gibbs free energy of formation of MvX, and ΔG⁰_n,M4+ is the standard non-solvation energy of the cation M⁴⁺. The coefficients for the structural family of zirconolite with the stoichiometry M⁴⁺CaTi₂O₇ are estimated to be a_MvX = 0.5717, b_MvX = -4284.67 kJ/mol, and β_MvX = 27.2 kJ/(mol·nm). The coefficients for the structural family of pyrochlore with the stoichiometry M⁴⁺CaTi₂O₇ are estimated to be a_MvX = 0.5717, b_MvX = -4174.25 kJ/mol, and β_MvX = 13.4 kJ/(mol·nm). Using the linear free energy relationship, the Gibbs free energies of formation of various zirconolite and pyrochlore phases are calculated. (orig.)
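Given the reported coefficients, applying the relationship is a one-line computation. In the sketch below the coefficients come from the abstract, while the cation's non-solvation energy and ionic radius are hypothetical placeholders.

```python
# Linear free energy relationship for M(4+)CaTi2O7 phases:
#   dGf = a * dGn + b + beta * r
# with the family-specific coefficients reported in the abstract.
COEFFS = {
    "zirconolite": {"a": 0.5717, "b": -4284.67, "beta": 27.2},  # kJ/mol, kJ/(mol*nm)
    "pyrochlore":  {"a": 0.5717, "b": -4174.25, "beta": 13.4},
}

def gibbs_formation(dGn_M4, r_M4, family):
    """Predicted standard Gibbs free energy of formation (kJ/mol) of a
    CaMTi2O7 phase from the non-solvation energy dGn (kJ/mol) and the
    ionic radius r (nm) of the M4+ cation."""
    c = COEFFS[family]
    return c["a"] * dGn_M4 + c["b"] + c["beta"] * r_M4

# Hypothetical cation properties, for illustration only:
dGn, r = -500.0, 0.08
dG_zir = gibbs_formation(dGn, r, "zirconolite")
dG_pyr = gibbs_formation(dGn, r, "pyrochlore")
# Same a-coefficient, so the two families differ only through b + beta*r:
assert dG_pyr > dG_zir
```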

  16. About the choice of Gibbs' potential for modelling of FCC ↔ HCP transformation in FeMnSi-based shape memory alloys

    Science.gov (United States)

    Evard, Margarita E.; Volkov, Aleksandr E.; Belyaev, Fedor S.; Ignatova, Anna D.

    2018-05-01

    The choice of Gibbs' potential for microstructural modeling of FCC ↔ HCP martensitic transformation in FeMn-based shape memory alloys is discussed. Threefold symmetry of the HCP phase is taken into account on specifying internal variables characterizing volume fractions of martensite variants. Constraints imposed on model constants by thermodynamic equilibrium conditions are formulated.

  17. The Role of Shearing Energy and Interfacial Gibbs Free Energy in the Emulsification Mechanism of Waxy Crude Oil

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2017-05-01

    Full Text Available Crude oil is generally produced with water, and the water cut produced by oil wells is increasingly common over their lifetime, so the creation of emulsions during oil production is inevitable. However, the formation of emulsions presents a costly problem, particularly in surface processing, both in terms of transportation energy consumption and separation efficiency. To deal with the production and operational problems related to crude oil emulsions, and especially to ensure the separation and transportation of crude oil-water systems, it is necessary to better understand the emulsification mechanism of crude oil under different conditions from the aspects of bulk and interfacial properties. The concept of shearing energy was introduced in this study to reveal the driving force for emulsification. The relationship between shearing stress in the flow field and interfacial tension (IFT) was established, and the correlation between shearing energy and interfacial Gibbs free energy was developed. The potential of the developed correlation model was validated using experimental and field data on emulsification behavior. It was also shown how droplet deformation could be predicted from a random deformation degree and orientation angle. The results indicated that shearing energy, as the energy produced by shearing stress working in the flow field, is the driving force activating the emulsification behavior. The deformation degree and orientation angle of the dispersed-phase droplet are associated with the interfacial properties, the rheological properties and the degree of turbulence experienced. The correlation between shearing stress and IFT can be quantified if droplet deformation degree vs. droplet orientation angle data are available. When the water cut is close to the inversion point of the waxy crude oil emulsion, the interfacial Gibbs free energy change decreased and the shearing energy increased. This feature is also presented in the special regions where

  18. The thermodynamic approach to boron chemical vapour deposition based on a computer minimization of the total Gibbs free energy

    International Nuclear Information System (INIS)

    Naslain, R.; Thebault, J.; Hagenmuller, P.; Bernard, C.

    1979-01-01

    A thermodynamic approach based on the minimization of the total Gibbs free energy of the system is used to study the chemical vapour deposition (CVD) of boron from BCl₃-H₂ or BBr₃-H₂ mixtures on various types of substrates (at 1000 < T < 1900 K and 1 atm). In this approach it is assumed that states close to equilibrium are reached in the boron CVD apparatus. (Auth.)

  19. Size and shape dependent Gibbs free energy and phase stability of titanium and zirconium nanoparticles

    International Nuclear Information System (INIS)

    Xiong Shiyun; Qi Weihong; Huang Baiyun; Wang Mingpu; Li Yejun

    2010-01-01

    The Debye model of Helmholtz free energy for bulk material is generalized to Gibbs free energy (GFE) model for nanomaterial, while a shape factor is introduced to characterize the shape effect on GFE. The structural transitions of Ti and Zr nanoparticles are predicted based on GFE. It is further found that GFE decreases with the shape factor and increases with decreasing of the particle size. The critical size of structural transformation for nanoparticles goes up as temperature increases in the absence of change in shape factor. For specified temperature, the critical size climbs up with the increase of shape factor. The present predictions agree well with experiment values.

  20. Standard enthalpy, entropy and Gibbs free energy of formation of «A» type carbonate phosphocalcium hydroxyapatites

    International Nuclear Information System (INIS)

    Jebri, Sonia; Khattech, Ismail; Jemal, Mohamed

    2017-01-01

    Highlights: • A-type carbonate hydroxyapatites with 0 ⩽ x ⩽ 1 were prepared and characterized by XRD, IR spectroscopy and CHN analysis. • The heat of solution was measured in 9 wt% HNO₃ using an isoperibol calorimeter. • The standard enthalpy of formation was determined by a thermochemical cycle. • The Gibbs free energy was deduced by estimating the standard entropy of formation. • Carbonation increases the stability up to x = 0.6 mol. - Abstract: «A» type carbonate phosphocalcium hydroxyapatites having the general formula Ca₁₀(PO₄)₆(OH)₂₋₂ₓ(CO₃)ₓ with 0 ⩽ x ⩽ 1 were prepared by solid-gas reaction in the temperature range 700–1000 °C. The obtained materials were characterized by X-ray diffraction and infrared spectroscopy. The carbonate content was determined by C-H-N analysis. The heat of solution of these products was measured at T = 298 K in 9 wt% nitric acid solution using an isoperibol calorimeter. A thermochemical cycle was proposed and complementary experiments were performed in order to obtain the standard enthalpies of formation of these phosphates. The results were compared to those previously obtained for apatites containing strontium and barium and show a decrease with the carbonate amount introduced into the lattice. This quantity becomes more negative as the ratio of substitution increases. Estimation of the entropy of formation allowed the determination of the standard Gibbs free energy of formation of these compounds. The study showed that the substitution of hydroxyl by carbonate ions contributes to the stabilisation of the apatite structure.

  1. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples

    Directory of Open Access Journals (Sweden)

    Stéphanie Simon

    2015-11-01

    Full Text Available Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A–G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as “category A” bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. To analyze the samples qualitatively and quantitatively, the first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain, rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  2. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples.

    Science.gov (United States)

    Simon, Stéphanie; Fiebig, Uwe; Liu, Yvonne; Tierney, Rob; Dano, Julie; Worbs, Sylvia; Endermann, Tanja; Nevers, Marie-Claire; Volland, Hervé; Sesardic, Dorothea; Dorner, Martin B

    2015-11-26

    Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A-G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as "category A" bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. To analyze the samples qualitatively and quantitatively, the first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain, rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  3. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    Science.gov (United States)

    Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes

    2017-01-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...
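The design question here — how well a discrete sampling schedule recovers a model's cumulative flux — can be illustrated with a synthetic series. The sketch below stands in for the agroecosystem model output with a smooth seasonal curve and uses simple trapezoidal interpolation; it illustrates the evaluation idea only, not the authors' method.

```python
import math

def cumulative_flux(daily):
    """'True' annual cumulative flux: the sum of all daily fluxes."""
    return sum(daily)

def estimate_from_schedule(daily, sample_days):
    """Estimate the annual cumulative flux from discrete chamber
    measurements by trapezoidal interpolation between sampling days."""
    days = sorted(sample_days)
    total = 0.0
    for d0, d1 in zip(days, days[1:]):
        total += 0.5 * (daily[d0] + daily[d1]) * (d1 - d0)
    return total

# Synthetic smooth flux series standing in for simulated model output:
daily = [1.0 + math.sin(2 * math.pi * d / 365) for d in range(365)]
true = cumulative_flux(daily)
weekly = estimate_from_schedule(daily, range(0, 365, 7))
# A weekly schedule recovers this smooth series to within a few percent:
assert abs(weekly - true) / true < 0.05
```

Repeating the comparison across many simulated weather realizations, and across sparser or event-triggered schedules, yields the known uncertainty level the abstract aims for.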

  4. Evaluating sampling strategies for larval cisco (Coregonus artedi)

    Science.gov (United States)

    Myers, J.T.; Stockwell, J.D.; Yule, D.L.; Black, J.A.

    2008-01-01

    To improve our ability to assess larval cisco (Coregonus artedi) populations in Lake Superior, we conducted a study to compare several sampling strategies. First, we compared density estimates of larval cisco concurrently captured in surface waters with a 2 x 1-m paired neuston net and a 0.5-m (diameter) conical net. Density estimates obtained from the two gear types were not significantly different, suggesting that the conical net is a reasonable alternative to the more cumbersome and costly neuston net. Next, we assessed the effect of tow pattern (sinusoidal versus straight tows) to examine if propeller wash affected larval density. We found no effect of propeller wash on the catchability of larval cisco. Given the availability of global positioning systems, we recommend sampling larval cisco using straight tows to simplify protocols and facilitate straightforward measurements of volume filtered. Finally, we investigated potential trends in larval cisco density estimates by sampling four time periods during the light period of a day at individual sites. Our results indicate no significant trends in larval density estimates during the day. We conclude estimates of larval cisco density across space are not confounded by time at a daily timescale. Well-designed, cost effective surveys of larval cisco abundance will help to further our understanding of this important Great Lakes forage species.

  5. The Relationship of Dynamical Heterogeneity to the Adam-Gibbs and Random First-Order Transition Theories of Glass Formation

    OpenAIRE

    Starr, Francis W.; Douglas, Jack F.; Sastry, Srikanth

    2013-01-01

    We carefully examine common measures of dynamical heterogeneity for a model polymer melt and test how these scales compare with those hypothesized by the Adam and Gibbs (AG) and random first-order transition (RFOT) theories of relaxation in glass-forming liquids. To this end, we first analyze clusters of highly mobile particles, the string-like collective motion of these mobile particles, and clusters of relative low mobility. We show that the time scale of the high-mobility clusters and stri...

  6. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p …). The strategy promises to achieve the target area under the curve as part of precision dosing.

  7. A Gibbs potential expansion with a quantic system made up of a large number of particles; Un developpement du potentiel de Gibbs d'un systeme compose d'un grand nombre de particules

    Energy Technology Data Exchange (ETDEWEB)

    Bloch, Claude; Dominicis, Cyrano de [Commissariat a l' energie atomique et aux energies alternatives - CEA, Centre d' Etudes Nucleaires de Saclay, Gif-sur-Yvette (France)

    1959-07-01

Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero-density limit, it reduces to the Beth-Uhlenbeck expression of the second virial coefficient. For a system of fermions in the zero-temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground-state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation to an arbitrary temperature. Reprint of a paper published in Nuclear Physics 10, 1959, p. 181-196.

  8. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
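
The practical payoff of having the complete conditional as a mixture of double-truncated normals is that a Gibbs update reduces to two cheap draws: pick a mixture component, then invert the normal CDF on a restricted interval. A minimal sketch (the component weights and bounds below are invented for illustration; in the paper they come from the degree-one spline model):

```python
import random
from statistics import NormalDist

def sample_truncated_normal(mu, sigma, lo, hi, rng):
    # Inverse-CDF draw from N(mu, sigma^2) restricted to [lo, hi]
    nd = NormalDist(mu, sigma)
    u = rng.uniform(nd.cdf(lo), nd.cdf(hi))
    return nd.inv_cdf(u)

def gibbs_covariate_update(components, rng):
    # components: (weight, mu, sigma, lo, hi) tuples describing the mixture of
    # double-truncated normals; one Gibbs step picks a component, then a value
    weights = [w for w, *_ in components]
    _, mu, sigma, lo, hi = rng.choices(components, weights=weights, k=1)[0]
    return sample_truncated_normal(mu, sigma, lo, hi, rng)

rng = random.Random(1)
comps = [(0.7, 0.0, 1.0, -1.0, 0.5), (0.3, 2.0, 0.5, 1.5, 3.0)]
draws = [gibbs_covariate_update(comps, rng) for _ in range(2000)]
```

Every draw is guaranteed to land inside one component's truncation interval, and no accept/reject loop or Metropolis tuning is involved, which is the efficiency argument the abstract makes.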

  9. Comparison of active and passive sampling strategies for the monitoring of pesticide contamination in streams

    Science.gov (United States)

    Assoumani, Azziz; Margoum, Christelle; Guillemain, Céline; Coquery, Marina

    2014-05-01

The monitoring of water bodies for organic contaminants, and the determination of reliable estimates of concentrations, are challenging issues, in particular for the implementation of the Water Framework Directive. Several strategies can be applied to collect water samples for the determination of their contamination level. Grab sampling is fast, easy, and requires little logistical and analytical effort in the case of low-frequency sampling campaigns. However, this technique lacks representativeness for streams with high variations of contaminant concentrations, such as pesticides in rivers located in small agricultural watersheds. Increasing the representativeness of this sampling strategy implies greater logistical needs and higher analytical costs. Average automated sampling is therefore a solution, as it allows, in a single analysis, the determination of more accurate and more relevant estimates of concentrations. Two types of automatic sampling can be performed: time-related sampling allows the assessment of average concentrations, whereas flow-dependent sampling leads to average flux concentrations. However, the purchase and maintenance of automatic samplers are quite expensive. Passive sampling has recently been developed as an alternative to grab or average automated sampling, to obtain, at lower cost, more realistic estimates of the average concentrations of contaminants in streams. These devices allow the passive accumulation of contaminants from large volumes of water, resulting in ultratrace-level detection and smoothed integrative sampling over periods ranging from days to weeks. They allow the determination of time-weighted average (TWA) concentrations of the dissolved fraction of target contaminants, but they need to be calibrated in controlled conditions prior to field applications. In other words, the kinetics of the uptake of the target contaminants into the sampler must be studied in order to determine the corresponding sampling rate

  10. Gibbs energy of the resolvation of glycylglycine and its anion in aqueous solutions of dimethylsulfoxide at 298.15 K

    Science.gov (United States)

    Naumov, V. V.; Isaeva, V. A.; Kuzina, E. N.; Sharnin, V. A.

    2012-12-01

    Gibbs energies for the transfer of glycylglycine and glycylglycinate ions from water to water-dimethylsulfoxide solvents are determined from the interface distribution of substances between immiscible phases in the composition range of 0.00 to 0.20 molar fractions of DMSO at 298.15 K. It is shown that with a rise in the concentration of nonaqueous components in solution, we observe the solvation of dipeptide and its anion, due mainly to the destabilization of the carboxyl group.

  11. Equilibrium statistical mechanics for self-gravitating systems: local ergodicity and extended Boltzmann-Gibbs/White-Narayan statistics

    Science.gov (United States)

    He, Ping

    2012-01-01

    The long-standing puzzle surrounding the statistical mechanics of self-gravitating systems has not yet been solved successfully. We formulate a systematic theoretical framework of entropy-based statistical mechanics for spherically symmetric collisionless self-gravitating systems. We use an approach that is very different from that of the conventional statistical mechanics of short-range interaction systems. We demonstrate that the equilibrium states of self-gravitating systems consist of both mechanical and statistical equilibria, with the former characterized by a series of velocity-moment equations and the latter by statistical equilibrium equations, which should be derived from the entropy principle. The velocity-moment equations of all orders are derived from the steady-state collisionless Boltzmann equation. We point out that the ergodicity is invalid for the whole self-gravitating system, but it can be re-established locally. Based on the local ergodicity, using Fermi-Dirac-like statistics, with the non-degenerate condition and the spatial independence of the local microstates, we rederive the Boltzmann-Gibbs entropy. This is consistent with the validity of the collisionless Boltzmann equation, and should be the correct entropy form for collisionless self-gravitating systems. Apart from the usual constraints of mass and energy conservation, we demonstrate that the series of moment or virialization equations must be included as additional constraints on the entropy functional when performing the variational calculus; this is an extension to the original prescription by White & Narayan. Any possible velocity distribution can be produced by the statistical-mechanical approach that we have developed with the extended Boltzmann-Gibbs/White-Narayan statistics. Finally, we discuss the questions of negative specific heat and ensemble inequivalence for self-gravitating systems.

  12. Sampling strategies to measure the prevalence of common recurrent infections in longitudinal studies

    Directory of Open Access Journals (Sweden)

    Luby Stephen P

    2010-08-01

Full Text Available Abstract. Background: Measuring recurrent infections such as diarrhoea or respiratory infections in epidemiological studies is a methodological challenge. Problems in measuring the incidence of recurrent infections include the episode definition, recall error, and the logistics of close follow-up. Longitudinal prevalence (LP), the proportion of time ill estimated by repeated prevalence measurements, is an alternative to incidence as a measure of recurrent infections. In contrast to incidence, which usually requires continuous sampling, LP can be measured at intervals. This study explored how many more participants are needed for infrequent sampling to achieve the same study power as frequent sampling. Methods: We developed a set of four empirical simulation models representing low- and high-risk settings with short or long episode durations. The model was used to evaluate different sampling strategies with different assumptions on recall period and recall error. Results: The model identified three major factors that influence sampling strategies: (1) the clustering of episodes in individuals; (2) the duration of episodes; (3) the positive correlation between an individual's disease incidence and episode duration. Intermittent sampling (e.g. 12 times per year) often requires only a slightly larger sample size compared to continuous sampling, especially in cluster-randomized trials. The collection of period prevalence data can lead to highly biased effect estimates if the exposure variable is associated with episode duration. To maximize study power, recall periods of 3 to 7 days may be preferable over shorter periods, even if this leads to inaccuracy in the prevalence estimates. Conclusion: Choosing the optimal approach to measure recurrent infections in epidemiological studies depends on the setting, the study objectives, study design and budget constraints. Sampling at intervals can contribute to making epidemiological studies and trials more efficient and valid.
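
The paper's central comparison can be sketched as a toy simulation: estimate longitudinal prevalence from daily versus monthly sampling of a cohort with clustered illness episodes. The rates, durations, and cohort size below are invented for illustration.

```python
import random

def simulate_illness(days, episode_rate, mean_duration, rng):
    # Daily 0/1 illness indicator: on each well day an episode starts with
    # probability episode_rate and lasts roughly mean_duration days
    ill = [0] * days
    t = 0
    while t < days:
        if rng.random() < episode_rate:
            dur = 1 + int(rng.expovariate(1.0 / mean_duration))
            for d in range(t, min(days, t + dur)):
                ill[d] = 1
            t += dur
        else:
            t += 1
    return ill

def longitudinal_prevalence(records, step):
    # Proportion of sampled person-days ill, sampling every `step` days
    sampled = [day for rec in records for day in rec[::step]]
    return sum(sampled) / len(sampled)

rng = random.Random(42)
cohort = [simulate_illness(360, 0.02, 4.0, rng) for _ in range(500)]
lp_daily = longitudinal_prevalence(cohort, 1)     # continuous sampling
lp_monthly = longitudinal_prevalence(cohort, 30)  # 12 visits per year
```

The monthly estimate is unbiased for LP and, because episodes are short relative to the sampling interval, its extra variance is modest, which mirrors the paper's finding that intermittent sampling often needs only a slightly larger sample size.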

  13. OUTPACE long duration stations: physical variability, context of biogeochemical sampling, and evaluation of sampling strategy

    Directory of Open Access Journals (Sweden)

    A. de Verneil

    2018-04-01

Full Text Available Research cruises to quantify biogeochemical fluxes in the ocean require taking measurements at stations lasting at least several days. A popular experimental design is the quasi-Lagrangian drifter, often mounted with in situ incubations or sediment traps that follow the flow of water over time. After initial drifter deployment, the ship tracks the drifter for continuing measurements that are supposed to represent the same water environment. An outstanding question is how best to determine whether this is true. During the Oligotrophy to UlTra-oligotrophy PACific Experiment (OUTPACE) cruise, from 18 February to 3 April 2015 in the western tropical South Pacific, three separate stations of long duration (five days) over the upper 500 m were conducted in this quasi-Lagrangian sampling scheme. Here we present physical data to provide context for these three stations and to assess whether the sampling strategy worked, i.e., that a single body of water was sampled. After analyzing tracer variability and local water circulation at each station, we identify water layers and times where the drifter risks encountering another body of water. While almost no realization of this sampling scheme will be truly Lagrangian, due to the presence of vertical shear, the depth-resolved observations during the three stations show most layers sampled sufficiently homogeneous physical environments during OUTPACE. By directly addressing the concerns raised by these quasi-Lagrangian sampling platforms, a protocol of best practices can begin to be formulated so that future research campaigns include the complementary datasets and analyses presented here to verify the appropriate use of the drifter platform.

  14. Comparison of sampling strategies for tobacco retailer inspections to maximize coverage in vulnerable areas and minimize cost.

    Science.gov (United States)

    Lee, Joseph G L; Shook-Sa, Bonnie E; Bowling, J Michael; Ribisl, Kurt M

    2017-06-23

In the United States, tens of thousands of inspections of tobacco retailers are conducted each year. Various sampling choices can reduce travel costs, emphasize enforcement in areas with greater non-compliance, and allow for comparability between states and over time. We sought to develop a model sampling strategy for state tobacco retailer inspections. Using a 2014 list of 10,161 North Carolina tobacco retailers, we compared results from simple random sampling; stratified sampling clustered at the ZIP code level; and stratified sampling clustered at the census tract level. We conducted a simulation of repeated sampling and compared the approaches on their levels of precision, coverage, and retailer dispersion. While maintaining an adequate design effect and statistical precision appropriate for a public health enforcement program, both the stratified, clustered ZIP- and tract-based approaches were feasible. Both ZIP and tract strategies yielded improvements over simple random sampling, with relative improvements, respectively, of average distance between retailers (reduced 5.0% and 1.9%), percent Black residents in sampled neighborhoods (increased 17.2% and 32.6%), percent Hispanic residents in sampled neighborhoods (reduced 2.2% and increased 18.3%), percentage of sampled retailers located near schools (increased 61.3% and 37.5%), and poverty rate in sampled neighborhoods (increased 14.0% and 38.2%). States can make retailer inspections more efficient and targeted with stratified, clustered sampling. Use of statistically appropriate sampling strategies like these should be considered by states, researchers, and the Food and Drug Administration to improve program impact and allow for comparisons over time and across states. The authors present a model tobacco retailer sampling strategy for promoting compliance and reducing costs that could be used by U.S. states and the Food and Drug Administration (FDA). The design is feasible to implement in North Carolina.
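
The stratified, clustered design the authors compare can be sketched directly: stratify a retailer list (here by a synthetic stratum code standing in for neighborhood characteristics), treat tracts as clusters, and draw clusters first, then retailers within each chosen cluster. The data schema and counts are hypothetical.

```python
import random
from collections import defaultdict

def stratified_cluster_sample(retailers, n_clusters_per_stratum, n_per_cluster, rng):
    # retailers: dicts with 'id', 'tract', 'stratum' keys (hypothetical schema)
    by_stratum = defaultdict(lambda: defaultdict(list))
    for r in retailers:
        by_stratum[r['stratum']][r['tract']].append(r)
    sample = []
    for tracts in by_stratum.values():
        # Stage 1: sample clusters (tracts) within the stratum
        chosen = rng.sample(sorted(tracts), min(n_clusters_per_stratum, len(tracts)))
        for tract in chosen:
            # Stage 2: sample retailers within each chosen cluster
            units = tracts[tract]
            sample.extend(rng.sample(units, min(n_per_cluster, len(units))))
    return sample

rng = random.Random(7)
retailers = [{'id': i, 'tract': i % 40, 'stratum': i % 3} for i in range(600)]
s = stratified_cluster_sample(retailers, 5, 4, rng)
```

Clustering keeps the sampled retailers geographically concentrated (lower travel cost), while stratification guarantees every stratum is represented, which is the trade-off the simulation in the paper quantifies.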

  15. Size Fluctuations of Near Critical Nuclei and Gibbs Free Energy for Nucleation of BDA on Cu(001)

    Science.gov (United States)

    Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J. W.; Poelsema, Bene

    2012-07-01

    We present a low-energy electron microscopy study of nucleation and growth of BDA on Cu(001) at low supersaturation. At sufficiently high coverage, a dilute BDA phase coexists with c(8×8) crystallites. The real-time microscopic information allows a direct visualization of near-critical nuclei, determination of the supersaturation and the line tension of the crystallites, and, thus, derivation of the Gibbs free energy for nucleation. The resulting critical nucleus size nicely agrees with the measured value. Nuclei up to 4-6 times larger still decay with finite probability, urging reconsideration of the classic perception of a critical nucleus.
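
The quantities measured here, supersaturation and line tension, combine into the nucleation barrier via the standard two-dimensional classical-nucleation form; this is a textbook sketch, not the authors' exact parameterization:

```latex
\Delta G(n) = -n\,\Delta\mu + c\,\gamma\sqrt{n},\qquad
n^{*} = \left(\frac{c\,\gamma}{2\,\Delta\mu}\right)^{2},\qquad
\Delta G^{*} = \Delta G(n^{*}) = \frac{c^{2}\gamma^{2}}{4\,\Delta\mu}
```

Here Δμ is the supersaturation per molecule, γ the line tension, and c a geometric factor relating the island perimeter to √n. Because ΔG(n) is flat near its maximum at n*, nuclei several times larger than n* can still decay with finite probability, consistent with the observation reported in this record.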

  16. On the temperature dependence of the Adam-Gibbs equation around the crossover region in the glass transition

    Science.gov (United States)

    Duque, Michel; Andraca, Adriana; Goldstein, Patricia; del Castillo, Luis Felipe

    2018-04-01

The Adam-Gibbs equation has been used for more than five decades, and a question remains unanswered about the temperature dependence of the chemical potential it includes. It is now well known that the behavior of fragile glass formers depends on the temperature region in which they are studied: transport coefficients change due to the appearance of heterogeneity in the liquid as it is supercooled. Using the different forms of the logarithmic shift factor and the form of the configurational entropy, we evaluate this temperature dependence and discuss our results.
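
For reference, the Adam-Gibbs relation under discussion, in a standard notation (not necessarily the authors' own symbols):

```latex
\tau(T) = \tau_{0}\,\exp\!\left(\frac{C}{T\,S_{c}(T)}\right),\qquad
C \propto \Delta\mu
```

If the configurational entropy vanishes linearly at a Kauzmann temperature, S_c(T) = K(1 - T_K/T), this reduces to the Vogel-Fulcher-Tammann form τ = τ₀ exp[D T_K/(T - T_K)]; a temperature-dependent chemical potential Δμ modifies this reduction around the crossover region, which is the dependence the authors evaluate.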

  17. Sampling strategies for tropical forest nutrient cycling studies: a case study in São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    G. Sparovek

    1997-12-01

Full Text Available The precise sampling of soil, biological or microclimatic attributes in tropical forests, which are characterized by a high diversity of species and complex spatial variability, is a difficult task. We found few basic studies to guide sampling procedures. The objective of this study was to define a sampling strategy and data analysis for some parameters frequently used in nutrient cycling studies, i.e., litter amount, total nutrient amounts in litter and its composition (Ca, Mg, K, N and P), and soil attributes at three depths (organic matter, P content, cation exchange capacity and base saturation). A natural remnant forest in the west of São Paulo State (Brazil) was selected as the study area, and samples were collected in July 1989. The total amount of litter and its total nutrient amounts had a high spatially independent variance. Conversely, the variance of litter composition was lower, and the spatial dependency was peculiar to each nutrient. The sampling strategy for the estimation of litter amounts and the amount of nutrients in litter should differ from the sampling strategy for nutrient composition. For the estimation of litter amounts and the amount of nutrients in litter (related to quantity), a large number of randomly distributed determinations are needed. Conversely, for the estimation of litter nutrient composition (related to quality), a smaller number of spatially located samples should be analyzed. The determination of sampling for soil attributes differed according to depth. Overall, surface samples (0-5 cm) showed high short-distance spatially dependent variance, whereas subsurface samples exhibited spatial dependency at longer distances. Short transects with a sampling interval of 5-10 m are recommended for surface sampling. Subsurface samples must also be spatially located, but with transects or grids with longer distances between sampling points over the entire area. Composite soil samples would not provide a complete

  18. A procedure to compute equilibrium concentrations in multicomponent systems by Gibbs energy minimization on spreadsheets

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; Heck, Nestor Cesar

    2003-01-01

Equilibrium concentrations are traditionally calculated with the help of equilibrium-constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves the solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution of the nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging effort. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment has several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as suitable for computing equilibrium concentrations as for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr2O3(s) and FeO·Cr2O3(s). The atmosphere consists of O2, CO and CO2 constituents. The liquid iron
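
The same direct-minimization idea the authors implement with Excel's Solver can be sketched in a few lines for the simplest possible case, an ideal A ⇌ B reaction: minimize the total Gibbs energy over the extent of reaction and check against the equilibrium-constant answer. The temperature and ΔG° values are arbitrary illustrations, not taken from the Fe-Cr-O-C-Ni example.

```python
import math

R, T = 8.314, 1000.0  # gas constant in J/(mol K), temperature in K

def gibbs(xi, dG0=-5000.0):
    # Total Gibbs energy (relative to pure A) for ideal A <-> B at extent xi;
    # dG0 is a hypothetical standard reaction Gibbs energy in J/mol
    nA, nB = 1.0 - xi, xi
    mixing = sum(n * math.log(n) for n in (nA, nB) if n > 0)
    return nB * dG0 + R * T * mixing

def minimize_scalar(f, lo, hi, tol=1e-10):
    # Golden-section search for the minimum of a unimodal function on [lo, hi]
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

xi_min = minimize_scalar(gibbs, 1e-9, 1.0 - 1e-9)
K = math.exp(5000.0 / (R * T))   # analytic equilibrium constant exp(-dG0 / RT)
xi_exact = K / (1.0 + K)
```

Setting dG(ξ)/dξ = 0 recovers ΔG° + RT ln[ξ/(1-ξ)] = 0, so the minimizer coincides with the equilibrium-constant result; the spreadsheet approach in the paper does the same minimization for many species with elemental mass-balance constraints.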

  19. Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.

    Science.gov (United States)

    Niioka, Takenori

    2011-03-01

Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotype had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples for the sake of patients' acceptance. This article reviews the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 × C(6h) + 2638.03, AUC(LPZ) = 12.32 × C(6h) + 3276.09 and AUC(RPZ) = 1.39 × C(3h) + 7.17 × C(6h) + 344.14, respectively. In order to optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetics of its enantiomers. The best prediction formula using the fewest sampling points (one point) was AUC(racemic LPZ) = 6.5 × C(3h) of (R)-LPZ + 13.7 × C(3h) of (S)-LPZ - 9917.3 × G1 - 14387.2 × G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). Those strategies, plasma concentration monitoring at one or two time points, might be more suitable for AUC estimation than reference to CYP2C19 genotype, particularly in the case of coadministration of CYP mediators.
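
The review's prediction formulas translate directly into code. Units follow the original report (presumably ng/mL for concentrations and ng·h/mL for AUC, an assumption the abstract does not state), and the genotype labels are hypothetical strings used only for the indicator coding:

```python
def auc_opz(c6h):
    # Omeprazole: AUC = 9.24 x C(6h) + 2638.03
    return 9.24 * c6h + 2638.03

def auc_lpz(c6h):
    # Lansoprazole: AUC = 12.32 x C(6h) + 3276.09
    return 12.32 * c6h + 3276.09

def auc_rpz(c3h, c6h):
    # Rabeprazole: AUC = 1.39 x C(3h) + 7.17 x C(6h) + 344.14
    return 1.39 * c3h + 7.17 * c6h + 344.14

def auc_racemic_lpz(c3h_r, c3h_s, genotype):
    # One-point enantiomer formula with CYP2C19 genotype indicators:
    # G1 = 1 for homozygous extensive metabolizers, G2 = 1 for heterozygous,
    # both 0 otherwise ('homEM'/'hetEM' are made-up label strings)
    g1 = 1 if genotype == 'homEM' else 0
    g2 = 1 if genotype == 'hetEM' else 0
    return 6.5 * c3h_r + 13.7 * c3h_s - 9917.3 * g1 - 14387.2 * g2 + 7103.6
```

These are plain linear regressions on one or two sampled concentrations, which is why one or two blood draws can substitute for a full concentration-time profile.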

  20. Origin of the correlation between the standard Gibbs energies of ion transfer from water to a hydrophobic ionic liquid and to a molecular solvent

    Czech Academy of Sciences Publication Activity Database

    Langmaier, Jan; Záliš, Stanislav; Samec, Zdeněk; Bovtun, Viktor; Kempa, Martin

    2013-01-01

Roč. 87, JAN 2013 (2013), s. 591-598 ISSN 0013-4686 R&D Projects: GA ČR GAP206/11/0707 Institutional support: RVO:61388955; RVO:68378271 Keywords: ionic liquids * cyclic voltammetry * standard Gibbs energy of ion transfer Subject RIV: CG - Electrochemistry Impact factor: 4.086, year: 2013

  1. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using the Illumina platform.

    Science.gov (United States)

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. 
Alternatively, we discuss how

  3. Sample design and gamma-ray counting strategy of neutron activation system for triton burnup measurements in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jungmin [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Cheon, Mun Seong [ITER Korea, National Fusion Research Institute, Daejeon (Korea, Republic of); Chung, Kyoung-Jae, E-mail: jkjlsh1@snu.ac.kr [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Hwang, Y.S. [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of)

    2016-11-01

Highlights: • Sample design for triton burnup ratio measurements is carried out. • Samples for 14.1 MeV neutron measurements are selected for KSTAR. • Si and Cu are the most suitable materials for d-t neutron measurements. • Appropriate γ-ray counting strategies for each selected sample are established. - Abstract: For the purpose of triton burnup measurements in Korea Superconducting Tokamak Advanced Research (KSTAR) deuterium plasmas, appropriate neutron activation system (NAS) samples for 14.1 MeV d-t neutron measurements have been designed and a gamma-ray counting strategy has been established. Neutronics calculations are performed with the MCNP5 neutron transport code for the KSTAR neutral-beam-heated deuterium plasma discharges. Based on those calculations and the assumed d-t neutron yield, the activities induced by d-t neutrons are estimated with the inventory code FISPACT-2007 for candidate sample materials: Si, Cu, Al, Fe, Nb, Co, Ti, and Ni. It is found that Si, Cu, Al, and Fe are suitable for the KSTAR NAS in terms of the minimum detectable activity (MDA) calculated based on the standard deviation of blank measurements. Considering background gamma-rays radiated from surrounding structures activated by thermalized fusion neutrons, an appropriate gamma-ray counting strategy for each selected sample is established.

  4. Females' sampling strategy to comparatively evaluate prospective mates in the peacock blenny Salaria pavo

    Science.gov (United States)

    Locatello, Lisa; Rasotto, Maria B.

    2017-08-01

Emerging evidence suggests the occurrence of comparative decision-making processes in mate choice, questioning the traditional idea of female choice based on rules of absolute preference. In such a scenario, females are expected to use a typical best-of-n sampling strategy, being able to recall previously sampled males based on memory of their quality and location. Accordingly, the quality of the preferred mate is expected to be unrelated to both the number and the sequence of female visits. We found support for these predictions in the peacock blenny, Salaria pavo, a fish where females have the opportunity to evaluate the attractiveness of many males in a short time period and in a restricted spatial range. Indeed, even considering the variability in preference among females, most of them returned to previously sampled males for further evaluations; thus, the preferred male was not the last one in the sequence of visited males. Moreover, there was no relationship between the attractiveness of the preferred male and the number of further visits assigned to the other males. Our results suggest the occurrence of a best-of-n mate sampling strategy in the peacock blenny.
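
The contrast the authors draw, comparative best-of-n sampling with memory versus absolute-preference sequential choice, can be sketched as a simulation. The quality values, pool size, sample size, and threshold below are arbitrary illustrations.

```python
import random

def best_of_n(qualities, rng, n):
    # Best-of-n: visit n males, remember each one's quality and location,
    # return to the best; the chosen male need not be the last one visited
    visited = rng.sample(range(len(qualities)), n)
    return max(visited, key=lambda i: qualities[i])

def first_above_threshold(qualities, rng, threshold):
    # Contrast: an absolute-preference rule with no return visits
    order = rng.sample(range(len(qualities)), len(qualities))
    for i in order:
        if qualities[i] >= threshold:
            return i
    return order[-1]

rng = random.Random(3)
males = [rng.random() for _ in range(20)]   # hypothetical male qualities in [0, 1)
trials = 2000
bo = sum(males[best_of_n(males, rng, 5)] for _ in range(trials)) / trials
th = sum(males[first_above_threshold(males, rng, 0.5)] for _ in range(trials)) / trials
```

Under the best-of-n rule the identity of the chosen male depends only on the sampled set, not on visiting order, which is why the paper predicts no relationship between a preferred male's quality and his position in the visit sequence.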

  5. Chemical Disequilibria and Sources of Gibbs Free Energy Inside Enceladus

    Science.gov (United States)

    Zolotov, M. Y.

    2010-12-01

    Non-photosynthetic organisms use chemical disequilibria in the environment to gain metabolic energy from enzyme-catalyzed oxidation-reduction (redox) reactions. The presence of carbon dioxide, ammonia, formaldehyde, methanol, methane and other hydrocarbons in the eruptive plume of Enceladus [1] implies diverse redox disequilibria in the interior. In the history of the moon, redox disequilibria could have been activated through melting of a volatile-rich ice and subsequent water-rock-organic interactions. Previous and/or present aqueous processes are consistent with the detection of NaCl and Na2CO3/NaHCO3-bearing grains emitted from Enceladus [2]. A low K/Na ratio in the grains [2] and a low upper limit for N2 in the plume [3] indicate low-temperature aqueous processes; at such temperatures, kinetically inhibited redox reactions could have been mediated by enzymes if organisms were (are) present. The redox conditions in aqueous systems and the amounts of available Gibbs free energy should have been affected by the production, consumption and escape of hydrogen. Aqueous oxidation of minerals (Fe-Ni metal, Fe-Ni phosphides, etc.) accreted on Enceladus should have led to H2 production, which is consistent with H2 detection in the plume [1]. Numerical evaluations based on concentrations of plume gases [1] reveal sufficient energy sources available to support metabolically diverse life at a wide range of activities (a) of dissolved H2 (log aH2 from 0 to -10). Formaldehyde, carbon dioxide [c.f. 4], HCN (if it is present), methanol, acetylene and other hydrocarbons have the potential to react with H2 to form methane. Aqueous hydrogenations of acetylene, HCN and formaldehyde to produce methanol are energetically favorable as well. Both the favorable hydrogenation and the hydration of HCN lead to the formation of ammonia. Condensed organic species could also participate in redox reactions. Methane and ammonia are the final products of these putative redox transformations. Sulfates may not have formed in cold and/or short-term aqueous environments with limited H2 escape. In contrast to

  6. Reduction efficiency prediction of CENIBRA's recovery boiler by direct minimization of gibbs free energy

    Directory of Open Access Journals (Sweden)

    W. L. Silva

    2008-09-01

    Full Text Available The reduction efficiency is an important variable during the black liquor burning process in the Kraft recovery boiler. Its value is obtained through slow experimental routines, and the delay of this measurement disturbs the customary control of the pulp and paper process. This paper describes an optimization approach for determining the reduction efficiency in the furnace bottom of the recovery boiler, based on the minimization of the Gibbs free energy. The industrial data used in this study were obtained directly from CENIBRA's data acquisition system. The resulting approach is able to predict the steady-state behavior of the chemical composition of the recovery boiler furnace, especially the reduction efficiency, when different operational conditions are used. This result confirms the potential of this approach in the analysis of the daily operation of the recovery boiler.
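Direct Gibbs free energy minimization of the kind used above can be illustrated on a toy two-species equilibrium A ⇌ B in an ideal mixture, minimizing G(x) = x_A·μ°_A + x_B·μ°_B + RT(x_A ln x_A + x_B ln x_B). The sketch below uses made-up chemical potentials, not black-liquor chemistry, and a simple grid search in place of an industrial solver:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def gibbs(x_b, mu_a, mu_b, temp):
    """Total Gibbs energy per mole of an ideal binary A/B mixture."""
    x_a = 1.0 - x_b
    g = x_a * mu_a + x_b * mu_b
    for x in (x_a, x_b):
        if x > 0.0:
            g += R * temp * x * math.log(x)  # ideal mixing term
    return g

def minimize_gibbs(mu_a, mu_b, temp, n=100_000):
    """Grid search for the composition minimizing G (toy replacement
    for a constrained nonlinear solver)."""
    best_x, best_g = 0.5, float("inf")
    for i in range(1, n):
        x = i / n
        g = gibbs(x, mu_a, mu_b, temp)
        if g < best_g:
            best_x, best_g = x, g
    return best_x

# Equilibrium mole fraction of B at 1000 K for mu_B - mu_A = -5000 J/mol:
x_eq = minimize_gibbs(0.0, -5000.0, 1000.0)
```

The minimizer recovers the analytic equilibrium x_B = K/(1+K) with K = exp(-Δμ/RT), which is the check any Gibbs-minimization code should pass before being applied to multicomponent furnace chemistry.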

  7. Comment on "Inference with minimal Gibbs free energy in information field theory".

    Science.gov (United States)

    Iatsenko, D; Stefanovska, A; McClintock, P V E

    2012-03-01

    Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.

  8. Thermodynamic description of the Al–Mg–Si system using a new formulation for the temperature dependence of the excess Gibbs energy

    International Nuclear Information System (INIS)

    Tang, Ying; Du, Yong; Zhang, Lijun; Yuan, Xiaoming; Kaptay, George

    2012-01-01

    Highlights: ► An exponential formulation to describe the ternary excess Gibbs energy is proposed. ► A theoretical analysis is performed to verify phase stability under the new formulation. ► The Al–Mg–Si system and its boundary binaries have been assessed with the new formulation. ► The present calculations for the Al–Mg–Si system are more reasonable than previous ones. - Abstract: An exponential formulation is proposed to replace the linear interaction parameter in the Redlich–Kister (R–K) polynomial for the excess Gibbs energy of a ternary solution phase. Theoretical analysis indicates that the proposed exponential formulation not only avoids the artificial miscibility gap at high temperatures but also describes the ternary system well. A thermodynamic description of the Al–Mg–Si system and its boundary binaries was then performed using both the R–K linear and the exponential formulations. The inverted miscibility gaps occurring in the Mg–Si and Al–Mg–Si systems at high temperatures due to the use of R–K linear polynomials are avoided by the new formulation. Moreover, the thermodynamic properties predicted with the new formulation confirm the general thermodynamic expectation that a solution phase approaches the ideal solution at infinite temperature, which cannot be described with the traditional R–K linear polynomials.
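The contrast between the two temperature dependencies can be seen by evaluating a binary regular-solution excess Gibbs energy G_ex = x(1-x)·L(T) under both forms. The parameter values below are illustrative placeholders, not the assessed Al–Mg–Si parameters:

```python
import math

def excess_gibbs(x, interaction_l):
    """Regular-solution excess Gibbs energy for a binary phase, J/mol."""
    return x * (1.0 - x) * interaction_l

def l_linear(temp, a=-20000.0, b=10.0):
    # Redlich-Kister linear parameter L = a + b*T: its magnitude grows
    # without bound in T, which can produce an artificial inverted
    # miscibility gap at high temperature.
    return a + b * temp

def l_exponential(temp, a=-20000.0, tau=3000.0):
    # Exponential form L = a*exp(-T/tau): decays toward zero, so the
    # phase approaches ideal-solution behavior as T -> infinity.
    return a * math.exp(-temp / tau)

# At an (unphysically) high temperature the two forms diverge qualitatively:
g_lin_hot = excess_gibbs(0.5, l_linear(10000.0))       # large and positive
g_exp_hot = excess_gibbs(0.5, l_exponential(10000.0))  # near zero
```

The positive high-T excess energy of the linear form is exactly the demixing tendency behind the inverted miscibility gap criticized in the abstract, while the exponential form vanishes, consistent with the ideal-solution limit.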

  9. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    Science.gov (United States)

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work is to reduce the sampling cost required for estimating the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual predictive checks were used as criteria. The results of jack-knife validation showed that 10 (25.0%) of the 40 LSS models based on the regression analysis were not within an APE of 15% when using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were thus developed and validated for estimating the AUC0-60t of gliclazide. This study indicates that the 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model, and that 12, 6, 4 and 2 h after administration are the key sampling times. The combinations (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical applications, according to requirements.
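At its core, a regression-based LSS reduces to fitting AUC as a linear function of a few concentration values and scoring the fit by prediction error. A one-point sketch on simulated data follows; the subject data and the 12 h time point's relationship to AUC are invented for illustration, not the gliclazide dataset:

```python
import random

random.seed(1)

# Simulated subjects: a single "12 h" concentration (c12) made to track
# the true AUC0-60 with noise, mimicking a one-point LSS.
n = 40
true_auc = [random.uniform(50.0, 200.0) for _ in range(n)]
c12 = [0.01 * auc + random.gauss(0.0, 0.05) for auc in true_auc]

# Ordinary least squares for AUC ~ b0 + b1 * c12 (closed form).
mean_x = sum(c12) / n
mean_y = sum(true_auc) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(c12, true_auc))
         / sum((x - mean_x) ** 2 for x in c12))
intercept = mean_y - slope * mean_x

def predict_auc(conc):
    return intercept + slope * conc

# Absolute prediction error (APE, %) per subject, the validation metric
# named in the abstract.
ape = [abs(predict_auc(x) - y) / y * 100.0 for x, y in zip(c12, true_auc)]
mean_ape = sum(ape) / n
```

A multi-point LSS extends the same idea with several concentration predictors, and jack-knife validation refits the model leaving one subject out at a time.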

  10. Sample preparation composite and replicate strategy case studies for assay of solid oral drug products.

    Science.gov (United States)

    Nickerson, Beverly; Harrington, Brent; Li, Fasheng; Guo, Michele Xuemei

    2017-11-30

    Drug product assay is one of several tests required for new drug products to ensure the quality of the product at release and throughout the life cycle of the product. Drug product assay testing is typically performed by preparing a composite sample of multiple dosage units to obtain an assay value representative of the batch. In some cases replicate composite samples may be prepared and the reportable assay value is the average value of all the replicates. In previously published work by Harrington et al. (2014) [5], a sample preparation composite and replicate strategy for assay was developed to provide a systematic approach which accounts for variability due to the analytical method and dosage form with a standard error of the potency assay criteria based on compendia and regulatory requirements. In this work, this sample preparation composite and replicate strategy for assay is applied to several case studies to demonstrate the utility of this approach and its application at various stages of pharmaceutical drug product development.

  11. Bayesian Estimation of Fish Disease Prevalence from Pooled Samples Incorporating Sensitivity and Specificity

    Science.gov (United States)

    Williams, Christopher J.; Moffitt, Christine M.

    2003-03-01

    An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
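The pooled-testing model behind this analysis is compact: with prevalence p, pool size k, sensitivity Se and specificity Sp, a pool tests positive with probability Se·(1−(1−p)^k) + (1−Sp)·(1−p)^k. The sketch below approximates the posterior of p on a grid with a flat prior rather than by Gibbs sampling, and all counts are illustrative, not the paper's data:

```python
import math

def pool_positive_prob(p, k, se, sp):
    """P(a pool of k fish tests positive) given prevalence p,
    assay sensitivity se and specificity sp."""
    all_negative = (1.0 - p) ** k
    return se * (1.0 - all_negative) + (1.0 - sp) * all_negative

def posterior_grid(n_pools, n_positive, k, se, sp, grid=2000):
    """Binomial likelihood x flat prior, evaluated on a grid of
    prevalence values and normalized to sum to 1."""
    ps = [(i + 0.5) / grid for i in range(grid)]
    logliks = []
    for p in ps:
        theta = pool_positive_prob(p, k, se, sp)
        logliks.append(n_positive * math.log(theta)
                       + (n_pools - n_positive) * math.log(1.0 - theta))
    m = max(logliks)  # subtract max for numerical stability
    weights = [math.exp(l - m) for l in logliks]
    total = sum(weights)
    return ps, [w / total for w in weights]

# Hypothetical survey: 50 pools of 5 fish, 12 positive pools, imperfect assay.
ps, post = posterior_grid(50, 12, 5, se=0.95, sp=0.98)
post_mean = sum(p * w for p, w in zip(ps, post))
```

With perfect sensitivity and specificity this collapses to the classical pooled-prevalence estimator; the Gibbs sampler in the paper serves the same purpose for richer hierarchical priors, where a grid becomes impractical.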

  12. A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste

    Energy Technology Data Exchange (ETDEWEB)

    Bilodeau, M.; Lastra, R.; Bouzoubaa, N. [Natural Resources Canada, Ottawa, ON (Canada); Chapman, M. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2011-07-01

    Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. 
Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while
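The Bond work index quoted for the surrogate waste relates specific grinding energy to feed and product sizes through Bond's law, E = 10·Wi·(1/√P80 − 1/√F80). A quick sketch follows; the 80%-passing sizes are hypothetical, only the 7.7 kWh/t work index comes from the abstract:

```python
import math

def bond_energy_kwh_per_t(work_index, f80_um, p80_um):
    """Bond's law: specific comminution energy in kWh/t, with Wi in kWh/t
    and F80/P80 the 80%-passing sizes of feed and product in micrometres."""
    return 10.0 * work_index * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

# SCW work index from the abstract (7.7 kWh/t, a friable material);
# hypothetical size reduction from 10 mm feed to 1 mm product:
energy = bond_energy_kwh_per_t(7.7, 10_000.0, 1_000.0)
```

The low resulting energy (under 2 kWh/t for this size reduction) is consistent with the abstract's characterization of the cemented waste as friable.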

  14. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To design a survey estimating the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating stratified random sampling 1,000 times. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, estimates the distribution of breast density in Korean women within a tolerance of 0.01%. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
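The simulation logic described, repeatedly drawing stratified random samples and checking the estimate against a known population value, can be sketched as follows. The strata sizes and density rates below are invented stand-ins, not the NCSP data:

```python
import random

random.seed(42)

# Hypothetical population: 1 = "dense breast" indicator per woman,
# in three strata with different underlying rates.
strata = {
    "metropolitan": [1 if random.random() < 0.45 else 0 for _ in range(50_000)],
    "urban":        [1 if random.random() < 0.40 else 0 for _ in range(30_000)],
    "rural":        [1 if random.random() < 0.30 else 0 for _ in range(20_000)],
}
pop = [x for members in strata.values() for x in members]
true_rate = sum(pop) / len(pop)

def stratified_estimate(total_n=4000):
    """Proportionally allocate the sample across strata and estimate the rate."""
    total = len(pop)
    hits = n_drawn = 0
    for members in strata.values():
        n_h = round(total_n * len(members) / total)
        sample = random.sample(members, n_h)
        hits += sum(sample)
        n_drawn += n_h
    return hits / n_drawn

# Repeat the simulated survey to gauge the sampling error of the design.
estimates = [stratified_estimate() for _ in range(200)]
max_error = max(abs(e - true_rate) for e in estimates)
```

Repeating this with different total sample sizes is exactly how one finds the smallest n meeting a prescribed tolerance.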

  15. Limited sampling strategy for determining metformin area under the plasma concentration-time curve

    DEFF Research Database (Denmark)

    Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José

    2016-01-01

    AIM: The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. METHODS: Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all su...

  16. Use of linear free energy relationship to predict Gibbs free energies of formation of zirconolite phases (MZrTi2O7 and MHfTi2O7)

    International Nuclear Information System (INIS)

    Xu, H.

    1999-01-01

    In this letter, the Sverjensky-Molling equation derived from a linear free energy relationship is used to calculate the Gibbs free energies of formation of zirconolite crystalline phases (MZrTi2O7 and MHfTi2O7) from the known thermodynamic properties of the corresponding aqueous divalent cations (M2+). The Sverjensky-Molling equation is expressed as ΔG°f(MvX) = a(MvX)·ΔG°n(M2+) + b(MvX) + β(MvX)·r(M2+), where the coefficients a(MvX), b(MvX), and β(MvX) characterize a particular structural family MvX, r(M2+) is the ionic radius of the M2+ cation, ΔG°f(MvX) is the standard Gibbs free energy of formation of MvX, and ΔG°n(M2+) is the standard non-solvation energy of the cation M2+. This relationship can be used to predict the Gibbs free energies of formation of various fictive phases (such as BaZrTi2O7, SrZrTi2O7, PbZrTi2O7, etc.) that may form solid solutions with CaZrTi2O7 in actual Synroc-based nuclear waste forms. Based on the obtained linear free energy relationships, it is predicted that large cations (e.g., Ba and Ra) prefer the perovskite structure, and small cations (e.g., Ca, Zn, and Cd) prefer the zirconolite structure. (orig.)
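Once the three family coefficients have been fitted, the linear free energy relationship is a one-line computation per cation. The sketch below uses placeholder values for a, b, β and the cation properties; they are hypothetical, not the fitted zirconolite coefficients:

```python
def lfer_gibbs_formation(a, b, beta, dg_nonsolv, ionic_radius):
    """Sverjensky-Molling linear free energy relationship:
    dG_f(MvX) = a * dG_n(M2+) + b + beta * r(M2+),
    predicting the formation energy of a phase in one structural family
    from properties of its aqueous divalent cation."""
    return a * dg_nonsolv + b + beta * ionic_radius

# Hypothetical coefficients for one structural family, applied to one
# made-up divalent cation (energies in kJ/mol, radius in angstroms):
dg_f = lfer_gibbs_formation(a=0.9, b=-3500.0, beta=120.0,
                            dg_nonsolv=-1400.0, ionic_radius=1.0)
```

Comparing the predicted ΔG°f of the same cation in two families (e.g., perovskite vs. zirconolite coefficient sets) is how the structural-preference argument at the end of the abstract is made quantitative.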

  17. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures.

  18. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice and on populations that are accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
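The distinction between probability and non-probability sampling drawn in the module can be made concrete with a few lines of standard-library code. The toy "population" of clinic records below is invented for illustration:

```python
import random

random.seed(7)

# Toy sampling frame: 600 records, each tagged with one of three clinics.
population = [{"id": i, "clinic": random.choice(["A", "B", "C"])}
              for i in range(600)]

# Probability sampling: every member has a known, nonzero chance of selection.
simple_random = random.sample(population, 50)

# Stratified random sampling: an independent simple random sample per clinic.
stratified = []
for clinic in ("A", "B", "C"):
    members = [p for p in population if p["clinic"] == clinic]
    stratified.extend(random.sample(members, 10))

# Non-probability (convenience) sampling: whoever is easiest to reach,
# caricatured here as the first 50 records in the frame.
convenience = population[:50]
```

The convenience sample has no chance mechanism behind it, which is precisely why reporting it as a 'random sample' would misrepresent the method.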

  20. [Strategies for biobank networks. Classification of different approaches for locating samples and an outlook on the future within the BBMRI-ERIC].

    Science.gov (United States)

    Lablans, Martin; Kadioglu, Dennis; Mate, Sebastian; Leb, Ines; Prokosch, Hans-Ulrich; Ückert, Frank

    2016-03-01

    Medical research projects often require more biological material than can be supplied by a single biobank. For this reason, a multitude of strategies support locating potential research partners with matching material without requiring centralization of sample storage. This article classifies different strategies for biobank networks, in particular for locating suitable samples, and describes an IT infrastructure combining these strategies. Existing strategies can be classified according to three criteria: (a) granularity of sample data: coarse bank-level data (catalogue) vs. fine-granular sample-level data; (b) location of sample data: central (central search service) vs. decentral storage (federated search services); and (c) level of automation: automatic (query-based, federated search service) vs. semi-automatic (inquiry-based, decentral search). All the search services mentioned require data integration, and metadata help to overcome semantic heterogeneity. The "Common Service IT" in BBMRI-ERIC (Biobanking and BioMolecular Resources Research Infrastructure) unites a catalogue, the decentral search and metadata in an integrated platform. As a result, researchers receive versatile tools to search for suitable biomaterial, while biobanks retain a high degree of data sovereignty. Despite their differences, the presented strategies for biobank networks do not rule each other out; they can complement and even benefit from each other.

  2. Comparison of sampling strategies for object-based classification of urban vegetation from Very High Resolution satellite images

    Science.gov (United States)

    Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas

    2016-09-01

    Vegetation monitoring is becoming a major issue in the urban environment due to the services vegetation provides, and it necessitates accurate and up-to-date mapping. Very High Resolution satellite images enable a detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation, but they require a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratification is proposed. The efficiency of these strategies is evaluated on two medium-sized French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods, and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
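The active-learning side of such a comparison reduces to a selection rule: at each iteration, label the unlabeled windows the current classifier is least certain about. A minimal uncertainty-sampling sketch follows; the probability scores are simulated stand-ins for a real classifier's output, not the algorithms benchmarked in the study:

```python
import random

random.seed(3)

# Unlabeled pool: window id -> current model's predicted probability of
# the "vegetation" class (simulated here).
pool = {f"win{i}": random.random() for i in range(100)}

def select_batch(pool, batch_size=5):
    """Uncertainty sampling: prefer windows whose predicted probability is
    closest to 0.5, i.e. where the classifier is least decisive. Selecting
    a whole batch per iteration amortizes retraining cost."""
    ranked = sorted(pool, key=lambda w: abs(pool[w] - 0.5))
    return ranked[:batch_size]

batch = select_batch(pool)
```

Combining this rule with stratification, as the proposed third strategy does, simply means drawing the uncertain batch proportionally from predefined strata instead of globally.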

  3. A comparison of sample preparation strategies for biological tissues and subsequent trace element analysis using LA-ICP-MS.

    Science.gov (United States)

    Bonta, Maximilian; Török, Szilvia; Hegedus, Balazs; Döme, Balazs; Limbeck, Andreas

    2017-03-01

    Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) is one of the most commonly applied methods for lateral trace element distribution analysis in medical studies. Many improvements of the technique regarding quantification and achievable lateral resolution have been made in recent years. Nevertheless, sample preparation is also of major importance, and the optimal sample preparation strategy has still not been defined. While conventional histology offers a number of sample pre-treatment strategies, little is known about the effect of these approaches on the lateral distributions of elements and/or their quantities in tissues. The technique of formalin fixation and paraffin embedding (FFPE) has emerged as the gold standard in tissue preparation. However, its potential use for elemental distribution studies is questionable due to the large number of sample preparation steps involved. In this work, LA-ICP-MS was used to examine the applicability of the FFPE sample preparation approach for elemental distribution studies. Qualitative elemental distributions as well as quantitative concentrations in cryo-cut tissues and in FFPE samples were compared. Results showed that some metals (especially Na and K) are severely affected by the FFPE process, whereas others (e.g., Mn, Ni) are less influenced. Based on these results, a general recommendation can be given: FFPE samples are completely unsuitable for the analysis of alkali metals. When analyzing transition metals, FFPE samples can give results comparable to snap-frozen tissues. Graphical abstract: Sample preparation strategies for biological tissues are compared with regard to elemental distributions and average trace element concentrations.

  4. Gibbs free energy difference between the undercooled liquid and the beta phase of a Ti-Cr alloy

    Science.gov (United States)

    Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.

    1992-01-01

    The heat of fusion and the specific heats of the solid and liquid have been experimentally determined for a Ti60Cr40 alloy. The data are used to evaluate the Gibbs free energy difference, delta-G, between the liquid and the beta phase as a function of temperature to verify a reported spontaneous vitrification (SV) of the beta phase in Ti-Cr alloys. The results show that SV of an undistorted beta phase in the Ti60Cr40 alloy at 873 K is not feasible because delta-G is positive at the temperature. However, delta-G may become negative with additional excess free energy to the beta phase in the form of defects.
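When only the heat of fusion is available, the liquid-solid Gibbs energy difference is often approximated by Turnbull's linear form ΔG ≈ ΔH_f·(T_m − T)/T_m; measured specific heats, as in this study, refine that estimate. A sketch with illustrative numbers follows (the ΔH_f and T_m values are placeholders, not the measured Ti60Cr40 data):

```python
def delta_g_turnbull(delta_h_fus, t_melt, temp):
    """Turnbull approximation to the Gibbs energy of the undercooled liquid
    in excess of the solid: positive below the melting point, zero at T_m.
    Inputs: heat of fusion (J/mol), melting temperature and temperature (K)."""
    return delta_h_fus * (t_melt - temp) / t_melt

# Placeholder values: 14 kJ/mol heat of fusion, 1700 K melting point,
# evaluated at 873 K (the temperature discussed in the abstract).
dg_873 = delta_g_turnbull(14_000.0, 1700.0, 873.0)
```

Note the sign convention: this ΔG is liquid minus solid, so a positive value at 873 K means the beta phase is the stable state there; the abstract's ΔG (liquid relative to beta) being positive is the same statement, which is why spontaneous vitrification of the undistorted beta phase is ruled out.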

  5. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%; 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3.

  6. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

    Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%. Some two- and three-ROI strategies also had ICC >0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance in Medicine.
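    The combinatorial-subset comparison the study describes is easy to sketch. A minimal illustration with hypothetical segmental PDFF values (not data from the study): for each subset size k, the worst-case disagreement between the subset mean and the nine-ROI average shrinks as k grows.

```python
from itertools import combinations
from statistics import mean

# Hypothetical segmental PDFF values (%) for one patient, segments I-IX
segmental_pdff = [8.2, 9.1, 10.4, 9.8, 10.9, 11.3, 9.5, 10.1, 9.9]
reference = mean(segmental_pdff)          # the nine-ROI average ("truth")

# Worst-case absolute error of the subset mean, for every subset size k
worst_error = {}
for k in range(1, 10):
    worst_error[k] = max(abs(mean(s) - reference)
                         for s in combinations(segmental_pdff, k))
    print(f"{k} ROIs: worst-case error = {worst_error[k]:.2f} PDFF points")
```

    Constraining which segments the ROIs come from (e.g. two per lobe, as the study concludes) further narrows the worst case relative to an unconstrained subset of the same size.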

  7. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
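    The random-sampling idea can be illustrated for the simplest case of N+1 = 2 sources and one marker: the mixing equation mix = f·s1 + (1-f)·s2 is solved repeatedly with endmember values drawn from their assumed source distributions, and the resulting spread (and any shift of the mean) reflects source variability. All numbers below are illustrative.

```python
import random
from statistics import mean, stdev

random.seed(0)

mix = 0.40                      # measured marker value in the mixture
s1_mu, s1_sd = 0.10, 0.03       # endmember 1 marker: mean, variability
s2_mu, s2_sd = 0.80, 0.05       # endmember 2 marker: mean, variability

fractions = []
while len(fractions) < 20_000:
    s1 = random.gauss(s1_mu, s1_sd)
    s2 = random.gauss(s2_mu, s2_sd)
    f = (mix - s2) / (s1 - s2)  # solve mix = f*s1 + (1 - f)*s2 for f
    if 0.0 <= f <= 1.0:         # keep physically meaningful solutions
        fractions.append(f)

f_mean, f_sd = mean(fractions), stdev(fractions)
print(f"source-1 fraction: {f_mean:.3f} +/- {f_sd:.3f}")
```

    Because f is a nonlinear (ratio) function of the endmember values, its Monte Carlo mean need not coincide exactly with the plug-in estimate from the mean profiles, which is the bias the abstract refers to.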

  8. Analytical sample preparation strategies for the determination of antimalarial drugs in human whole blood, plasma and urine

    DEFF Research Database (Denmark)

    Casas, Monica Escolà; Hansen, Martin; Krogh, Kristine A

    2014-01-01

    the available sample preparation strategies combined with liquid chromatographic (LC) analysis to determine antimalarials in whole blood, plasma and urine published over the last decade. Sample preparation can be done by protein precipitation, solid-phase extraction, liquid-liquid extraction or dilution. After...

  9. Current advances and strategies towards fully automated sample preparation for regulated LC-MS/MS bioanalysis.

    Science.gov (United States)

    Zheng, Naiyu; Jiang, Hao; Zeng, Jianing

    2014-09-01

    Robotic liquid handlers (RLHs) have been widely used in automated sample preparation for liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. Automated sample preparation for regulated bioanalysis offers significantly higher assay efficiency, better data quality and potential bioanalytical cost-savings. For RLHs that are used for regulated bioanalysis, there are additional requirements, including 21 CFR Part 11 compliance, software validation, system qualification, calibration verification and proper maintenance. This article reviews recent advances in automated sample preparation for regulated bioanalysis in the last 5 years. Specifically, it covers the following aspects: regulated bioanalysis requirements, recent advances in automation hardware and software development, sample extraction workflow simplification, strategies towards fully automated sample extraction, and best practices in automated sample preparation for regulated bioanalysis.

  10. New sampling strategy using a Bayesian approach to assess iohexol clearance in kidney transplant recipients.

    Science.gov (United States)

    Benz-de Bretagne, I; Le Guellec, C; Halimi, J M; Gatault, P; Barbet, C; Alnajjar, A; Büchler, M; Lebranchu, Y; Andres, Christian Robert; Vourcʼh, P; Blasco, H

    2012-06-01

    Glomerular filtration rate (GFR) measurement is a major issue for clinicians caring for kidney transplant recipients. GFR can be determined by estimating the plasma clearance of iohexol, a nonradiolabeled compound. For practical and convenient application for patients and caregivers, it is important that a minimal number of samples are drawn. The aim of this study was to develop and validate a Bayesian model with fewer samples for reliable prediction of GFR in kidney transplant recipients. Iohexol plasma concentration-time curves from 95 patients were divided into an index (n = 63) and a validation set (n = 32). Samples (n = 4-6 per patient) were obtained during the elimination phase, that is, between 120 and 270 minutes. Individual reference values of iohexol clearance (CL(iohexol)) were calculated from k (elimination slope) and V (volume of distribution from intercept). Individual CL(iohexol) values were then introduced into the Bröchner-Mortensen equation to obtain the GFR (reference value). A population pharmacokinetic model was developed from the index set and validated using standard methods. For the validation set, we tested various combinations of 1, 2, or 3 sampling times to estimate CL(iohexol). According to the different combinations tested, a maximum a posteriori Bayesian estimation of CL(iohexol) was obtained from population parameters. Individual estimates of GFR were compared with individual reference values through analysis of bias and precision. A capability analysis allowed us to determine the best sampling strategy for Bayesian estimation. A 1-compartment model best described our data. Covariate analysis showed that uremia, serum creatinine, and age were significantly associated with k(e), and weight with V. The strategy including samples drawn at 120 and 270 minutes allowed accurate prediction of GFR (mean bias: -3.71%, mean imprecision: 7.77%). With this strategy, about 20% of individual predictions were outside the bounds of acceptance set at ±10%.
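    The two-sample maximum a posteriori idea can be sketched as follows. Everything below is a hypothetical stand-in (dose, concentrations, population parameters, error model), not the study's fitted model, and a crude grid search replaces a proper optimizer: a one-compartment model C(t) = (Dose/V)·exp(-k·t) is combined with lognormal population priors on k and V.

```python
import math

dose = 3235.0                    # mg, hypothetical iohexol dose
times = [120.0, 270.0]           # min, the two-sample strategy
obs = [120.0, 60.0]              # mg/L, hypothetical measured concentrations

k_pop, k_cv = 0.006, 0.3         # assumed population elimination rate (1/min)
v_pop, v_cv = 11.0, 0.25         # assumed population volume of distribution (L)
sigma = 0.10                     # assumed residual error (log scale)

def neg_log_post(k, v):
    """Negative log posterior: 1-compartment model + lognormal priors."""
    nlp = 0.0
    for t, c in zip(times, obs):
        pred = dose / v * math.exp(-k * t)
        nlp += (math.log(c) - math.log(pred)) ** 2 / (2 * sigma ** 2)
    nlp += (math.log(k) - math.log(k_pop)) ** 2 / (2 * k_cv ** 2)
    nlp += (math.log(v) - math.log(v_pop)) ** 2 / (2 * v_cv ** 2)
    return nlp

# Crude MAP estimate: grid search over (k, v) around the population values
grid = [(neg_log_post(k_pop * math.exp(a / 50), v_pop * math.exp(b / 50)),
         k_pop * math.exp(a / 50), v_pop * math.exp(b / 50))
        for a in range(-50, 51) for b in range(-50, 51)]
_, k_map, v_map = min(grid)
cl_iohexol = k_map * v_map       # clearance = k * V (L/min)
print(f"MAP iohexol clearance: {cl_iohexol * 1000:.0f} mL/min")
```

    The study then converts the clearance to GFR via the Bröchner-Mortensen equation, which is omitted here.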

  11. A simulation approach to assessing sampling strategies for insect pests: an example with the balsam gall midge.

    Directory of Open Access Journals (Sweden)

    R Drew Carleton

    Full Text Available Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with "pre-sampling" data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n ∼ 100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n ∼ 25-40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods.
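    The pre-sampling simulation approach can be sketched in a few lines: simulate a clumped (negative binomial) stand, then measure how quickly random-sample means converge on the true density. The parameters below are illustrative, not the paper's.

```python
import math, random
from statistics import mean

random.seed(1)

def poisson(lam):
    # Knuth's method; adequate for the small means used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def neg_binom(mu, k_clump):
    # Negative binomial via its gamma-Poisson mixture representation
    return poisson(random.gammavariate(k_clump, mu / k_clump))

# Hypothetical stand of 1000 trees with clumped gall counts
stand = [neg_binom(mu=4.0, k_clump=0.8) for _ in range(1000)]
true_mean = mean(stand)

# How close is a random sample of n trees to the true stand mean?
err_by_n = {}
for n in (10, 25, 40):
    err_by_n[n] = mean(abs(mean(random.sample(stand, n)) - true_mean)
                       for _ in range(500))
    print(f"n={n:2d}: mean abs. error = {err_by_n[n]:.2f} galls/tree")
```

    Repeating this for belt-transect layouts (contiguous trees along the stand's long axis) instead of simple random draws reproduces the kind of comparison the paper makes.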

  12. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the "Sampling Method". There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice and on populations that are accessible and available. Some of the non-probabilit...
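    A minimal illustration of probability sampling (simple random sampling, where every unit in the frame has a known, equal inclusion probability); the frame below is made up.

```python
import random

random.seed(42)

# Sampling frame: every unit has a known, equal chance of selection
frame = [f"participant_{i:03d}" for i in range(1, 501)]
sample = random.sample(frame, 50)       # inclusion probability 50/500 = 0.1
print(sample[:3])
```

    A convenience (non-probability) sample, by contrast, would simply take e.g. the first 50 entries that happened to be available, with no known inclusion probabilities.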

  13. Sampling strategy for estimating human exposure pathways to consumer chemicals

    Directory of Open Access Journals (Sweden)

    Eleni Papadopoulou

    2016-03-01

    Full Text Available Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway. The selected groups of chemicals to be studied are consumer chemicals whose production and use are currently in a state of transition and are: per- and polyfluorinated alkyl substances (PFASs), traditional and “emerging” brominated flame retardants (BFRs and EBFRs), organophosphate esters (OPEs) and phthalate esters (PEs). Information about human exposure to these contaminants is needed due to existing data gaps on human exposure intakes from multiple exposure pathways and relationships between internal and external exposure. Indoor environment, food and biological samples were collected from 61 participants and their households in the Oslo area (Norway) on two consecutive days, during winter 2013-14. Air, dust, hand wipes, and duplicate diet (food and drink) samples were collected as indicators of external exposure, and blood, urine, blood spots, hair, nails and saliva as indicators of internal exposure. A food diary, food frequency questionnaire (FFQ) and indoor environment questionnaire were also implemented. Approximately 2000 samples were collected in total and participant views on their experiences of this campaign were collected via questionnaire. While 91% of our participants were positive about future participation in a similar project, some tasks were viewed as problematic. Completing the food diary and collection of duplicate food/drink portions were the tasks most frequently reported as “hard”/“very hard”. Nevertheless, a strong positive correlation between the reported total mass of food/drinks in the food record and the total weight of the food/drinks in the collection bottles was observed, being an indication of accurate performance.

  14. Activity coefficients and excess Gibbs' free energy of some binary mixtures formed by p-cresol at 95.23 kPa

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, T.E. Vittal [Properties Group, Chemical Engineering Laboratory, Indian Institute of Chemical Technology, Hyderabad 500 007 (India); Venkanna, N. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Kumar, Y. Naveen [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Ashok, K. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Sirisha, N.M. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Prasad, D.H.L. [Properties Group, Chemical Engineering Laboratory, Indian Institute of Chemical Technology, Hyderabad 500 007 (India)]. E-mail: dasika@iict.res.in

    2007-07-15

    Bubble point temperatures at 95.23 kPa, over the entire composition range, are measured for the binary mixtures formed by p-cresol with 1,2-dichloroethane, 1,1,2,2-tetrachloroethane, trichloroethylene, tetrachloroethylene, and o-, m-, and p-xylenes, making use of a Swietoslawski-type ebulliometer. Liquid phase mole fraction (x {sub 1}) versus bubble point temperature (T) measurements are found to be well represented by the Wilson model. The optimum Wilson parameters are used to calculate the vapor phase composition, activity coefficients, and excess Gibbs free energy. The results are discussed.
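    The Wilson-model step has a compact closed form. A sketch with assumed (not fitted) Wilson parameters Λ12 and Λ21, giving the activity coefficients and the excess Gibbs energy G^E = RT(x1 ln γ1 + x2 ln γ2):

```python
import math

L12, L21 = 0.45, 0.85      # Wilson parameters Lambda_12, Lambda_21 (assumed)
R, T = 8.314, 475.0        # gas constant (J/mol/K), illustrative temperature (K)

def wilson(x1):
    x2 = 1.0 - x1
    a = x1 + L12 * x2
    b = x2 + L21 * x1
    d = L12 / a - L21 / b
    g1 = math.exp(-math.log(a) + x2 * d)   # activity coefficient gamma_1
    g2 = math.exp(-math.log(b) - x1 * d)   # activity coefficient gamma_2
    gE = R * T * (x1 * math.log(g1) + x2 * math.log(g2))  # excess Gibbs energy
    return g1, g2, gE

for x1 in (0.2, 0.5, 0.8):
    g1, g2, gE = wilson(x1)
    print(f"x1={x1}: gamma1={g1:.3f}, gamma2={g2:.3f}, gE={gE:.0f} J/mol")
```

    In practice the parameters are regressed from the measured x1 vs. T data, and the vapor composition then follows from modified Raoult's law using the fitted activity coefficients.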

  15. Planning schistosomiasis control: investigation of alternative sampling strategies for Schistosoma mansoni to target mass drug administration of praziquantel in East Africa.

    Science.gov (United States)

    Sturrock, Hugh J W; Gething, Pete W; Ashton, Ruth A; Kolaczinski, Jan H; Kabatereine, Narcis B; Brooker, Simon

    2011-09-01

    In schistosomiasis control, there is a need to geographically target treatment to populations at high risk of morbidity. This paper evaluates alternative sampling strategies for surveys of Schistosoma mansoni to target mass drug administration in Kenya and Ethiopia. Two main designs are considered: lot quality assurance sampling (LQAS) of children from all schools; and a geostatistical design that samples a subset of schools and uses semi-variogram analysis and spatial interpolation to predict prevalence in the remaining unsurveyed schools. Computerized simulations are used to investigate the performance of sampling strategies in correctly classifying schools according to treatment needs and their cost-effectiveness in identifying high prevalence schools. LQAS performs better than geostatistical sampling in correctly classifying schools, but at a cost with a higher cost per high prevalence school correctly classified. It is suggested that the optimal surveying strategy for S. mansoni needs to take into account the goals of the control programme and the financial and drug resources available.
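    The LQAS classification rule can be made concrete: sample n children per school and classify the school as high prevalence (treat) if more than d are positive. Its operating characteristic follows from the binomial distribution; n and d below are illustrative, not the programme's values.

```python
from math import comb

n, d = 15, 3   # sample size per school and decision value (illustrative)

def p_classified_high(p):
    # P(X > d) for X ~ Binomial(n, p): probability the school is
    # classified as high prevalence when the true prevalence is p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(d + 1, n + 1))

for p in (0.05, 0.10, 0.25, 0.50):
    print(f"true prevalence {p:.0%}: P(treat) = {p_classified_high(p):.3f}")
```

    Simulating this rule over many schools, against the cost of surveying each one, gives the kind of cost-effectiveness comparison with geostatistical designs that the paper reports.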

  16. Phase relations and gibbs energies in the system Mn-Rh-O

    Science.gov (United States)

    Jacob, K. T.; Sriram, M. V.

    1994-07-01

    Phase relations in the system Mn-Rh-O are established at 1273 K by equilibrating different compositions either in evacuated quartz ampules or in pure oxygen at a pressure of 1.01 × 10⁵ Pa. The quenched samples are examined by optical microscopy, X-ray diffraction, and energy-dispersive X-ray analysis (EDAX). The alloys and intermetallics in the binary Mn-Rh system are found to be in equilibrium with MnO. There is only one ternary compound, MnRh2O4, with normal spinel structure in the system. The compound Mn3O4 has a tetragonal structure at 1273 K. A solid solution is formed between MnRh2O4 and Mn3O4. The solid solution has the cubic structure over a large range of composition and coexists with metallic rhodium. The partial pressure of oxygen corresponding to this two-phase equilibrium is measured as a function of the composition of the spinel solid solution and temperature. A new solid-state cell, with three separate electrode compartments, is designed to measure accurately the chemical potential of oxygen in the two-phase mixture, Rh + Mn3-2xRh2xO4, which has 1 degree of freedom at constant temperature. From the electromotive force (emf), thermodynamic mixing properties of the Mn3O4-MnRh2O4 solid solution and Gibbs energy of formation of MnRh2O4 are deduced. The activities exhibit negative deviations from Raoult’s law for most of the composition range, except near Mn3O4, where a two-phase region exists. In the cubic phase, the entropy of mixing of the two Rh3+ and Mn3+ ions on the octahedral site of the spinel is ideal, and the enthalpy of mixing is positive and symmetric with respect to composition. For the formation of the spinel (sp) from component oxides with rock salt (rs) and orthorhombic (orth) structures according to the reaction, MnO (rs) + Rh2O3 (orth) → MnRh2O4 (sp), ΔG° = -49,680 + 1.56T (±500) J mol⁻¹. The oxygen potentials corresponding to MnO + Mn3O4 and Rh + Rh2O3 equilibria are also obtained from potentiometric measurements on galvanic cells.
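    The fitted Gibbs-energy expression can be evaluated directly, e.g. at the study temperature of 1273 K:

```python
def delta_g(T):
    # Gibbs energy of MnO (rs) + Rh2O3 (orth) -> MnRh2O4 (sp), J/mol,
    # from the fitted expression dG = -49680 + 1.56*T (+/- 500 J/mol)
    return -49680 + 1.56 * T

print(f"dG(1273 K) = {delta_g(1273):.0f} J/mol")  # negative: spinel formation is favoured
```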

  17. Hypothesis testing in genetic linkage analysis via Gibbs sampling ( )

    African Journals Online (AJOL)

    hope&shola

    2010-12-06

    Dec 6, 2010 ... The existing theory assumes an asymptotic normality for score statistics which is violated on boundary ... Monte Carlo approach is proposed to overcome this problem. ... probability, that is, the probability that an individual with.

  18. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Latest technologies like the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such massive amounts of data that it is getting very difficult to analyze and understand all these data, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care because there are many factors involved in the determination of the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the "sufficient sample size", which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
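    The paper's exact formula is not reproduced in the abstract, so as a stand-in this sketch uses Cochran's well-known sample-size formula with a finite population correction, followed by the probability-sampling step.

```python
import math, random

random.seed(7)

def sufficient_sample_size(N, z=1.96, p=0.5, e=0.05):
    # Cochran's formula with finite population correction (a stand-in
    # for the paper's unspecified statistical formula)
    n0 = z ** 2 * p * (1 - p) / e ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

N = 1_000_000                               # rows in the huge dataset
n = sufficient_sample_size(N)
sample_idx = random.sample(range(N), n)     # probability sampling of that size
print(f"sufficient sample size for N={N}: {n}")
```

    Note how weakly the size depends on N once the population is large: for a million rows, a few hundred records already bound the margin of error at e = 5%.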

  19. Social orientation, sexual role, and moral judgment: a comparison of two brazilian and one norwegian sample / Orientação social, papel sexual e julgamento moral: uma comparação entre duas amostras brasileiras e uma norueguesa

    Directory of Open Access Journals (Sweden)

    Angela Biaggio

    2005-01-01

    Full Text Available Thirty female and 30 male university students each from Joao Pessoa and Porto Alegre were compared to a comparable Norwegian sample of 60 female and 60 male students. Except for a suggestion of differences in women's cultural orientation, comparisons on Gibbs' test of justice morality, the ECI test for ethic of care, Bem's sex role inventory, and Triandis' test for cultural orientations showed that all differences were between the Norwegian sample and the Brazilian samples as a unit. Brazilians showed a differentiation of sex roles, which was not shown in Norwegians, and higher scores on the collectivism cultural orientation. Norwegians showed higher scores on the ECI, which might be because of a culture bias in the test. No difference was shown for individualism cultural orientation, or on Gibbs' test. Men scored higher on the total individualism measure, and women on vertical collectivism. JP women scored as more hedonistic and individual than the PA women, who scored as more traditional than the JP women.

  20. Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey

    Directory of Open Access Journals (Sweden)

    Mirel Lisa B.

    2017-06-01

    Full Text Available Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS). MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year’s NHIS. The main objective of this article is to examine how paradata from a prior survey can be used in developing a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder-to-reach population.
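    A common concrete form of such optimal allocation is Neyman allocation, n_h ∝ N_h·S_h; the abstract does not state that MEPS uses exactly this rule, and the substrata below are hypothetical.

```python
strata = {                 # substratum: (population size N_h, std dev S_h)
    "low_use":  (6000, 1.0),
    "mid_use":  (3000, 2.5),
    "high_use": (1000, 6.0),
}
n_total = 900              # total interviews the budget allows

# Neyman allocation: n_h proportional to N_h * S_h
weights = {h: N_h * S_h for h, (N_h, S_h) in strata.items()}
w_sum = sum(weights.values())
allocation = {h: round(n_total * w / w_sum) for h, w in weights.items()}
print(allocation)
```

    The small, highly variable "high_use" substratum receives far more than its proportional share (which would be 90 of 900), which is exactly the kind of reallocation linked paradata makes possible.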

  1. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may become weaker, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can enhance the signal and further reduce the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on an orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
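    The peak-preserving down-sample step can be sketched as follows: segment the signal and keep each segment's largest-magnitude sample, so that impulsive fault signatures survive the rate reduction. The synthetic signal and segment length are illustrative.

```python
import math

def peak_downsample(signal, segment_len):
    # Keep each segment's largest-magnitude sample so impulsive fault
    # signatures survive the rate reduction
    return [max(signal[i:i + segment_len], key=abs)
            for i in range(0, len(signal), segment_len)]

fs = 10_000
signal = [0.05 * math.sin(2 * math.pi * 30 * t / fs)   # low-level carrier
          + (1.0 if t % 1000 == 0 else 0.0)            # impact every 0.1 s
          for t in range(fs)]

reduced = peak_downsample(signal, segment_len=50)       # 50x fewer samples
print(f"{len(signal)} samples -> {len(reduced)} peaks")
```

    As the abstract notes, when noise dominates, the retained "peaks" may be noise rather than impacts, which is where the subsequent compressive-sensing recovery (orthogonal matching pursuit) comes in.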

  2. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may become weaker, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can enhance the signal and further reduce the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on an orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)

  3. Measuring strategies for learning regulation in medical education: scale reliability and dimensionality in a Swedish sample.

    Science.gov (United States)

    Edelbring, Samuel

    2012-08-15

    The degree of learners' self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in other settings than in which they are being used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach's alpha: 0.82, 0.72, and 0.65 for self-regulation, external regulation and lack of regulation scales respectively. The dimensionalities in scales were adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students' regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
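    Cronbach's alpha, the reliability coefficient reported for the three regulation scales, is straightforward to compute from an item-response matrix; the responses below are made up for illustration.

```python
from statistics import variance

def cronbach_alpha(rows):
    # rows: one list of item scores per respondent, for a single scale
    k = len(rows[0])
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [                     # hypothetical 5-point Likert answers
    [4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5],
    [3, 3, 2, 3], [4, 4, 4, 5], [1, 2, 2, 1],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

    Values around 0.7-0.8, like those reported for the self-regulation and external regulation scales, are conventionally taken as acceptable internal consistency.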

  4. Catch, effort and sampling strategies in the highly variable sardine fisheries around East Java, Indonesia.

    NARCIS (Netherlands)

    Pet, J.S.; Densen, van W.L.T.; Machiels, M.A.M.; Sukkel, M.; Setyohady, D.; Tumuljadi, A.

    1997-01-01

    Temporal and spatial patterns in the fishery for Sardinella spp. around East Java, Indonesia, were studied in an attempt to develop an efficient catch and effort sampling strategy for this highly variable fishery. The inter-annual and monthly variation in catch, effort and catch per unit of effort

  5. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    Science.gov (United States)

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes
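    The variable-density subsampling idea can be sketched as a 1D phase-encode mask whose sampling probability decays with distance from the k-space centre, so the centre is sampled densely and the periphery sparsely; the sizes and decay rate are illustrative, not the sequence's actual parameters.

```python
import random

random.seed(3)

ny = 128                  # phase-encode lines
centre = ny // 2

def keep_probability(ky, full_width=12, decay=0.03):
    # Fully sample a band around the k-space centre; elsewhere let the
    # sampling probability fall off with distance from the centre
    dist = abs(ky - centre)
    return 1.0 if dist <= full_width // 2 else max(0.05, 1.0 - decay * dist)

mask = [1 if random.random() < keep_probability(ky) else 0 for ky in range(ny)]
print(f"sampled {sum(mask)}/{ny} lines, acceleration ~{ny / sum(mask):.1f}x")
```

    The temporally constrained reconstruction (TCR) step that fills in the unsampled lines over time is beyond this sketch.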

  6. Field screening sampling and analysis strategy and methodology for the 183-H Solar Evaporation Basins: Phase 2, Soils

    International Nuclear Information System (INIS)

    Antipas, A.; Hopkins, A.M.; Wasemiller, M.A.; McCain, R.G.

    1996-01-01

    This document provides a sampling/analytical strategy and methodology for Resource Conservation and Recovery Act (RCRA) closure of the 183-H Solar Evaporation Basins within the boundaries and requirements identified in the initial Phase II Sampling and Analysis Plan for RCRA Closure of the 183-H Solar Evaporation Basins

  7. Modelling of in-stream nitrogen and phosphorus concentrations using different sampling strategies for calibration data

    Science.gov (United States)

    Jomaa, Seifeddine; Jiang, Sanyuan; Yang, Xiaoqiang; Rode, Michael

    2016-04-01

    It is known that a good evaluation and prediction of surface water pollution is mainly limited by the monitoring strategy and the capability of the hydrological water quality model to reproduce the internal processes. To this end, a compromise sampling frequency, which can reflect the dynamical behaviour of leached nutrient fluxes responding to changes in land use, agriculture practices and point sources, and appropriate process-based water quality model are required. The objective of this study was to test the identification of hydrological water quality model parameters (nitrogen and phosphorus) under two different monitoring strategies: (1) regular grab-sampling approach and (2) regular grab-sampling with additional monitoring during the hydrological events using automatic samplers. First, the semi-distributed hydrological water quality HYPE (Hydrological Predictions for the Environment) model was successfully calibrated (1994-1998) for discharge (NSE = 0.86), nitrate-N (lowest NSE for nitrate-N load = 0.69), particulate phosphorus and soluble phosphorus in the Selke catchment (463 km2, central Germany) for the period 1994-1998 using regular grab-sampling approach (biweekly to monthly for nitrogen and phosphorus concentrations). Second, the model was successfully validated during the period 1999-2010 for discharge, nitrate-N, particulate-phosphorus and soluble-phosphorus (lowest NSE for soluble phosphorus load = 0.54). Results, showed that when additional sampling during the events with random grab-sampling approach was used (period 2011-2013), the hydrological model could reproduce only the nitrate-N and soluble phosphorus concentrations reasonably well. However, when additional sampling during the hydrological events was considered, the HYPE model could not represent the measured particulate phosphorus. This reflects the importance of suspended sediment during the hydrological events increasing the concentrations of particulate phosphorus. 
The HYPE model could
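
    The skill scores quoted in this record (NSE = 0.86 and similar) are Nash-Sutcliffe efficiencies. A minimal sketch of the metric, using invented discharge series rather than the study's data:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, <= 0 = no better than the mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Invented discharge series (m^3/s), for illustration only.
obs = [2.0, 3.5, 5.0, 4.0, 2.5]
sim = [2.2, 3.3, 4.8, 4.1, 2.7]
score = nse(obs, sim)  # ~0.97 for this toy pair
```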

  8. Measuring strategies for learning regulation in medical education: Scale reliability and dimensionality in a Swedish sample

    Directory of Open Access Journals (Sweden)

    Edelbring Samuel

    2012-08-01

    Full Text Available Abstract Background The degree of learners’ self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in settings other than those in which they are being used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). Methods The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Results Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach’s alpha: 0.82, 0.72, and 0.65 for the self-regulation, external regulation and lack of regulation scales respectively. Scale dimensionality was adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. Discussion The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students’ regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
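
    The reliability figures in this record are Cronbach's alpha values. A minimal sketch of the computation, using population variances and invented item scores:

```python
def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)          # items: one list of scores per item,
    n = len(items[0])       # all covering the same respondents

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three perfectly parallel items -> alpha = 1.0 (toy data)
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```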

  9. Standard Gibbs free energies of reactions of ozone with free radicals in aqueous solution: quantum-chemical calculations.

    Science.gov (United States)

    Naumov, Sergej; von Sonntag, Clemens

    2011-11-01

    Free radicals are common intermediates in the chemistry of ozone in aqueous solution. Their reactions with ozone have been probed by calculating the standard Gibbs free energies of such reactions using density functional theory (Jaguar 7.6 program). O(2) reacts fast and irreversibly only with simple carbon-centered radicals. In contrast, ozone also reacts irreversibly with conjugated carbon-centered radicals such as bisallylic (hydroxycyclohexadienyl) radicals, with conjugated carbon/oxygen-centered radicals such as phenoxyl radicals, and even with nitrogen-, oxygen-, sulfur-, and halogen-centered radicals. In these reactions, further ozone-reactive radicals are generated. Chain reactions may destroy ozone without giving rise to products other than O(2). This may be of importance when ozonation is used in pollution control, and reactions of free radicals with ozone have to be taken into account in modeling such processes.
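
    The quantity computed in this record follows the usual combination rule ΔG°(rxn) = Σ ΔG°f(products) − Σ ΔG°f(reactants). A sketch with a hypothetical radical R. and adduct RO.; the species and numbers are invented for illustration (only the O3 value is near the tabulated one):

```python
def reaction_gibbs(dgf, reactants, products):
    """dG_rxn = sum(nu * dGf) over products minus the same sum over reactants."""
    side = lambda s: sum(nu * dgf[sp] for sp, nu in s.items())
    return side(products) - side(reactants)

# Hypothetical formation energies in kJ/mol; "R." and "RO." are made-up species.
dgf = {"R.": 100.0, "O3": 163.2, "RO.": 40.0, "O2": 0.0}
dg = reaction_gibbs(dgf, {"R.": 1, "O3": 1}, {"RO.": 1, "O2": 1})  # negative -> exergonic
```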

  10. Practical experiences with an extended screening strategy for genetically modified organisms (GMOs) in real-life samples.

    Science.gov (United States)

    Scholtens, Ingrid; Laurensse, Emile; Molenaar, Bonnie; Zaaijer, Stephanie; Gaballo, Heidi; Boleij, Peter; Bak, Arno; Kok, Esther

    2013-09-25

    Nowadays most animal feed products imported into Europe have a GMO (genetically modified organism) label. This means that they contain European Union (EU)-authorized GMOs. For enforcement of these labeling requirements, it is necessary, with the rising number of EU-authorized GMOs, to perform an increasing number of analyses. In addition to this, it is necessary to test products for the potential presence of EU-unauthorized GMOs. Analysis for EU-authorized and -unauthorized GMOs in animal feed has thus become laborious and expensive. Initial screening steps may reduce the number of GMO identification methods that need to be applied, but with the increasing diversity of GMOs, screening on the basis of GMO elements has also become more complex. For the present study, an informative detailed 24-element screening and subsequent identification strategy was applied to 50 animal feed samples. Almost all feed samples were labeled as containing GMO-derived materials. The main goal of the study was therefore to investigate whether a detailed screening strategy would reduce the number of subsequent identification analyses. An additional goal was to test the samples in this way for the potential presence of EU-unauthorized GMOs. Finally, to test the robustness of the approach, eight of the samples were tested in a concise interlaboratory study. No significant differences were found between the results of the two laboratories.

  11. Digital Content Strategies

    OpenAIRE

    Halbheer, Daniel; Stahl, Florian; Koenigsberg, Oded; Lehmann, Donald R

    2013-01-01

    This paper studies content strategies for online publishers of digital information goods. It examines sampling strategies and compares their performance to paid content and free content strategies. A sampling strategy, where some of the content is offered for free and consumers are charged for access to the rest, is known as a "metered model" in the newspaper industry. We analyze optimal decisions concerning the size of the sample and the price of the paid content when sampling serves the dua...

  12. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Full Text Available Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 samples per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-cultivar sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.
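
    The diversity measures used in this record can be computed per locus from allele frequencies. A minimal sketch, with invented frequencies:

```python
import math

def expected_heterozygosity(freqs):
    """He = 1 - sum(p_i^2) over allele frequencies at one locus."""
    return 1.0 - sum(p * p for p in freqs)

def shannon_index(freqs):
    """I = -sum(p_i * ln(p_i)), skipping absent alleles."""
    return -sum(p * math.log(p) for p in freqs if p > 0)

he = expected_heterozygosity([0.5, 0.5])   # 0.5 for a 50/50 biallelic locus
shannon = shannon_index([0.5, 0.5])        # ln(2) ~ 0.693
```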

  13. Development of Bi-phase sodium-oxygen-hydrogen chemical equilibrium calculation program (BISHOP) using Gibbs free energy minimization method

    International Nuclear Information System (INIS)

    Okano, Yasushi

    1999-08-01

    In order to analyze the reaction heat and compounds due to sodium combustion, a multiphase chemical equilibrium calculation program for the chemical reactions among sodium, oxygen and hydrogen was developed in this study. The program is named BISHOP, which denotes 'Bi-Phase Sodium-Oxygen-Hydrogen Chemical Equilibrium Calculation Program'. The Gibbs free energy minimization method is used because of its special merits: chemical species can easily be added or changed, and many thermochemical reaction systems besides the constant-temperature, constant-pressure one can be treated. Three new methods were developed in this study for solving the multi-phase sodium reaction system: constructing the equation system by simplifying the phases, extending the Gibbs free energy minimization method to multi-phase systems, and establishing an effective search method for the minimum value. Chemical compounds formed by the combustion of sodium in air were calculated using BISHOP. The calculated temperature and moisture conditions under which sodium oxide and hydroxide are formed qualitatively agree with experiments. Decomposition of sodium hydride was also calculated by the program; the estimated relationship between decomposition temperature and pressure closely agrees with the well-known experimental equation of Roy and Rodgers. It is concluded that BISHOP can be used to evaluate the combustion and decomposition behaviors of sodium and its compounds. The hydrogen formation conditions in the dump-tank room during a sodium leak event of an FBR were quantitatively evaluated by BISHOP. It can be concluded that keeping the temperature of the dump-tank room low is an effective way to suppress the formation of hydrogen. If the lower flammability limit of 4.1 mol% is chosen as the hydrogen concentration criterion, the formation reaction of sodium hydride from sodium and hydrogen is facilitated below a room temperature of 800 K, and concentration of hydrogen
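
    The core of a Gibbs free energy minimization can be sketched in one dimension with a single reaction extent; BISHOP's search over multi-phase compositions is far more elaborate, and the standard chemical potentials below are hypothetical, dimensionless values:

```python
import math

# Toy reaction A(g) <=> B(g) at fixed T and P; MU0 holds mu0/RT values
# that are invented for this sketch.
MU0 = {"A": 0.0, "B": -1.0}

def total_gibbs(xi):
    """Total G/RT at reaction extent xi, with ideal-mixing entropy terms."""
    g = 0.0
    for species, n in (("A", 1.0 - xi), ("B", xi)):
        if n > 0.0:
            g += n * (MU0[species] + math.log(n))  # total moles = 1, so x = n
    return g

# Dense grid search for the minimum (only the core idea, not BISHOP's method).
grid = [i / 10000 for i in range(1, 10000)]
xi_eq = min(grid, key=total_gibbs)  # analytic optimum: e/(1+e) ~ 0.7311
```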

  14. Serum sample containing endogenous antibodies interfering with multiple hormone immunoassays. Laboratory strategies to detect interference

    Directory of Open Access Journals (Sweden)

    Elena García-González

    2016-04-01

    Full Text Available Objectives: Endogenous antibodies (EA may interfere with immunoassays, causing erroneous results for hormone analyses. As, in most cases, this interference arises from the assay format, and most immunoassays, even from different manufacturers, are constructed in a similar way, it is possible for a single type of EA to interfere with different immunoassays. Here we describe the case of a patient whose serum sample contains EA that interfere with several hormone tests. We also discuss the strategies deployed to detect interference. Subjects and methods: Over a period of four years, a 30-year-old man was subjected to a plethora of laboratory and imaging diagnostic procedures as a consequence of elevated hormone results, mainly of pituitary origin, which did not correlate with the overall clinical picture. Results: Once analytical interference was suspected, the best laboratory approaches to investigate it were sample reanalysis on an alternative platform and sample incubation with antibody blocking tubes. Construction of an in-house ‘nonsense’ sandwich assay was also a valuable strategy to confirm interference. In contrast, serial sample dilutions were of no value in our case, while polyethylene glycol (PEG precipitation gave inconclusive results, probably due to the use of inappropriate PEG concentrations for several of the tests assayed. Conclusions: Clinicians and laboratorians must be aware of the drawbacks of immunometric assays, and alert to the possibility of EA interference when results do not fit the clinical pattern. Keywords: Endogenous antibodies, Immunoassay, Interference, Pituitary hormones, Case report

  15. [Identification of Systemic Contaminations with Legionella Spec. in Drinking Water Plumbing Systems: Sampling Strategies and Corresponding Parameters].

    Science.gov (United States)

    Völker, S; Schreiber, C; Müller, H; Zacharias, N; Kistemann, T

    2017-05-01

    After the amendment of the Drinking Water Ordinance in 2011, the requirements for the hygienic-microbiological monitoring of drinking water installations have increased significantly. In the BMBF-funded project "Biofilm Management" (2010-2014), we examined the extent to which established sampling strategies in practice can uncover drinking water plumbing systems systemically colonized with Legionella. Moreover, we investigated additional parameters that might be suitable for detecting systemic contaminations. We subjected the drinking water plumbing systems of 8 buildings with known microbial contamination (Legionella) to intensive hygienic-microbiological sampling with high spatial and temporal resolution. A total of 626 hot drinking water samples were analyzed with classical culture-based methods. In addition, comprehensive hygienic observations were conducted in each building, and qualitative interviews with operators and users were carried out. Collected tap-specific parameters were quantitatively analyzed by means of sensitivity and accuracy calculations. The systemic presence of Legionella in drinking water plumbing systems has a high spatial and temporal variability. Established sampling strategies were only partially suitable for detecting long-term Legionella contaminations in practice. In particular, the sampling of hot water at the calorifier and the circulation re-entrance showed little significance in terms of contamination events. To detect the systemic presence of Legionella, the parameters stagnation (qualitatively assessed) and temperature (compliance with the 5K-rule) showed better results. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble

    Science.gov (United States)

    Bicci, Alberto

    2016-12-01

    In the domain of so-called Econophysics, some attempts have already been made to apply the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is taken from a different perspective, modeling the limit order book and price formation process of a given stock by the Grand-Canonical Gibbs Ensemble for the bid and ask orders. The application of Bose-Einstein statistics to this ensemble then allows the distribution of the sell and buy orders to be derived as a function of price. As a consequence we can define in a meaningful way expressions for the temperatures of the ensembles of bid orders and of ask orders, which are a function of the minimum bid, maximum ask and closure prices of the stock as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid order temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator empirically defined in Technical Analysis of stock markets. Furthermore the derived distributions for aggregate bid and ask orders can be subjected to well defined validations against real data, giving a falsifiable character to the model.
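
    The Bose-Einstein occupation factor underlying the model's order distributions is straightforward to evaluate; how the paper maps price to energy and volume to chemical potential is its own construction, so the mapping is left abstract here:

```python
import math

def bose_einstein(eps, mu, temperature):
    """Mean occupation of a level at energy eps for chemical potential mu.

    Valid for eps > mu; diverges as eps approaches mu from above.
    """
    return 1.0 / (math.exp((eps - mu) / temperature) - 1.0)

occupation = bose_einstein(1.0, 0.0, 1.0)  # 1/(e - 1) ~ 0.582
```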

  17. A Gibbs potential expansion with a quantic system made up of a large number of particles

    International Nuclear Information System (INIS)

    Bloch, Claude; Dominicis, Cyrano de

    1959-01-01

    Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low density systems at low temperature. In the zero density limit, it reduces to the Beth-Uhlenbeck expression of the second virial coefficient. For a system of fermions in the zero temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation for an arbitrary temperature. Reprint of a paper published in 'Nuclear Physics' 10, 1959, p. 181-196 [fr]

  18. A sampling strategy to establish existing plant configuration baselines

    International Nuclear Information System (INIS)

    Buchanan, L.P.

    1995-01-01

    The Department of Energy's Gaseous Diffusion Plants (DOEGDP) are undergoing a Safety Analysis Update Program. As part of this program, critical existing structures are being reevaluated for Natural Phenomena Hazards (NPH) based on the recommendations of UCRL-15910. The Department of Energy has specified that current plant configurations be used in the performance of these reevaluations. This paper presents the process and results of a walkdown program implemented at DOEGDP to establish the current configuration baseline for these existing critical structures for use in subsequent NPH evaluations. These structures are classified as moderate hazard facilities and were constructed in the early 1950's. The process involved a statistical sampling strategy to determine the validity of critical design information as represented on the original design drawings such as member sizes, orientation, connection details and anchorage. A floor load inventory of the dead load of the equipment, both permanently attached and spare, was also performed as well as a walkthrough inspection of the overall structure to identify any other significant anomalies

  19. Quantum Metropolis sampling.

    Science.gov (United States)

    Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F

    2011-03-03

    The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
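
    For contrast with the quantum version, the classical Metropolis rule the paper builds on can be sketched for a two-level system sampling a Gibbs distribution (a toy illustration, not the paper's algorithm):

```python
import math
import random

random.seed(1)

# Classical Metropolis sampling of a two-level system at inverse temperature
# beta: accept an energy-raising move with probability exp(-beta * dE).
beta = 1.0
energy = [0.0, 1.0]

state, counts = 0, [0, 0]
for _ in range(200_000):
    proposal = 1 - state                      # propose the other level
    d_e = energy[proposal] - energy[state]
    if d_e <= 0 or random.random() < math.exp(-beta * d_e):
        state = proposal
    counts[state] += 1

ratio = counts[1] / counts[0]                 # approaches exp(-beta) ~ 0.368
```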

  20. Gibbs-Thomson Law for Singular Step Segments: Thermodynamics Versus Kinetics

    Science.gov (United States)

    Chernov, A. A.

    2003-01-01

    Classical Burton-Cabrera-Frank theory presumes that thermal fluctuations are so fast that at any time the density of kinks on a step is comparable with the reciprocal intermolecular distance, so that the step rate is roughly isotropic within the crystal plane. Such azimuthal isotropy is, however, often not the case: kink density may be much lower. In particular, it was recently found on the (010) face of orthorhombic lysozyme that the interkink distance may exceed 500-600 intermolecular distances. Under such conditions, the Gibbs-Thomson law (GTL) may not be applicable: on a straight step segment between two corners, communication between the corners occurs exclusively by kink exchange. Annihilation between kinks of opposite sign generated at the corners results in the step-energy term entering the GTL. If the step segment length l is much greater than D/v, where D and v are the kink diffusivity and propagation rate, respectively, the opposite kinks have practically no chance to annihilate and the GTL is not applicable. The opposite condition of GTL applicability, l much less than D/v, is equivalent to the requirement that the relative supersaturation Δμ/kT be much less than α/l, where α is the molecular size. Thus, the GTL may be applied to a segment of 10^3 α ≈ 3 × 10^-5 cm ≈ 0.3 μm only if the supersaturation is less than 0.1%, while practically used driving forces for crystallization are much larger. Relationships alternative to the GTL for different, but low, kink densities are discussed. They confirm experimental evidence that the Burton-Cabrera-Frank theory of spiral growth predicts growth rates twice as low as the observed figures. Also, application of the GTL results in unrealistic step energies, while the suggested kinetic laws give reasonable figures.
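
    The abstract's numerical criterion can be checked directly with the order-of-magnitude values it quotes (molecular size α and segment length l):

```python
# GTL applicability requires relative supersaturation dmu/kT << alpha/seg_len.
alpha = 3e-8                 # molecular size in cm (order of magnitude from the text)
seg_len = 1e3 * alpha        # segment of 10^3 molecular sizes ~ 3e-5 cm ~ 0.3 micron
threshold = alpha / seg_len  # ~1e-3: supersaturation must stay below ~0.1%
```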

  1. SUBLIMATION-DRIVEN ACTIVITY IN MAIN-BELT COMET 313P/GIBBS

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, Henry H. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Hainaut, Olivier [European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748 Garching bei München (Germany); Novaković, Bojan [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Studentski trg 16, 11000 Belgrade (Serbia); Bolin, Bryce [Observatoire de la Côte d’Azur, Boulevard de l’Observatoire, B.P. 4229, F-06304 Nice Cedex 4 (France); Denneau, Larry; Haghighipour, Nader; Kleyna, Jan; Meech, Karen J.; Schunova, Eva; Wainscoat, Richard J. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Fitzsimmons, Alan [Astrophysics Research Centre, Queens University Belfast, Belfast BT7 1NN (United Kingdom); Kokotanekova, Rosita; Snodgrass, Colin [Planetary and Space Sciences, Department of Physical Sciences, The Open University, Milton Keynes MK7 6AA (United Kingdom); Lacerda, Pedro [Max Planck Institute for Solar System Research, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Micheli, Marco [ESA SSA NEO Coordination Centre, Frascati, RM (Italy); Moskovitz, Nick; Wasserman, Lawrence [Lowell Observatory, 1400 W. Mars Hill Road, Flagstaff, AZ 86001 (United States); Waszczak, Adam, E-mail: hhsieh@asiaa.sinica.edu.tw [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States)

    2015-02-10

    We present an observational and dynamical study of newly discovered main-belt comet 313P/Gibbs. We find that the object is clearly active both in observations obtained in 2014 and in precovery observations obtained in 2003 by the Sloan Digital Sky Survey, strongly suggesting that its activity is sublimation-driven. This conclusion is supported by a photometric analysis showing an increase in the total brightness of the comet over the 2014 observing period, and dust modeling results showing that the dust emission persists over at least three months during both active periods, where we find start dates for emission no later than 2003 July 24 ± 10 for the 2003 active period and 2014 July 28 ± 10 for the 2014 active period. From serendipitous observations by the Subaru Telescope in 2004 when the object was apparently inactive, we estimate that the nucleus has an absolute R-band magnitude of H{sub R} = 17.1 ± 0.3, corresponding to an effective nucleus radius of r{sub e} ∼ 1.00 ± 0.15 km. The object’s faintness at that time means we cannot rule out the presence of activity, and so this computed radius should be considered an upper limit. We find that 313P’s orbit is intrinsically chaotic, having a Lyapunov time of T{sub l} = 12,000 yr and being located near two three-body mean-motion resonances with Jupiter and Saturn, 11J-1S-5A and 10J+12S-7A, yet appears stable over >50 Myr in an apparent example of stable chaos. We furthermore find that 313P is the second main-belt comet, after P/2012 T1 (PANSTARRS), to belong to the ∼155 Myr old Lixiaohua asteroid family.
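
    The conversion from absolute magnitude to nucleus size uses the standard relation D(km) = 1329/√p × 10^(−H/5). The geometric albedo is not given in the abstract, so the 0.06 below is an assumed, typical value for a dark nucleus, not a figure from the paper:

```python
import math

def diameter_km(abs_magnitude, albedo):
    """Standard asteroid size relation: D = 1329 / sqrt(p) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-abs_magnitude / 5.0)

# H_R = 17.1 from the abstract; albedo 0.06 is an assumption for the sketch.
radius = diameter_km(17.1, 0.06) / 2.0   # ~1 km, consistent with the quoted r_e
```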

  2. Thermodynamic analysis of ethanol/water system in a fuel cell reformer with the Gibbs energy minimization method

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; De Fraga Malfatti, Celia; Heck, Nestor Cesar

    2003-01-01

    The use of fuel cells is a promising technology in the conversion of chemical to electrical energy. Due to environmental concerns related to the reduction of atmospheric pollution and greenhouse gas emissions such as CO2, NOx and hydrocarbons, there has been much research on fuel cells using hydrogen as fuel. Hydrogen gas can be produced by several routes; a promising one is the steam reforming of ethanol. This route may become an important industrial process, especially for sugarcane-producing countries. Ethanol is a renewable energy source and presents several advantages over other sources related to natural availability, storage and handling safety. In order to contribute to the understanding of the steam reforming of ethanol inside the reformer, this work presents a detailed thermodynamic analysis of the ethanol/water system, in the temperature range of 500-1200 K, considering different H2O/ethanol reforming ratios. The equilibrium determinations were done with the help of the Gibbs energy minimization method using the Generalized Reduced Gradient (GRG) algorithm. Based on literature data, the species considered in the calculations were H2, H2O, CO, CO2, CH4, C2H4, CH3CHO and C2H5OH (gas phase) and C(gr) (graphite phase). The thermodynamic conditions for carbon deposition (probably soot) on the catalyst during gas reforming were analyzed, in order to establish temperature ranges and H2O/ethanol ratios where carbon precipitation is not thermodynamically feasible. Experimental results from the literature show that carbon deposition causes catalyst deactivation during reforming. This deactivation is due to encapsulating carbon that covers active phases on a catalyst substrate, e.g. Ni over Al2O3. In the present study, a mathematical relationship between the Lagrange multipliers and the carbon activity (with reference to the graphite phase) was deduced, unveiling the carbon activity in the reformer atmosphere. From this, it is possible to foresee if soot

  3. The standard Gibbs free energy of formation of lithium manganese oxides at the temperatures of (680, 740 and 800) K

    International Nuclear Information System (INIS)

    Rog, G.; Kucza, W.; Kozlowska-Rog, A.

    2004-01-01

    The standard Gibbs free energy of formation of LiMnO2 and LiMn2O4 at the temperatures of (680, 740 and 800) K has been determined with the help of solid-state galvanic cells involving a lithium-β-alumina electrolyte. The equilibrium electrical potentials of a cathode containing LixMn2O4 spinel, in the composition ranges 0 ≤ x ≤ 1 and 1 ≤ x ≤ 2, vs. metallic lithium in a reversible intercalation galvanic cell have been calculated. The existence of the two voltage plateaus which appear during the charging and discharging processes in the reversible intercalation of lithium into LixMn2O4 spinel has been discussed.
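
    EMF measurements from such cells convert to Gibbs free energies through ΔG = −nFE. A minimal sketch; the voltage below is a hypothetical round number, not one of the paper's measurements:

```python
FARADAY = 96485.0  # Faraday constant, C/mol

def gibbs_from_emf(n_electrons, emf_volts):
    """dG = -n * F * E in J/mol; negative dG means the cell reaction is spontaneous."""
    return -n_electrons * FARADAY * emf_volts

# Hypothetical one-electron intercalation step at 3.0 V.
dg = gibbs_from_emf(1, 3.0)  # about -289.5 kJ/mol
```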

  4. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  5. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    Science.gov (United States)

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies related to the distribution of metallic elements in biological samples are one of the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, in such literature, there is a lack of articles dedicated to reviewing calibration strategies, and their problems, nomenclature, definitions, ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected and sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from that hitherto used. This article is the first work in the literature that refers to and emphasizes many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. A Cost-Constrained Sampling Strategy in Support of LAI Product Validation in Mountainous Areas

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2016-08-01

    Full Text Available Increasing attention is being paid to leaf area index (LAI) retrieval in mountainous areas. Mountainous areas present extreme topographic variability, and are characterized by more spatial heterogeneity and inaccessibility compared with flat terrain. It is difficult to collect representative ground-truth measurements, and the validation of LAI in mountainous areas is still problematic. A cost-constrained sampling strategy (CSS) in support of LAI validation is presented in this study. To account for the influence of rugged terrain on implementation cost, a cost-objective function was incorporated into the traditional conditioned Latin hypercube (CLH) sampling strategy. A case study in Hailuogou, Sichuan province, China was used to assess the efficiency of CSS. Normalized difference vegetation index (NDVI), land cover type, and slope were selected as auxiliary variables to represent the variability of LAI in the study area. Results show that CSS can satisfactorily capture the variability across the site extent, while minimizing field efforts. One appealing feature of CSS is that the compromise between representativeness and implementation cost can be regulated according to actual surface heterogeneity and budget constraints, and this makes CSS flexible. Although the proposed method was only validated for the auxiliary variables rather than the LAI measurements, it serves as a starting point for establishing the locations of field plots and facilitates the preparation of field campaigns in mountainous areas.
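
    Conditioned Latin hypercube sampling builds on the plain Latin hypercube design, which can be sketched as follows; the conditioning on NDVI, land cover and slope, and the cost objective, are beyond this toy:

```python
import random

random.seed(0)

def latin_hypercube(n_samples, n_vars):
    """Plain Latin hypercube design: one point per stratum in every dimension."""
    columns = []
    for _ in range(n_vars):
        strata = list(range(n_samples))
        random.shuffle(strata)  # independent stratum order per variable
        columns.append([(s + random.random()) / n_samples for s in strata])
    return list(zip(*columns))  # one row per sample, values in [0, 1)

pts = latin_hypercube(10, 3)
```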

  7. inverse gaussian model for small area estimation via gibbs sampling

    African Journals Online (AJOL)

    ADMIN

    1 Department of Decision Sciences and MIS, Concordia University, Montréal, Québec ... method by application to household income survey data, comparing it against the usual lognormal ...... pensions, superannuation and annuities and other.

  8. Interactive Control System, Intended Strategy, Implemented Strategy and Emergent Strategy

    OpenAIRE

    Tubagus Ismail; Darjat Sudrajat

    2012-01-01

    The purpose of this study was to examine the relationship between management control systems (MCS) and strategy formation processes, namely: intended strategy, emergent strategy and implemented strategy. The focus of MCS in this study was the interactive control system. The study was based on Structural Equation Modeling (SEM) as its multivariate analysis instrument. The samples were upper middle managers of manufacturing companies in Banten Province, DKI Jakarta Province and West Java Province. AM...

  9. Evaluation of 5-FU pharmacokinetics in cancer patients with DPD deficiency using a Bayesian limited sampling strategy

    NARCIS (Netherlands)

    Van Kuilenburg, A.; Hausler, P.; Schalhorn, A.; Tanck, M.; Proost, J.H.; Terborg, C.; Behnke, D.; Schwabe, W.; Jabschinsky, K.; Maring, J.G.

    Aims: Dihydropyrimidine dehydrogenase (DPD) is the initial enzyme in the catabolism of 5-fluorouracil (5FU) and DPD deficiency is an important pharmacogenetic syndrome. The main purpose of this study was to develop a limited sampling strategy to evaluate the pharmacokinetics of 5FU and to detect

  10. Comparative study of solute trapping and Gibbs free energy changes at the phase interface during alloy solidification under local nonequilibrium conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sobolev, S. L., E-mail: sobolev@icp.ac.ru [Russian Academy of Sciences, Institute of Problems of Chemical Physics (Russian Federation)

    2017-03-15

    An analytical model has been developed to describe the influence of solute trapping during rapid alloy solidification on the components of the Gibbs free energy change at the phase interface with emphasis on the solute drag energy. For relatively low interface velocity V < V{sub D}, where V{sub D} is the characteristic diffusion velocity, all the components, namely mixing part, local nonequilibrium part, and solute drag, significantly depend on solute diffusion and partitioning. When V ≥ V{sub D}, the local nonequilibrium effects lead to a sharp transition to diffusionless solidification. The transition is accompanied by complete solute trapping and vanishing solute drag energy, i.e. partitionless and “dragless” solidification.

  11. Efficacy of independence sampling in replica exchange simulations of ordered and disordered proteins.

    Science.gov (United States)

    Lee, Kuo Hao; Chen, Jianhan

    2017-11-15

    Recasting temperature replica exchange (T-RE) as a special case of Gibbs sampling has led to a simple and efficient scheme for enhanced mixing (Chodera and Shirts, J. Chem. Phys., 2011, 135, 194110). To critically examine whether T-RE with independence sampling (T-REis) improves conformational sampling, we performed T-RE and T-REis simulations of ordered and disordered proteins using coarse-grained and atomistic models. The results demonstrate that T-REis effectively increases replica mobility in temperature space with minimal computational overhead, especially for folded proteins. However, enhanced mixing does not translate well into improved conformational sampling. The convergence of the thermodynamic properties of interest is similar, with slight improvements for T-REis of ordered systems. The study re-affirms that the efficiency of T-RE does not appear to be limited by temperature diffusion, but by the inherent rates of spontaneous large-scale conformational re-arrangements. Due to its simplicity and efficacy of enhanced mixing, T-REis is expected to be more effective when incorporated with various Hamiltonian-RE protocols. © 2017 Wiley Periodicals, Inc.
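The enhanced-mixing scheme this record builds on replaces single neighbour swaps with many pairwise exchange attempts, which approximates independence (Gibbs) sampling of the replica-to-temperature permutation. A minimal sketch in that spirit, with invented energies and temperatures (not the authors' code or systems):

```python
import numpy as np

def mix_replicas(E, beta, perm, n_attempts, rng):
    """Approximate Gibbs/independence sampling of the replica-to-temperature
    permutation via many random pairwise Metropolis swap attempts (in the
    spirit of Chodera and Shirts, 2011).  perm[i] is the temperature index
    currently assigned to replica i with potential energy E[i]."""
    for _ in range(n_attempts):
        i, j = rng.integers(len(E), size=2)
        # Log acceptance ratio for exchanging the temperatures of i and j.
        dlog = (beta[perm[i]] - beta[perm[j]]) * (E[i] - E[j])
        if dlog >= 0 or rng.random() < np.exp(dlog):
            perm[i], perm[j] = perm[j], perm[i]
    return perm

# Two replicas: after thorough mixing the low-energy configuration should
# almost surely sit at the lower temperature (larger beta).
E = np.array([0.0, 20.0])
beta = np.array([1.0, 2.0])
perm = mix_replicas(E, beta, np.array([0, 1]), 100, np.random.default_rng(7))
```

Because the stationary weight of an assignment is proportional to exp(-Σ β_perm[i] E[i]), each pairwise attempt is a valid Metropolis move, and many attempts per exchange step approach an independent draw of the full permutation.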

  12. A study of the Boltzmann and Gibbs entropies in the context of a stochastic toy model

    Science.gov (United States)

    Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna

    2018-05-01

    In this article we reconsider a stochastic toy model of thermal contact, first introduced in Onorato et al (2017 Eur. J. Phys. 38 045102), showing its educational potential for clarifying some current issues in the foundations of thermodynamics. The toy model can be realized in practice using dice and coins, and can be seen as representing thermal coupling of two subsystems with energy bounded from above. The system is used as a playground for studying the different behaviours of the Boltzmann and Gibbs temperatures and entropies in the approach to steady state. The process that models thermal contact between the two subsystems can be proved to be an ergodic, reversible Markov chain; thus the dynamics produces an equilibrium distribution in which the weight of each state is proportional to its multiplicity in terms of microstates. Each one of the two subsystems, taken separately, is formally equivalent to an Ising spin system in the non-interacting limit. The model is intended for educational purposes, and the level of readership of the article is aimed at advanced undergraduates.
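A minimal simulation in the spirit of such a toy model (a sketch of the idea, not the authors' exact dice-and-coins rules): two subsystems with energy bounded from above exchange energy by swapping randomly chosen coins, a reversible, energy-conserving move, so the chain spends time in each macrostate in proportion to its microstate multiplicity.

```python
import random

def simulate(Na=6, Nb=6, E=6, steps=200_000, seed=42):
    """Two subsystems of coins ('up' = one energy unit, so each subsystem's
    energy is bounded above).  Move: pick one coin in each subsystem and
    swap them -- reversible and energy-conserving, so the chain converges
    to the multiplicity-weighted equilibrium distribution."""
    rng = random.Random(seed)
    a = [1] * E + [0] * (Na - E)   # all energy initially in subsystem A
    b = [0] * Nb
    counts = [0] * (E + 1)         # visits to each macrostate E_A = k
    for _ in range(steps):
        i, j = rng.randrange(Na), rng.randrange(Nb)
        a[i], b[j] = b[j], a[i]
        counts[sum(a)] += 1
    return counts

counts = simulate()
# The equilibrium weight of macrostate E_A = k is proportional to
# C(Na, k) * C(Nb, E - k), so with Na = Nb = E = 6 the mode is E_A = 3.
```

Counting visits and comparing with the hypergeometric weights C(Na, k)·C(Nb, E−k) reproduces, in miniature, the article's comparison of time averages against the multiplicity-based equilibrium distribution.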

  13. Decision Tree and Survey Development for Support in Agricultural Sampling Strategies during Nuclear and Radiological Emergencies

    International Nuclear Information System (INIS)

    Yi, Amelia Lee Zhi; Dercon, Gerd

    2017-01-01

    In the event of a severe nuclear or radiological accident, the release of radionuclides results in contamination of land surfaces affecting agricultural and food resources. Speedy accumulation of information and guidance on decision making is essential in enhancing the ability of stakeholders to plan immediate countermeasures. Support tools such as decision trees and sampling protocols allow for swift response by governmental bodies and assist in proper management of the situation. While such tools exist, they focus mainly on protecting public well-being and not on food safety management strategies. Consideration of the latter is necessary, as it has long-term implications, especially for agriculturally dependent Member States. However, it is a research gap that remains to be filled.

  14. Limited sampling strategies drawn within 3 hours postdose poorly predict mycophenolic acid area-under-the-curve after enteric-coated mycophenolate sodium.

    NARCIS (Netherlands)

    Winter, B.C. de; Gelder, T. van; Mathôt, R.A.A.; Glander, P.; Tedesco-Silva, H.; Hilbrands, L.B.; Budde, K.; Hest, R.M. van

    2009-01-01

    Previous studies predicted that limited sampling strategies (LSS) for estimation of mycophenolic acid (MPA) area-under-the-curve (AUC(0-12)) after ingestion of enteric-coated mycophenolate sodium (EC-MPS) using a clinically feasible sampling scheme may have poor predictive performance. Failure of

  15. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  16. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Science.gov (United States)

    Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing

    2017-01-01

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcodes of globally distributed species should be increased to 11-15.

  17. Modelling metal-humate interactions: an approach based on the Gibbs-Donnan concept

    International Nuclear Information System (INIS)

    Ephraim, J.H.

    1995-01-01

    Humic and fulvic acids constitute an appreciable portion of the organic substances in both aquatic and terrestrial environments. Their ability to sequester metal ions and other trace elements has engaged the interest of numerous environmental scientists, and even though considerable advances have been made, much remains unknown in this area. High molecular weight fractions and functional group heterogeneity have endowed these substances with ion exchange characteristics. For example, the cation exchange capacities of some humic substances have been compared to those of smectites. Recent developments in solution chemistry have also indicated that humic substances can interact with other anions because of their amphiphilic nature. In this paper, metal-humate interaction is described by relying heavily on information obtained from treatment of the solution chemistry of ion exchangers as typical polymers. In such a treatment, the perturbations to the metal-humate interaction are estimated by resort to the Gibbs-Donnan concept, where the humic substance molecule is envisaged as having a potential counter-ion concentrating region around its molecular domain into which diffusible components can enter or leave depending on their corresponding electrochemical potentials. Information from studies with ion exchangers has been adapted to describe ionic equilibria involving these substances, making it possible to characterise the configuration/conformation of these natural organic acids and to correct for electrostatic effects in the metal-humate interaction. The resultant unified physicochemical approach has facilitated the identification and estimation of the complications to the solution chemistry of humic substances. (authors). 15 refs., 1 fig

  18. Standard methods for sampling and sample preparation for gamma spectroscopy

    International Nuclear Information System (INIS)

    Taskaeva, M.; Taskaev, E.; Nikolov, P.

    1993-01-01

    The strategy for sampling and sample preparation is outlined: the necessary number of samples; analysis and treatment of the results received; the quantity of the analysed material according to the radionuclide concentrations and analytical methods; and the minimal quantity and kind of data needed for drawing final conclusions and making decisions on the basis of the results received. This strategy was tested in gamma spectroscopic analysis of radionuclide contamination of the region of the Eleshnitsa Uranium Mines. The water samples were taken and stored according to ASTM D 3370-82. The general sampling procedures were in conformity with the recommendations of ISO 5667. The radionuclides were concentrated by coprecipitation with iron hydroxide and ion exchange. The sampling of soil samples complied with the rules of ASTM C 998, and their sample preparation with ASTM C 999. After preparation the samples were sealed hermetically and measured. (author)

  19. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    Science.gov (United States)

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, the establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical
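The volume-weighted systematic random sampling by point-counting mentioned above can be sketched on a synthetic binary volume using the Cavalieri principle: systematic sections with a random start, a point grid with a random offset on each section, and volume estimated as spacing times grid area times hit count. The spacings and the spherical test object below are illustrative assumptions, not values from the guidelines.

```python
import numpy as np

def cavalieri_volume(mask, t, g, seed=0):
    """Cavalieri point-counting estimate of the volume of a binary 3-D
    mask: systematic sections every t voxels with a random start, a point
    grid of spacing g with a random offset on each section, and
    V ~= t * g**2 * (number of grid points hitting the structure)."""
    rng = np.random.default_rng(seed)
    z0 = int(rng.integers(t))                            # random first section
    y0, x0 = (int(v) for v in rng.integers(g, size=2))   # random grid offset
    hits = int(mask[z0::t, y0::g, x0::g].sum())
    return t * g * g * hits

# Synthetic "organ": a solid ball inside a 60^3 voxel volume.
z, y, x = np.ogrid[:60, :60, :60]
ball = (x - 30)**2 + (y - 30)**2 + (z - 30)**2 <= 20**2
est = cavalieri_volume(ball, t=5, g=3)
true_vol = int(ball.sum())
```

Because both the section start and the grid offset are uniformly random, the estimator is unbiased, and for smooth objects the systematic design keeps its variance far below that of independent random points with the same total count.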

  20. Sampling strategies to capture single-cell heterogeneity

    OpenAIRE

    Satwik Rajaram; Louise E. Heinrich; John D. Gordan; Jayant Avva; Kathy M. Bonness; Agnieszka K. Witkiewicz; James S. Malter; Chloe E. Atreya; Robert S. Warren; Lani F. Wu; Steven J. Altschuler

    2017-01-01

    Advances in single-cell technologies have highlighted the prevalence and biological significance of cellular heterogeneity. A critical question is how to design experiments that faithfully capture the true range of heterogeneity from samples of cellular populations. Here, we develop a data-driven approach, illustrated in the context of image data, that estimates the sampling depth required for prospective investigations of single-cell heterogeneity from an existing collection of samples. ...

  1. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
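The "mass at zero plus positive exponential" prior and its Gibbs sampler can be sketched in a toy setting with an identity forward operator, where each coordinate's conditional posterior is an explicit mixture of a point mass at zero and a positive truncated Gaussian. This is a simplified sketch, not the authors' algorithm: with a general linear operator the coordinates couple and must be cycled through one at a time, and all parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def gibbs_sparse(y, sigma=0.5, lam=1.0, w=0.2, n_iter=500, seed=0):
    """Toy Gibbs sampler for y = x + noise with prior
    x ~ (1-w)*delta_0 + w*Exp(lam): the conditional posterior of each x_i
    is a mixture of a point mass at zero and a Gaussian truncated to x > 0."""
    rng = np.random.default_rng(seed)
    mu = y - lam * sigma**2                    # slab conditional mean
    # Marginal likelihood of y under the slab (exponential) component.
    slab = lam * np.exp(0.5 * (lam * sigma)**2 - lam * y) * norm.cdf(mu / sigma)
    spike = (1 - w) * norm.pdf(y, 0, sigma)
    p_slab = w * slab / (w * slab + spike)
    x = np.zeros_like(y)
    samples = []
    for _ in range(n_iter):
        on = rng.random(y.size) < p_slab       # sample the support indicators
        x[:] = 0.0
        x[on] = truncnorm.rvs(-mu[on] / sigma, np.inf, loc=mu[on],
                              scale=sigma, random_state=rng)
        samples.append(x.copy())
    return np.mean(samples, axis=0)            # posterior-mean estimate

rng = np.random.default_rng(1)
truth = np.where(rng.random(200) < 0.2, rng.exponential(1.0, 200), 0.0)
y = truth + rng.normal(0, 0.5, 200)
xhat = gibbs_sparse(y)
```

Averaging the Gibbs draws gives a posterior-mean image estimate, which illustrates the abstract's point that the samples carry more information than a single point estimate (credible intervals come from the same draws).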

  2. Development of improved space sampling strategies for ocean chemical properties: Total carbon dioxide and dissolved nitrate

    Science.gov (United States)

    Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.

    1995-01-01

    Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) must face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating the gridded TCO2 fields needed to constrain geochemical models.
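The redundancy test described here (subsample the dense profile, interpolate back, compare against the stated accuracy) can be sketched as follows. The profile shape and numbers are synthetic assumptions for illustration, not the WOCE cruise data.

```python
import numpy as np

def min_samples_needed(depth, tco2, tol=4.0):
    """Coarsest regular subsampling of a dense profile whose linear
    interpolation still reproduces the full profile within tol (here the
    stated accuracy, +/- 4 micromol/kg).  Returns the number of samples
    retained; tries the coarsest grid first."""
    for step in range(len(depth) - 1, 1, -1):
        idx = np.unique(np.r_[np.arange(0, len(depth), step),
                              len(depth) - 1])      # always keep both ends
        rec = np.interp(depth, depth[idx], tco2[idx])
        if np.max(np.abs(rec - tco2)) <= tol:
            return len(idx)
    return len(depth)

# Synthetic smooth deep-ocean TCO2 profile (assumed shape, not WOCE data).
depth = np.linspace(1000, 5000, 200)                # depth axis, m
tco2 = 2300 - 80 * np.exp(-(depth - 1000) / 1500)   # micromol/kg
n_min = min_samples_needed(depth, tco2)
```

Tightening `tol` (i.e., improving measurement precision) forces a denser sampling pattern, which is exactly the trade-off the abstract notes for future, more precise measurements.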

  3. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    Science.gov (United States)

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
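The use of a gene-flow-model output as an auxiliary variable for stratification can be sketched with a Monte Carlo comparison against simple random sampling. The decaying presence rate, stratum count, and per-location grain count below are illustrative assumptions standing in for the paper's zero-inflated Poisson model and field data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic field: the per-location transgene presence rate decays with
# distance to the neighbouring GM field; the auxiliary variable stands in
# for the gene-flow model output.  All numbers are illustrative.
dist = rng.uniform(0, 100, 4000)        # candidate sampling locations (m)
p = 0.03 * np.exp(-dist / 15)           # local transgene presence rate
aux = p                                  # auxiliary variable (model output)
true_rate = p.mean()
M = 100                                  # grains analysed per location

def measure(idx):
    """Observed transgene fraction among M grains at chosen locations."""
    return rng.binomial(M, p[idx]) / M

def srs(n):
    """Simple random sampling of n locations."""
    return measure(rng.choice(p.size, n, replace=False)).mean()

def stratified(n, k=4):
    """Equal allocation over k strata built from quantiles of the
    auxiliary variable; stratum means weighted by stratum size."""
    edges = np.quantile(aux, np.linspace(0, 1, k + 1))
    lab = np.clip(np.searchsorted(edges, aux, side="right") - 1, 0, k - 1)
    est = 0.0
    for s in range(k):
        idx = np.flatnonzero(lab == s)
        pick = rng.choice(idx, n // k, replace=False)
        est += len(idx) / p.size * measure(pick).mean()
    return est

reps, n = 300, 100
sd_srs = np.std([srs(n) for _ in range(reps)])
sd_str = np.std([stratified(n) for _ in range(reps)])
```

Stratifying on the auxiliary variable removes the between-stratum component of the variance, so the stratified estimator's spread over repetitions is smaller than that of simple random sampling with the same effort, in line with the paper's finding that substantially smaller samples suffice.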

  4. Standard operating procedures for collection of soil and sediment samples for the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study

    Science.gov (United States)

    Fisher, Shawn C.; Reilly, Timothy J.; Jones, Daniel K.; Benzel, William M.; Griffin, Dale W.; Loftin, Keith A.; Iwanowicz, Luke R.; Cohl, Jonathan A.

    2015-12-17

    An understanding of the effects on human and ecological health brought by major coastal storms or flooding events is typically limited because of a lack of regionally consistent baseline and trends data in locations proximal to potential contaminant sources and mitigation activities, sensitive ecosystems, and recreational facilities where exposures are probable. In an attempt to close this gap, the U.S. Geological Survey (USGS) has implemented the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study to collect regional sediment-quality data prior to and in response to future coastal storms. The standard operating procedure (SOP) detailed in this document serves as the sample-collection protocol for the SCoRR strategy by providing step-by-step instructions for site preparation, sample collection and processing, and shipping of soil and surficial sediment (for example, bed sediment, marsh sediment, or beach material). The objectives of the SCoRR strategy pilot study are (1) to create a baseline of soil-, sand-, marsh sediment-, and bed-sediment-quality data from sites located in the coastal counties from Maine to Virginia based on their potential risk of being contaminated in the event of a major coastal storm or flooding (defined as Resiliency mode); and (2) to respond to major coastal storms and flooding by reoccupying select baseline sites and sampling within days of the event (defined as Response mode). For both modes, samples are collected in a consistent manner to minimize bias and maximize quality control by ensuring that all sampling personnel across the region collect, document, and process soil and sediment samples following the procedures outlined in this SOP. Samples are analyzed using four USGS-developed screening methods—inorganic geochemistry, organic geochemistry, pathogens, and biological assays—which are also outlined in this SOP. Because the SCoRR strategy employs a multi-metric approach for sample analyses, this

  5. A nested-PCR strategy for molecular diagnosis of mollicutes in uncultured biological samples from cows with vulvovaginitis.

    Science.gov (United States)

    Voltarelli, Daniele Cristina; de Alcântara, Brígida Kussumoto; Lunardi, Michele; Alfieri, Alice Fernandes; de Arruda Leme, Raquel; Alfieri, Amauri Alcindo

    2018-01-01

    Bacteria classified in the Mycoplasma (M. bovis and M. bovigenitalium) and Ureaplasma (U. diversum) genera are associated with granular vulvovaginitis, which affects heifers and cows of reproductive age. The traditional means for detection and speciation of mollicutes from clinical samples have been culture and serology. However, challenges experienced with these laboratory methods have hampered assessment of their impact on pathogenesis and epidemiology in cattle worldwide. The aim of this study was to develop a PCR strategy to detect and primarily discriminate between the main species of mollicutes associated with reproductive disorders of cattle in uncultured clinical samples. In order to amplify the 16S-23S rRNA internal transcribed spacer region of the genome, a consensual and species-specific nested-PCR assay was developed to identify and discriminate between the main species of mollicutes. In addition, 31 vaginal swab samples from affected dairy and beef cows were investigated. This nested-PCR strategy was successfully employed in the diagnosis of single and mixed mollicute infections of diseased cows from cattle herds in Brazil. The developed system enabled the rapid and unambiguous identification of the main mollicute species known to be associated with this cattle reproductive disorder through differential amplification of partial fragments of the ITS region of mollicute genomes. The development of rapid and sensitive tools for mollicute detection and discrimination without the need for prior culture or sequencing of PCR products is a high priority for accurate diagnosis in animal health. Therefore, the PCR strategy described herein may be helpful for diagnosis of this class of bacteria in genital swabs submitted to veterinary diagnostic laboratories, not demanding expertise in mycoplasma culture and identification. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Appreciating the difference between design-based and model-based sampling strategies in quantitative morphology of the nervous system.

    Science.gov (United States)

    Geuna, S

    2000-11-20

    Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterizes the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.

  7. Solid-Phase Extraction Strategies to Surmount Body Fluid Sample Complexity in High-Throughput Mass Spectrometry-Based Proteomics

    Science.gov (United States)

    Bladergroen, Marco R.; van der Burgt, Yuri E. M.

    2015-01-01

    For large-scale and standardized applications in mass spectrometry- (MS-) based proteomics automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS-acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply for both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE) including affinity enrichment strategies have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071

  8. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcodes of globally distributed species should be increased to 11-15.

  9. Sustained Attention Across the Life Span in a Sample of 10,000: Dissociating Ability and Strategy.

    Science.gov (United States)

    Fortenbaugh, Francesca C; DeGutis, Joseph; Germine, Laura; Wilmer, Jeremy B; Grosso, Mallory; Russo, Kathryn; Esterman, Michael

    2015-09-01

    Normal and abnormal differences in sustained visual attention have long been of interest to scientists, educators, and clinicians. Still lacking, however, is a clear understanding of how sustained visual attention varies across the broad sweep of the human life span. In the present study, we filled this gap in two ways. First, using an unprecedentedly large 10,430-person sample, we modeled age-related differences with substantially greater precision than have prior efforts. Second, using the recently developed gradual-onset continuous performance test (gradCPT), we parsed sustained-attention performance over the life span into its ability and strategy components. We found that after the age of 15 years, the strategy and ability trajectories saliently diverge. Strategy becomes monotonically more conservative with age, whereas ability peaks in the early 40s and is followed by a gradual decline in older adults. These observed life-span trajectories for sustained attention are distinct from results of other life-span studies focusing on fluid and crystallized intelligence. © The Author(s) 2015.

  10. The Gibbs free energy of homogeneous nucleation: From atomistic nuclei to the planar limit.

    Science.gov (United States)

    Cheng, Bingqing; Tribello, Gareth A; Ceriotti, Michele

    2017-09-14

    In this paper we discuss how the information contained in atomistic simulations of homogeneous nucleation should be used when fitting the parameters in macroscopic nucleation models. We show how the number of solid and liquid atoms in such simulations can be determined unambiguously by using a Gibbs dividing surface and how the free energy as a function of the number of solid atoms in the nucleus can thus be extracted. We then show that the parameters (the chemical potential, the interfacial free energy, and a Tolman correction) of a model based on classical nucleation theory can be fitted using the information contained in these free-energy profiles but that the parameters in such models are highly correlated. This correlation is unfortunate as it ensures that small errors in the computed free energy surface can give rise to large errors in the extrapolated properties of the fitted model. To resolve this problem we thus propose a method for fitting macroscopic nucleation models that uses simulations of planar interfaces and simulations of three-dimensional nuclei in tandem. We show that when the chemical potentials and the interface energy are pinned to their planar-interface values, more precise estimates for the Tolman length are obtained. Extrapolating the free energy profile obtained from small simulation boxes to larger nuclei is thus more reliable.
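The kind of macroscopic model described above can be sketched directly. The snippet below uses a CNT-style free-energy expression with a Tolman-like correction term and locates the nucleation barrier; the functional form is one common parameterisation and the coefficients are made up for illustration, not the authors' fitted values:

```python
# CNT-style free energy of a nucleus of n solid atoms, with a Tolman-like
# correction (illustrative parameterisation and coefficients, not the paper's):
#   dG(n) = -dmu*n + a*n^(2/3) + b*n^(1/3)
def dG(n, dmu=0.2, a=2.0, b=-1.5):
    return -dmu * n + a * n ** (2 / 3) + b * n ** (1 / 3)

# The nucleation barrier is the maximum of dG(n) over nucleus sizes n;
# the maximiser n_star is the critical nucleus size.
sizes = range(1, 2000)
n_star = max(sizes, key=dG)
print("critical nucleus size:", n_star)
print("barrier height       :", round(dG(n_star), 2))
```

With more negative chemical-potential difference (larger `dmu`) the critical nucleus shrinks and the barrier drops, which is the qualitative behaviour the fitted models are meant to capture.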

  11. Modeling Electric Double-Layer Capacitors Using Charge Variation Methodology in Gibbs Ensemble

    Directory of Open Access Journals (Sweden)

    Ganeshprasad Pavaskar

    2018-01-01

    Full Text Available Supercapacitors deliver higher power than batteries and find applications in grid integration and electric vehicles. Recent work by Chmiola et al. (2006) has revealed an unexpected increase in the capacitance of porous carbon electrodes using ionic liquids as electrolytes. The work has generated curiosity among both experimentalists and theoreticians. Here, we have performed molecular simulations using a recently developed technique (Punnathanam, 2014) for simulating supercapacitor systems. In this technique, the two electrodes (containing electrolyte in a slit pore) are simulated in two different boxes using the Gibbs ensemble methodology. This reduces the number of particles required and the interfacial interactions, which helps in reducing the computational load. The method simulates an electric double-layer capacitor (EDLC) with macroscopic electrodes using much smaller system sizes. In addition, the charges on individual electrode atoms are allowed to vary in response to the movement of electrolyte ions (i.e., the electrode is polarizable) while ensuring these atoms are at the same electric potential. We also present the application of our technique to EDLCs with the electrodes modeled as slit pores and as complex three-dimensional pore networks for different electrolyte geometries. The smallest pore geometry showed an increase in capacitance toward the potential of zero charge. This is in agreement with the new understanding of the electrical double layer in regions of dense ionic packing, as noted by Kornyshev's theoretical model (Kornyshev, 2007), which also showed a similar trend. This is not addressed by the classical Gouy-Chapman theory for the electric double layer. Furthermore, the electrode polarizability simulated in the model improved the accuracy of the calculated capacitance. However, its addition did not significantly alter the capacitance values in the voltage range considered.

  12. Radial line-scans as representative sampling strategy in dried-droplet laser ablation of liquid samples deposited on pre-cut filter paper disks

    Energy Technology Data Exchange (ETDEWEB)

    Nischkauer, Winfried [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria); Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Vanhaecke, Frank [Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Bernacchi, Sébastien; Herwig, Christoph [Institute of Chemical Engineering, Vienna University of Technology, Vienna (Austria); Limbeck, Andreas, E-mail: Andreas.Limbeck@tuwien.ac.at [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria)

    2014-11-01

    Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL⁻¹ with a reproducibility of 10 % relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, if future analytical tasks would require it. Trueness of the proposed method was investigated by cross-validation with
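The compensation idea can be illustrated with a toy centro-symmetric deposition model (the ring profile, spot radius and numbers below are illustrative assumptions, not the paper's data): a single-point measurement depends strongly on where the analyte dried, whereas a radial line-scan crosses every radius once and is far less sensitive to the redistribution.

```python
import math

R = 1.0                       # spot radius, arbitrary units (assumption)

def ring(r):                  # analyte piled up at the rim ("coffee ring")
    return math.exp(-((r - R) ** 2) / 0.005)

def flat(r):                  # same analyte spread evenly; normalised below
    return 1.0

def total_mass(s, n=10_000):  # 2*pi * integral of r*s(r) dr over the spot
    dr = R / n
    return sum(2 * math.pi * (i * dr) * s(i * dr) * dr for i in range(n))

def line_scan(s, n=10_000):   # mean signal along one radius (the line-scan)
    dr = R / n
    return sum(s(i * dr) * dr for i in range(n)) / R

scale = total_mass(ring) / total_mass(flat)   # equalise total analyte mass
print("centre point: ring =", ring(0.0), " flat =", scale * flat(0.0))
print("radial scan : ring =", line_scan(ring), " flat =", scale * line_scan(flat))
```

The centre-point readings differ by many orders of magnitude between the two drying patterns, while the radial line-scan averages differ by only a small factor; full compensation in practice still relies on the external calibration described in the abstract.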

  13. Fate of organic microcontaminants in wastewater treatment and river systems: An uncertainty assessment in view of sampling strategy, and compound consumption rate and degradability.

    Science.gov (United States)

    Aymerich, I; Acuña, V; Ort, C; Rodríguez-Roda, I; Corominas, Ll

    2017-11-15

    The growing awareness of the relevance of organic microcontaminants in the environment has led to a growing number of studies on attenuation of these compounds in wastewater treatment plants (WWTPs) and rivers. However, the effects of the sampling strategies (frequency and duration of composite samples) on the attenuation estimates are largely unknown. Our goal was to assess how frequency and duration of composite samples influence the uncertainty of the attenuation estimates in WWTPs and rivers. Furthermore, we also assessed how compound consumption rate and degradability influence uncertainty. The assessment was conducted by simulating the integrated wastewater system of Puigcerdà (NE Iberian Peninsula) using a sewer pattern generator and a coupled model of WWTP and river. Results showed that the sampling strategy is especially critical at the influent of the WWTP, particularly when the number of toilet flushes containing the compound of interest is small (≤100 toilet flushes containing the compound per day), and less critical at the effluent of the WWTP and in the river due to the mixing effects of the WWTP. For example, at the WWTP, when evaluating a compound that is present in 50 pulses·d⁻¹ using a sampling frequency of 15 min to collect a 24-h composite sample, the attenuation uncertainty can range from 94% (0% degradability) to 9% (90% degradability). The estimation of attenuation in rivers is less critical than in WWTPs, as the attenuation uncertainty was lower than 10% for all evaluated scenarios. Interestingly, the errors in the estimates of attenuation are usually lower than those of loads for most sampling strategies and compound characteristics (e.g. consumption and degradability), although the opposite occurs for compounds with low consumption and inappropriate sampling strategies at the WWTP. Hence, when designing a sampling campaign, one should consider the influence of compounds' consumption and degradability as well as the desired level of accuracy in
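The pulse-counting effect can be reproduced with a toy Monte Carlo model (the pulse dispersion, day length at 1-min resolution, and grab interval are illustrative assumptions, not the Puigcerdà model): rarely consumed compounds give far noisier composite-sample load estimates than frequently consumed ones.

```python
import random
import statistics

random.seed(1)

MINUTES = 1440   # one day at 1-min resolution
SPREAD = 10      # assumed in-sewer dispersion of one toilet-flush pulse (min)

def daily_profile(n_pulses):
    """Concentration-like time series built from randomly timed pulses."""
    conc = [0.0] * MINUTES
    for _ in range(n_pulses):
        t = random.randrange(MINUTES)
        for k in range(SPREAD):
            conc[(t + k) % MINUTES] += 1.0 / SPREAD
    return conc

def load_error(n_pulses, interval, reps=300):
    """Mean relative error (%) of a 24-h composite-sample load estimate
    assembled from one grab every `interval` minutes."""
    errors = []
    for _ in range(reps):
        conc = daily_profile(n_pulses)
        true_load = sum(conc)
        start = random.randrange(interval)
        est = statistics.mean(conc[start::interval]) * MINUTES
        errors.append(abs(est - true_load) / true_load)
    return 100 * statistics.mean(errors)

print("  50 pulses/day, 15-min grabs:", round(load_error(50, 15), 1), "%")
print("1000 pulses/day, 15-min grabs:", round(load_error(1000, 15), 1), "%")
```

The error for the low-consumption compound is several times larger at the same sampling frequency, mirroring the abstract's point that the influent sampling strategy matters most when few flushes carry the compound.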

  14. Actual distribution of Cronobacter spp. in industrial batches of powdered infant formula and consequences for performance of sampling strategies.

    Science.gov (United States)

    Jongenburger, I; Reij, M W; Boer, E P J; Gorris, L G M; Zwietering, M H

    2011-11-15

    The actual spatial distribution of microorganisms within a batch of food influences the results of sampling for microbiological testing when this distribution is non-homogeneous. In the case of pathogens being non-homogeneously distributed, it markedly influences public health risk. This study investigated the spatial distribution of Cronobacter spp. in powdered infant formula (PIF) on industrial batch-scale for both a recalled batch and a reference batch. Additionally, the local spatial occurrence of clusters of Cronobacter cells was assessed, as well as the performance of typical sampling strategies to determine the presence of the microorganisms. The concentration of Cronobacter spp. was assessed over the course of the filling time of each batch, by taking samples of 333 g using the most probable number (MPN) enrichment technique. The occurrence of clusters of Cronobacter spp. cells was investigated by plate counting. From the recalled batch, 415 MPN samples were drawn. The expected heterogeneous distribution of Cronobacter spp. could be quantified from these samples, which showed no detectable level (detection limit of -2.52 log CFU/g) in 58% of samples, whilst in the remainder concentrations were found to be between -2.52 and 2.75 log CFU/g. The estimated average concentration in the recalled batch was -2.78 log CFU/g, with a standard deviation of 1.10 log CFU/g. The estimated average concentration in the reference batch was -4.41 log CFU/g, with 99% of the 93 samples being below the detection limit. In the recalled batch, clusters of cells occurred sporadically, in 8 out of 2290 samples of 1 g taken. The two largest clusters contained 123 (2.09 log CFU/g) and 560 (2.75 log CFU/g) cells. Various sampling strategies were evaluated for the recalled batch. Taking more and smaller samples while keeping the total sampling weight constant considerably improved the performance of the sampling plans to detect such a type of contaminated batch. Compared to random sampling
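The benefit of many small samples over one large one, at constant total weight, can be demonstrated with a toy clustered-contamination model (the batch geometry and cluster size below are illustrative assumptions, not the PIF batch studied):

```python
import random

random.seed(42)

BATCH = 10_000          # batch modelled as 10,000 one-gram units (assumption)
CLUSTER_START = 4_000   # one contiguous contaminated cluster (assumption)
CLUSTER_LEN = 50        # 50 g of contaminated product

def detected(sample_start, sample_len):
    """A sample detects contamination if it overlaps the cluster."""
    return (sample_start < CLUSTER_START + CLUSTER_LEN
            and CLUSTER_START < sample_start + sample_len)

def detection_prob(n_samples, sample_len, trials=20_000):
    """Monte Carlo probability that at least one of n samples,
    drawn at random positions, hits the cluster."""
    hits = 0
    for _ in range(trials):
        if any(detected(random.randrange(BATCH - sample_len), sample_len)
               for _ in range(n_samples)):
            hits += 1
    return hits / trials

# Constant total sampling weight of 300 g, split differently:
print("1 x 300 g :", detection_prob(1, 300))    # roughly 0.03-0.04
print("30 x 10 g :", detection_prob(30, 10))    # roughly 0.16
```

Splitting the same 300 g into 30 independent positions multiplies the chances of intersecting a localized cluster, which is the mechanism behind the improved sampling plans reported above.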

  15. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    Science.gov (United States)

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and to demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space. Copyright © 2016 John Wiley & Sons, Ltd.
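The interleaving idea can be sketched in a few lines (the view count, segmentation factor and idealised periodic respiratory gate below are illustrative assumptions, not the paper's derived optimum): assigning every S-th view to the same interleaf spreads each interleaf over the whole acquisition, so a periodic gating window accepts views evenly across interleaves.

```python
def interleaves(n_views, S):
    """Segment a single-shot ordering of projection views into S interleaves:
    interleaf j takes every S-th view, so each interleaf alone still covers
    k-space incoherently rather than as one contiguous block of views."""
    return [list(range(j, n_views, S)) for j in range(S)]

def gated_acceptance(interleaf_views, period=240, accept_window=120):
    """Toy retrospective gating: views acquired one per time unit are kept
    only inside the quiescent part of an idealised respiratory cycle."""
    return [v for v in interleaf_views if v % period < accept_window]

views, S = 960, 8          # illustrative numbers, not the paper's protocol
kept = [len(gated_acceptance(iv)) for iv in interleaves(views, S)]
print("accepted views per interleaf:", kept)
```

Each interleaf retains the same number of views after gating; a contiguous segmentation (blocks of 120 consecutive views) would instead leave some interleaves almost entirely rejected.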

  16. Determination of standard Gibbs free energy of formation for Ca2P2O7 and Ca(PO3)2 from solid-state EMF measurements using yttria stabilised zirconia as solid electrolyte

    International Nuclear Information System (INIS)

    Sandstroem, Malin Hannah; Bostroem, Dan; Rosen, Erik

    2006-01-01

    The equilibrium reactions 3Ca₂P₂O₇(s) + 6Ni(s) ⇌ 2Ca₃(PO₄)₂(s) + 2Ni₃P(s) + 5/2 O₂(g) and 2Ca(PO₃)₂(s) + 6Ni(s) ⇌ Ca₂P₂O₇(s) + 2Ni₃P(s) + 5/2 O₂(g) were studied in the temperature range 890 K to 1140 K. The oxygen equilibrium pressures were determined using galvanic cells incorporating yttria-stabilized zirconia as solid electrolyte. From the measured data, and using literature values of the standard Gibbs free energy of formation for Ca₃(PO₄)₂ and Ni₃P, the following relationships for the standard Gibbs free energy of formation of Ca₂P₂O₇ and Ca(PO₃)₂ were calculated: Δ_fG°(Ca₂P₂O₇) (±11) / (kJ·mol⁻¹) = −3475.9 + 1.5441 (T/K) − 0.1051 (T/K)·ln(T/K) and Δ_fG°(Ca(PO₃)₂) (±12) / (kJ·mol⁻¹) = −3334.8 + 6.1561 (T/K) − 0.6950 (T/K)·ln(T/K).
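The fitted expressions can be evaluated directly; a minimal sketch (units kJ·mol⁻¹, meaningful only inside the studied 890-1140 K range):

```python
import math

def dfG_Ca2P2O7(T):
    """Standard Gibbs free energy of formation of Ca2P2O7 in kJ/mol,
    from the fitted expression above (valid roughly 890-1140 K)."""
    return -3475.9 + 1.5441 * T - 0.1051 * T * math.log(T)

def dfG_CaPO32(T):
    """Standard Gibbs free energy of formation of Ca(PO3)2 in kJ/mol."""
    return -3334.8 + 6.1561 * T - 0.6950 * T * math.log(T)

# Evaluate near the midpoint of the studied temperature range.
T = 1015.0
print(f"dfG(Ca2P2O7)  at {T} K: {dfG_Ca2P2O7(T):.1f} kJ/mol")
print(f"dfG(Ca(PO3)2) at {T} K: {dfG_CaPO32(T):.1f} kJ/mol")
```

The stated uncertainties (±11 and ±12 kJ·mol⁻¹) apply on top of these fitted values.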

  19. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts, due to Gautschi (1957), to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic samplin...
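A minimal sketch of the base scheme being extended, linear systematic sampling with multiple random starts (the helper name and parameter layout are assumptions for illustration):

```python
import random

def systematic_multi_start(N, n, m, seed=None):
    """Linear systematic sampling with m random starts (after Gautschi, 1957).
    Draws n units from a population of N units: m distinct random starts,
    each expanded with interval m*k where k = N/n.
    Assumes m divides n and n divides N."""
    rng = random.Random(seed)
    k = N // n                # the usual single-start systematic interval
    interval = m * k          # interval used by each of the m starts
    starts = rng.sample(range(interval), m)
    return sorted(s + j * interval for s in starts for j in range(n // m))

units = systematic_multi_start(N=100, n=10, m=2, seed=7)
print(units)   # 10 units: two interleaved systematic samples with interval 20
```

With m = 1 this reduces to ordinary linear systematic sampling; multiple starts make an unbiased variance estimator possible, which is the motivation for the extensions discussed above.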

  20. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.

    Science.gov (United States)

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.

  1. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
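The sampling-based propagation step can be sketched for a single uncertain input (the toy model, focal intervals and BPA weights below are illustrative assumptions): sampling inside each focal element approximates the minimum and maximum of the model there, from which belief and plausibility of an output condition are accumulated.

```python
import random

random.seed(0)

def f(x):
    return x * x   # toy model (assumption)

# Evidence specification for the input: focal intervals with BPA weights.
focal = [((0.0, 1.0), 0.5), ((0.5, 2.0), 0.3), ((1.5, 3.0), 0.2)]

def bel_pl(threshold, n=2000):
    """Sampling-based belief/plausibility that f(x) <= threshold.
    Samples inside each focal interval stand in for its min/max of f."""
    bel = pl = 0.0
    for (lo, hi), bpa in focal:
        ys = [f(random.uniform(lo, hi)) for _ in range(n)]
        if max(ys) <= threshold:    # whole focal element satisfies -> belief
            bel += bpa
        if min(ys) <= threshold:    # part of it satisfies -> plausibility
            pl += bpa
    return bel, pl

bel, pl = bel_pl(threshold=1.0)
print(f"Bel = {bel:.2f}, Pl = {pl:.2f}")   # any probability lies in [Bel, Pl]
```

The computational burden noted in the abstract comes from repeating this inner sampling over every focal element (and their Cartesian products for multiple inputs), which is why naive Monte Carlo quickly becomes impracticable for expensive models.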

  2. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  3. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
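The two initial designs being compared can be generated in a few lines (this sketches only the initial designs, not the subsequent optimization or the maximin criterion actually used in the study):

```python
import itertools
import math
import random

def lhs(n, d, rng, midpoint=False):
    """Latin hypercube design on [0,1]^d: one point per stratum per axis.
    midpoint=True places points at stratum centres (midpoint LHS),
    otherwise uniformly within each stratum (random LHS)."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        if midpoint:
            cols.append([(p + 0.5) / n for p in perm])
        else:
            cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

def maximin(points):
    """Minimum pairwise distance: larger means better space-filling."""
    return min(math.dist(a, b) for a, b in itertools.combinations(points, 2))

rng = random.Random(3)
n, d, reps = 20, 2, 200
avg_rand = sum(maximin(lhs(n, d, rng)) for _ in range(reps)) / reps
avg_mid = sum(maximin(lhs(n, d, rng, midpoint=True)) for _ in range(reps)) / reps
print(f"random LHS avg maximin  : {avg_rand:.3f}")
print(f"midpoint LHS avg maximin: {avg_mid:.3f}")
```

One structural difference is immediate: in a midpoint design any two points differ by at least 1/n in every coordinate, so no pair can be closer than sqrt(d)/n, whereas random LHS points in adjacent strata can be arbitrarily close.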

  4. Sampling strategies and materials for investigating large reactive particle complaints from Valley Village homeowners near a coal-fired power plant

    International Nuclear Information System (INIS)

    Chang, A.; Davis, H.; Frazar, B.; Haines, B.

    1997-01-01

    This paper will present Phase 3's sampling strategies, techniques, methods and substrates for assisting the District to resolve the complaints involving yellowish-brown staining and spotting of homes, cars, etc. These spots could not be easily washed off, and some were permanent. The sampling strategies for the three phases were: Phase 1, the identification of the reactive particles, conducted in October 1989 by APCD and IITRI; Phase 2, a study of the size distribution and concentration of reactive particle deposition as a function of distance and direction, conducted by Radian and LG and E; and Phase 3, the determination of the frequency of soiling events over a full year's duration, conducted in 1995 by APCD and IITRI. The sampling methods included two primary substrates (ACE sheets and painted steel) and four secondary substrates (mailbox, aluminum siding, painted wood panels and roof tiles). The secondary substrates were the main objects of the Valley Village complaints. The sampling technique included five Valley Village (VV) soiling/staining assessment sites and one site southwest of the power plant as a background/upwind site. The five VV sites northeast of the power plant covered a 50-degree sector and lay within 3/4 mile of the stacks. Hourly meteorological data for wind speeds and wind directions were collected. Based on this sampling technique, fifteen staining episodes were detected. Nine of them occurred in the summer of 1995.

  5. Interactive Control System, Intended Strategy, Implemented Strategy and Emergent Strategy

    Directory of Open Access Journals (Sweden)

    Tubagus Ismail

    2012-09-01

    Full Text Available The purpose of this study was to examine the relationship between the management control system (MCS) and strategy formation processes, namely: intended strategy, emergent strategy and implemented strategy. The focus of the MCS in this study was the interactive control system. The study used Structural Equation Modeling (SEM) as its multivariate analysis instrument. The samples were upper-middle managers of manufacturing companies in Banten Province, DKI Jakarta Province and West Java Province. The AMOS 16 software was used as an additional instrument to estimate the SEM model. The study found that the interactive control system had a positive and significant influence on intended strategy, a positive and significant influence on implemented strategy, and a positive and significant influence on emergent strategy. The limitation of this study is that the empirical model only used a one-way relationship between the process of strategy formation and the interactive control system.

  6. A hybrid computational strategy to address WGS variant analysis in >5000 samples.

    Science.gov (United States)

    Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli

    2016-09-10

    The decreasing costs of sequencing are driving the need for cost-effective and real-time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of the typical computing resources available to most research labs. Other infrastructures, such as the AWS cloud environment and supercomputers, have their own limitations that make large-scale joint variant calling infeasible: infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling. We present a high-throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of the AWS cloud, supercomputers and local high performance computing infrastructures. We present a novel binning approach for large-scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis on the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset, in which joint calling, imputation and phasing of over 5300 whole genome samples was produced in under 6 weeks using four state-of-the-art callers. The callers used were SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4000-core in-house cluster at Baylor College of Medicine, IBM power PC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and the ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and transferred only a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low

  7. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    Science.gov (United States)

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin-layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin-layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more significant at smaller thicknesses. We proposed a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and the Young's modulus measured by SWE in thin-layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.

  8. A comparison of temporal and location-based sampling strategies for global positioning system-triggered electronic diaries.

    Science.gov (United States)

    Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander

    2016-11-21

    Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
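A location-based trigger of this kind can be sketched as a simple rule combining the land-use class at the participant's current position with a minimum gap since the last report. The land-use classes, the 60-minute gap and the helper names are illustrative assumptions, not the exact schemes evaluated in the study:

```python
# Hypothetical sketch of a location-based e-diary trigger rule:
# fire a report when the participant is at a rarely visited land-use
# class AND enough time has passed since the last report.
RARE_LAND_USES = {"park", "forest", "water"}   # assumed "rare" classes
MIN_GAP_MIN = 60                               # assumed minimum interval

def should_trigger(land_use, minutes_since_last):
    return land_use in RARE_LAND_USES and minutes_since_last >= MIN_GAP_MIN

# toy movement track: (land use at position, minutes since last report)
track = [("residential", 30), ("park", 45), ("park", 70), ("commercial", 200)]
fired = [lu for lu, gap in track if should_trigger(lu, gap)]
print(fired)
```

A time-based scheme would instead fire purely on the clock; the rule above trades trigger frequency for better coverage of rarely visited land-use types, as the study reports.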

  9. Analytical strategies for uranium determination in natural water and industrial effluents samples

    International Nuclear Information System (INIS)

    Santos, Juracir Silva

    2011-01-01

    The work was developed under the project 993/2007 - 'Development of analytical strategies for uranium determination in environmental and industrial samples - Environmental monitoring in the Caetite city, Bahia, Brazil' and made possible through a partnership established between Universidade Federal da Bahia and the Comissao Nacional de Energia Nuclear. Strategies were developed for uranium determination in natural water and effluents of a uranium mine. The first was a critical evaluation of the determination of uranium by inductively coupled plasma optical emission spectrometry (ICP OES), performed using factorial and Doehlert designs involving the factors: acid concentration, radio frequency power and nebuliser gas flow rate. Five emission lines were simultaneously studied (namely: 367.007, 385.464, 385.957, 386.592 and 409.013 nm), in the presence of HNO3, CH3COOH or HCl. The determinations in HNO3 medium were the most sensitive. Among the factors studied, the gas flow rate was the most significant for the five emission lines. Calcium caused interference in the emission intensity for some lines, and iron did not interfere (at least up to 10 mg L-1) in the five lines studied. The presence of 13 other elements did not affect the emission intensity of uranium for the lines chosen. The optimized method, using the line at 385.957 nm, allows the determination of uranium with a limit of quantification of 30 μg L-1 and precision, expressed as RSD, lower than 2.2% for uranium concentrations of either 500 or 1000 μg L-1. In the second, a highly sensitive flow-based procedure for uranium determination in natural waters is described. A 100-cm optical path flow cell based on a liquid-core waveguide (LCW) was exploited to increase the sensitivity of the arsenazo III method, aiming to achieve the limits established by environmental regulations. The flow system was designed with solenoid micro-pumps in order to improve mixing and minimize reagent consumption, as well as

  10. Sampling strategies for millipedes (Diplopoda), centipedes ...

    African Journals Online (AJOL)

    At present considerable effort is being made to document and describe invertebrate diversity as part of numerous biodiversity conservation research projects. In order to determine diversity, rapid and effective sampling and estimation procedures are required and these need to be standardized for a particular group of ...

  11. Gibbs energy calculation of electrolytic plasma channel with inclusions of copper and copper oxide with Al-base

    Science.gov (United States)

    Posuvailo, V. M.; Klapkiv, M. D.; Student, M. M.; Sirak, Y. Y.; Pokhmurska, H. V.

    2017-03-01

    The oxide ceramic coating with copper inclusions was synthesized by the method of plasma electrolytic oxidation (PEO). Calculations of the Gibbs energies of reactions between the plasma channel elements and inclusions of copper and copper oxide were carried out. Two methods of forming oxide-ceramic coatings with copper inclusions on an aluminum base in electrolytic plasma were established: the first consists in introducing copper into the aluminum matrix, the second copper oxide. In the first case, during the synthesis of the oxide-ceramic coating, the plasma channel does not react with the copper included in the coating. In the second case, the copper oxide is reduced through interaction with elements of the plasma channel. The composition of the oxide-ceramic layer was investigated by X-ray and X-ray microelement analysis. Inclusions of copper, CuAl2 and Cu9Al4 were found in the oxide-ceramic coatings. It was established that in the spark plasma channels, alongside the oxidation reaction, an aluminothermic reduction of the metal also occurs, which makes it possible to dope the oxide-ceramic coating with a metal whose isobaric-isothermal potential of oxidation is less negative than that of aluminum oxide.

  12. Gibbs energy modelling of the driving forces and calculation of the fcc/hcp martensitic transformation temperatures in Fe-Mn and Fe-Mn-Si alloys

    International Nuclear Information System (INIS)

    Cotes, S.; Fernandez Guillermet, A.; Sade, M.

    1999-01-01

    Very recent, accurate dilatometric measurements of the fcc/hcp martensitic transformation (MT) temperatures are used to develop a new thermodynamic description of the fcc and hcp phases in the Fe-Mn-Si system, based on phenomenological models for the Gibbs energy function. The composition dependence of the driving forces for the fcc→hcp and the hcp→fcc MTs is established. Detailed calculations of the MT temperatures are reported, which are used to investigate the systematic effects of Si additions upon the MT temperatures of Fe-Mn alloys. A critical comparison with one of the most recent thermodynamic analyses of the Fe-Mn-Si system, which is due to Forsberg and Agren, is also presented. (orig.)

  13. Strand Analysis, a free online program for the computational identification of the best RNA interference (RNAi) targets based on Gibbs free energy

    Directory of Open Access Journals (Sweden)

    Tiago Campos Pereira

    2007-01-01

    Full Text Available The RNA interference (RNAi) technique is a recent technology that uses double-stranded RNA molecules to promote potent and specific gene silencing. The application of this technique to molecular biology has increased considerably, from gene function identification to disease treatment. However, not all small interfering RNAs (siRNAs) are equally efficient, making target selection an essential procedure. Here we present Strand Analysis (SA), a free online software tool able to identify and classify the best RNAi targets based on Gibbs free energy (deltaG). Furthermore, particular features of the software, such as the free energy landscape and deltaG gradient, may be used to shed light on RNA-induced silencing complex (RISC) activity and RNAi mechanisms, which makes the SA software a distinct and innovative tool.

  14. The Ideal Ionic Liquid Salt Bridge for the Direct Determination of Gibbs Energies of Transfer of Single Ions, Part I: The Concept.

    Science.gov (United States)

    Radtke, Valentin; Ermantraut, Andreas; Himmel, Daniel; Koslowski, Thorsten; Leito, Ivo; Krossing, Ingo

    2018-02-23

    Described is a procedure for the thermodynamically rigorous, experimental determination of the Gibbs energy of transfer of single ions between solvents. The method is based on potential difference measurements between two electrochemical half cells with different solvents connected by an ideal ionic liquid salt bridge (ILSB). Discussed are the specific requirements for the IL with regard to the procedure, thus ensuring that the liquid junction potentials (LJP) at both ends of the ILSB are mostly canceled. The remaining parts of the LJPs can be determined by separate electromotive force measurements. No extra-thermodynamic assumptions are necessary for this procedure. The accuracy of the measurements depends, amongst others, on the ideality of the IL used, as shown in our companion paper Part II. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Strategies for monitoring the emerging polar organic contaminants in water with emphasis on integrative passive sampling.

    Science.gov (United States)

    Söderström, Hanna; Lindberg, Richard H; Fick, Jerker

    2009-01-16

    Although polar organic contaminants (POCs) such as pharmaceuticals are considered some of today's most emerging contaminants, few of them are regulated or included in ongoing monitoring programs. However, the growing concern among the public and researchers, together with new legislation within the European Union, the Registration, Evaluation and Authorisation of Chemicals (REACH) system, will increase the future need for simple, low-cost strategies for monitoring and risk assessment of POCs in aquatic environments. In this article, we overview the advantages and shortcomings of traditional and novel sampling techniques available for monitoring emerging POCs in water. The benefits and drawbacks of active and biological sampling are discussed, and the principles of organic passive samplers (PS) presented. A detailed overview of the types of polar organic PS available, their classes of target compounds and fields of application is given, and the considerations involved in using them, such as environmental effects and quality control, are discussed. The usefulness of biological sampling of POCs in water was found to be limited. Polar organic PS were considered the only available, but nevertheless efficient, alternative to active water sampling due to their simplicity, low cost, lack of need for a power supply or maintenance, and the ability to collect time-integrative samples with a single sample collection. However, polar organic PS need to be further developed before they can be used as standard in water quality monitoring programs.

  16. Evaluation of sampling strategies to estimate crown biomass

    Science.gov (United States)

    Krishna P Poudel; Hailemariam Temesgen; Andrew N Gray

    2015-01-01

    Depending on tree and site characteristics crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire...

  17. How to handle speciose clades? Mass taxon-sampling as a strategy towards illuminating the natural history of Campanula (Campanuloideae).

    Directory of Open Access Journals (Sweden)

    Guilhem Mansion

    Full Text Available BACKGROUND: Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. METHODOLOGY/PRINCIPAL FINDINGS: Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates, were then compared to those obtained from smaller data sets resulting from "classification-guided" (D088) and "phylogeny-guided" (D101) sampling. Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. CONCLUSIONS/SIGNIFICANCE: A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. Knowing these will be instrumental for developing more specific evolutionary hypotheses and guide future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed

  18. Comparison of Boltzmann and Gibbs entropies for the analysis of single-chain phase transitions

    Science.gov (United States)

    Shakirov, T.; Zablotskiy, S.; Böker, A.; Ivanov, V.; Paul, W.

    2017-03-01

    In the last 10 years, flat histogram Monte Carlo simulations have contributed strongly to our understanding of the phase behavior of simple generic models of polymers. These simulations result in an estimate for the density of states of a model system. To connect this result with thermodynamics, one has to relate the density of states to the microcanonical entropy. In a series of publications, Dunkel, Hilbert and Hänggi argued that it would lead to a more consistent thermodynamic description of small systems, when one uses the Gibbs definition of entropy instead of the Boltzmann one. The latter is the logarithm of the density of states at a certain energy, the former is the logarithm of the integral of the density of states over all energies smaller than or equal to this energy. We will compare the predictions using these two definitions for two polymer models, a coarse-grained model of a flexible-semiflexible multiblock copolymer and a coarse-grained model of the protein poly-alanine. Additionally, it is important to note that while Monte Carlo techniques are normally concerned with the configurational energy only, the microcanonical ensemble is defined for the complete energy. We will show how taking the kinetic energy into account alters the predictions from the analysis. Finally, the microcanonical ensemble is supposed to represent a closed mechanical N-particle system. But due to Galilei invariance such a system has two additional conservation laws, in general: momentum and angular momentum. We will also show, how taking these conservation laws into account alters the results.
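The two entropy definitions can be compared on a toy power-law density of states Omega(E) = E**(n-1) (an illustrative example, not the polymer models of the paper): the Boltzmann entropy is ln Omega(E), the Gibbs entropy is the log of the integrated density of states, and the resulting microcanonical inverse temperatures 1/T = dS/dE differ by a term of order 1/E:

```python
# Toy comparison of Boltzmann vs Gibbs entropy for Omega(E) = E**(n-1).
# S_B = ln Omega(E); S_G = ln Integral_0^E Omega(E') dE' = ln(E**n / n).
import math

def s_boltzmann(e, n):
    return (n - 1) * math.log(e)

def s_gibbs(e, n):
    return n * math.log(e) - math.log(n)

def inv_temp(s, e, n, h=1e-6):
    """Inverse temperature 1/T = dS/dE via central finite difference."""
    return (s(e + h, n) - s(e - h, n)) / (2 * h)

n, e = 10, 2.0
print(round(inv_temp(s_boltzmann, e, n), 4))  # analytically (n-1)/e = 4.5
print(round(inv_temp(s_gibbs, e, n), 4))      # analytically n/e = 5.0
```

For this density of states the two temperatures differ by exactly one quantum of 1/E, which vanishes for large systems but matters for the small single-chain systems discussed above.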

  19. A Fast and Robust Feature-Based Scan-Matching Method in 3D SLAM and the Effect of Sampling Strategies

    Directory of Open Access Journals (Sweden)

    Cihan Ulas

    2013-11-01

    Full Text Available Simultaneous localization and mapping (SLAM) plays an important role in fully autonomous systems when a GNSS (global navigation satellite system) is not available. Studies in both 2D indoor and 3D outdoor SLAM are based on the appearance of environments and utilize scan-matching methods to find the rigid body transformation parameters between two consecutive scans. In this study, a fast and robust scan-matching method based on feature extraction is introduced. Since the method is based on the matching of certain geometric structures, like plane segments, the outliers and noise in the point cloud are considerably eliminated. Therefore, the proposed scan-matching algorithm is more robust than conventional methods. Besides, the registration time and the number of iterations are significantly reduced, since the number of matching points is efficiently decreased. As a scan-matching framework, an improved version of the normal distribution transform (NDT) is used. The probability density functions (PDFs) of the reference scan are generated as in the traditional NDT, and the feature extraction - based on stochastic plane detection - is applied to the input scan only. Using an experimental dataset from an outdoor environment, a university campus, we obtained satisfactory performance results. Moreover, the feature extraction part of the algorithm can be considered a special sampling strategy for scan-matching and is compared to other sampling strategies, such as random sampling and grid-based sampling, the latter of which was first used in the NDT. Thus, this study also shows the effect of subsampling on the performance of the NDT.
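Grid-based subsampling, one of the strategies compared in the study, can be sketched as keeping one centroid per occupied voxel of the point cloud. The cell size and helper names below are illustrative assumptions:

```python
# Minimal sketch of grid-based (voxel) subsampling of a 3D point cloud:
# hash each point into a voxel, then keep the centroid of each voxel.
from collections import defaultdict

def grid_subsample(points, cell=1.0):
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)  # integer voxel index
        buckets[key].append(p)
    # representative point = centroid of each occupied voxel
    return [tuple(sum(c) / len(ps) for c in zip(*ps))
            for ps in buckets.values()]

pts = [(0.1, 0.2, 0.0), (0.4, 0.1, 0.0), (2.5, 0.0, 0.0)]
out = grid_subsample(pts, cell=1.0)
print(len(out))  # two occupied voxels -> two representative points
```

Feature-based sampling as proposed in the paper replaces this uniform reduction with plane detection, so the retained points carry geometric structure rather than being spatially uniform.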

  20. Estudo da prevalência da tuberculose: uso de métodos bayesianos Study of the prevalence of tuberculosis using Bayesian methods

    Directory of Open Access Journals (Sweden)

    Jorge Alberto Achcar

    2003-12-01

    Full Text Available In this paper we present Bayesian estimators of the prevalence of tuberculosis using computational methods for the simulation of samples from the posterior distribution of interest. In particular, we considered the Gibbs sampling algorithm to generate samples from the posterior distribution, and from these samples we obtained, in a simple way, accurate inferences for the prevalence of tuberculosis. In an application, we analyzed the results of chest X-ray tests in the diagnosis of tuberculosis. With this application, we verified that the Bayesian estimators are simple to obtain and highly accurate. The use of computational methods for the simulation of samples, as in the case of the Gibbs sampling algorithm, has recently become very popular for Bayesian analysis of models in biostatistics. These simulation techniques using the Gibbs sampling algorithm are easily implemented, do not require much computational knowledge, and can be programmed in any available software. Moreover, these techniques can be considered for the study of the prevalence of other diseases.
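A minimal sketch of a Gibbs sampler for prevalence estimation with an imperfect diagnostic test, in the spirit of the abstract above: the true infection statuses are treated as latent data and alternately sampled with the prevalence. All numbers (sample size, test sensitivity/specificity, prior) are illustrative assumptions, not values from the paper:

```python
# Gibbs sampler for prevalence theta given an imperfect test
# (data augmentation over latent true-infection counts).
import numpy as np

rng = np.random.default_rng(0)

n, y = 1000, 120     # tested, test-positive (illustrative data)
se, sp = 0.80, 0.95  # assumed test sensitivity / specificity
a, b = 1.0, 1.0      # Beta(1, 1) prior on prevalence

theta, draws = 0.5, []
for it in range(5000):
    # P(truly infected | test result), given the current theta
    p_pos = se * theta / (se * theta + (1 - sp) * (1 - theta))
    p_neg = (1 - se) * theta / ((1 - se) * theta + sp * (1 - theta))
    # augment latent true-infection counts among positives and negatives
    t_pos = rng.binomial(y, p_pos)
    t_neg = rng.binomial(n - y, p_neg)
    # conjugate Beta update for the prevalence
    theta = rng.beta(a + t_pos + t_neg, b + n - t_pos - t_neg)
    if it >= 1000:   # discard burn-in
        draws.append(theta)

print(round(float(np.mean(draws)), 3))
```

With these numbers the apparent prevalence of 12% shrinks toward roughly 9%, since part of the positives are false positives; the posterior draws quantify the remaining uncertainty.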

  1. Orientação social, papel sexual e julgamento moral: uma comparação entre duas amostras brasileiras e uma norueguesa Social orientation, sexual role, and moral judgment: a comparison of two brazilian and one norwegian sample

    Directory of Open Access Journals (Sweden)

    Angela Biaggio

    2005-04-01

    Full Text Available Thirty female and 30 male university students each from Joao Pessoa and Porto Alegre were compared to a comparable Norwegian sample of 60 female and 60 male students. Except for a suggestion of differences in women's cultural orientation, comparisons on Gibbs' test of justice morality, the ECI test for ethic of care, Bem's sex role inventory, and Triandis' test for cultural orientations showed that all differences were between the Norwegian sample and the Brazilian samples as a unit. Brazilians showed a differentiation of sex roles, which was not shown in Norwegians, and higher scores on the collectivist cultural orientation. Norwegians scored higher on the ECI, which may reflect a cultural bias in the test. There were no differences between Brazil and Norway either in the individualist cultural orientation or on Gibbs' test. In general, men scored higher on total individualism and women on vertical collectivism. Women from Joao Pessoa scored more hedonistic and individualist than women from Porto Alegre, who scored more traditional.

  2. The Viking X ray fluorescence experiment - Sampling strategies and laboratory simulations. [Mars soil sampling

    Science.gov (United States)

    Baird, A. K.; Castro, A. J.; Clark, B. C.; Toulmin, P., III; Rose, H., Jr.; Keil, K.; Gooding, J. L.

    1977-01-01

    Ten samples of Mars regolith material (six on Viking Lander 1 and four on Viking Lander 2) have been delivered to the X ray fluorescence spectrometers as of March 31, 1977. An additional six samples at least are planned for acquisition in the remaining Extended Mission (to January 1979) for each lander. All samples acquired are Martian fines from the near surface (less than 6-cm depth) of the landing sites except the latest on Viking Lander 1, which is fine material from the bottom of a trench dug to a depth of 25 cm. Several attempts on each lander to acquire fresh rock material (in pebble sizes) for analysis have yielded only cemented surface crustal material (duricrust). Laboratory simulation and experimentation are required both for mission planning of sampling and for interpretation of data returned from Mars. This paper is concerned with the rationale for sample site selections, surface sampler operations, and the supportive laboratory studies needed to interpret X ray results from Mars.

  3. Sampling stored product insect pests: a comparison of four statistical sampling models for probability of pest detection

    Science.gov (United States)

    Statistically robust sampling strategies form an integral component of grain storage and handling activities throughout the world. Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult due to species biology and behavioral characteristics. ...

  4. Utilizing the ultrasensitive Schistosoma up-converting phosphor lateral flow circulating anodic antigen (UCP-LF CAA) assay for sample pooling-strategies.

    Science.gov (United States)

    Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J

    2017-11-01

    Methodological applications of the high-sensitivity genus-specific Schistosoma CAA strip test, allowing detection of single-worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large-scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid, user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single-worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool, thus indicating overall worm burden and prevalence. When test results are required at the individual level, smaller pools need to be analysed, with the pool size based on expected prevalence or, when unknown, on the average CAA level of a larger group; CAA-negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at the sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, as the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure indicating worm burden. Pooling strategies allowing this type of large scale testing are feasible with the various CAA strip test formats and do not affect
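For intuition on why pooling with retesting of positive pools reduces the number of tests, here is a sketch of classic two-stage (Dorfman-style) pool sizing. This binary-retest analogue is an illustrative assumption: the CAA strip test is quantitative and the paper's actual pooling rules differ:

```python
# Two-stage pooling: one pool test per k people, plus k individual
# retests whenever the pool is positive (probability 1 - (1-p)**k
# for prevalence p, assuming a perfect test and independence).
def expected_tests_per_person(p, k):
    return 1.0 / k + (1.0 - (1.0 - p) ** k)

def best_pool_size(p, k_max=100):
    """Pool size minimizing the expected number of tests per person."""
    return min(range(2, k_max + 1),
               key=lambda k: expected_tests_per_person(p, k))

for p in (0.01, 0.05, 0.20):
    k = best_pool_size(p)
    print(p, k, round(expected_tests_per_person(p, k), 3))
```

The pattern matches the abstract's point: the lower the prevalence, the larger the optimal pool and the greater the savings over individual testing.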

  5. Paleocurrents in the Charlie-Gibbs Fracture Zone during the Late Quaternary

    Science.gov (United States)

    Bashirova, L. D.; Dorokhova, E.; Sivkov, V.; Andersen, N.; Kuleshova, L. A.; Matul, A.

    2017-12-01

    The sedimentary processes prevailing in the Charlie-Gibbs Fracture Zone (CGFZ) are gravity flows. They rework pelagic sediments and contourites, thereby partly masking the paleoceanographic information. The aim of this work is to study sediments of core AMK-4515, taken in the eastern part of the CGFZ. The sediment core AMK-4515 (52°03.14′ N, 29°00.12′ W; 370 cm length, water depth 3590 m) is located in the southern valley of the CGFZ. This natural deep corridor is influenced by both the westward Iceland-Scotland Overflow Water and an underlying counterflow from the Newfoundland Basin. An alternation of calcareous silty clays and hemipelagic clayey muds in the studied section indicates similarity between our core and long cores taken from the CGFZ. A sharp facies shift was found at 80 cm depth in the investigated core; only the upper section (0-80 cm) is valid for paleoreconstruction. The planktonic foraminiferal distribution and the sea-surface temperatures (SST) derived from it allow tracing of the PF and NAC latitudinal migrations during the investigated period. The so-called sortable silt mean size (SS) was used as a proxy for reconstructing bottom current intensity. The age model is based on δ18O and AMS 14C dating, as well as ice-rafted debris (IRD) counts and CaCO3 content. Stratigraphic subdivision of this section allows two marine isotope stages (MIS), covering the last 27 ka, to be identified. We attribute the sediments below this level (80-370 cm) to the upper part of a turbidite, which was formed as a result of a massive slide in the southern channel of the CGFZ. Sandy particles were deposited first, underlying silts and clays. This short-term event occurred so quickly that pelagic sedimentation played no role and is not reflected in the grain size distributions. There is evidence for the significant role of gravity flows in sedimentation in the southern channel of the CGFZ. According to our data, the massive sediment slide occurred in the CGFZ about 27 ka. The authors are grateful to RSF

  6. The Charlie-Gibbs Fracture Zone: A Crossroads of the Atlantic Meridional Overturning Circulation

    Science.gov (United States)

    Bower, A. S.; Furey, H. H.; Xu, X.

    2016-02-01

    The Charlie-Gibbs Fracture Zone (CGFZ), a deep gap in the Mid-Atlantic Ridge at 52N, is the primary conduit for westward-flowing Iceland-Scotland Overflow Water (ISOW), which merges with Denmark Strait Overflow Water to form the Deep Western Boundary Current. The CGFZ has also been shown to "funnel" the path of the northern branch of the eastward-flowing North Atlantic Current (NAC), thereby bringing these two branches of the AMOC into close proximity. A recent two-year time series of hydrographic properties and currents from eight tall moorings across the CGFZ offers the first opportunity to investigate the NAC as a source of variability for ISOW transport. The two-year mean and standard deviation of ISOW transport was -1.7 ± 1.5 Sv, compared to -2.4 ± 3.0 Sv reported by Saunders for a 13-month period in 1988-1989. Differences in the two estimates are partly explained by limitations of the Saunders array, but more importantly reflect the strong low-frequency variability in ISOW transport through CGFZ (which includes complete reversals). Both the observations and output from a multi-decadal simulation of the North Atlantic using the Hybrid Coordinate Ocean Model (HYCOM) forced with interannually varying wind and buoyancy fields indicate a strong positive correlation between ISOW transport and the strength of the NAC through the CGFZ (stronger eastward NAC related to weaker westward ISOW transport). Vertical structure of the low-frequency current variability and water mass structure in the CGFZ will also be discussed. The results have implications regarding the interaction of the upper and lower limbs of the AMOC, and downstream propagation of ISOW transport variability in the Deep Western Boundary Current.

  7. A development of the Gibbs potential of a quantised system made up of a large number of particles. III. The contribution of binary collisions; Un developpement du potentiel de Gibbs d'un systeme quantique compose d'un grand nombre de particules. III- La contribution des collisions binaires

    Energy Technology Data Exchange (ETDEWEB)

    BLOCH, Claude; DE DOMINICIS, Cyrano [Commissariat a l' energie atomique et aux energies alternatives - CEA, Centre d' etudes Nucleaires de Saclay, Gif-sur-Yvette (France)

    1959-07-01

    Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero density limit, it reduces to the Beth-Uhlenbeck expression for the second virial coefficient. For a system of fermions in the zero temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation for an arbitrary temperature. Reprint of a paper published in Nuclear Physics, 10, p. 509-526, 1959.

  8. Behavioral Contexts, Food-Choice Coping Strategies, and Dietary Quality of a Multiethnic Sample of Employed Parents

    Science.gov (United States)

    Blake, Christine E.; Wethington, Elaine; Farrell, Tracy J.; Bisogni, Carole A.; Devine, Carol M.

    2012-01-01

Employed parents’ work and family conditions provide behavioral contexts for their food choices. Relationships between employed parents’ food-choice coping strategies, behavioral contexts, and dietary quality were evaluated. Data on work and family conditions, sociodemographic characteristics, eating behavior, and dietary intake from two 24-hour dietary recalls were collected in a random sample cross-sectional pilot telephone survey in the fall of 2006. Black, white, and Latino employed mothers (n=25) and fathers (n=25) were recruited from a low/moderate income urban area in upstate New York. Hierarchical cluster analysis (Ward’s method) identified three clusters of parents differing in use of food-choice coping strategies (ie, Individualized Eating, Missing Meals, and Home Cooking). Cluster sociodemographic, work, and family characteristics were compared using χ2 and Fisher’s exact tests. Cluster differences in dietary quality (Healthy Eating Index 2005) were analyzed using analysis of variance. Clusters differed significantly (P≤0.05) on food-choice coping strategies, dietary quality, and behavioral contexts (ie, work schedule, marital status, partner’s employment, and number of children). Individualized Eating and Missing Meals clusters were characterized by nonstandard work hours, having a working partner, single parenthood, and with family meals away from home, grabbing quick food instead of a meal, using convenience entrées at home, and missing meals or individualized eating. The Home Cooking cluster included considerably more married fathers with nonemployed spouses and more home-cooked family meals. Food-choice coping strategies affecting dietary quality reflect parents’ work and family conditions. Nutritional guidance and family policy need to consider these important behavioral contexts for family nutrition and health. PMID:21338739

  9. Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards.

    Science.gov (United States)

    Bornstein, Marc H; Jager, Justin; Putnick, Diane L

    2013-12-01

    Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study's target population, whether they yield representative and generalizable estimates of subsamples within a study's target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce "noise" related to variation in subsamples and whether that "noise" can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting.

  10. Preschool Boys' Development of Emotional Self-regulation Strategies in a Sample At-risk for Behavior Problems

    Science.gov (United States)

    Supplee, Lauren H.; Skuban, Emily Moye; Trentacosta, Christopher J.; Shaw, Daniel S.; Stoltz, Emilee

    2011-01-01

    Little longitudinal research has been conducted on changes in children's emotional self-regulation strategy (SRS) use after infancy, particularly for children at risk. The current study examined changes in boys' emotional SRS from toddlerhood through preschool. Repeated observational assessments using delay of gratification tasks at ages 2, 3, and 4 were examined with both variable- and person-oriented analyses in a low-income sample of boys (N = 117) at-risk for early problem behavior. Results were consistent with theory on emotional SRS development in young children. Children initially used more emotion-focused SRS (e.g., comfort seeking) and transitioned to greater use of planful SRS (e.g., distraction) by age 4. Person-oriented analysis using trajectory analysis found similar patterns from 2–4, with small groups of boys showing delayed movement away from emotion-focused strategies or delay in the onset of regular use of distraction. The results provide a foundation for future research to examine the development of SRS in low-income young children. PMID:21675542

  11. Monte Carlo full-waveform inversion of crosshole GPR data using multiple-point geostatistical a priori information

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2012-01-01

    We present a general Monte Carlo full-waveform inversion strategy that integrates a priori information described by geostatistical algorithms with Bayesian inverse problem theory. The extended Metropolis algorithm can be used to sample the a posteriori probability density of highly nonlinear...... inverse problems, such as full-waveform inversion. Sequential Gibbs sampling is a method that allows efficient sampling of a priori probability densities described by geostatistical algorithms based on either two-point (e.g., Gaussian) or multiple-point statistics. We outline the theoretical framework......) Based on a posteriori realizations, complicated statistical questions can be answered, such as the probability of connectivity across a layer. (3) Complex a priori information can be included through geostatistical algorithms. These benefits, however, require more computing resources than traditional...
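The extended Metropolis rule described in the record — propose from a prior-preserving step and accept on the likelihood ratio alone — can be sketched in a few lines. This is a toy illustration with a Gaussian prior and likelihood; the prior-preserving perturbation below is a crude stand-in for sequential Gibbs resimulation, and all names and values are assumptions, not the authors' implementation.

```python
import numpy as np

def extended_metropolis(prior_step, log_likelihood, m0, n_iter, rng):
    """Extended Metropolis: the proposal preserves the prior, so the
    acceptance ratio involves only the likelihood."""
    m, ll = m0, log_likelihood(m0)
    chain = []
    for _ in range(n_iter):
        m_prop = prior_step(m, rng)               # prior-preserving perturbation
        ll_prop = log_likelihood(m_prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # accept w.p. min(1, L'/L)
            m, ll = m_prop, ll_prop
        chain.append(m)
    return np.array(chain)

# Toy stand-in for a sequential Gibbs step: for a N(0,1) prior, the
# autoregressive move below leaves the prior invariant (pCN-style).
def prior_step(m, rng, mix=0.3):
    return np.sqrt(1.0 - mix**2) * m + mix * rng.normal()

def log_like(m, d_obs=1.5, sigma=0.5):   # Gaussian data misfit
    return -0.5 * ((m - d_obs) / sigma) ** 2

rng = np.random.default_rng(0)
chain = extended_metropolis(prior_step, log_like, 0.0, 20000, rng)
# Conjugate check: the posterior here is N(1.2, 0.2), so the chain
# should settle near a mean of 1.2.
```

Because the proposal leaves the prior invariant, the prior density never appears in the acceptance test, which is exactly what makes complex geostatistical priors usable.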

  12. Vapor pressures and standard molar enthalpies, entropies, and Gibbs free energies of sublimation of 2,4- and 3,4-dinitrobenzoic acids

    International Nuclear Information System (INIS)

    Vecchio, Stefano; Brunetti, Bruno

    2009-01-01

The vapor pressures of the solid and liquid 2,4- and 3,4-dinitrobenzoic acids were determined by torsion-effusion and thermogravimetry, under isothermal and non-isothermal conditions respectively. From the temperature dependence of the vapor pressure derived from the experimental torsion-effusion and thermogravimetry data, the molar enthalpies of sublimation Δ_cr^g H_m° and vaporization Δ_l^g H_m° were determined at the middle of the respective temperature intervals. The melting temperatures and the molar enthalpies of fusion of these compounds were measured by d.s.c. Finally, the results obtained by all the methods proposed were corrected to the reference temperature of 298.15 K using the estimated heat capacity differences between gas and liquid for vaporization experiments and between gas and solid for sublimation experiments. From these, the averages of the standard (p° = 0.1 MPa) molar enthalpies, entropies, and Gibbs free energies of sublimation at 298.15 K were derived.
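Correcting a sublimation enthalpy from the mid-interval temperature to 298.15 K with an estimated heat-capacity difference amounts to Kirchhoff's law. A minimal sketch, with purely illustrative numbers (not the paper's data):

```python
def correct_to_reference(dH_T, T, dCp, T_ref=298.15):
    """Kirchhoff correction: ΔH(T_ref) = ΔH(T) + ΔCp * (T_ref - T),
    assuming ΔCp (gas minus solid) is constant over the interval."""
    return dH_T + dCp * (T_ref - T)

# Hypothetical values: sublimation enthalpy of 120.0 kJ/mol measured at a
# mid-interval temperature of 400 K, with an estimated gas-solid
# heat-capacity difference of -40 J/(mol*K).
dH_298 = correct_to_reference(120.0e3, 400.0, -40.0)
print(dH_298 / 1e3)  # kJ/mol at 298.15 K
```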

  13. Adaptive Angular Sampling for SPECT Imaging

    OpenAIRE

    Li, Nan; Meng, Ling-Jian

    2011-01-01

This paper presents an analytical approach for performing adaptive angular sampling in single photon emission computed tomography (SPECT) imaging. It allows for a rapid determination of the optimum sampling strategy that minimizes image variance in regions-of-interest (ROIs). The proposed method consists of three key components: (a) a set of closed-form equations for evaluating image variance and resolution attainable with a given sampling strategy, (b) a gradient-based algor...

  14. Strategies and equipment for sampling suspended sediment and associated toxic chemicals in large rivers - with emphasis on the Mississippi River

    Science.gov (United States)

    Meade, R.H.; Stevens, H.H.

    1990-01-01

    A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.
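The discharge-weighting of cross-section samples, and the conversion of a concentration to a flux, can be sketched as follows; the verticals, concentrations, and partial discharges are hypothetical, not values from the Mississippi campaign.

```python
def discharge_weighted_concentration(concs, discharges):
    """Discharge-weighted mean concentration across verticals:
    C_w = sum(c_i * q_i) / sum(q_i)."""
    total_q = sum(discharges)
    return sum(c * q for c, q in zip(concs, discharges)) / total_q

# Hypothetical cross-section: 4 verticals, suspended-sediment
# concentration in mg/L and partial discharge in m^3/s.
concs = [180.0, 220.0, 250.0, 190.0]
qs = [1200.0, 2500.0, 2300.0, 1000.0]

c_w = discharge_weighted_concentration(concs, qs)
flux_kg_s = c_w * sum(qs) / 1e3   # (mg/L)*(m^3/s) = g/s; /1e3 -> kg/s
```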

  15. Vapour pressure and excess Gibbs free energy of binary mixtures of hydrogen sulphide with ethane, propane, and n-butane at temperature of 182.33K

    International Nuclear Information System (INIS)

    Lobo, L.Q.; Ferreira, A.G.M.; Fonseca, I.M.A.; Senra, A.M.P.

    2006-01-01

The vapour pressure of binary mixtures of hydrogen sulphide with ethane, propane, and n-butane was measured at T = 182.33 K, covering most of the composition range. The excess Gibbs free energy of these mixtures has been derived from the measurements made. For the equimolar mixtures, G_m^E(x₁ = 0.5) = (835.5 ± 5.8) J·mol⁻¹ for (H₂S + C₂H₆), (820.1 ± 2.4) J·mol⁻¹ for (H₂S + C₃H₈), and (818.6 ± 0.9) J·mol⁻¹ for (H₂S + n-C₄H₁₀). The binary mixtures of H₂S with ethane and with propane exhibit azeotropes, but that with n-butane does not.
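If one assumes a symmetric one-constant (Porter) model for the excess Gibbs energy, G_m^E = A·x₁·x₂, the equimolar value fixes A = 4·G_m^E(0.5) and activity coefficients follow. The record does not claim this model; the sketch below is only an illustration using the quoted equimolar value for H₂S + ethane.

```python
import math

R = 8.314  # J/(mol*K)

def porter_constant(GE_equimolar):
    """One-constant Porter model G^E = A*x1*x2: the equimolar value
    gives A = 4 * G^E(x1 = 0.5)."""
    return 4.0 * GE_equimolar

def activity_coeffs(A, x1, T):
    """gamma_i from ln(gamma_1) = A*x2^2/RT, ln(gamma_2) = A*x1^2/RT."""
    x2 = 1.0 - x1
    return math.exp(A * x2**2 / (R * T)), math.exp(A * x1**2 / (R * T))

A = porter_constant(835.5)            # J/mol, H2S + ethane value from the record
g1, g2 = activity_coeffs(A, 0.5, 182.33)
```

At x₁ = 0.5 the two coefficients coincide by symmetry; a value well above 1 is consistent with the positive deviations (and azeotropy) the record reports.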

  16. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well defined objectives of a sampling campaign, the aim was to highlight the most important aspect of representativeness of samples as a function of the available resources. Particular emphasis was given to the techniques and particularly to a description of the many types of samplers which are in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  17. Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards

    Science.gov (United States)

    Bornstein, Marc H.; Jager, Justin; Putnick, Diane L.

    2014-01-01

    Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study’s target population, whether they yield representative and generalizable estimates of subsamples within a study’s target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce “noise” related to variation in subsamples and whether that “noise” can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting. PMID:25580049

  18. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs included in low cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADC included in actual 8- and 32-bits microcontrollers highlight the possibility of achieving effective sample rate up to 50 times higher than that of the original ADC sample rate.
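The record describes the compressive-sampling acquisition chain only in outline. As a generic illustration of the reconstruction side, the sketch below recovers a sparse vector from fewer random measurements via orthogonal matching pursuit (OMP); the sizes, seed, and Gaussian sensing matrix are assumptions, not the authors' setup.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonalize
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 128, 40, 3                  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[[5, 37, 90]] = [1.5, -2.0, 0.8]
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                        # m << n random measurements
x_hat = omp(A, y, k)
```

With noiseless measurements and m well above the sparsity level, OMP recovers the exact support with overwhelming probability, which is the mechanism that lets an effective sample rate exceed the converter's native rate.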

  19. Uncertainty and sampling issues in tank characterization

    International Nuclear Information System (INIS)

    Liebetrau, A.M.; Pulsipher, B.A.; Kashporenko, D.M.

    1997-06-01

    A defensible characterization strategy must recognize that uncertainties are inherent in any measurement or estimate of interest and must employ statistical methods for quantifying and managing those uncertainties. Estimates of risk and therefore key decisions must incorporate knowledge about uncertainty. This report focuses statistical methods that should be employed to ensure confident decision making and appropriate management of uncertainty. Sampling is a major source of uncertainty that deserves special consideration in the tank characterization strategy. The question of whether sampling will ever provide the reliable information needed to resolve safety issues is explored. The issue of sample representativeness must be resolved before sample information is reliable. Representativeness is a relative term but can be defined in terms of bias and precision. Currently, precision can be quantified and managed through an effective sampling and statistical analysis program. Quantifying bias is more difficult and is not being addressed under the current sampling strategies. Bias could be bounded by (1) employing new sampling methods that can obtain samples from other areas in the tanks, (2) putting in new risers on some worst case tanks and comparing the results from existing risers with new risers, or (3) sampling tanks through risers under which no disturbance or activity has previously occurred. With some bound on bias and estimates of precision, various sampling strategies could be determined and shown to be either cost-effective or infeasible

  20. Equilibrium modeling of gasification: Gibbs free energy minimization approach and its application to spouted bed and spout-fluid bed gasifiers

    International Nuclear Information System (INIS)

    Jarungthammachote, S.; Dutta, A.

    2008-01-01

Spouted beds have been found in many applications, one of which is gasification. In this paper, the gasification processes of conventional and modified spouted bed gasifiers were considered. The conventional spouted bed is a central jet spouted bed, while the modified spouted beds are the circular split spouted bed and the spout-fluid bed. The Gibbs free energy minimization method was used to predict the composition of the producer gas. The six major components, CO, CO₂, CH₄, H₂O, H₂, and N₂, were determined in the mixture of the producer gas. The results showed that the carbon conversion in the gasification process plays an important role in the model. A modified model was developed by considering the carbon conversion in the constraint equations and in the energy balance calculation. The results from the modified model showed improvements. The higher heating values (HHV) were also calculated and compared with the ones from experiments. Good agreement between the calculated and experimental values of HHV was observed, especially in the case of the circular split spouted bed and the spout-fluid bed.
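As a toy illustration of the Gibbs free energy minimization idea (not the authors' gasifier model), consider a water-gas-shift subsystem. Parameterizing the composition by the extent of reaction enforces the element balances automatically, and a grid search over the dimensionless total Gibbs function locates the equilibrium; the chemical-potential values are invented so that the equilibrium constant is exactly 1.

```python
import numpy as np

# Toy water-gas-shift system: CO + H2O <-> CO2 + H2, starting from
# 1 mol CO + 1 mol H2O.  The dimensionless standard chemical potentials
# mu°/RT below are invented so that the reaction's Δg is 0 (K = 1).
g = np.array([-10.0, -30.0, -40.0, 0.0])    # CO, H2O, CO2, H2

def gibbs_total(xi):
    """Total G/RT at extent of reaction xi; the C, H, and O balances are
    built in through the stoichiometric mole numbers n(xi)."""
    n = np.array([1.0 - xi, 1.0 - xi, xi, xi])
    return float(np.sum(n * (g + np.log(n / n.sum()))))

xis = np.linspace(1e-6, 1.0 - 1e-6, 20001)
G = np.array([gibbs_total(x) for x in xis])
xi_eq = xis[np.argmin(G)]    # with K = 1 the minimum sits at xi = 0.5
```

With K = 1 and an equimolar CO/H₂O feed, the minimum lies at an extent of 0.5, i.e. equal amounts of all four species; a production model would instead minimize over all species mole numbers under explicit elemental constraints.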

  1. Gibbs Free-Energy Gradient along the Path of Glucose Transport through Human Glucose Transporter 3.

    Science.gov (United States)

    Liang, Huiyun; Bourdon, Allen K; Chen, Liao Y; Phelix, Clyde F; Perry, George

    2018-06-11

Fourteen glucose transporters (GLUTs) play essential roles in human physiology by facilitating glucose diffusion across the cell membrane. Due to its central role in the energy metabolism of the central nervous system, GLUT3 has been thoroughly investigated. However, the Gibbs free-energy gradient (what drives the facilitated diffusion of glucose) has not been mapped out along the transport path. Some fundamental questions remain. Here we present a molecular dynamics study of GLUT3 embedded in a lipid bilayer to quantify the free-energy profile along the entire transport path of attracting a β-d-glucose from the interstitium to the inside of GLUT3 and, from there, releasing it to the cytoplasm by Arrhenius thermal activation. From the free-energy profile, we elucidate the unique Michaelis-Menten characteristics of GLUT3, low K_M and high V_MAX, specifically suitable for neurons' high and constant demand of energy from their low-glucose environments. We compute GLUT3's binding free energy for β-d-glucose to be -4.6 kcal/mol, in agreement with the experimental value of -4.4 kcal/mol (K_M = 1.4 mM). We also compute the hydration energy of β-d-glucose, -18.0 kcal/mol, vs the experimental data, -17.8 kcal/mol. In this, we establish a dynamics-based connection from GLUT3's crystal structure to its cellular thermodynamics with quantitative accuracy. We predict equal Arrhenius barriers for glucose uptake and efflux through GLUT3, to be tested in future experiments.

  2. A Gibbs Energy Minimization Approach for Modeling of Chemical Reactions in a Basic Oxygen Furnace

    Science.gov (United States)

    Kruskopf, Ari; Visuri, Ville-Valtteri

    2017-12-01

In modern steelmaking, hot metal is converted into steel primarily in converter processes, such as the basic oxygen furnace, in which decarburization is a central reaction. The objective of this work was to develop a new mathematical model for the top-blown steel converter, which accounts for the complex reaction equilibria in the impact zone, also known as the hot spot, as well as the associated mass and heat transport. An in-house computer code of the model has been developed in Matlab. The main assumption of the model is that all reactions take place in a specified reaction zone. The mass transfer between the reaction volume, bulk slag, and metal determine the reaction rates for the species. The thermodynamic equilibrium is calculated using the partitioning of Gibbs energy (PGE) method. The activity model for the liquid metal is the unified interaction parameter model and for the liquid slag the modified quasichemical model (MQM). The MQM was validated by calculating iso-activity lines for the liquid slag components. The PGE method together with the MQM was validated by calculating liquidus lines for solid components. The results were compared with measurements from literature. The full chemical reaction model was validated by comparing the metal and slag compositions to measurements from industrial scale converter. The predictions were found to be in good agreement with the measured values. Furthermore, the accuracy of the model was found to compare favorably with the models proposed in the literature. The real-time capability of the proposed model was confirmed in test calculations.

  3. Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards

    OpenAIRE

    Bornstein, Marc H.; Jager, Justin; Putnick, Diane L.

    2013-01-01

    Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study’s t...

  4. A SHORT-DURATION EVENT AS THE CAUSE OF DUST EJECTION FROM MAIN-BELT COMET P/2012 F5 (GIBBS)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, F. [Instituto de Astrofisica de Andalucia, CSIC, Glorieta de la Astronomia s/n, E-18008 Granada (Spain); Licandro, J.; Cabrera-Lavers, A., E-mail: fernando@iaa.es [Instituto de Astrofisica de Canarias, c/Via Lactea s/n, E-38200 La Laguna, Tenerife (Spain)

    2012-12-10

We present observations and an interpretative model of the dust environment of the Main-Belt Comet P/2012 F5 (Gibbs). The narrow dust trails observed can be interpreted unequivocally as an impulsive event that took place around 2011 July 1 with an uncertainty of ±10 days, and a duration of less than a day, possibly of the order of a few hours. The best Monte Carlo dust model fits to the observed trail brightness imply ejection velocities in the range 8-10 cm s⁻¹ for particle sizes between 30 cm and 130 μm. This weak dependence of velocity on size contrasts with that expected from ice sublimation and agrees with that found recently for (596) Scheila, a likely impacted asteroid. The particles seen in the trail are found to follow a power-law size distribution of index ≈ -3.7. Assuming that the slowest particles were ejected at the escape velocity of the nucleus, its size is constrained to about 200-300 m in diameter. The total ejected dust mass is ≳ 5 × 10⁸ kg, which represents approximately 4%-20% of the nucleus mass.

  5. Solving Person Re-identification in Non-overlapping Camera using Efficient Gibbs Sampling

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Krose, B.; Burghardt, T.; Damen, D.; Mayol-Cuevas, W.; Mirmehdi, M.

    2013-01-01

    This paper proposes a novel probabilistic approach for appearance-based person reidentification in non-overlapping camera networks. It accounts for varying illumination, varying camera gain and has low computational complexity. More specifically, we present a graphical model where we model the

  6. Markov chain sampling of the O(n) loop models on the infinite plane

    Science.gov (United States)

    Herdeiro, Victor

    2017-07-01

A numerical method was recently proposed in Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] showing a precise sampling of the infinite plane two-dimensional critical Ising model for finite lattice subsections. The present note extends the method to a larger class of models, namely the O(n) loop gas models for n ∈ (1, 2]. We argue that even though the Gibbs measure is nonlocal, it is factorizable on finite subsections when sufficient information on the loops touching the boundaries is stored. Our results attempt to show that provided an efficient Markov chain mixing algorithm and an improved discrete lattice dilation procedure the planar limit of the O(n) models can be numerically studied with efficiency similar to the Ising case. This confirms that scale invariance is the only requirement for the present numerical method to work.

  7. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

Exposure to Sampling (Abstract; Introduction; Concepts of Population, Sample, and Sampling). Initial Ramifications (Abstract; Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling; Horvitz-Thompson Estimator; Sufficiency; Likelihood; Non-Existence Theorem). More Intricacies (Abstract; Introduction; Unequal Probability Sampling Strategies; PPS Sampling). Exploring Improved Ways (Abstract; Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling). Modeling (Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...)
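The Horvitz-Thompson estimator listed in the contents admits a one-line sketch: each sampled value is weighted by the inverse of its inclusion probability, giving an unbiased estimate of the population total under unequal-probability sampling. The sample below is hypothetical.

```python
def horvitz_thompson_total(y_sample, pi_sample):
    """Horvitz-Thompson estimator of a population total:
    t_hat = sum over the sample of y_i / pi_i, where pi_i is the
    inclusion probability of unit i."""
    return sum(y / p for y, p in zip(y_sample, pi_sample))

# Hypothetical PPS-style sample: 3 sampled units with known
# inclusion probabilities.
y = [120.0, 35.0, 60.0]
pi = [0.6, 0.1, 0.3]
t_hat = horvitz_thompson_total(y, pi)
```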

  8. Collaboration During the NASA ABoVE Airborne SAR Campaign: Sampling Strategies Used by NGEE Arctic and Other Partners in Alaska and Western Canada

    Science.gov (United States)

    Wullschleger, S. D.; Charsley-Groffman, L.; Baltzer, J. L.; Berg, A. A.; Griffith, P. C.; Jafarov, E. E.; Marsh, P.; Miller, C. E.; Schaefer, K. M.; Siqueira, P.; Wilson, C. J.; Kasischke, E. S.

    2017-12-01

    There is considerable interest in using L- and P-band Synthetic Aperture Radar (SAR) data to monitor variations in aboveground woody biomass, soil moisture, and permafrost conditions in high-latitude ecosystems. Such information is useful for quantifying spatial heterogeneity in surface and subsurface properties, and for model development and evaluation. To conduct these studies, it is desirable that field studies share a common sampling strategy so that the data from multiple sites can be combined and used to analyze variations in conditions across different landscape geomorphologies and vegetation types. In 2015, NASA launched the decade-long Arctic-Boreal Vulnerability Experiment (ABoVE) to study the sensitivity and resilience of these ecosystems to disturbance and environmental change. NASA is able to leverage its remote sensing strengths to collect airborne and satellite observations to capture important ecosystem properties and dynamics across large spatial scales. A critical component of this effort includes collection of ground-based data that can be used to analyze, calibrate and validate remote sensing products. ABoVE researchers at a large number of sites located in important Arctic and boreal ecosystems in Alaska and western Canada are following common design protocols and strategies for measuring soil moisture, thaw depth, biomass, and wetland inundation. Here we elaborate on those sampling strategies as used in the 2017 summer SAR campaign and address the sampling design and measurement protocols for supporting the ABoVE aerial activities. Plot size, transect length, and distribution of replicates across the landscape systematically allowed investigators to optimally sample a site for soil moisture, thaw depth, and organic layer thickness. Specific examples and data sets are described for the Department of Energy's Next-Generation Ecosystem Experiments (NGEE Arctic) project field sites near Nome and Barrow, Alaska. Future airborne and satellite

  9. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention is paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples...... to determine how many languages from each phylum should be selected, given any required sample size....
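Allocating sample slots to phyla in proportion to a weight — the kind of computation the genetic criterion calls for — can be sketched with largest-remainder rounding. The phylum weights below are invented for illustration and are not the authors' figures.

```python
def allocate_sample(weights, sample_size):
    """Largest-remainder allocation of sample slots proportional to weights."""
    total = sum(weights.values())
    raw = {k: sample_size * w / total for k, w in weights.items()}
    alloc = {k: int(v) for k, v in raw.items()}          # floor of each share
    leftover = sample_size - sum(alloc.values())
    # hand out remaining slots by largest fractional remainder
    for k in sorted(raw, key=lambda k: raw[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

# Hypothetical diversity weights for four phyla and a 50-language sample
weights = {"Niger-Congo": 30, "Austronesian": 25,
           "Indo-European": 10, "Pama-Nyungan": 5}
alloc = allocate_sample(weights, 50)
```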

  10. Phase relations and Gibbs energies of spinel phases and solid solutions in the system Mg-Rh-O

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, K.T., E-mail: katob@materials.iisc.ernet.in [Department of Materials Engineering, Indian Institute of Science, Bangalore 560 012 (India); Prusty, Debadutta [Department of Materials Engineering, Indian Institute of Science, Bangalore 560 012 (India); Kale, G.M. [Institute for Materials Research, University of Leeds, Leeds, LS2 9JT (United Kingdom)

    2012-02-05

Highlights: ► Refinement of the phase diagram for the system Mg-Rh-O and thermodynamic data for the spinel compounds MgRh₂O₄ and Mg₂RhO₄ is presented. ► A solid-state electrochemical cell is used for thermodynamic measurement. ► An advanced design of the solid-state electrochemical cell incorporating buffer electrodes is deployed to minimize polarization of the working electrode. ► A regular solution model for the spinel solid solution MgRh₂O₄-Mg₂RhO₄, based on ideal mixing of cations on the octahedral site, is proposed. ► Factors responsible for the stabilization of tetravalent rhodium in spinel compounds are identified. - Abstract: Pure stoichiometric MgRh₂O₄ could not be prepared by solid-state reaction from an equimolar mixture of MgO and Rh₂O₃ in air. The spinel phase formed always contained an excess of Mg and traces of Rh or Rh₂O₃. The spinel phase can be considered as a solid solution of Mg₂RhO₄ in MgRh₂O₄. The compositions of the spinel solid solution in equilibrium with different phases in the ternary system Mg-Rh-O were determined by electron probe microanalysis. The oxygen potential established by the equilibrium between Rh + MgO + Mg₁₊ₓRh₂₋ₓO₄ was measured as a function of temperature using a solid-state cell incorporating yttria-stabilized zirconia as an electrolyte and pure oxygen at 0.1 MPa as the reference electrode. To avoid polarization of the working electrode during the measurements, an improved design of the cell with a buffer electrode was used. The standard Gibbs energies of formation of MgRh₂O₄ and Mg₂RhO₄ were deduced from the measured electromotive force (e.m.f.) by invoking a model for the spinel solid solution. The parameters of the model were optimized using the measured
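The step from a measured e.m.f. to a standard Gibbs energy rests on the relation ΔG = -zFE. A minimal sketch, with a hypothetical cell voltage rather than the paper's data:

```python
F = 96485.33  # C/mol, Faraday constant

def gibbs_from_emf(z, emf_volts):
    """Standard Gibbs energy change (J/mol) from a reversible cell
    e.m.f.: ΔG = -z * F * E, with z electrons transferred."""
    return -z * F * emf_volts

# Hypothetical oxygen-concentration cell: z = 4 electrons per O2
# and a measured e.m.f. of 0.25 V.
dG = gibbs_from_emf(4, 0.25)   # J/mol, negative for a spontaneous cell reaction
```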

  11. Optimisation (sampling strategies and analytical procedures) for site specific environment monitoring at the areas of uranium production legacy sites in Ukraine - 59045

    International Nuclear Information System (INIS)

    Voitsekhovych, Oleg V.; Lavrova, Tatiana V.; Kostezh, Alexander B.

    2012-01-01

Many sites around the world remain environmentally affected by uranium production carried out in the past. The authors' experience shows that a lack of site characterization data and incomplete or unreliable environment monitoring studies can significantly limit the quality of the safety assessment procedures and priority-action analyses needed for remediation planning. During recent decades, the analytical laboratories of many of the enterprises now responsible for establishing site-specific environment monitoring programs have significantly improved their sampling and analytical capacities. However, limited experience in optimal site-specific sampling strategy planning, and in the application of the required analytical techniques (modern alpha-beta radiometers, gamma and alpha spectrometry, and liquid-scintillation methods for the determination of U-Th series radionuclides in the environment), prevents these laboratories from developing and conducting monitoring programs efficiently as a basis for safety assessment and decision making. This paper presents conclusions drawn from experience in establishing monitoring programs in Ukraine and proposes practical steps for optimizing sampling strategy planning and analytical procedures for areas requiring safety assessment and justification of their potential remediation and safe management. (authors)

  12. How to Handle Speciose Clades? Mass Taxon-Sampling as a Strategy towards Illuminating the Natural History of Campanula (Campanuloideae)

    Science.gov (United States)

    Mansion, Guilhem; Parolly, Gerald; Crowl, Andrew A.; Mavrodiev, Evgeny; Cellinese, Nico; Oganesian, Marine; Fraunhofer, Katharina; Kamari, Georgia; Phitos, Dimitrios; Haberle, Rosemarie; Akaydin, Galip; Ikinci, Nursel; Raus, Thomas; Borsch, Thomas

    2012-01-01

    Background Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. Methodology/Principal Findings Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates were then compared to those obtained from smaller data sets resulting from “classification-guided” (D088) and “phylogeny-guided sampling” (D101). Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. Conclusions/Significance A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. Since knowing these will be instrumental for developing more specific evolutionary hypotheses and guiding future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed evolutionary

  13. Sampling strategies and stopping criteria for stochastic dual dynamic programming: a case study in long-term hydrothermal scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Homem-de-Mello, Tito [University of Illinois at Chicago, Department of Mechanical and Industrial Engineering, Chicago, IL (United States); Matos, Vitor L. de; Finardi, Erlon C. [Universidade Federal de Santa Catarina, LabPlan - Laboratorio de Planejamento de Sistemas de Energia Eletrica, Florianopolis (Brazil)

    2011-03-15

    The long-term hydrothermal scheduling is one of the most important problems to be solved in the power systems area. This problem aims to obtain an optimal policy, under water (energy) resources uncertainty, for hydro and thermal plants over a multi-annual planning horizon. It is natural to model the problem as a multi-stage stochastic program, a class of models for which algorithms have been developed. The original stochastic process is represented by a finite scenario tree and, because of the large number of stages, a sampling-based method such as the Stochastic Dual Dynamic Programming (SDDP) algorithm is required. The purpose of this paper is two-fold. Firstly, we study the application of two alternative sampling strategies to the standard Monte Carlo - namely, Latin hypercube sampling and randomized quasi-Monte Carlo - for the generation of scenario trees, as well as for the sampling of scenarios that is part of the SDDP algorithm. Secondly, we discuss the formulation of stopping criteria for the optimization algorithm in terms of statistical hypothesis tests, which allows us to propose an alternative criterion that is more robust than that originally proposed for the SDDP. We test these ideas on a problem associated with the whole Brazilian power system, with a three-year planning horizon. (orig.)
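Latin hypercube sampling, one of the two alternatives to plain Monte Carlo studied above, stratifies each coordinate so that every equal-probability bin receives exactly one draw. A minimal, stdlib-only sketch of the idea (illustrative; not the authors' implementation, and the function names are ours):

```python
import random

def latin_hypercube(n, d, rng=random.Random(0)):
    """Latin hypercube sample: n points in [0,1)^d with exactly one point
    in each of the n equal strata along every axis."""
    columns = []
    for _ in range(d):
        # one uniform draw inside each stratum, then shuffle the strata
        col = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))  # transpose into a list of n points

pts = latin_hypercube(8, 2)
# each axis is covered: one point per stratum
for axis in range(2):
    assert sorted(int(p[axis] * 8) for p in pts) == list(range(8))
```

Randomized quasi-Monte Carlo would instead draw the points from a scrambled low-discrepancy sequence; both techniques aim to cover the scenario space more evenly than independent uniform draws.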

  14. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research.

    Science.gov (United States)

    Gentles, Stephen J; Charles, Cathy; Nicholas, David B; Ploeg, Jenny; McKibbon, K Ann

    2016-10-11

    Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research. The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type

  15. Sampling strategies for efficient estimation of tree foliage biomass

    Science.gov (United States)

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  16. Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research.

    Science.gov (United States)

    Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly

    2015-09-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

  17. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

    Science.gov (United States)

    Palinkas, Lawrence A.; Horwitz, Sarah M.; Green, Carla A.; Wisdom, Jennifer P.; Duan, Naihua; Hoagwood, Kimberly

    2013-01-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research. PMID:24193818

  18. The role of coping strategies and self-efficacy as predictors of life satisfaction in a sample of parents of children with autism spectrum disorder.

    Science.gov (United States)

    Luque Salas, Bárbara; Yáñez Rodríguez, Virginia; Tabernero Urbieta, Carmen; Cuadrado, Esther

    2017-02-01

    This research aims to understand the role of coping strategies and self-efficacy expectations as predictors of life satisfaction in a sample of parents of boys and girls diagnosed with autism spectrum disorder. A total of 129 parents (64 men and 65 women) answered a questionnaire comprising life-satisfaction, coping-strategy and self-efficacy scales. Using a regression model, the results show that the age of the child is associated with a lower level of satisfaction in parents. Self-efficacy is the variable that best explains the level of satisfaction in mothers, while the use of problem solving explains a higher level of satisfaction in fathers. Men and women show similar levels of life satisfaction; however, significant differences were found in coping strategies: women used emotional expression and social support strategies more than men. The development of functional coping strategies and of a high level of self-efficacy represents a key tool for adapting to caring for children with autism. Our results indicate the need for early intervention with parents to promote coping strategies, self-efficacy and a high level of life satisfaction.

  19. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
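For contrast with the non-reversible construction proposed above, the conventional Gibbs sampling that the authors argue is inconsistent with spiking dynamics can be sketched on a tiny binary network. The two-unit Boltzmann distribution below is a hypothetical example of ours, not the authors' model:

```python
import math, random

def gibbs_boltzmann(W, b, steps, rng):
    """Standard (reversible) Gibbs sampler for a binary Boltzmann distribution
    p(x) proportional to exp(sum_i b_i x_i + sum_{i<j} W_ij x_i x_j), x_i in {0,1}."""
    n = len(b)
    x = [rng.randrange(2) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)  # pick one unit and resample it given all the others
        field = b[i] + sum(W[i][j] * x[j] for j in range(n) if j != i)
        x[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-field)) else 0
    return x

# two symmetrically coupled units: exact P(x_0 = 1) = (1 + e) / (3 + e) ~ 0.65
W = [[0.0, 1.0], [1.0, 0.0]]
b = [0.0, 0.0]
rng = random.Random(1)
est = sum(gibbs_boltzmann(W, b, 50, rng)[0] for _ in range(2000)) / 2000.0
```

Each update here is an instantaneous, order-independent resampling step; the paper's point is that spiking neurons instead have intrinsic temporal dynamics (e.g. refractoriness), which motivates its non-reversible Markov chains.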

  20. Sampling of temporal networks: Methods and biases

    Science.gov (United States)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
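Uniform node sampling, the strategy reported above to perform best in most scenarios, amounts to drawing a uniform subset of nodes and keeping the contacts induced on it. A small illustrative sketch (the function and the event format are our assumptions, not the paper's code):

```python
import random

def sample_nodes_uniform(contacts, keep_fraction, rng=random.Random(0)):
    """Uniform node subsampling of a temporal network.
    contacts: list of (t, u, v) events; an event survives only if
    both of its endpoints are in the sampled node set."""
    nodes = {u for _, u, v in contacts} | {v for _, u, v in contacts}
    k = max(1, int(round(keep_fraction * len(nodes))))
    kept = set(rng.sample(sorted(nodes), k))
    return [(t, u, v) for t, u, v in contacts if u in kept and v in kept]

events = [(1, "a", "b"), (2, "b", "c"), (3, "a", "c"), (4, "c", "d")]
sub = sample_nodes_uniform(events, 0.75)
assert all(e in events for e in sub)  # induced events keep their timestamps
```

Statistics such as link activity or epidemic outcomes computed on `sub` can then be compared against the full event list to quantify the sampling bias.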

  1. Theoretical Understanding the Relations of Melting-point Determination Methods from Gibbs Thermodynamic Surface and Applications on Melting Curves of Lower Mantle Minerals

    Science.gov (United States)

    Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.

    2016-12-01

    The melting temperatures of materials in the interior of the Earth have significant implications in many areas of geophysics. Direct calculation of the melting point by atomic simulation faces a substantial hysteresis problem. To overcome this hysteresis, several independently founded melting-point determination methods are available nowadays, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations between these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Combining this model with experimental data and/or a previous melting-point determination method, we derive high-pressure melting curves for several lower mantle minerals with less computational effort than previous methods alone would require. In this way, some polyatomic minerals at extreme pressures that were previously almost intractable can now be treated fully from first principles.

  2. New Comment on Gibbs Density Surface of Fluid Argon: Revised Critical Parameters, L. V. Woodcock, Int. J. Thermophys. (2014) 35, 1770-1784

    Science.gov (United States)

    Umirzakov, I. H.

    2018-01-01

    The author comments on an article by Woodcock (Int J Thermophys 35:1770-1784, 2014), who investigates the idea of a critical line instead of a single critical point using the example of argon. In the introduction, Woodcock states that "The Van der Waals critical point does not comply with the Gibbs phase rule. Its existence is based upon a hypothesis rather than a thermodynamic definition". The present comment is a response to this statement. It demonstrates mathematically that a critical point is not merely a hypothesis used to define the values of two parameters of the Van der Waals equation of state. Instead, the author argues that a critical point is a direct consequence of the thermodynamic phase equilibrium conditions, which result in a single critical point. It is shown that these conditions require the first and second partial derivatives of pressure with respect to volume at constant temperature to vanish at the critical point, which are the usual conditions for the existence of a critical point.

  3. Long-term strategic asset allocation: An out-of-sample evaluation

    NARCIS (Netherlands)

    Diris, B.F.; Palm, F.C.; Schotman, P.C.

    We evaluate the out-of-sample performance of a long-term investor who follows an optimized dynamic trading strategy. Although the dynamic strategy is able to benefit from predictability out-of-sample, a short-term investor using a single-period market timing strategy would have realized an almost

  4. Integrating the Theory of Sampling into Underground Mine Grade Control Strategies

    Directory of Open Access Journals (Sweden)

    Simon C. Dominy

    2018-05-01

    Full Text Available Grade control in underground mines aims to deliver quality tonnes to the process plant via the accurate definition of ore and waste. It comprises a decision-making process including data collection and interpretation; local estimation; development and mining supervision; ore and waste destination tracking; and stockpile management. The foundation of any grade control programme is that of high-quality samples collected in a geological context. The requirement for quality samples has long been recognised, where they should be representative and fit-for-purpose. Once a sampling error is introduced, it propagates through all subsequent processes contributing to data uncertainty, which leads to poor decisions and financial loss. Proper application of the Theory of Sampling reduces errors during sample collection, preparation, and assaying. To achieve quality, sampling techniques must minimise delimitation, extraction, and preparation errors. Underground sampling methods include linear (chip and channel, grab (broken rock, and drill-based samples. Grade control staff should be well-trained and motivated, and operating staff should understand the critical need for grade control. Sampling must always be undertaken with a strong focus on safety and alternatives sought if the risk to humans is high. A quality control/quality assurance programme must be implemented, particularly when samples contribute to a reserve estimate. This paper assesses grade control sampling with emphasis on underground gold operations and presents recommendations for optimal practice through the application of the Theory of Sampling.

  5. SAMPLING ADAPTIVE STRATEGY AND SPATIAL ORGANISATION ESTIMATION OF SOIL ANIMAL COMMUNITIES AT VARIOUS HIERARCHICAL LEVELS OF URBANISED TERRITORIES

    Directory of Open Access Journals (Sweden)

    Baljuk J.A.

    2014-12-01

    Full Text Available This work presents an algorithm for an adaptive strategy of optimal spatial sampling for studying the spatial organisation of soil animal communities under urbanisation. The principal components obtained from analysis of field data on soil penetration resistance, soil electrical conductivity and forest stand density, collected on a quasi-regular grid, were used as operating variables. The locations of the experimental polygons were determined by means of the program ESAP. Sampling was carried out on a regular grid within the experimental polygons. The biogeocoenological estimation of the experimental polygons was based on A. L. Belgard's ecomorphic analysis. The spatial configuration of biogeocoenosis types was established from remote sensing data and analysis of a digital elevation model. The suggested algorithm reveals the spatial organisation of soil animal communities at the levels of the sampling point, the biogeocoenosis, and the landscape.

  6. Bayesian Analysis of the Cosmic Microwave Background

    Science.gov (United States)

    Jewell, Jeffrey

    2007-01-01

    There is a wealth of cosmological information encoded in the spatial power spectrum of temperature anisotropies of the cosmic microwave background! Experiments designed to map the microwave sky are returning a flood of data (time streams of instrument response as a beam is swept over the sky) at several different frequencies (from 30 to 900 GHz), all with different resolutions and noise properties. The resulting analysis challenge is to estimate, and quantify our uncertainty in, the spatial power spectrum of the cosmic microwave background given the complexities of "missing data", foreground emission, and complicated instrumental noise. Bayesian formulation of this problem allows consistent treatment of many complexities, including complicated instrumental noise and foregrounds, and can be numerically implemented with Gibbs sampling. Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for low-resolution analyses (as demonstrated on WMAP 1- and 3-year temperature and polarization data). Development continues for Planck - the goal is to exploit the unique capabilities of Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to the total uncertainty in cosmological parameters.
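The Gibbs-sampling idea can be illustrated on a drastically simplified analogue of the power-spectrum problem: alternate between drawing the signal given the spectrum and the data, and drawing the spectrum given the signal. The toy sketch below (diagonal unit noise, a single flat spectrum parameter, Jeffreys prior) is an assumption-laden caricature of ours, not the actual analysis pipeline:

```python
import random

def gibbs_power_spectrum(d, noise_var, iters, rng):
    """Toy two-block Gibbs sampler: data d_i = s_i + n_i with
    s_i ~ N(0, C) and n_i ~ N(0, noise_var).
    Alternates draws of s | C, d (Gaussian) and C | s (inverse-gamma)."""
    n = len(d)
    C = 1.0
    chain = []
    for _ in range(iters):
        w = C / (C + noise_var)  # Wiener filter weight
        sd = (C * noise_var / (C + noise_var)) ** 0.5
        s = [w * di + rng.gauss(0.0, sd) for di in d]  # signal given C and data
        ss = sum(si * si for si in s)
        # C | s is inverse-gamma(n/2, ss/2) under a Jeffreys prior 1/C
        C = (ss / 2.0) / rng.gammavariate(n / 2.0, 1.0)
        chain.append(C)
    return chain

# synthetic data with true C = 4 and unit noise variance
rng = random.Random(3)
data = [rng.gauss(0.0, (4.0 + 1.0) ** 0.5) for _ in range(200)]
chain = gibbs_power_spectrum(data, 1.0, 400, rng)
post_mean = sum(chain[100:]) / len(chain[100:])
```

After burn-in, the chain samples the joint posterior of signal and spectrum, so the spread of `chain` directly quantifies the spectrum uncertainty, which is the property that makes the approach attractive for propagating uncertainties end to end.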

  7. Sampling strategies to improve passive optical remote sensing of river bathymetry

    Science.gov (United States)

    Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.

    2018-01-01

    Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth ($d_{max}$) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that $d_{max}$ was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of $d_{max}$ consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than $d_{max}$, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
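The progressive-truncation idea can be sketched as follows: refit the depth-reflectance relation after excluding observations beyond a series of cutoff depths, and watch where the fit quality peaks. The synthetic example below is not the OBRA code; a plain linear fit stands in for the band-ratio regression, and all names are ours:

```python
import random

def truncation_r2(x_vals, depths, cutoffs):
    """For each cutoff depth, fit depth = a + b*X using only observations
    with depth <= cutoff and report the R^2 of the linear fit."""
    out = {}
    for c in cutoffs:
        pairs = [(x, z) for x, z in zip(x_vals, depths) if z <= c]
        mx = sum(x for x, _ in pairs) / len(pairs)
        mz = sum(z for _, z in pairs) / len(pairs)
        sxz = sum((x - mx) * (z - mz) for x, z in pairs)
        sxx = sum((x - mx) ** 2 for x, _ in pairs)
        szz = sum((z - mz) ** 2 for _, z in pairs)
        out[c] = (sxz * sxz) / (sxx * szz)
    return out

# synthetic reach: the image-derived quantity X tracks depth only down to an
# effective maximum detectable depth of 3 m, then saturates (plus noise)
rng = random.Random(0)
depths = [rng.uniform(0.2, 6.0) for _ in range(300)]
x_vals = [min(z, 3.0) + rng.gauss(0.0, 0.1) for z in depths]
r2 = truncation_r2(x_vals, depths, [2.0, 3.0, 4.0, 6.0])
```

Here the fit degrades once the cutoff passes 3 m, mimicking how progressive truncation lets the maximum detectable depth be inferred directly from the calibration data.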

  8. Innovative recruitment using online networks: lessons learned from an online study of alcohol and other drug use utilizing a web-based, respondent-driven sampling (webRDS) strategy.

    Science.gov (United States)

    Bauermeister, José A; Zimmerman, Marc A; Johns, Michelle M; Glowacki, Pietreck; Stoddard, Sarah; Volz, Erik

    2012-09-01

    We used a web version of Respondent-Driven Sampling (webRDS) to recruit a sample of young adults (ages 18-24) and examined whether this strategy would result in alcohol and other drug (AOD) prevalence estimates comparable to national estimates (National Survey on Drug Use and Health [NSDUH]). We recruited 22 initial participants (seeds) via Facebook to complete a web survey examining AOD risk correlates. Sequential, incentivized recruitment continued until our desired sample size was achieved. After correcting for webRDS clustering effects, we contrasted our AOD prevalence estimates (past 30 days) to NSDUH estimates by comparing the 95% confidence intervals of prevalence estimates. We found comparable AOD prevalence estimates between our sample and NSDUH for the past 30 days for alcohol, marijuana, cocaine, Ecstasy (3,4-methylenedioxymethamphetamine, or MDMA), and hallucinogens. Cigarette use was lower than NSDUH estimates. WebRDS may be a suitable strategy to recruit young adults online. We discuss the unique strengths and challenges that may be encountered by public health researchers using webRDS methods.

  9. Calculation of the surface free energy of fcc copper nanoparticles

    International Nuclear Information System (INIS)

    Jia Ming; Lai Yanqing; Tian Zhongliang; Liu Yexiang

    2009-01-01

    Using molecular dynamics simulations with the modified analytic embedded-atom method we calculate the Gibbs free energy and surface free energy for fcc Cu bulk, and further obtain the Gibbs free energy of nanoparticles. Based on the Gibbs free energy of nanoparticles, we have investigated the heat capacity of copper nanoparticles. Calculation results indicate that the Gibbs free energy and the heat capacity of nanoparticles can be divided into two parts: bulk quantity and surface quantity. The molar heat capacity of the bulk sample is lower compared with the molar heat capacity of nanoparticles, and this difference increases with the decrease in the particle size. It is also observed that the size effect on the thermodynamic properties of Cu nanoparticles is not really significant until the particle is less than about 20 nm. It is the surface atoms that decide the size effect on the thermodynamic properties of nanoparticles
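The bulk-plus-surface decomposition above implies that size effects track the fraction of atoms near the particle surface, which grows as the diameter shrinks. A rough geometric sketch of that scaling (the 0.256 nm shell width is an assumed value close to the nearest-neighbour distance of fcc Cu, not a figure from the paper):

```python
def surface_atom_fraction(d_nm, shell_nm=0.256):
    """Fraction of atoms lying within one atomic shell of the surface of a
    spherical particle of diameter d_nm, from the core/total volume ratio."""
    core = max(d_nm - 2.0 * shell_nm, 0.0)
    return 1.0 - (core / d_nm) ** 3

# surface share is minor for large particles, dominant for the smallest ones
assert surface_atom_fraction(20.0) < 0.1
assert surface_atom_fraction(2.0) > 0.5
```

Consistent with the abstract, this crude estimate puts the surface contribution below about 10% at 20 nm but above half of all atoms near 2 nm, which is where the surface quantity of the Gibbs free energy and heat capacity becomes significant.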

  10. Improvements to robotics-inspired conformational sampling in rosetta.

    Directory of Open Access Journals (Sweden)

    Amelie Stein

    Full Text Available To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling, and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.

  11. Order parameter free enhanced sampling of the vapor-liquid transition using the generalized replica exchange method.

    Science.gov (United States)

    Lu, Qing; Kim, Jaegil; Straub, John E

    2013-03-14

    The generalized Replica Exchange Method (gREM) is extended into the isobaric-isothermal ensemble, and applied to simulate a vapor-liquid phase transition in Lennard-Jones fluids. Merging an optimally designed generalized ensemble sampling with replica exchange, gREM is particularly well suited for the effective simulation of first-order phase transitions characterized by "backbending" in the statistical temperature. While the metastable and unstable states in the vicinity of the first-order phase transition are masked by the enthalpy gap in temperature replica exchange method simulations, they are transformed into stable states through the parameterized effective sampling weights in gREM simulations, and join vapor and liquid phases with a succession of unimodal enthalpy distributions. The enhanced sampling across metastable and unstable states is achieved without the need to identify a "good" order parameter for biased sampling. We performed gREM simulations at various pressures below and near the critical pressure to examine the change in behavior of the vapor-liquid phase transition at different pressures. We observed a crossover from the first-order phase transition at low pressure, characterized by the backbending in the statistical temperature and the "kink" in the Gibbs free energy, to a continuous second-order phase transition near the critical pressure. The controlling mechanisms of nucleation and continuous phase transition are evident and the coexistence properties and phase diagram are found in agreement with literature results.

  12. An instrument design and sample strategy for measuring soil respiration in the coastal temperate rain forest

    Science.gov (United States)

    Nay, S. M.; D'Amore, D. V.

    2009-12-01

    The coastal temperate rainforest (CTR) along the northwest coast of North America is a large and complex mosaic of forests and wetlands located on an undulating terrain ranging from sea level to thousands of meters in elevation. This biome stores a dynamic portion of the total carbon stock of North America. The fate of the terrestrial carbon stock is of concern due to the potential for mobilization and export of this store to both the atmosphere as carbon respiration flux and ocean as dissolved organic and inorganic carbon flux. Soil respiration is the largest export vector in the system and must be accurately measured to gain any comprehensive understanding of how carbon moves though this system. Suitable monitoring tools capable of measuring carbon fluxes at small spatial scales are essential for our understanding of carbon dynamics at larger spatial scales within this complex assemblage of ecosystems. We have adapted instrumentation and developed a sampling strategy for optimizing replication of soil respiration measurements to quantify differences among spatially complex landscape units of the CTR. We start with the design of the instrument to ease the technological, ergonomic and financial barriers that technicians encounter in monitoring the efflux of CO2 from the soil. Our sampling strategy optimizes the physical efforts of the field work and manages for the high variation of flux measurements encountered in this difficult environment of rough terrain, dense vegetation and wet climate. Our soil respirometer incorporates an infra-red gas analyzer (LiCor Inc. LI-820) and an 8300 cm3 soil respiration chamber; the device is durable, lightweight, easy to operate and can be built for under $5000 per unit. The modest unit price allows for a multiple unit fleet to be deployed and operated in an intensive field monitoring campaign. 
We use a large 346 cm2 collar to accommodate as much micro spatial variation as feasible and to facilitate repeated measures for tracking

  13. A development of the Gibbs potential of a quantised system made up of a large number of particles. III. The contribution of binary collisions

    International Nuclear Information System (INIS)

    Bloch, Claude; De Dominicis, Cyrano

    1959-01-01

    Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero density limit, it reduces to the Beth-Uhlenbeck expression for the second virial coefficient. For a system of fermions in the zero temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation for an arbitrary temperature. Reprint of a paper published in Nuclear Physics, 10, p. 509-526, 1959.

  14. WRAP Module 1 sampling strategy and waste characterization alternatives study

    Energy Technology Data Exchange (ETDEWEB)

    Bergeson, C.L.

    1994-09-30

    The Waste Receiving and Processing Module 1 Facility is designed to examine, process, certify, and ship drums and boxes of solid wastes that have a surface dose equivalent of less than 200 mrem/h. These wastes will include low-level and transuranic wastes that are retrievably stored in the 200 Area burial grounds and facilities, in addition to newly generated wastes. Certification of retrievably stored wastes processed in WRAP 1 is required to meet the waste acceptance criteria for onsite treatment and disposal of low-level waste and mixed low-level waste, and the Waste Isolation Pilot Plant Waste Acceptance Criteria for the disposal of TRU waste. In addition, these wastes will need to be certified for packaging in TRUPACT-II shipping containers. Characterization of the retrievably stored waste is needed to support the certification process. Characterization data will be obtained from historical records, process knowledge, nondestructive examination, nondestructive assay, visual inspection of the waste, head-gas sampling, and analysis of samples taken from the waste containers. Sample characterization refers to the method or methods that are used to test waste samples for specific analytes. The focus of this study is the sample characterization needed to accurately identify the hazardous and radioactive constituents present in the retrieved wastes that will be processed in WRAP 1. In addition, some sampling and characterization will be required to support NDA calculations and to provide an over-check for the characterization of newly generated wastes. This study results in the baseline definition of WRAP 1 sampling and analysis requirements and identifies alternative methods to meet these requirements in an efficient and economical manner.

  15. WRAP Module 1 sampling strategy and waste characterization alternatives study

    International Nuclear Information System (INIS)

    Bergeson, C.L.

    1994-01-01

    The Waste Receiving and Processing Module 1 Facility is designed to examine, process, certify, and ship drums and boxes of solid wastes that have a surface dose equivalent of less than 200 mrem/h. These wastes will include low-level and transuranic wastes that are retrievably stored in the 200 Area burial grounds and facilities, in addition to newly generated wastes. Certification of retrievably stored wastes processed in WRAP 1 is required to meet the waste acceptance criteria for onsite treatment and disposal of low-level waste and mixed low-level waste, and the Waste Isolation Pilot Plant Waste Acceptance Criteria for the disposal of TRU waste. In addition, these wastes will need to be certified for packaging in TRUPACT-II shipping containers. Characterization of the retrievably stored waste is needed to support the certification process. Characterization data will be obtained from historical records, process knowledge, nondestructive examination, nondestructive assay, visual inspection of the waste, head-gas sampling, and analysis of samples taken from the waste containers. Sample characterization refers to the method or methods that are used to test waste samples for specific analytes. The focus of this study is the sample characterization needed to accurately identify the hazardous and radioactive constituents present in the retrieved wastes that will be processed in WRAP 1. In addition, some sampling and characterization will be required to support NDA calculations and to provide an over-check for the characterization of newly generated wastes. This study results in the baseline definition of WRAP 1 sampling and analysis requirements and identifies alternative methods to meet these requirements in an efficient and economical manner.

  16. A computational study of a fast sampling valve designed to sample soot precursors inside a forming diesel spray plume

    International Nuclear Information System (INIS)

    Dumitrescu, Cosmin; Puzinauskas, Paulius V.; Agrawal, Ajay K.; Liu, Hao; Daly, Daniel T.

    2009-01-01

    Accurate chemical reaction mechanisms are critically needed to fully optimize combustion strategies for modern internal-combustion engines. These mechanisms are needed to predict emission formation and the chemical heat release characteristics for traditional direct-injection diesel as well as recently-developed and proposed variant combustion strategies. Experimental data acquired under conditions representative of such combustion strategies are required to validate these reaction mechanisms. This paper explores the feasibility of developing a fast sampling valve which extracts reactants at known locations in the spray reaction structure to provide these data. CHEMKIN software is used to establish the reaction timescales which dictate the required fast sampling capabilities. The sampling process is analyzed using separate FLUENT and CHEMKIN calculations. The non-reacting FLUENT CFD calculations give a quantitative estimate of the sample quantity as well as the fluid mixing and thermal history. A CHEMKIN reactor network has been created that reflects these mixing and thermal time scales and allows a theoretical evaluation of the quenching process

  17. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved at the optimized sample extraction pH. In contrast to the acidic-only extraction used in many existing studies, the results indicated that antibiotics with low pKa values were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  18. Effects of Direct Fuel Injection Strategies on Cycle-by-Cycle Variability in a Gasoline Homogeneous Charge Compression Ignition Engine: Sample Entropy Analysis

    Directory of Open Access Journals (Sweden)

    Jacek Hunicz

    2015-01-01

    In this study we summarize and analyze experimental observations of cyclic variability in homogeneous charge compression ignition (HCCI) combustion in a single-cylinder gasoline engine. The engine was configured with negative valve overlap (NVO) to trap residual gases from prior cycles and thus enable auto-ignition in successive cycles. Correlations were developed between different fuel injection strategies and cycle-average combustion and work output profiles. Hypothesized physical mechanisms based on these correlations were then compared with trends in cycle-by-cycle predictability as revealed by sample entropy. The results of these comparisons help to clarify how fuel injection strategy can interact with prior-cycle effects to affect combustion stability, and so contribute to the design of control methods for HCCI engines.
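
Sample entropy, the predictability metric this record relies on, can be sketched in a few lines. The implementation below is a generic, simplified SampEn(m, r) for a one-dimensional series; the function name and defaults are our own, not taken from the paper.

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Simplified SampEn(m, r): negative log of the conditional probability
    that subsequences matching for m points (Chebyshev distance <= r * SD)
    also match for m + 1 points. Lower values mean more predictability."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    tol = r * sd

    def match_count(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    count += 1
        return count

    b = match_count(m)
    a = match_count(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined for too short or too regular series
    return -math.log(a / b)
```

A strictly periodic series yields a value near zero, while an irregular (for example chaotic) series yields a much larger one, mirroring the stable versus unstable combustion regimes discussed above.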

  19. Statistical sampling strategies for survey of soil contamination

    NARCIS (Netherlands)

    Brus, D.J.

    2011-01-01

    This chapter reviews methods for selecting sampling locations in contaminated soils for three situations. In the first situation a global estimate of the soil contamination in an area is required. The result of the survey is a number or a series of numbers per contaminant, e.g. the estimated mean

  20. THE PREDICTION OF pH BY GIBBS FREE ENERGY MINIMIZATION IN THE SUMP SOLUTION UNDER LOCA CONDITION OF PWR

    Directory of Open Access Journals (Sweden)

    HYOUNGJU YOON

    2013-02-01

    The pH of the sump solution is required to be above 7.0 to retain iodine in the liquid phase and to be within material compatibility constraints under LOCA conditions of a PWR. The pH of the sump solution can be determined from conventional chemical equilibrium constants or by the minimization of Gibbs free energy. The latter method, implemented in a computer code called SOLGASMIX-PV, is more convenient than the former since various chemical components can easily be treated under LOCA conditions. In this study, the SOLGASMIX-PV code was modified to accommodate the acidic and basic materials produced by radiolysis reactions and to calculate the pH of the sump solution. When the computed pH was compared with that measured in the ORNL experiment to verify the reliability of the modified code, the error between the two values was within 0.3 pH units. Finally, two cases of calculation were performed, for SKN 3&4 and UCN 1&2. As a result, the pH of the sump solution was between 7.02 and 7.45 for SKN 3&4, and between 8.07 and 9.41 for UCN 1&2. Furthermore, it was found that the radiolysis reactions have an insignificant effect on pH because the relative concentrations of HCl, HNO3, and Cs are very low.
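
As a point of comparison with the Gibbs free energy minimization route, the conventional equilibrium-constant approach the record mentions can be sketched for a single weak monoprotic acid solved from the charge balance. The function name is our own, and the dissociation constant and concentration used in the example are illustrative, not the sump chemistry of the study.

```python
import math

def weak_acid_ph(c_acid, ka, kw=1e-14):
    """pH of a weak monoprotic acid solution from the charge balance
    [H+] = [OH-] + [A-], solved by bisection on log10[H+].
    The imbalance h - kw/h - c*ka/(ka + h) is monotone in h."""
    def imbalance(h):
        return h - kw / h - c_acid * ka / (ka + h)

    lo, hi = 1e-14, 1.0
    for _ in range(100):
        mid = math.sqrt(lo * hi)  # bisect in log space
        if imbalance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))
```

For example, a 0.01 M solution of an acid with Ka of about 5.8e-10 (boric-acid-like) comes out near pH 5.6, while a zero-concentration input recovers neutral water at pH 7.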

  1. Focusing and non-focusing modulation strategies for the improvement of on-line two-dimensional hydrophilic interaction chromatography × reversed phase profiling of complex food samples.

    Science.gov (United States)

    Montero, Lidia; Ibáñez, Elena; Russo, Mariateresa; Rastrelli, Luca; Cifuentes, Alejandro; Herrero, Miguel

    2017-09-08

    Comprehensive two-dimensional liquid chromatography (LC × LC) is gaining ever-increasing interest in food analysis, as food-related samples are often too complex to be analyzed through one-dimensional approaches. The use of hydrophilic interaction chromatography (HILIC) combined with reversed phase (RP) separations has already been demonstrated to be a very orthogonal combination, which allows attaining increased resolving power. However, this coupling encompasses different analytical challenges, mainly related to the important solvent strength mismatch between the two dimensions, besides those common to every LC × LC method. In the present contribution, different strategies are proposed and compared to further increase HILIC × RP method performance for the analysis of complex food samples, using licorice as a model sample. The influence of different parameters in non-focusing modulation methods based on sampling loops, as well as in focusing modulation through the use of trapping columns in the interface and through active modulation procedures, is studied in order to produce resolving power and sensitivity gains. Although the use of a dilution strategy with sampling loops as well as the highest possible first-dimension sampling rate allowed significant improvements in resolution, focusing modulation produced significant gains also in peak capacity and sensitivity. Overall, the obtained results demonstrate the great applicability and potential that active modulation may have for the analysis of complex food samples, such as licorice, by HILIC × RP. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Sample preservation, transport and processing strategies for honeybee RNA extraction: Influence on RNA yield, quality, target quantification and data normalization.

    Science.gov (United States)

    Forsgren, Eva; Locke, Barbara; Semberg, Emilia; Laugen, Ane T; Miranda, Joachim R de

    2017-08-01

    Viral infections in managed honey bees are numerous, and most of them are caused by viruses with an RNA genome. Since RNA degrades rapidly, appropriate sample management and RNA extraction methods are imperative to get high quality RNA for downstream assays. This study evaluated the effect of various sampling-transport scenarios (combinations of temperature, RNA stabilizers, and transport duration) on six RNA quality parameters: yield, purity, integrity, cDNA synthesis efficiency, target detection, and quantification. The use of water and of extraction buffer were also compared for the primary bee tissue homogenate prior to RNA extraction. The strategy least affected by time was preservation of samples at -80°C. All other regimens turned out to be poor alternatives unless the samples were frozen or processed within 24 h. Chemical stabilizers have the greatest impact on RNA quality, and adding an extra homogenization step (a QIAshredder™ homogenizer) to the extraction protocol significantly improves the RNA yield and chemical purity. This study confirms that RIN values (RNA Integrity Number) should be used cautiously with bee RNA. Using water for the primary homogenate has no negative effect on RNA quality as long as this step is no longer than 15 min. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Aromatic hydrocarbons in produced water from offshore oil and gas production. Test of sampling strategy; Aromatiske kulbrinter i produceret vand fra offshore olie- og gas industrien. Test af proevetagningsstrategi

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, A.

    2005-07-01

    In co-operation with the Danish EPA, the National Environmental Research Institute (NERI) has carried out a series of measurements of aromatic hydrocarbons in produced water from an offshore oil and gas production platform in the Danish sector of the North Sea as part of the project 'Testing of sampling strategy for aromatic hydrocarbons in produced water from the offshore oil and gas industry'. The measurements included both volatile (BTEX: benzene, toluene, ethylbenzene and xylenes) and semi-volatile aromatic hydrocarbons: NPD (naphthalenes, phenanthrenes and dibenzothiophenes) and selected PAHs (polycyclic aromatic hydrocarbons). In total, 12 samples of produced water were collected at the Dan FF production platform in the North Sea by the operator, Maersk Oil and Gas, as four sets of three parallel samples from November 24 - December 02, 2004. After collection of the last set, the samples were shipped to NERI for analysis. The water samples were collected in 1 L glass bottles that were filled completely (without overfilling) and tightly closed. After sampling, the samples were preserved with hydrochloric acid and cooled below ambient temperature until being shipped off to NERI. Here all samples were analysed in duplicate, and the results show that BTEX levels were reduced compared to similar measurements carried out by NERI in 2002 and by others. In this work, BTEX levels were approximately 5 mg/L, while similar studies showed levels in the range 0.5-35 mg/L. NPD levels were similar, 0.5-1.4 mg/L, while PAH levels seemed elevated: 0.1-0.4 mg/L in this work compared to 0.001-0.3 mg/L in similar studies. The applied sampling strategy has been tested by performing analysis of variance on the analytical data. The test of the analytical data has shown that the mean values of the three parallel samples collected in series constituted a good estimate of the levels at the time of sampling; thus, the variance between the parallel samples was not

  4. Technical Note: Comparison of storage strategies of sea surface microlayer samples

    Directory of Open Access Journals (Sweden)

    K. Schneider-Zapp

    2013-07-01

    The sea surface microlayer (SML) is an important biogeochemical system whose physico-chemical analysis often necessitates some degree of sample storage. However, many SML components degrade with time, so the development of optimal storage protocols is paramount. Here we briefly review some commonly used treatment and storage protocols. Using freshwater and saline SML samples from a river estuary, we investigated temporal changes in surfactant activity (SA) and in the absorbance and fluorescence of chromophoric dissolved organic matter (CDOM) over four weeks, following selected sample treatment and storage protocols. Some variability in the effectiveness of individual protocols most likely reflects sample provenance. None of the various protocols examined performed any better than dark storage at 4 °C without pre-treatment. We therefore recommend storing samples refrigerated in the dark.

  5. Bayesian Estimation of Structural Equation Models with Ordered Categorical Variables

    Directory of Open Access Journals (Sweden)

    Rini Yunita

    2016-05-01

    This article explains parameter estimation for a structural equation model (SEM) with ordered categorical variables using the Bayes method. The basic assumptions of SEM are that the data are continuous, measured on at least an interval scale, and satisfy the normality assumption. Categorical data are ordinal data whose observations come in discrete form; to treat categorical data as normally distributed continuous data, a threshold parameter is estimated for each categorical variable. The Bayes method focuses on the individual data by combining the sample data with data from previous research (prior information) in order to minimize the error rate, so that the parameters of the structural equation model can be estimated well. The estimation is carried out numerically using Monte Carlo methods, namely Gibbs sampling and Metropolis-Hastings. Keywords: structural equation modeling, categorical data, threshold, Gibbs sampling, Metropolis-Hastings.
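
To illustrate the Gibbs sampling step the record invokes, without reproducing the SEM model itself, here is a minimal Gibbs sampler for a standard bivariate normal with correlation rho, alternating draws from the two full conditionals. All names and defaults are our own.

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho,
    using the full conditionals x|y ~ N(rho*y, 1-rho^2) and
    y|x ~ N(rho*x, 1-rho^2)."""
    rng = random.Random(seed)
    x = y = 0.0
    cond_sd = (1.0 - rho * rho) ** 0.5
    draws = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, cond_sd)
        y = rng.gauss(rho * x, cond_sd)
        if i >= burn_in:  # discard the burn-in portion of the chain
            draws.append((x, y))
    return draws
```

With enough draws, the chain's sample means, variances, and correlation approach the target values (0, 1, and rho).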

  6. Sampling wild species to conserve genetic diversity

    Science.gov (United States)

    Sampling seed from natural populations of crop wild relatives requires choice of the locations to sample from and the amount of seed to sample. While this may seem like a simple choice, in fact careful planning of a collector’s sampling strategy is needed to ensure that a crop wild collection will ...

  7. Selective hedging strategies for oil stockpiling

    International Nuclear Information System (INIS)

    Yun, Won-Cheol

    2006-01-01

    As a feasible option for improving the economics and operational efficiency of stockpiling by a public agency, this study suggests simple selective hedging strategies using forward contracts. The main advantage of these selective hedging strategies over previous ones is that they do not predict future spot prices, but instead utilize the sign and magnitude of the basis, which is easily available to the public. Using the weekly spot and forward prices of West Texas Intermediate for the period of October 1997-August 2002, this study adopts an ex ante out-of-sample analysis to examine selective hedging performance compared to no-hedge and minimum-variance routine hedging strategies. To some extent, the selective hedging strategies dominate the traditional routine hedging strategy, but do not improve upon the expected returns of the no-hedge case, which is mainly due to the data characteristics of the out-of-sample period used in this analysis
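
The basis-sign rule described above can be sketched as a toy backtest. The sketch assumes the one-period forward converges to the spot at delivery, so a hedged period locks in the current basis; the function name, threshold parameter, and the prices in the example are hypothetical.

```python
def selective_hedge_pnl(spots, forwards, threshold=0.0):
    """Per-period P/L (per barrel) of a stockpile under a selective hedge:
    sell a one-period forward whenever the current basis (forward - spot)
    exceeds `threshold`; otherwise stay unhedged. Assumes the forward
    converges to the spot at delivery."""
    pnl = []
    for t in range(len(spots) - 1):
        basis = forwards[t] - spots[t]
        if basis > threshold:
            pnl.append(basis)                    # hedged: basis is locked in
        else:
            pnl.append(spots[t + 1] - spots[t])  # unhedged: spot price change
    return pnl
```

For spot prices [50, 48, 52] and forwards [51, 47, 53], the first period is hedged (basis +1) and the second left unhedged (spot gain +4).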

  8. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance-based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...
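
The pattern frequency distributions ("marginals") at the core of the frequency matching method can be sketched for small binary images. The chi-square-style distance below is a simplified stand-in for the method's actual objective; all names are our own.

```python
def pattern_counts(image, size=2):
    """Counts of all size x size patterns in a binary image (list of rows)."""
    counts = {}
    for i in range(len(image) - size + 1):
        for j in range(len(image[0]) - size + 1):
            pat = tuple(tuple(row[j:j + size]) for row in image[i:i + size])
            counts[pat] = counts.get(pat, 0) + 1
    return counts

def frequency_mismatch(img_a, img_b, size=2):
    """Chi-square-style distance between the pattern frequency
    distributions of two binary images; zero iff the distributions match."""
    ca, cb = pattern_counts(img_a, size), pattern_counts(img_b, size)
    na, nb = sum(ca.values()), sum(cb.values())
    dist = 0.0
    for pat in set(ca) | set(cb):
        fa, fb = ca.get(pat, 0) / na, cb.get(pat, 0) / nb
        dist += (fa - fb) ** 2 / (fa + fb)
    return dist
```

A model image matching the training image's pattern statistics scores zero; a structurally different image scores strictly positive.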

  9. Image Sampling with Quasicrystals

    Directory of Open Access Journals (Sweden)

    Mark Grundland

    2009-07-01

    We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring the sample sites are evenly spread out throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
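
A minimal cut-and-project construction in one dimension, based on the golden ratio as in the record, can be sketched as follows. It produces the Fibonacci chain, a non-periodic but uniformly discrete and relatively dense point set; the function name and the centered acceptance window are our own choices.

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def fibonacci_quasicrystal(n_cells, offset=0.0):
    """1-D cut-and-project quasicrystal (the Fibonacci chain).

    Points of the integer lattice Z^2 whose coordinate along the 'internal'
    direction falls inside an acceptance window are projected onto the
    'physical' line of slope 1/PHI."""
    norm = (1 + PHI ** 2) ** 0.5
    e_phys = (PHI / norm, 1.0 / norm)   # unit vector along the physical line
    e_int = (-1.0 / norm, PHI / norm)   # unit vector along the internal line
    window = (1.0 + PHI) / norm         # projection of one lattice unit cell
    points = []
    for a in range(-n_cells, n_cells + 1):
        for b in range(-n_cells, n_cells + 1):
            internal = a * e_int[0] + b * e_int[1] + offset
            if -window / 2 <= internal < window / 2:
                points.append(a * e_phys[0] + b * e_phys[1])
    return sorted(points)
```

Away from the boundary of the scanned lattice patch, consecutive spacings take exactly two values (the long and short tiles of the Fibonacci chain), and their ratio is the golden ratio.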

  10. Synthetic Multiple-Imputation Procedure for Multistage Complex Samples

    Directory of Open Access Journals (Sweden)

    Zhou Hanzhi

    2016-03-01

    Multiple imputation (MI) is commonly used when item-level missing data are present. However, MI requires that survey design information be built into the imputation models. For multistage stratified clustered designs, this requires dummy variables to represent strata as well as primary sampling units (PSUs) nested within each stratum in the imputation model. Such a modeling strategy is not only operationally burdensome but also inferentially inefficient when there are many strata in the sample design. Complexity only increases when sampling weights need to be modeled. This article develops a general-purpose analytic strategy for population inference from complex sample designs with item-level missingness. In a simulation study, the proposed procedures demonstrate efficient estimation and good coverage properties. We also consider an application to accommodate missing body mass index (BMI) data in the analysis of BMI percentiles using National Health and Nutrition Examination Survey (NHANES III) data. We argue that the proposed methods offer an easy-to-implement solution to problems that are not well-handled by current MI techniques. Note that, while the proposed method borrows from the MI framework to develop its inferential methods, it is not designed as an alternative strategy to release multiply imputed datasets for complex sample design data, but rather as an analytic strategy in and of itself.
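
Although the article develops its own synthetic procedure, the MI framework it borrows from pools per-imputation results with Rubin's rules, which can be sketched directly (function name is our own):

```python
def rubin_combine(estimates, variances):
    """Rubin's rules: pool point estimates and their within-imputation
    variances from M multiply imputed datasets into a single estimate
    and total variance."""
    m = len(estimates)
    q_bar = sum(estimates) / m                                # pooled estimate
    u_bar = sum(variances) / m                                # within-imputation
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)    # between-imputation
    total = u_bar + (1.0 + 1.0 / m) * b                       # total variance
    return q_bar, total
```

The total variance inflates the average within-imputation variance by the between-imputation spread, reflecting the extra uncertainty due to missing data.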

  11. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model

    Science.gov (United States)

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6% of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
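
The link between solvation free energies and the partition coefficient of the neutral species used above is a one-line formula. The sketch below assumes free energies in J/mol and is not the MOSCED parameterization itself; names and the example values are illustrative.

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def log10_partition(dg_water, dg_cyclohexane, temperature=298.15):
    """log10 of the cyclohexane/water partition coefficient of a neutral
    species from its solvation free energies (J/mol) in the two solvents:
    log10 P = -(dG_chx - dG_water) / (R * T * ln 10)."""
    dg_transfer = dg_cyclohexane - dg_water  # transfer water -> cyclohexane
    return -dg_transfer / (R_GAS * temperature * math.log(10))
```

A species solvated more favorably in cyclohexane (more negative dG there) gets a positive log10 P, and vice versa.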

  12. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    International Nuclear Information System (INIS)

    Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja

    2015-01-01

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region, varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5 and 8σ, but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various
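
Extrapolating subcritical coexistence data to a critical point is conventionally done with the scaling law for the density difference plus the law of rectilinear diameters. Below is a hedged sketch with the order-parameter exponent fixed at the 3-D Ising value; the function names are our own, and the test data are synthetic, not the record's simulation results.

```python
BETA = 0.326  # 3-D Ising order-parameter exponent (assumed, not fitted)

def fit_critical_point(temps, rho_liq, rho_vap):
    """Estimate (Tc, rho_c) from subcritical coexistence densities via
    rho_l - rho_v = B * (Tc - T)**BETA   (density scaling law) and
    (rho_l + rho_v)/2 = rho_c + A*(Tc - T)   (rectilinear diameters)."""
    n = len(temps)
    t_bar = sum(temps) / n
    sxx = sum((t - t_bar) ** 2 for t in temps)

    def line_fit(ys):  # least-squares slope/intercept of ys versus temps
        y_bar = sum(ys) / n
        slope = sum((t - t_bar) * (y - y_bar) for t, y in zip(temps, ys)) / sxx
        return slope, y_bar - slope * t_bar

    # linearized scaling law: (rho_l - rho_v)**(1/BETA) is linear in T,
    # vanishing at T = Tc
    slope, intercept = line_fit([(l - v) ** (1.0 / BETA)
                                 for l, v in zip(rho_liq, rho_vap)])
    tc = -intercept / slope
    # evaluate the rectilinear-diameter line at T = Tc to get rho_c
    d_slope, d_intercept = line_fit([(l + v) / 2 for l, v in zip(rho_liq, rho_vap)])
    return tc, d_intercept + d_slope * tc
```

On data generated exactly from the two laws, the fit recovers the critical parameters to machine precision.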

  13. Evaluation of Multiple Linear Regression-Based Limited Sampling Strategies for Enteric-Coated Mycophenolate Sodium in Adult Kidney Transplant Recipients.

    Science.gov (United States)

    Brooks, Emily K; Tett, Susan E; Isbel, Nicole M; McWhinney, Brett; Staatz, Christine E

    2018-04-01

    Although multiple linear regression-based limited sampling strategies (LSSs) have been published for enteric-coated mycophenolate sodium, none have been evaluated for the prediction of subsequent mycophenolic acid (MPA) exposure. This study aimed to examine the predictive performance of the published LSSs for the estimation of future MPA area under the concentration-time curve from 0 to 12 hours (AUC0-12) in renal transplant recipients. Total MPA plasma concentrations were measured in 20 adult renal transplant patients on 2 occasions a week apart. All subjects received concomitant tacrolimus and were approximately 1 month after transplant. Samples were taken at 0, 0.33, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, and 8 hours and 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, and 12 hours after dose on the first and second sampling occasion, respectively. Predicted MPA AUC0-12 was calculated using 19 published LSSs and data from the first or second sampling occasion for each patient, and compared with the second-occasion full MPA AUC0-12 calculated using the linear trapezoidal rule. Bias (median percentage prediction error) and imprecision (median absolute prediction error) were determined for the prediction of full MPA AUC0-12. Accurate prediction using a multiple linear regression-based LSS was not possible without concentrations up to at least 8 hours after the dose.
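
The full AUC0-12 referred to above is computed with the linear trapezoidal rule, and bias is summarized as a median percentage prediction error. Both are simple to sketch; the function names are our own, and the numbers in the example are illustrative.

```python
def auc_trapezoidal(times, concs):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

def median(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2.0

def median_percentage_prediction_error(predicted, observed):
    """Bias metric used in LSS evaluation: median of 100*(pred - obs)/obs."""
    return median([100.0 * (p - o) / o for p, o in zip(predicted, observed)])
```

For a toy profile sampled at 0, 1, and 2 hours with concentrations 0, 2, and 1, the trapezoidal AUC is 2.5; symmetric over- and under-predictions give a median percentage prediction error of zero.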

  14. Lifetime Prevalence of Suicide Attempts Among Sexual Minority Adults by Study Sampling Strategies: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Hottes, Travis Salway; Bogaert, Laura; Rhodes, Anne E; Brennan, David J; Gesink, Dionne

    2016-05-01

    Previous reviews have demonstrated a higher risk of suicide attempts for lesbian, gay, and bisexual (LGB) persons (sexual minorities), compared with heterosexual groups, but these were restricted to general population studies, thereby excluding individuals sampled through LGB community venues. Each sampling strategy, however, has particular methodological strengths and limitations. For instance, general population probability studies have defined sampling frames but are prone to information bias associated with underreporting of LGB identities. By contrast, LGB community surveys may support disclosure of sexuality but overrepresent individuals with strong LGB community attachment. Our aim was to reassess the burden of suicide-related behavior among LGB adults by directly comparing estimates derived from population- versus LGB community-based samples. In 2014, we searched MEDLINE, EMBASE, PsycInfo, CINAHL, and Scopus databases for articles addressing suicide-related behavior (ideation, attempts) among sexual minorities. We selected quantitative studies of sexual minority adults conducted in nonclinical settings in the United States, Canada, Europe, Australia, and New Zealand. Random effects meta-analysis and meta-regression assessed for a difference in prevalence of suicide-related behavior by sample type, adjusted for study or sample-level variables, including context (year, country), methods (medium, response rate), and subgroup characteristics (age, gender, sexual minority construct). We examined residual heterogeneity by using τ². We pooled 30 cross-sectional studies, including 21,201 sexual minority adults, generating the following lifetime prevalence estimates of suicide attempts: 4% (95% confidence interval [CI] = 3%, 5%) for heterosexual respondents to population surveys, 11% (95% CI = 8%, 15%) for LGB respondents to population surveys, and 20% (95% CI = 18%, 22%) for LGB respondents to community surveys (Figure 1). The difference in LGB estimates by sample

  15. Atypical antipsychotics: trends in analysis and sample preparation of various biological samples.

    Science.gov (United States)

    Fragou, Domniki; Dotsika, Spyridoula; Sarafidou, Parthena; Samanidou, Victoria; Njau, Samuel; Kovatsi, Leda

    2012-05-01

Atypical antipsychotics are increasingly popular and increasingly prescribed. In some countries, they can even be obtained over the counter, without a prescription, making their abuse quite easy. Although atypical antipsychotics are thought to be safer than typical antipsychotics, they still have severe side effects. Intoxications are not rare, and some of them have a fatal outcome. Drug interactions involving atypical antipsychotics complicate patient management in clinical settings and the determination of the cause of death in fatalities. In view of the above, analytical strategies that can efficiently isolate atypical antipsychotics from a variety of biological samples and quantify them accurately, sensitively, and reliably are of utmost importance for both the clinical and the forensic toxicologist. In this review, we present and discuss novel analytical strategies developed from 2004 to the present day for the determination of atypical antipsychotics in various biological samples.

  16. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    Science.gov (United States)

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

Our objectives were to assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation, and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
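The curtailed design described (samples of size 60, decision rules such as 33) stops sampling as soon as the classification is determined. A minimal simulation of that stopping rule is sketched below; the naive proportion over inspected units is the biased quantity the authors' estimators correct, and the true coverage value used here is purely illustrative:

```python
import random

def curtailed_lqas(p, n=60, d=33, rng=random):
    """Curtailed LQAS: inspect units one at a time, stopping as soon as the
    decision is determined -- either d successes (accept) or n-d+1 failures
    (reject). Returns (successes, number actually inspected)."""
    succ = fail = 0
    while succ < d and fail <= n - d:
        if rng.random() < p:
            succ += 1
        else:
            fail += 1
    return succ, succ + fail

def mean_naive_estimate(p, reps=20000, seed=1):
    """Average of the naive proportion successes/inspected over many
    curtailed runs, to compare against the true coverage p."""
    rng = random.Random(seed)
    vals = [s / m for s, m in (curtailed_lqas(p, rng=rng) for _ in range(reps))]
    return sum(vals) / len(vals)

# Illustrative true coverage of 0.55; the naive mean generally deviates from it
est = mean_naive_estimate(0.55)
```

Because stopping is triggered by the running counts themselves, the number inspected is correlated with the observed successes, which is exactly why the maximum likelihood proportion is biased under curtailment.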

  17. External calibration strategy for trace element quantification in botanical samples by LA-ICP-MS using filter paper

    International Nuclear Information System (INIS)

    Nunes, Matheus A.G.; Voss, Mônica; Corazza, Gabriela; Flores, Erico M.M.; Dressler, Valderi L.

    2016-01-01

The use of reference solutions dispersed on filter paper discs is proposed for the first time as an external calibration strategy for matrix matching and determination of As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Sr, V and Zn in plants by laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS). The procedure is based on the use of filter paper discs as support for aqueous reference solutions, which are further evaporated, resulting in solid standards with concentrations up to 250 μg g −1 of each element. Filter paper is proposed for matrix-matched standards due to the similarities of this material with botanical samples with regard to carbon concentration and its distribution through both matrices. These characteristics allowed the use of 13 C as internal standard (IS) during the analysis by LA-ICP-MS. Parameters such as analyte signal normalization with 13 C, carrier gas flow rate, laser energy, spot size, and calibration range were monitored. The calibration procedure using solution deposition on filter paper discs improved precision when 13 C was used as IS. The method precision was calculated from the analysis of a certified reference material (CRM) of botanical matrix, considering the RSD obtained for 5 line scans, and was lower than 20%. Accuracy of the LA-ICP-MS determinations was evaluated by analysis of four CRM pellets of botanical composition, as well as by comparison with results obtained by ICP-MS using solution nebulization after microwave-assisted digestion. Plant samples of unknown elemental composition were analyzed by the proposed LA method and good agreement was obtained with the results of solution analysis. Limits of detection (LOD) for LA-ICP-MS were obtained by the ablation of 10 lines on the filter paper disc containing 40 μL of 5% HNO 3 (v v −1 ) as calibration blank. Values ranged from 0.05 to 0.81 μg g −1 . Overall, the use of filter paper as support for dried

  18. External calibration strategy for trace element quantification in botanical samples by LA-ICP-MS using filter paper

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Matheus A.G.; Voss, Mônica; Corazza, Gabriela; Flores, Erico M.M.; Dressler, Valderi L., E-mail: vdressler@gmail.com

    2016-01-28

The use of reference solutions dispersed on filter paper discs is proposed for the first time as an external calibration strategy for matrix matching and determination of As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Sr, V and Zn in plants by laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS). The procedure is based on the use of filter paper discs as support for aqueous reference solutions, which are further evaporated, resulting in solid standards with concentrations up to 250 μg g −1 of each element. Filter paper is proposed for matrix-matched standards due to the similarities of this material with botanical samples with regard to carbon concentration and its distribution through both matrices. These characteristics allowed the use of 13 C as internal standard (IS) during the analysis by LA-ICP-MS. Parameters such as analyte signal normalization with 13 C, carrier gas flow rate, laser energy, spot size, and calibration range were monitored. The calibration procedure using solution deposition on filter paper discs improved precision when 13 C was used as IS. The method precision was calculated from the analysis of a certified reference material (CRM) of botanical matrix, considering the RSD obtained for 5 line scans, and was lower than 20%. Accuracy of the LA-ICP-MS determinations was evaluated by analysis of four CRM pellets of botanical composition, as well as by comparison with results obtained by ICP-MS using solution nebulization after microwave-assisted digestion. Plant samples of unknown elemental composition were analyzed by the proposed LA method and good agreement was obtained with the results of solution analysis. Limits of detection (LOD) for LA-ICP-MS were obtained by the ablation of 10 lines on the filter paper disc containing 40 μL of 5% HNO 3 (v v −1 ) as calibration blank. Values ranged from 0.05 to 0.81 μg g −1 . Overall, the use of filter
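Both records describe external calibration against filter-paper standards with the 13C signal as internal standard. The quantification step amounts to fitting a line through IS-normalized responses and inverting it for unknowns; all counts and concentrations below are invented for illustration, not the authors' measurements:

```python
# Hypothetical LA-ICP-MS data: five filter-paper standards (µg/g) with
# analyte counts (e.g. a Cu isotope channel) and 13C counts per standard.
conc = [0.0, 50.0, 100.0, 150.0, 250.0]
analyte = [120.0, 5230.0, 10150.0, 15480.0, 25620.0]
carbon = [1.00e6, 1.02e6, 0.98e6, 1.01e6, 0.99e6]

# Internal-standard normalization: analyte signal divided by 13C signal,
# which compensates for shot-to-shot variation in ablated mass.
ratio = [a / c for a, c in zip(analyte, carbon)]

# Ordinary least-squares calibration line through the normalized responses
n = len(conc)
mx, my = sum(conc) / n, sum(ratio) / n
slope = sum((x - mx) * (y - my) for x, y in zip(conc, ratio)) / \
        sum((x - mx) ** 2 for x in conc)
intercept = my - slope * mx

def quantify(analyte_counts, carbon_counts):
    """Convert a sample's IS-normalized signal to concentration (µg/g)."""
    return ((analyte_counts / carbon_counts) - intercept) / slope

# An unknown plant sample, quantified against the filter-paper curve
sample_conc = quantify(7800.0, 1.00e6)
```

The same inversion applied to the blank's residual signal (plus three standard deviations of its noise) is one conventional way to arrive at the LODs the records report.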

  19. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample.

    Science.gov (United States)

    Rogal, Shari S; Yakovchenko, Vera; Waltz, Thomas J; Powell, Byron J; Kirchner, JoAnn E; Proctor, Enola K; Gonzalez, Rachel; Park, Angela; Ross, David; Morgan, Timothy R; Chartier, Maggie; Chinman, Matthew J

    2017-05-11

Hepatitis C virus (HCV) is a common and highly morbid illness. New medications with much higher cure rates have become the new evidence-based practice in the field. Understanding the implementation of these new medications nationally provides an opportunity to advance the understanding of the role of implementation strategies in clinical outcomes on a large scale. The Expert Recommendations for Implementing Change (ERIC) study defined discrete implementation strategies and clustered these strategies into groups. The present evaluation assessed the use of these strategies and clusters in the context of HCV treatment across the US Department of Veterans Affairs (VA), Veterans Health Administration, the largest provider of HCV care nationally. A 73-item survey was developed and sent electronically to all VA sites treating HCV to assess whether or not a site used each ERIC-defined implementation strategy related to employing the new HCV medication in 2014. VA national data on the number of Veterans starting the new HCV medications at each site were collected. The associations between treatment starts and the number and type of implementation strategies were assessed. A total of 80 (62%) sites responded. Respondents endorsed an average of 25 ± 14 strategies. The number of treatment starts was positively correlated with the total number of strategies endorsed (r = 0.43). Sites in the highest quartile of treatment starts endorsed significantly more strategies, compared to 15 strategies in the lowest quartile. There were significant differences in the types of strategies endorsed by sites in the highest and lowest quartiles of treatment starts. Four of the 10 top strategies for sites in the top quartile had significant correlations with treatment starts, compared to only 1 of the 10 top strategies at the bottom-quartile sites. Overall, only 3 of the top 15 most frequently used strategies were associated with treatment. These results suggest that sites that used a greater number of implementation

  20. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    Science.gov (United States)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.

  1. Sample-efficient Strategies for Learning in the Presence of Noise

    DEFF Research Database (Denmark)

    Cesa-Bianchi, N.; Dichterman, E.; Fischer, Paul

    1999-01-01

In this paper, we prove various results about PAC learning in the presence of malicious noise. Our main interest is the sample size behavior of learning algorithms. We prove the first nontrivial sample complexity lower bound in this model by showing that, up to logarithmic factors, on the order of ε/Δ² + d/Δ examples are necessary for PAC learning any target class of {0,1}-valued functions of VC dimension d, where ε is the desired accuracy and η = ε/(1 + ε) − Δ the malicious noise rate (it is well known that any nontrivial target class cannot be PAC learned with accuracy ε and malicious noise rate η ≥ ε/(1 + ε), irrespective of sample complexity). We also show that this result cannot be significantly improved in general by presenting efficient learning algorithms for the class of all subsets of d elements and the class of unions of at most d...
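In the notation of this record (ε the desired accuracy, Δ the gap below the noise barrier, d the VC dimension), the stated sample-size lower bound and noise rate can be written compactly as:

```latex
m \;=\; \Omega\!\left(\frac{\varepsilon}{\Delta^{2}} \;+\; \frac{d}{\Delta}\right),
\qquad
\eta \;=\; \frac{\varepsilon}{1+\varepsilon} \;-\; \Delta ,
```

so the required sample size blows up as the noise rate η approaches the information-theoretic barrier ε/(1 + ε), i.e. as Δ → 0.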

  2. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

What is Multi-Increment Sampling (MIS)? • Technique of combining many increments of soil from a number of points within an exposure area • Developed by Enviro Stat (trademarked) ... Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous ... divided into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  3. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples created with this method will reflect optimally the diversity of the languages of the world. On the basis of the internal structure of each genetic language tree a measure is computed that reflects the linguistic diversity in the language families represented by these trees. This measure is used to determine how many languages from each phylum should be selected, given any required sample size.
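The final step described (turning per-family diversity measures into per-phylum language counts for a required sample size) amounts to proportional allocation with rounding. A sketch with invented diversity values, not the authors' actual measure:

```python
def allocate(diversity, total):
    """Allocate a total sample size across phyla in proportion to their
    diversity values, using largest-remainder rounding so that the
    integer counts sum exactly to the requested total."""
    s = sum(diversity.values())
    quotas = {k: total * v / s for k, v in diversity.items()}
    counts = {k: int(q) for k, q in quotas.items()}
    # hand the leftover slots to the largest fractional parts
    rem = total - sum(counts.values())
    for k in sorted(quotas, key=lambda k: quotas[k] - counts[k], reverse=True)[:rem]:
        counts[k] += 1
    return counts

# Hypothetical diversity scores for four groupings, sample size 40
picks = allocate(
    {"Indo-European": 3.2, "Austronesian": 5.1, "Niger-Congo": 4.4, "isolates": 1.3},
    40,
)
```

More internally diverse families receive proportionally more sample slots, which is the intuition behind letting tree structure, rather than raw language counts, drive the allocation.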

  4. Complementary sample preparation strategies for analysis of cereal β-glucan oxidation products by UPLC-MS/MS

    Science.gov (United States)

    Boulos, Samy; Nyström, Laura

    2017-11-01

    The oxidation of cereal (1→3,1→4)-β-D-glucan can influence the health promoting and technological properties of this linear, soluble homopolysaccharide by introduction of new functional groups or chain scission. Apart from deliberate oxidative modifications, oxidation of β-glucan can already occur during processing and storage, which is mediated by hydroxyl radicals (HO•) formed by the Fenton reaction. We present four complementary sample preparation strategies to investigate oat and barley β-glucan oxidation products by hydrophilic interaction ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS), employing selective enzymatic digestion, graphitized carbon solid phase extraction (SPE), and functional group labeling techniques. The combination of these methods allows for detection of both lytic (C1, C3/4, C5) and non-lytic (C2, C4/3, C6) oxidation products resulting from HO•-attack at different glucose-carbons. By treating oxidized β-glucan with lichenase and β-glucosidase, only oxidized parts of the polymer remained in oligomeric form, which could be separated by SPE from the vast majority of non-oxidized glucose units. This allowed for the detection of oligomers with mid-chain glucuronic acids (C6) and carbonyls, as well as carbonyls at the non-reducing end from lytic C3/C4 oxidation. Neutral reducing ends were detected by reductive amination with anthranilic acid/amide as labeled glucose and cross-ring cleaved units (arabinose, erythrose) after enzyme treatment and SPE. New acidic chain termini were observed by carbodiimide-mediated amidation of carboxylic acids as anilides of gluconic, arabinonic, and erythronic acids. Hence, a full characterization of all types of oxidation products was possible by combining complementary sample preparation strategies. Differences in fine structure depending on source (oat vs. barley) translates to the ratio of observed oxidized oligomers, with in-depth analysis corroborating a random HO

  5. The Effect of Summarizing and Presentation Strategies

    Directory of Open Access Journals (Sweden)

    Hooshang Khoshsima

    2014-07-01

    Full Text Available The present study aimed to find out the effect of summarizing and presentation strategies on Iranian intermediate EFL learners’ reading comprehension. 61 students were selected and divided into two experimental and control groups. The homogeneity of their proficiency level was established using a TOEFL proficiency test. The experimental group used the two strategies three sessions each week for twenty weeks, while the control group was not trained on the strategies. After every two-week instruction, an immediate posttest was administered. At the end of the study, a post-test was administered to both groups. Paired-sample t-test and Independent sample t-test were used for analysis. The results of the study revealed that summarizing and presentation strategies had significant effect on promoting reading comprehension of intermediate EFL learners. It also indicated that the presentation strategy was significantly more effective on students’ reading comprehension.

  6. BUSINESS STRATEGY, STRUCTURE AND ORGANIZATIONAL PERFORMANCE

    OpenAIRE

    CORINA GAVREA; ROXANA STEGEREAN; LIVIU ILIES

    2012-01-01

Organizational structure and competitive strategy play an important role in gaining competitive advantage and improving organizational performance. The objective of this paper is to examine how organizational structure and strategy affect firm performance within a sample of 92 Romanian firms. The data used in this study were collected through a questionnaire designed to quantify the three variables of interest: organizational performance, strategy and structure.

  7. Intelligent sampling for the measurement of structured surfaces

    International Nuclear Information System (INIS)

    Wang, J; Jiang, X; Blunt, L A; Scott, P J; Leach, R K

    2012-01-01

    Uniform sampling in metrology has known drawbacks such as coherent spectral aliasing and a lack of efficiency in terms of measuring time and data storage. The requirement for intelligent sampling strategies has been outlined over recent years, particularly where the measurement of structured surfaces is concerned. Most of the present research on intelligent sampling has focused on dimensional metrology using coordinate-measuring machines with little reported on the area of surface metrology. In the research reported here, potential intelligent sampling strategies for surface topography measurement of structured surfaces are investigated by using numerical simulation and experimental verification. The methods include the jittered uniform method, low-discrepancy pattern sampling and several adaptive methods which originate from computer graphics, coordinate metrology and previous research by the authors. By combining the use of advanced reconstruction methods and feature-based characterization techniques, the measurement performance of the sampling methods is studied using case studies. The advantages, stability and feasibility of these techniques for practical measurements are discussed. (paper)
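Of the sampling strategies this record names, the jittered uniform method is the simplest to sketch: one sample point per grid cell at a random offset, which keeps near-even coverage while breaking the strict periodicity that causes coherent spectral aliasing. A minimal illustration, not the authors' implementation:

```python
import random

def jittered_uniform_2d(nx, ny, rng=None):
    """Return one random sample point inside each cell of an nx-by-ny grid
    over the unit square. Unlike pure uniform (regular-grid) sampling, the
    per-cell jitter decorrelates the sample spacing, reducing coherent
    aliasing on periodic structured surfaces."""
    rng = rng or random.Random(0)
    pts = []
    for i in range(nx):
        for j in range(ny):
            x = (i + rng.random()) / nx  # random offset within cell i
            y = (j + rng.random()) / ny  # random offset within cell j
            pts.append((x, y))
    return pts

# 64 sample locations over the unit square, one per cell of an 8x8 grid
pts = jittered_uniform_2d(8, 8)
```

Low-discrepancy patterns (e.g. Halton or Sobol points) pursue the same goal deterministically; the jittered grid is the stochastic baseline they are usually compared against.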

  8. Stable isotope labeling strategy based on coding theory

    Energy Technology Data Exchange (ETDEWEB)

    Kasai, Takuma; Koshiba, Seizo; Yokoyama, Jun; Kigawa, Takanori, E-mail: kigawa@riken.jp [RIKEN Quantitative Biology Center (QBiC), Laboratory for Biomolecular Structure and Dynamics (Japan)

    2015-10-15

    We describe a strategy for stable isotope-aided protein nuclear magnetic resonance (NMR) analysis, called stable isotope encoding. The basic idea of this strategy is that amino-acid selective labeling can be considered as “encoding and decoding” processes, in which the information of amino acid type is encoded by the stable isotope labeling ratio of the corresponding residue and it is decoded by analyzing NMR spectra. According to the idea, the strategy can diminish the required number of labelled samples by increasing information content per sample, enabling discrimination of 19 kinds of non-proline amino acids with only three labeled samples. The idea also enables this strategy to combine with information technologies, such as error detection by check digit, to improve the robustness of analyses with low quality data. Stable isotope encoding will facilitate NMR analyses of proteins under non-ideal conditions, such as those in large complex systems, with low-solubility, and in living cells.
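The encoding idea can be made concrete: with three labeling levels per sample and three samples there are 3³ = 27 codewords, enough to give each of the 19 non-proline amino acids a unique code, with decoding by nearest codeword. The levels and the assignment below are illustrative assumptions, not the authors' actual scheme:

```python
from itertools import product

# Three hypothetical labeling ratios per sample; 3**3 = 27 codewords
# across three samples comfortably cover 19 non-proline amino acids.
LEVELS = (0.0, 0.5, 1.0)
AMINO_ACIDS = [
    "Ala", "Arg", "Asn", "Asp", "Cys", "Gln", "Glu", "Gly", "His", "Ile",
    "Leu", "Lys", "Met", "Phe", "Ser", "Thr", "Trp", "Tyr", "Val",
]

# Encode: assign each amino-acid type a unique triple of labeling ratios
codebook = dict(zip(AMINO_ACIDS, product(LEVELS, repeat=3)))

def decode(observed):
    """Decode: return the amino-acid type whose codeword is nearest (in
    squared distance) to the labeling ratios observed across the three
    NMR samples -- tolerating noisy readouts."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, observed))
    return min(codebook, key=lambda aa: dist(codebook[aa]))

# A noisy readout still snaps to its nearest codeword
aa = decode((0.45, 0.9, 0.1))
```

The paper's check-digit idea would spend some of the 27 − 19 spare codewords on redundancy so that corrupted readouts can be detected rather than silently misdecoded.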

  9. Stable isotope labeling strategy based on coding theory

    International Nuclear Information System (INIS)

    Kasai, Takuma; Koshiba, Seizo; Yokoyama, Jun; Kigawa, Takanori

    2015-01-01

    We describe a strategy for stable isotope-aided protein nuclear magnetic resonance (NMR) analysis, called stable isotope encoding. The basic idea of this strategy is that amino-acid selective labeling can be considered as “encoding and decoding” processes, in which the information of amino acid type is encoded by the stable isotope labeling ratio of the corresponding residue and it is decoded by analyzing NMR spectra. According to the idea, the strategy can diminish the required number of labelled samples by increasing information content per sample, enabling discrimination of 19 kinds of non-proline amino acids with only three labeled samples. The idea also enables this strategy to combine with information technologies, such as error detection by check digit, to improve the robustness of analyses with low quality data. Stable isotope encoding will facilitate NMR analyses of proteins under non-ideal conditions, such as those in large complex systems, with low-solubility, and in living cells

  10. Spatial scan statistics to assess sampling strategy of antimicrobial resistance monitoring programme

    DEFF Research Database (Denmark)

    Vieira, Antonio; Houe, Hans; Wegener, Henrik Caspar

    2009-01-01

The collection and analysis of data on antimicrobial resistance in human and animal populations are important for establishing a baseline of the occurrence of resistance and for determining trends over time. In animals, targeted monitoring with a stratified sampling plan is normally used. However... sampled by the Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), by identifying spatial clusters of samples and detecting areas with significantly high or low sampling rates. These analyses were performed for each year and for the total 5-year study period for all... by an antimicrobial monitoring program.

  11. A Sample-Based Forest Monitoring Strategy Using Landsat, AVHRR and MODIS Data to Estimate Gross Forest Cover Loss in Malaysia between 1990 and 2005

    Directory of Open Access Journals (Sweden)

    Peter Potapov

    2013-04-01

Full Text Available Insular Southeast Asia is a hotspot of humid tropical forest cover loss. A sample-based monitoring approach quantifying forest cover loss from Landsat imagery was implemented to estimate gross forest cover loss for two eras, 1990–2000 and 2000–2005. For each time interval, a probability sample of 18.5 km × 18.5 km blocks was selected, and pairs of Landsat images acquired per sample block were interpreted to quantify forest cover area and gross forest cover loss. Stratified random sampling was implemented for 2000–2005 with MODIS-derived forest cover loss used to define the strata. A probability proportional to x (πpx) design was implemented for 1990–2000 with AVHRR-derived forest cover loss used as the x variable to increase the likelihood of including forest loss area in the sample. The estimated annual gross forest cover loss for Malaysia was 0.43 Mha/yr (SE = 0.04) during 1990–2000 and 0.64 Mha/yr (SE = 0.055) during 2000–2005. Our use of the πpx sampling design represents a first practical trial of this design for sampling satellite imagery. Although the design performed adequately in this study, a thorough comparative investigation of the πpx design relative to other sampling strategies is needed before general design recommendations can be put forth.
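The πpx idea (inclusion probability proportional to an auxiliary variable, here AVHRR-indicated loss) pairs naturally with a Horvitz-Thompson estimator. The sketch below uses Poisson-type sampling with invented block values for simplicity; the survey's actual design was more structured, so this is only illustrative of the estimation principle:

```python
import random

def pps_sample_ht(y, x, n, seed=0):
    """Poisson-type probability-proportional-to-x sampling sketch: include
    block i with probability pi_i proportional to the auxiliary x_i (capped
    at 1), then estimate the total of y with the Horvitz-Thompson
    estimator sum(y_i / pi_i) over the sampled blocks."""
    rng = random.Random(seed)
    sx = sum(x)
    pi = [min(1.0, n * xi / sx) for xi in x]  # inclusion probabilities
    sample = [i for i in range(len(y)) if rng.random() < pi[i]]
    return sum(y[i] / pi[i] for i in sample)

# Hypothetical block-level true forest loss (y) and AVHRR-indicated loss (x)
y = [5.0, 0.5, 12.0, 0.0, 3.0, 8.0, 1.0, 0.2]
x = [6.0, 1.0, 10.0, 0.5, 4.0, 9.0, 2.0, 0.5]
estimate = pps_sample_ht(y, x, n=4)
```

Because blocks with large indicated loss are sampled with high probability, the estimator concentrates effort where the loss actually is, which is exactly why the πpx design was attractive for rare, spatially clustered change.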

  12. An Energy Efficient Localization Strategy for Outdoor Objects based on Intelligent Light-Intensity Sampling

    OpenAIRE

    Sandnes, Frode Eika

    2010-01-01

    A simple and low cost strategy for implementing pervasive objects that identify and track their own geographical location is proposed. The strategy, which is not reliant on any GIS infrastructure such as GPS, is realized using an electronic artifact with a built in clock, a light sensor, or low-cost digital camera, persistent storage such as flash and sufficient computational circuitry to make elementary trigonometric computations. The object monitors the lighting conditions and thereby detec...

  13. On-capillary sample cleanup method for the electrophoretic determination of carbohydrates in juice samples.

    Science.gov (United States)

    Morales-Cid, Gabriel; Simonet, Bartolomé M; Cárdenas, Soledad; Valcárcel, Miguel

    2007-05-01

On many occasions, sample treatment is a critical step in electrophoretic analysis. As an alternative to batch procedures, in this work a new strategy is presented with a view to developing an on-capillary sample cleanup method. This strategy is based on partial filling of the capillary with carboxylated single-walled carbon nanotubes (c-SWNTs). The nanoparticles retain interferences from the matrix, allowing the determination and quantification of carbohydrates (viz. glucose, maltose and fructose). The precision of the method for the analysis of real samples ranged from 5.3 to 6.4%. The proposed method was compared with a method based on batch filtration of the juice sample through diatomaceous earth and further electrophoretic determination. This method was also validated in this work. The RSD for the latter method ranged from 5.1 to 6%. The results obtained by both methods were statistically comparable, demonstrating the accuracy of the proposed methods and their effectiveness. Electrophoretic separation of carbohydrates was achieved using 200 mM borate solution as a buffer at pH 9.5 and applying 15 kV. During separation, the capillary temperature was kept constant at 40 °C. For the on-capillary cleanup method, a solution containing 50 mg/L of c-SWNTs prepared in 300 mM borate solution at pH 9.5 was introduced into the capillary for 60 s just before sample introduction. For the electrophoretic analysis of samples cleaned in batch with diatomaceous earth, it is also recommended to introduce into the capillary, just before the sample, a 300 mM borate solution, as it enhances the sensitivity and electrophoretic resolution.

  14. LC-MS analysis of the plasma metabolome–a novel sample preparation strategy

    DEFF Research Database (Denmark)

    Skov, Kasper; Hadrup, Niels; Smedsgaard, Jørn

    2015-01-01

Blood plasma is a well-known body fluid often analyzed in studies on the effects of toxic compounds, as physiological or chemically induced changes in the mammalian body are reflected in the plasma metabolome. Sample preparation prior to LC-MS based analysis of the plasma metabolome is a challenge, as plasma contains compounds with very different properties. Besides proteins, which usually are precipitated with organic solvent, phospholipids are known to cause ion suppression in electrospray mass spectrometry. We have compared two different sample preparation techniques prior to LC-qTOF analysis of plasma samples: the first is protein precipitation; the second is protein precipitation followed by solid phase extraction with sub-fractionation into three sub-samples: a phospholipid, a lipid and a polar sub-fraction. Molecular feature extraction of the data files from LC-qTOF analysis of the samples...

  15. Reproductive Strategy and Sexual Conflict: Slow Life History Strategy Inhibits Negative Androcentrism

    Directory of Open Access Journals (Sweden)

    Paul R. Gladden

    2013-11-01

Full Text Available Recent findings indicate that a slow Life History (LH) strategy factor is associated with increased levels of Executive Functioning (EF), increased emotional intelligence, decreased levels of sexually coercive behaviors, and decreased levels of negative ethnocentrism. Based on these findings, as well as the generative theory, we predicted that slow LH strategy should inhibit negative androcentrism (bias against women). A sample of undergraduates responded to a battery of questionnaires measuring various facets of their LH strategy (e.g., sociosexual orientation, mating effort, mate value, psychopathy, executive functioning, and emotional intelligence) and various convergent measures of negative androcentrism. A structural model that fit the data well indicated that a latent protective LH strategy trait predicted decreased negative androcentrism. This trait fully mediated the relationship between participant biological sex and androcentrism. We suggest that slow LH strategy may inhibit negative attitudes toward women because of relatively decreased intrasexual competition and intersexual conflict among slow LH strategists. DOI: 10.2458/azu_jmmss.v4i1.17774

  16. Evaluation of the Frequency for Gas Sampling for the High Burnup Confirmatory Data Project

    Energy Technology Data Exchange (ETDEWEB)

    Stockman, Christine T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Alsaed, Halim A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bryan, Charles R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marschman, Steven C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Scaglione, John M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-05-01

This report provides a technically based gas sampling frequency strategy for the High Burnup (HBU) Confirmatory Data Project. The evaluation of 1) the types and magnitudes of gases that could be present in the project cask and 2) the degradation mechanisms that could change gas compositions culminates in an adaptive gas sampling frequency strategy. This adaptive strategy is compared against the sampling frequency that has been developed based on operational considerations.

  17. Impacts of human activities and sampling strategies on soil heavy metal distribution in a rapidly developing region of China.

    Science.gov (United States)

    Shao, Xuexin; Huang, Biao; Zhao, Yongcun; Sun, Weixia; Gu, Zhiquan; Qian, Weifei

    2014-06-01

The impacts of industrial and agricultural activities on soil Cd, Hg, Pb, and Cu in Zhangjiagang City, a rapidly developing region in China, were evaluated using two sampling strategies. The soil Cu, Cd, and Pb concentrations near industrial locations were greater than those measured away from industrial locations; the converse was true for Hg. The top enrichment factor (TEF) values, calculated as the ratio of metal concentrations between the topsoil and subsoil, were greater near industrial locations than away from them and were further related to the industry type. Thus, the TEF is an effective index for distinguishing sources of toxic elements, not only between anthropogenic and geogenic sources but also among different industry types. Targeted soil sampling near industrial locations resulted in overestimation of high soil heavy metal levels. This study revealed that soil heavy metal contamination was primarily limited to local areas near industrial locations, despite rapid development over the last 20 years. The prevention and remediation of soil heavy metal pollution should focus on these high-risk areas in the future. Copyright © 2014 Elsevier Inc. All rights reserved.
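The top enrichment factor defined in this record is a simple ratio; a minimal sketch with invented Cd concentrations (not the study's data) shows how it separates surface (anthropogenic) input from a geogenic background:

```python
def top_enrichment_factor(topsoil, subsoil):
    """TEF: ratio of topsoil to subsoil metal concentration (same units,
    e.g. mg/kg). Values well above 1 suggest surface deposition from
    human activity; values near 1 suggest a geogenic origin."""
    return topsoil / subsoil

# Hypothetical Cd concentrations near vs. away from an industrial site
near = top_enrichment_factor(0.92, 0.21)   # strong surface enrichment
away = top_enrichment_factor(0.25, 0.20)   # close to geogenic background
```

Comparing TEF profiles across sites, rather than topsoil concentrations alone, is what lets the index discriminate among industry types as well.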

  18. Sampling for validation of digital soil maps

    NARCIS (Netherlands)

    Brus, D.J.; Kempen, B.; Heuvelink, G.B.M.

    2011-01-01

The increase in digital soil mapping around the world means that appropriate and efficient sampling strategies are needed for validation. Data used for calibrating a digital soil mapping model typically are non-random samples. In such a case we recommend collection of additional independent data.

  19. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10⁸ ≤ M* ≤ 3 × 10¹¹ M⊙ h⁻² and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  20. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
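The learning dynamics in this record repeatedly estimates model statistics by Gibbs sampling of a pairwise Ising model. A minimal sketch of that inner sampling step, with small hypothetical random couplings rather than fitted retinal-data parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10                                  # number of spins (hypothetical size)
h = rng.normal(0, 0.1, N)               # local fields (hypothetical values)
J = rng.normal(0, 0.1, (N, N))          # symmetric pairwise couplings
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def gibbs_sweep(s, h, J, rng):
    """One full Gibbs sweep: resample each +/-1 spin from its conditional.
    For E = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j,
    P(s_i = +1 | rest) = 1 / (1 + exp(-2 * (h_i + sum_j J_ij s_j)))."""
    for i in range(len(s)):
        field = h[i] + J[i] @ s
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
        s[i] = 1 if rng.random() < p_up else -1
    return s

s = rng.choice([-1, 1], N)
samples = []
for sweep in range(2000):
    s = gibbs_sweep(s, h, J, rng)
    if sweep >= 500:                    # discard burn-in sweeps
        samples.append(s.copy())
mean_mag = float(np.mean(samples))      # model statistic fed to the gradient
```

In a learning loop, moments estimated from such samples would drive the parameter update; the stochasticity of this estimate is exactly the randomness whose long-time effect the paper characterizes.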

  1. A Sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability

    International Nuclear Information System (INIS)

    Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng

    2016-01-01

    The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions which are expensive to evaluate. This type of method includes EGRA: the efficient reliability analysis method, and AK-MCS: the active learning reliability method combining Kriging model and Monte Carlo simulation. The purpose of this paper is to improve SKRA by adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that the accuracy of these regions has negligible effects on the results. The size of the sampling regions is adapted according to the failure probability calculated by last iteration. Two parallel strategies are introduced and compared, aimed at selecting multiple sample points at a time. The improvement is verified through several troublesome examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • Adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.
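One way to make the parallel sample-selection step concrete: AK-MCS-style methods commonly rank Monte Carlo candidates by a learning function such as U = |μ|/σ, where μ and σ are the Kriging surrogate's mean and standard deviation, and small U marks points whose failure/safe classification is most uncertain. The sketch below uses hypothetical surrogate outputs and the naive "pick the k smallest U" parallelization (real parallel strategies add diversification to avoid redundant picks):

```python
import numpy as np

def select_candidates(mu, sigma, k=3):
    """Pick k Monte Carlo candidates where the surrogate is least certain
    about the sign of the response: smallest U = |mu| / sigma."""
    u = np.abs(mu) / np.maximum(sigma, 1e-12)  # guard against sigma == 0
    return np.argsort(u)[:k]

# Hypothetical surrogate predictions at 6 candidate points.
mu = np.array([2.1, 0.1, -0.05, 1.5, -3.0, 0.4])
sigma = np.array([0.5, 0.2, 0.4, 0.1, 0.6, 0.5])
picks = select_candidates(mu, sigma, k=2)
```

The selected points would then be evaluated on the expensive response function in parallel, and the surrogate refit; restricting the candidate pool to the adaptive sampling region described above simply means drawing `mu`/`sigma` only at candidates inside that region.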

  2. Development Strategy of Lanting Small Industry

    Directory of Open Access Journals (Sweden)

    Atika Tri Puspitasari

    2015-12-01

This research aims to describe and analyze the strategies for production, marketing, human resources (labor), and capital. Data were collected through observation, interviews, documentation, and questionnaires, with triangulation; sampling was purposive. Findings show that the production strategy is to increase output as orders grow, while the marketing strategy combines a trademark, development of various flavor innovations, adjustment of the selling price to the cost of production raw materials, cooperation between producers and suppliers in the distribution of lanting, promotion through cooperation with related trade agencies and services, and offering products online. The human resources strategy is the formation of industry groups in the village of Lemahduwur (though these are not yet running smoothly). The capital strategy starts from the owners' own funds, with profits retained as capital accumulation and extra capital raised around parties and feast days, and includes improved access to capital and simple, routine financial administration and accounting. The advice given is that the government and producers should improve human resources, technology development, marketing, and capital, and that producers should improve collaboration with raw material suppliers, maintain the product's typical features, and register a trademark.

  3. ELIMINATION OF THE CHARACTERIZATION OF DWPF POUR STREAM SAMPLE AND THE GLASS FABRICATION AND TESTING OF THE DWPF SLUDGE BATCH QUALIFICATION SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Amoroso, J.; Peeler, D.; Edwards, T.

    2012-05-11

A recommendation to eliminate all characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification sample was made by a Six-Sigma team chartered to eliminate non-value-added activities for the Defense Waste Processing Facility (DWPF) sludge batch qualification program and is documented in the report SS-PIP-2006-00030. That recommendation was supported through a technical data review by the Savannah River National Laboratory (SRNL) and is documented in the memorandums SRNL-PSE-2007-00079 and SRNL-PSE-2007-00080. At the time of writing those memorandums, the DWPF was processing sludge-only waste but has since transitioned to a coupled operation (sludge and salt). The SRNL was recently tasked to perform a similar data review relevant to coupled operations and re-evaluate the previous recommendations. This report evaluates the validity of eliminating the characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification samples based on sludge-only and coupled operations. The pour stream sample has confirmed the DWPF's ability to produce an acceptable waste form from Slurry Mix Evaporator (SME) blending and product composition/durability predictions for the previous sixteen years, but ultimately the pour stream analysis has added minimal value to the DWPF's waste qualification strategy. Similarly, the information gained from the glass fabrication and PCT of the sludge batch qualification sample was determined to add minimal value to the waste qualification strategy since that sample is routinely not representative of the waste composition ultimately processed at the DWPF due to blending and salt processing considerations. Moreover, the qualification process has repeatedly confirmed minimal differences in glass behavior from actual radioactive waste to glasses fabricated from simulants or batch chemicals.

  4. Elimination Of The Characterization Of DWPF Pour Stream Sample And The Glass Fabrication And Testing Of The DWPF Sludge Batch Qualification Sample

    International Nuclear Information System (INIS)

    Amoroso, J.; Peeler, D.; Edwards, T.

    2012-01-01

A recommendation to eliminate all characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification sample was made by a Six-Sigma team chartered to eliminate non-value-added activities for the Defense Waste Processing Facility (DWPF) sludge batch qualification program and is documented in the report SS-PIP-2006-00030. That recommendation was supported through a technical data review by the Savannah River National Laboratory (SRNL) and is documented in the memorandums SRNL-PSE-2007-00079 and SRNL-PSE-2007-00080. At the time of writing those memorandums, the DWPF was processing sludge-only waste but has since transitioned to a coupled operation (sludge and salt). The SRNL was recently tasked to perform a similar data review relevant to coupled operations and re-evaluate the previous recommendations. This report evaluates the validity of eliminating the characterization of pour stream glass samples and the glass fabrication and Product Consistency Test (PCT) of the sludge batch qualification samples based on sludge-only and coupled operations. The pour stream sample has confirmed the DWPF's ability to produce an acceptable waste form from Slurry Mix Evaporator (SME) blending and product composition/durability predictions for the previous sixteen years, but ultimately the pour stream analysis has added minimal value to the DWPF's waste qualification strategy. Similarly, the information gained from the glass fabrication and PCT of the sludge batch qualification sample was determined to add minimal value to the waste qualification strategy since that sample is routinely not representative of the waste composition ultimately processed at the DWPF due to blending and salt processing considerations. Moreover, the qualification process has repeatedly confirmed minimal differences in glass behavior from actual radioactive waste to glasses fabricated from simulants or batch chemicals.

  5. Limited-sampling strategy models for estimating the pharmacokinetic parameters of 4-methylaminoantipyrine, an active metabolite of dipyrone

    Directory of Open Access Journals (Sweden)

    Suarez-Kurtz G.

    2001-01-01

Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R²>0.95, bias 0.85 of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R²>0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R²>0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
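The LSS idea, predicting a full-curve AUC from a handful of timed concentrations via linear regression, can be sketched on synthetic data. All concentrations, coefficients, and noise levels below are hypothetical stand-ins, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic plasma concentrations at three LSS time points (e.g., 1.5, 4, 24 h)
# for 12 hypothetical subjects; "true" AUC is a noisy linear combination.
C = rng.uniform([5.0, 4.0, 1.0], [15.0, 12.0, 4.0], size=(12, 3))
auc_true = 1.2 * C[:, 0] + 3.0 * C[:, 1] + 8.0 * C[:, 2] + rng.normal(0, 1, 12)

X = np.column_stack([np.ones(12), C])          # intercept + three sampled points
coef, *_ = np.linalg.lstsq(X, auc_true, rcond=None)
auc_hat = X @ coef                             # LSS-predicted AUC

ss_res = np.sum((auc_true - auc_hat) ** 2)
ss_tot = np.sum((auc_true - auc_true.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                     # goodness of fit
bias_pct = 100.0 * np.mean((auc_hat - auc_true) / auc_true)
```

The study's jack-knife validation corresponds to refitting `coef` with each subject left out in turn and predicting the held-out subject, which guards against the in-sample optimism of the fit shown here.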

  6. Trends in Scottish newborn screening programme for congenital hypothyroidism 1980-2014: strategies for reducing age at notification after initial and repeat sampling.

    Science.gov (United States)

    Mansour, Chourouk; Ouarezki, Yasmine; Jones, Jeremy; Fitch, Moira; Smith, Sarah; Mason, Avril; Donaldson, Malcolm

    2017-10-01

To determine ages at first capillary sampling and notification, and age at notification after second sampling, in Scottish newborns referred with elevated thyroid-stimulating hormone (TSH). Referrals between 1980 and 2014 inclusive were grouped into seven 5-year blocks and analysed according to agreed standards. Of 2 116 132 newborn infants screened, 919 were referred with capillary TSH elevation ≥8 mU/L, of whom 624 had definite (606) or probable (18) congenital hypothyroidism. Median age at first sampling fell from 7 to 5 days between 1980 and 2014 (standard 4-7 days), with 22, 8 and 3 infants sampled >7 days during 2000-2004, 2005-2009 and 2010-2014, respectively. Median age at notification was consistently ≤14 days, with the range falling across 2000-2004, 2005-2009 and 2010-2014 from 6-78 to 7-52 and 7-32 days, and with 12 (14.6%), 6 (5.6%) and 5 (4.3%) infants notified >14 days. However, 18/123 (14.6%) of infants undergoing second sampling from 2000 onwards breached the ≤26-day standard for notification. By 2010-2014, the 91 infants with confirmed congenital hypothyroidism showed a favourable median age at first sample (5 days), with start of treatment (10.5 days) approaching age at notification. Most standards for newborn thyroid screening are being met by the Scottish programme, but there is a need to reduce the age range at notification, particularly following second sampling. Strategies to improve screening performance include carrying out initial capillary sampling as close to 96 hours as possible, introducing 6-day laboratory reporting, and using electronic transmission for communicating repeat requests. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  7. Protocol for Microplastics Sampling on the Sea Surface and Sample Analysis

    Science.gov (United States)

    Kovač Viršek, Manca; Palatinus, Andreja; Koren, Špela; Peterlin, Monika; Horvat, Petra; Kržan, Andrej

    2016-01-01

    Microplastic pollution in the marine environment is a scientific topic that has received increasing attention over the last decade. The majority of scientific publications address microplastic pollution of the sea surface. The protocol below describes the methodology for sampling, sample preparation, separation and chemical identification of microplastic particles. A manta net fixed on an »A frame« attached to the side of the vessel was used for sampling. Microplastic particles caught in the cod end of the net were separated from samples by visual identification and use of stereomicroscopes. Particles were analyzed for their size using an image analysis program and for their chemical structure using ATR-FTIR and micro FTIR spectroscopy. The described protocol is in line with recommendations for microplastics monitoring published by the Marine Strategy Framework Directive (MSFD) Technical Subgroup on Marine Litter. This written protocol with video guide will support the work of researchers that deal with microplastics monitoring all over the world. PMID:28060297

  8. Dissecting the pathobiology of altered MRI signal in amyotrophic lateral sclerosis: A post mortem whole brain sampling strategy for the integration of ultra-high-field MRI and quantitative neuropathology.

    Science.gov (United States)

    Pallebage-Gamarallage, Menuka; Foxley, Sean; Menke, Ricarda A L; Huszar, Istvan N; Jenkinson, Mark; Tendler, Benjamin C; Wang, Chaoyue; Jbabdi, Saad; Turner, Martin R; Miller, Karla L; Ansorge, Olaf

    2018-03-13

Amyotrophic lateral sclerosis (ALS) is a clinically and histopathologically heterogeneous neurodegenerative disorder, in which therapy is hindered by the rapid progression of disease and lack of biomarkers. Magnetic resonance imaging (MRI) has demonstrated its potential for detecting the pathological signature and tracking disease progression in ALS. However, the microstructural and molecular pathological substrate is poorly understood and generally defined histologically. One route to understanding and validating the pathophysiological correlates of MRI signal changes in ALS is to directly compare MRI to histology in post mortem human brains. The article delineates a universal whole brain sampling strategy of pathologically relevant grey matter (cortical and subcortical) and white matter tracts of interest suitable for histological evaluation and direct correlation with MRI. A standardised systematic sampling strategy that was compatible with co-registration of images across modalities was established for regions representing phosphorylated 43-kDa TAR DNA-binding protein (pTDP-43) patterns that were topographically recognisable with defined neuroanatomical landmarks. Moreover, tractography-guided sampling facilitated accurate delineation of white matter tracts of interest. A digital photography pipeline at various stages of sampling and histological processing was established to account for structural deformations that might impact alignment and registration of histological images to MRI volumes. Combined with quantitative digital histology image analysis, the proposed sampling strategy is suitable for routine implementation in a high-throughput manner for acquisition of large-scale histology datasets. Proof of concept was determined in the spinal cord of an ALS patient where multiple MRI modalities (T1, T2, FA and MD) demonstrated sensitivity to axonal degeneration and associated heightened inflammatory changes in the lateral corticospinal tract.

  9. An Optimal Sample Data Usage Strategy to Minimize Overfitting and Underfitting Effects in Regression Tree Models Based on Remotely-Sensed Data

    Directory of Open Access Journals (Sweden)

    Yingxin Gu

    2016-11-01

Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
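The study's accuracy/stability assessment reduces to computing MAD on held-out data over many random train/test replications. The sketch below reproduces that loop on synthetic data, with an ordinary least-squares line standing in for the regression tree model (all data and the 80% split fraction are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic "observed NDVI" (scaled 0-200) driven by one toy predictor.
x = rng.uniform(0, 1, 500)
ndvi = 200.0 * x + rng.normal(0, 5, 500)

def split_mad(train_frac, n_reps=20):
    """Mean and spread of test-set MAD across random train/test replications;
    a least-squares line stands in for the regression tree here."""
    mads = []
    for _ in range(n_reps):
        idx = rng.permutation(500)
        n_train = int(train_frac * 500)
        tr, te = idx[:n_train], idx[n_train:]
        A = np.column_stack([np.ones(n_train), x[tr]])
        coef, *_ = np.linalg.lstsq(A, ndvi[tr], rcond=None)
        pred_te = coef[0] + coef[1] * x[te]
        mads.append(np.mean(np.abs(pred_te - ndvi[te])))
    return float(np.mean(mads)), float(np.std(mads))

mad80, sd80 = split_mad(0.8)   # 80% training, as in the study's optimum
```

Sweeping `train_frac` (and, for a real tree model, the rule-count constraint) and comparing the resulting mean MAD and its variability is exactly the trade-off the study automates.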

  10. A new modeling strategy for third-order fast high-performance liquid chromatographic data with fluorescence detection. Quantitation of fluoroquinolones in water samples.

    Science.gov (United States)

    Alcaráz, Mirta R; Bortolato, Santiago A; Goicoechea, Héctor C; Olivieri, Alejandro C

    2015-03-01

    Matrix augmentation is regularly employed in extended multivariate curve resolution-alternating least-squares (MCR-ALS), as applied to analytical calibration based on second- and third-order data. However, this highly useful concept has almost no correspondence in parallel factor analysis (PARAFAC) of third-order data. In the present work, we propose a strategy to process third-order chromatographic data with matrix fluorescence detection, based on an Augmented PARAFAC model. The latter involves decomposition of a three-way data array augmented along the elution time mode with data for the calibration samples and for each of the test samples. A set of excitation-emission fluorescence matrices, measured at different chromatographic elution times for drinking water samples, containing three fluoroquinolones and uncalibrated interferences, were evaluated using this approach. Augmented PARAFAC exploits the second-order advantage, even in the presence of significant changes in chromatographic profiles from run to run. The obtained relative errors of prediction were ca. 10 % for ofloxacin, ciprofloxacin, and danofloxacin, with a significant enhancement in analytical figures of merit in comparison with previous reports. The results are compared with those furnished by MCR-ALS.

  11. GMOtrack: generator of cost-effective GMO testing strategies.

    Science.gov (United States)

Novak, Petra Kralj; Gruden, Kristina; Morisset, Dany; Lavrač, Nada; Štebih, Dejan; Rotter, Ana; Žel, Jana

    2009-01-01

    Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
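The two-phase cost structure can be made concrete: identification assays run only when a screening assay fires, so each strategy's expected per-sample cost follows from its screening cost and the laboratory-specific prior probability of a positive screen. All costs and probabilities below are hypothetical illustrations, not GMOtrack values:

```python
# Expected per-sample cost of a two-phase (screening -> identification) strategy.
def expected_cost(screen_cost, ident_cost, p_screen_positive):
    """Identification is only performed when the screening phase is positive."""
    return screen_cost + p_screen_positive * ident_cost

# Strategy A: cheap, broad screen that fires often.
cost_a = expected_cost(screen_cost=20.0, ident_cost=120.0, p_screen_positive=0.40)
# Strategy B: pricier, more specific screen that fires rarely.
cost_b = expected_cost(screen_cost=45.0, ident_cost=120.0, p_screen_positive=0.10)

best = min(("A", cost_a), ("B", cost_b), key=lambda t: t[1])
```

Because `p_screen_positive` depends on the local prevalence of GMOs a laboratory sees, the cheapest strategy differs between laboratories, which is the paper's point about laboratory-dependent optima.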

  12. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    Science.gov (United States)

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed, in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction and these residual free energy barriers could greatly abolish the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, which reveals the fact that necessary structural relaxation falls behind the move of the collective variable, may be likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  13. Combining Electrochemical Sensors with Miniaturized Sample Preparation for Rapid Detection in Clinical Samples

    Science.gov (United States)

    Bunyakul, Natinan; Baeumner, Antje J.

    2015-01-01

    Clinical analyses benefit world-wide from rapid and reliable diagnostics tests. New tests are sought with greatest demand not only for new analytes, but also to reduce costs, complexity and lengthy analysis times of current techniques. Among the myriad of possibilities available today to develop new test systems, amperometric biosensors are prominent players—best represented by the ubiquitous amperometric-based glucose sensors. Electrochemical approaches in general require little and often enough only simple hardware components, are rugged and yet provide low limits of detection. They thus offer many of the desirable attributes for point-of-care/point-of-need tests. This review focuses on investigating the important integration of sample preparation with (primarily electrochemical) biosensors. Sample clean up requirements, miniaturized sample preparation strategies, and their potential integration with sensors will be discussed, focusing on clinical sample analyses. PMID:25558994

  14. Spatiotemporally Representative and Cost-Efficient Sampling Design for Validation Activities in Wanglang Experimental Site

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2017-11-01

Spatiotemporally representative Elementary Sampling Units (ESUs) are required for capturing the temporal variations in surface spatial heterogeneity through field measurements. Since inaccessibility often coexists with heterogeneity, a cost-efficient sampling design is mandatory. We proposed a sampling strategy to generate spatiotemporally representative and cost-efficient ESUs based on the conditioned Latin hypercube sampling scheme. The proposed strategy was constrained by multi-temporal Normalized Difference Vegetation Index (NDVI) imagery, and the ESUs were limited to a sampling-feasible region established based on accessibility criteria. A novel criterion based on the Overlapping Area (OA) between the NDVI frequency distribution histogram from the sampled ESUs and that from the entire study area was used to assess the sampling efficiency. A case study in Wanglang National Nature Reserve in China showed that the proposed strategy improves the spatiotemporal representativeness of sampling (mean annual OA = 74.7%) compared to the single-temporally constrained (OA = 68.7%) and the random sampling (OA = 63.1%) strategies. The introduction of the feasible region constraint significantly reduces labour-intensive in-situ characterization requirements, at the expense of about a 9% loss in the spatiotemporal representativeness of the sampling. Our study will support the validation activities in the Wanglang experimental site, providing a benchmark for locating the nodes of automatic observation systems (e.g., LAINet) which need a spatially distributed and temporally fixed sampling design.
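The OA criterion is a histogram overlap: normalize the sample and study-area NDVI histograms and sum the bin-wise minima. A sketch with beta-distributed stand-ins for NDVI (simulated values, not the Wanglang data; bin count is illustrative):

```python
import numpy as np

def overlapping_area(sample_vals, population_vals, bins=20, value_range=(0.0, 1.0)):
    """Overlapping Area between two frequency histograms, each normalized to
    sum to 1; OA = sum over bins of min(sample frequency, population frequency)."""
    h_s, _ = np.histogram(sample_vals, bins=bins, range=value_range)
    h_p, _ = np.histogram(population_vals, bins=bins, range=value_range)
    h_s = h_s / h_s.sum()
    h_p = h_p / h_p.sum()
    return float(np.minimum(h_s, h_p).sum())

rng = np.random.default_rng(7)
population = rng.beta(2, 2, 10_000)      # stand-in for area-wide NDVI values
good_sample = rng.beta(2, 2, 200)        # ESUs drawn like the population
poor_sample = rng.beta(8, 2, 200)        # ESUs biased toward high NDVI

oa_good = overlapping_area(good_sample, population)
oa_poor = overlapping_area(poor_sample, population)
```

OA is 1 for identical distributions and shrinks as the sample's NDVI distribution drifts from the study area's, so a biased (e.g., accessibility-driven) sample scores lower, as in the record's comparison of strategies.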

  15. Soil sampling for environmental contaminants

    International Nuclear Information System (INIS)

    2004-10-01

The Consultants Meeting on Sampling Strategies, Sampling and Storage of Soil for Environmental Monitoring of Contaminants was organized by the International Atomic Energy Agency to evaluate methods for soil sampling in radionuclide monitoring and heavy metal surveys for identification of point contamination (hot particles) in large area surveys and screening experiments. A group of experts was invited by the IAEA to discuss and recommend methods for representative soil sampling for different kinds of environmental issues. The ultimate sinks for all kinds of contaminants dispersed within the natural environment through human activities are sediment and soil. Soil is a particularly difficult matrix for environmental pollution studies as it is generally composed of a multitude of geological and biological materials resulting from weathering and degradation, including particles of different sizes with varying surface and chemical properties. There are many different soil types, categorized according to their content of biological matter, from sandy soils to loam and peat soils, which make analytical characterization even more complicated. Soil sampling for environmental monitoring of pollutants, therefore, is still a matter of debate in the community of soil, environmental and analytical sciences. The scope of the consultants meeting included evaluating existing techniques with regard to their practicability, reliability and applicability to different purposes, developing strategies of representative soil sampling for cases not yet considered by current techniques, and recommending validated techniques applicable to laboratories in developing Member States. This TECDOC includes a critical survey of existing approaches and their feasibility to be applied in developing countries. The report is valuable for radioanalytical laboratories in Member States; it would assist them in quality control and the accreditation process.

  16. Comprehensive Study of Human External Exposure to Organophosphate Flame Retardants via Air, Dust, and Hand Wipes: The Importance of Sampling and Assessment Strategy.

    Science.gov (United States)

    Xu, Fuchao; Giovanoulis, Georgios; van Waes, Sofie; Padilla-Sanchez, Juan Antonio; Papadopoulou, Eleni; Magnér, Jorgen; Haug, Line Småstuen; Neels, Hugo; Covaci, Adrian

    2016-07-19

We compared the human exposure to organophosphate flame retardants (PFRs) via inhalation, dust ingestion, and dermal absorption using different sampling and assessment strategies. Air (indoor stationary air and personal ambient air), dust (floor dust and surface dust), and hand wipes were sampled from 61 participants and their houses. We found that stationary air contains higher levels of ΣPFRs (median = 163 ng/m³, IQR = 161 ng/m³) than personal air (median = 44 ng/m³, IQR = 55 ng/m³), suggesting that the stationary air sample could generate a larger bias for inhalation exposure assessment. Tris(chloropropyl) phosphate isomers (ΣTCPP) accounted for over 80% of ΣPFRs in both stationary and personal air. PFRs were frequently detected in both surface dust (ΣPFRs median = 33 100 ng/g, IQR = 62 300 ng/g) and floor dust (ΣPFRs median = 20 500 ng/g, IQR = 30 300 ng/g). Tris(2-butoxylethyl) phosphate (TBOEP) accounted for 40% and 60% of ΣPFRs in surface and floor dust, respectively, followed by ΣTCPP (30% and 20%, respectively). TBOEP (median = 46 ng, IQR = 69 ng) and ΣTCPP (median = 37 ng, IQR = 49 ng) were also frequently detected in hand wipe samples. For the first time, a comprehensive assessment of human exposure to PFRs via inhalation, dust ingestion, and dermal absorption was conducted with individual personal data rather than reference factors of the general population. Inhalation seems to be the major exposure pathway for ΣTCPP and tris(2-chloroethyl) phosphate (TCEP), while participants had higher exposure to TBOEP and triphenyl phosphate (TPHP) via dust ingestion. Estimated exposure to ΣPFRs was the highest with stationary air inhalation (median = 34 ng·kg bw⁻¹·day⁻¹, IQR = 38 ng·kg bw⁻¹·day⁻¹), followed by surface dust ingestion (median = 13 ng·kg bw⁻¹·day⁻¹, IQR = 28 ng·kg bw⁻¹·day⁻¹), floor dust ingestion and personal air inhalation. The median dermal exposure on hand wipes was 0.32 ng·kg bw⁻¹·day⁻¹.
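The per-pathway exposure estimates in this record are dose arithmetic: concentration × intake rate ÷ body weight. A sketch that reuses the abstract's median air and floor-dust concentrations but pairs them with hypothetical default intake rates and body weight (the study instead used individual personal data):

```python
# Daily intake normalized to body weight, the ng/kg bw/day form used above.
def inhalation_dose(conc_ng_m3, inhalation_m3_day, bw_kg):
    """ng per kg body weight per day from breathing air at a given concentration."""
    return conc_ng_m3 * inhalation_m3_day / bw_kg

def dust_ingestion_dose(conc_ng_g, ingestion_mg_day, bw_kg):
    """ng per kg body weight per day from ingesting settled dust."""
    return conc_ng_g * (ingestion_mg_day / 1000.0) / bw_kg

# Median concentrations from the abstract; intake rates and body weight are
# hypothetical adult defaults, not values from the study.
air = inhalation_dose(conc_ng_m3=163.0, inhalation_m3_day=16.0, bw_kg=70.0)
dust = dust_ingestion_dose(conc_ng_g=20_500.0, ingestion_mg_day=20.0, bw_kg=70.0)
```

With these assumed intake factors the air pathway dominates the floor-dust pathway, echoing the abstract's ranking of stationary-air inhalation above floor dust ingestion.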

  17. Facebook or Twitter?: Effective recruitment strategies for family caregivers.

    Science.gov (United States)

    Herbell, Kayla; Zauszniewski, Jaclene A

    2018-06-01

    This brief details recent recruitment insights from a large all-online study of family caregivers that aimed to develop a measure to assess how family caregivers manage daily stresses. Online recruitment strategies included the use of Twitter and Facebook. Overall, 800 individuals responded to the recruitment strategy; 230 completed all study procedures. The most effective online recruitment strategy for targeting family caregivers was Facebook, yielding 86% of the sample. Future researchers may find social media recruitment methods appealing because they are inexpensive, simple, and efficient methods for obtaining national samples. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Maternal Attachment Strategies and Emotion Regulation with Adolescent Offspring.

    Science.gov (United States)

    Kobak, Roger; And Others

    1994-01-01

    Examined the relationship between mothers' attachment strategies and emotion regulation in a sample of 42 families with 2 high school-aged siblings. Found that mothers with preoccupied strategies had difficulty regulating emotion during conversations with their older teenagers about them leaving home. Mothers with secure strategies perceived their…

  19. Service Quality Strategy: Implementation in Algarve Hotels

    OpenAIRE

    Carlos J. F. Cândido

    2010-01-01

    This chapter addresses the problem of service quality strategy implementation and undertakes a tentative validation of three models. The first focuses on service quality, as a function of quality gaps, while the second and third ones examine strategy implementation. The models aim to help to explain how to implement a service quality strategy that simultaneously avoids quality gaps and resistance to change. Sample data has been collected through questionnaires distributed within the p...

  20. Hemodialysis: stressors and coping strategies.

    Science.gov (United States)

    Ahmad, Muayyad M; Al Nazly, Eman K

    2015-01-01

    End-stage renal disease (ESRD) is an irreversible and life-threatening condition. In Jordan, the number of ESRD patients treated with hemodialysis is on the rise. Identifying stressors and coping strategies used by patients with ESRD may help nurses and health care providers to gain a clearer understanding of the condition of these patients and thus institute effective care planning. The purpose of this study was to identify stressors perceived by Jordanian patients on hemodialysis and the coping strategies they used. A convenience sample of 131 Jordanian men and women was recruited from outpatient dialysis units in four hospitals. Perceived stressors and coping strategies were measured using the Hemodialysis Stressor Scale and the Ways of Coping Scale-Revised. Findings showed that the mean psychosocial stressor score of patients on hemodialysis was higher than the mean physiological stressor score. Among the coping strategies, positive reappraisal had the highest mean and accepting responsibility the lowest. Attention should be focused on the psychosocial stressors of patients on hemodialysis and on helping patients use coping strategies that alleviate these stressors. The most used coping strategy was positive reappraisal, which includes faith and prayer.

  1. Research-Grade 3D Virtual Astromaterials Samples: Novel Visualization of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Benefit Curation, Research, and Education

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K. R.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.

    2017-01-01

    NASA's vast and growing collections of astromaterials are both scientifically and culturally significant, requiring unique preservation strategies that need to be recurrently updated to contemporary technological capabilities and increasing accessibility demands. New technologies have made it possible to advance documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. Our interdisciplinary team has developed a method to create 3D Virtual Astromaterials Samples (VAS) of the existing collections of Apollo Lunar Samples and Antarctic Meteorites. Research-grade 3D VAS will virtually put these samples in the hands of researchers and educators worldwide, increasing accessibility and visibility of these significant collections. With new sample return missions on the horizon, it is of primary importance to develop advanced curation standards for documentation and visualization methodologies.

  2. Stress and coping strategies in a sample of South African managers involved in post-graduate managerial studies

    Directory of Open Access Journals (Sweden)

    Judora J. Spangenberg

    2000-06-01

    To examine the relationships between stress levels and, respectively, stressor appraisal, coping strategies, and biographical variables, 107 managers completed a biographical questionnaire, the Experience of Work and Life Circumstances Questionnaire, and the Coping Strategy Indicator. Significant negative correlations were found between stress levels and appraisal scores on all work-related stressors. An avoidant coping strategy explained significant variance in stress levels in a model that also contained social support-seeking and problem-solving coping strategies. It was concluded that an avoidant coping strategy probably contributed to increased stress levels. Female managers experienced significantly higher stress levels and used a social support-seeking coping strategy significantly more than male managers did.

  3. Direct and long-term detection of gene doping in conventional blood samples.

    Science.gov (United States)

    Beiter, T; Zimmermann, M; Fragasso, A; Hudemann, J; Niess, A M; Bitzer, M; Lauer, U M; Simon, P

    2011-03-01

    The misuse of somatic gene therapy for the purpose of enhancing athletic performance is perceived as a coming threat to the world of sports and categorized as 'gene doping'. This article describes a direct detection approach for gene doping that gives a clear yes-or-no answer based on the presence or absence of transgenic DNA in peripheral blood samples. By exploiting a priming strategy to specifically amplify intronless DNA sequences, we developed PCR protocols allowing the detection of very small amounts of transgenic DNA in genomic DNA samples to screen for six prime candidate genes. Our detection strategy was verified in a mouse model, giving positive signals from minute amounts (20 μl) of blood samples for up to 56 days following intramuscular adeno-associated virus-mediated gene transfer, one of the most likely candidate vector systems to be misused for gene doping. To make our detection strategy amenable for routine testing, we implemented a robust sample preparation and processing protocol that allows cost-efficient analysis of small human blood volumes (200 μl) with high specificity and reproducibility. The practicability and reliability of our detection strategy was validated by a screening approach including 327 blood samples taken from professional and recreational athletes under field conditions.
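    The intronless-priming idea behind this detection strategy can be illustrated with a toy sequence check (all sequences below are invented for illustration, not from the paper): a primer laid across an exon-exon junction matches the intronless transgene cDNA but not the endogenous genomic copy, which carries an intron between the exons.

    ```python
    # Invented toy sequences: the endogenous gene carries an intron between
    # two exons, while the transgene is an intronless cDNA copy.
    exon1, intron, exon2 = "ATGGCTGAAT", "GTAAGTTTTCAG", "CCGGATTACA"
    genomic = exon1 + intron + exon2   # endogenous genomic DNA
    cdna = exon1 + exon2               # transgenic (intronless) DNA

    # A primer spanning the exon-exon junction is specific to the transgene:
    # in genomic DNA the intron interrupts the junction sequence.
    primer = exon1[-6:] + exon2[:6]

    print(primer in cdna, primer in genomic)   # True False
    ```

    A real assay would of course screen actual candidate genes and control for pseudogenes, but the junction-spanning primer is what makes a yes-or-no PCR answer possible.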

  4. Relationship with Parents and Coping Strategies in Adolescents of Lima

    Directory of Open Access Journals (Sweden)

    Tomás P. Caycho

    2016-04-01

    This correlational and comparative study aims to determine the relationship between the perception of the relationship with parents and coping strategies in a sample of 320 students chosen through non-probabilistic sampling: 156 men (48.75%) and 164 women (51.25%). To that end, the Children's Report of Parental Behavior Inventory and the Adolescent Coping Scale were used as data-gathering instruments. The results suggest statistically significant correlations between some dimensions of the perceived relationship with parents and coping strategies in the sample studied. Finally, with regard to the perception of the parenting styles of both mother and father, no significant differences appear between men and women, except for extreme autonomy of the father, on which men score higher than women. No statistically significant differences emerged in the analysis of coping strategies in relation to gender.

  5. An energy-efficient adaptive sampling scheme for wireless sensor networks

    NARCIS (Netherlands)

    Masoum, Alireza; Meratnia, Nirvana; Havinga, Paul J.M.

    2013-01-01

    Wireless sensor networks are new monitoring platforms. To cope with their resource constraints, in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy to reduce number of sampling nodes and/or sampling frequencies while

  6. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference Genetics Selection Evolution 2010, 42:29

    DEFF Research Database (Denmark)

    Ødegård, Jørgen; Meuwissen, Theo HE; Heringstad, Bjørg

    2010-01-01

    Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where...... records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to the sire-dam model). Conclusions The new algorithm to estimate genetic parameters via Gibbs sampling solves the bias problems typically occurring in animal...... individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative...
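    The data-augmentation step at the heart of a threshold-model Gibbs sampler can be sketched in a few lines. This is a minimal toy version with a single liability-scale mean and no pedigree or (co)variance structure, assuming NumPy and SciPy are available; it illustrates the general technique, not the specific algorithm proposed by Ødegård et al.

    ```python
    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(0)

    # Toy binary data: liabilities are N(true_mu, 1); the record is 1 when
    # the liability exceeds the threshold at 0.
    true_mu, n = 0.5, 2000
    y = (rng.normal(true_mu, 1.0, n) > 0.0).astype(int)

    mu, draws = 0.0, []
    for it in range(600):
        # Step 1 (augmentation): draw each latent liability from a normal
        # truncated to the side of the threshold given by the 0/1 record.
        lo = np.where(y == 1, -mu, -np.inf)   # bounds standardized around mu
        hi = np.where(y == 1, np.inf, -mu)
        liab = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
        # Step 2: draw the mean given the liabilities (flat prior, residual
        # variance fixed at 1 to keep the threshold model identifiable).
        mu = rng.normal(liab.mean(), 1.0 / np.sqrt(n))
        if it >= 100:                         # discard burn-in
            draws.append(mu)

    print(round(float(np.mean(draws)), 2))    # recovers a value near true_mu
    ```

    In a full animal model, step 2 becomes a draw of breeding values and variance components given the liabilities; the bias discussed in the record arises from how step 1 interacts with one-record-per-animal data.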

  7. Effective Teaching Strategies for Predicting Reading Growth in English Language Learners

    Science.gov (United States)

    Melgarejo, Melina

    2017-01-01

    The goal of the present study was to examine how effective use of teaching strategies predict reading growth among a sample of English Language Learners. The study specifically examined whether the types of teaching strategies that predict growth in decoding skills also predict growth in comprehension skills. The sample consisted of students in…

  8. Sensemaking Strategies for Ethical Decision-making.

    Science.gov (United States)

    Caughron, Jay J; Antes, Alison L; Stenmark, Cheryl K; Thiel, Chaise E; Wang, Xiaoqian; Mumford, Michael D

    2011-01-01

    The current study uses a sensemaking model and thinking strategies identified in earlier research to examine ethical decision-making. Using a sample of 163 undergraduates, a low fidelity simulation approach is used to study the effects personal involvement (in causing the problem and personal involvement in experiencing the outcomes of the problem) could have on the use of cognitive reasoning strategies that have been shown to promote ethical decision-making. A mediated model is presented which suggests that environmental factors influence reasoning strategies, reasoning strategies influence sensemaking, and sensemaking in turn influences ethical decision-making. Findings were mixed but generally supported the hypothesized model. Interestingly, framing the outcomes of ethically charged situations in terms of more global organizational outcomes rather than personal outcomes was found to promote the use of pro-ethical cognitive reasoning strategies.

  9. Developmental Strategy For Effective Sampling To Detect Possible Nutrient Fluxes In Oligotrophic Coastal Reef Waters In The Caribbean

    Science.gov (United States)

    Mendoza, W. G.; Corredor, J. E.; Ko, D.; Zika, R. G.; Mooers, C. N.

    2008-05-01

    The increasing effort to develop the coastal ocean observing system (COOS) in various institutions has gained momentum due to its high value to climate, environmental, economic, and health issues. The stress contributed by nutrients to the coral reef ecosystem is among the many problems targeted to be resolved using this system. Traditional nutrient sampling has been inadequate to resolve episodic nutrient fluxes in reef regions due to temporal and spatial variability. This paper illustrates a sampling strategy using COOS information to identify areas that need critical investigation. The area investigated is within the Puerto Rico subdomain (60-70°W, 15-20°N), and Caribbean Time Series (CaTS), World Ocean Circulation Experiment (WOCE), Intra-America Sea (IAS) ocean nowcast/forecast system (IASNFS), and other COOS-related online datasets are utilized. Nutrient profile results indicate nitrate is undetectable in the upper 50 m, apparently due to high biological consumption. Nutrients are delivered to Puerto Rico, particularly at the CaTS station, either via a meridional jet formed from opposing cyclonic and anticyclonic eddies or via wind-driven upwelling. The strong vertical fluctuation in the upper 50 m demonstrates a high anomaly in temperature and salinity and a strong cross-correlation signal. High chlorophyll a concentration corresponding to seasonal high nutrient influx coincides with higher precipitation accumulation rates and apparent riverine input from the Amazon and Orinoco Rivers during the summer (August) rather than the winter (February) season. Non-detectability of nutrients in the upper 50 m reflects poor sampling frequency or the absence of a sufficiently sensitive nutrient analysis method to capture episodic events. Thus, this paper was able to determine the range of depths and concentrations that need to be critically investigated to determine nutrient fluxes, nutrient sources, and climatological factors that can affect nutrient delivery

  10. Generalized interpolative quantum statistics

    International Nuclear Information System (INIS)

    Ramanathan, R.

    1992-01-01

    A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation, achieved through a Bose-counting strategy, predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently.

  11. Vapor pressures and standard molar enthalpies, entropies and Gibbs energies of sublimation of two hexachloro herbicides using a TG unit

    International Nuclear Information System (INIS)

    Vecchio, Stefano

    2010-01-01

    The vapor pressures above solid hexachlorobenzene (HCB) and above both solid and liquid 1,2,3,4,5,6-hexachlorocyclohexane (lindane) were determined in the ranges 332-450 K and 347-429 K, respectively, by measuring the mass-loss rates recorded by thermogravimetry under both isothermal and nonisothermal conditions. The results obtained were compared with those taken from the literature. From the temperature dependence of the vapor pressure derived from the experimental thermogravimetry data, the molar enthalpies of sublimation Δ_cr^g H_m° were selected for HCB and lindane, as well as the molar enthalpy of vaporization Δ_l^g H_m° for lindane only, at the middle of the respective temperature intervals. The melting temperatures and the molar enthalpies of fusion Δ_cr^l H_m°(T_fus) of lindane were measured by differential scanning calorimetry. Finally, the standard molar enthalpies of sublimation Δ_cr^g H_m°(298.15 K) were obtained for both chlorinated compounds at the reference temperature of 298.15 K using the Δ_cr^g H_m°, Δ_l^g H_m° and Δ_cr^l H_m°(T_fus) values, as well as the heat-capacity differences between gas and liquid and between gas and solid, Δ_l^g C_p,m° and Δ_cr^g C_p,m°, respectively, both estimated by applying a group-additivity procedure. From these, the averages of the standard (p° = 0.1 MPa) molar enthalpies, entropies and Gibbs energies of sublimation at 298.15 K were derived.
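    The workflow described here follows the standard treatment: assuming the usual integrated Clausius-Clapeyron relation (a sketch, not the paper's exact regression model), the TG-derived vapor pressures give the sublimation enthalpy at the mid-range temperature ⟨T⟩, and a Kirchhoff-type heat-capacity correction moves it to the reference temperature:

    ```latex
    % Integrated Clausius-Clapeyron fit to the TG vapor-pressure data
    % (A is a fit constant, R the gas constant):
    \ln p = A - \frac{\Delta_{cr}^{g} H_m^{o}(\langle T\rangle)}{R\,T}

    % Kirchhoff-type correction to 298.15 K, using the estimated
    % gas-solid heat-capacity difference:
    \Delta_{cr}^{g} H_m^{o}(298.15\,\mathrm{K}) =
      \Delta_{cr}^{g} H_m^{o}(\langle T\rangle)
      + \Delta_{cr}^{g} C_{p,m}^{o}\,(298.15\,\mathrm{K} - \langle T\rangle)
    ```

    Since Δ_cr^g C_p,m° is negative and 298.15 K lies below ⟨T⟩, the correction raises the enthalpy at the reference temperature, as expected for sublimation.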

  12. Abnormal Returns and Contrarian Strategies

    Directory of Open Access Journals (Sweden)

    Ivana Dall'Agnol

    2003-12-01

    We test the hypothesis that strategies which are long on portfolios of loser stocks and short on portfolios of winner stocks generate abnormal returns in Brazil. This type of evidence for the US stock market was interpreted by De Bondt and Thaler (1985) as reflecting systematic evaluation mistakes caused by investors' overreaction to news related to firm performance. We found evidence of contrarian strategy profitability for horizons from 3 months to 3 years in a sample of stock returns from BOVESPA and SOMA from 1986 to 2000. The strategies are more profitable for shorter horizons. Therefore, there was no trace of the momentum effect found by Jegadeesh and Titman (1993) for the same horizons with US data. There are remaining unexplained positive returns for contrarian strategies after accounting for risk, size, and liquidity. We also found that the strategy's profitability is reduced after the Real Plan, which suggests that the Brazilian stock market became more efficient after inflation stabilization.
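    The long-losers/short-winners construction tested in this record can be sketched as follows. The returns here are synthetic with a built-in reversal, purely to illustrate the portfolio formation; they are not the paper's data or methodology.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic cross-section with a built-in reversal: holding-period
    # returns are negatively related to formation-period returns.
    n_stocks = 100
    past = rng.normal(0.0, 0.10, n_stocks)                   # formation period
    future = -0.3 * past + rng.normal(0.0, 0.05, n_stocks)   # holding period

    order = np.argsort(past)
    losers, winners = order[:10], order[-10:]   # bottom and top deciles

    # Contrarian portfolio: long the losers, short the winners, equal weights.
    strategy_return = future[losers].mean() - future[winners].mean()
    print(strategy_return > 0)   # the built-in reversal makes the spread positive
    ```

    A momentum strategy is the mirror image (long winners, short losers); which one earns the spread depends on whether returns revert or persist over the chosen horizon, which is exactly what the paper measures.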

  13. Quasi-Phase Diagrams at Air/Oil Interfaces and Bulk Oil Phases for Crystallization of Small-Molecular Semiconductors by Adjusting Gibbs Adsorption.

    Science.gov (United States)

    Watanabe, Satoshi; Ohta, Takahisa; Urata, Ryota; Sato, Tetsuya; Takaishi, Kazuto; Uchiyama, Masanobu; Aoyama, Tetsuya; Kunitake, Masashi

    2017-09-12

    The temperature and concentration dependencies of the crystallization of two small-molecular semiconductors were clarified by constructing quasi-phase diagrams at air/oil interfaces and in bulk oil phases. A quinoidal quaterthiophene derivative with four alkyl chains (QQT(CN)4) in 1,1,2,2-tetrachloroethane (TCE) and a thienoacene derivative with two alkyl chains (C8-BTBT) in o-dichlorobenzene were used. The apparent crystal nucleation temperature (T_n) and dissolution temperature (T_d) of the molecules were determined based on optical microscopy examination in closed glass capillaries and open dishes during slow cooling and heating processes, respectively. T_n and T_d were considered estimates of the critical temperatures for nuclear formation and crystal growth, respectively. The T_n values of QQT(CN)4 and C8-BTBT at the air/oil interfaces were higher than those in the bulk oil phases, whereas the T_d values at the air/oil interfaces were almost the same as those in the bulk oil phases. These Gibbs adsorption phenomena were attributed to the solvophobic effect of the alkyl chain moieties. The temperature range between T_n and T_d corresponds to suitable supercooling conditions for ideal crystal growth based on the suppression of nucleation. The T_n values at the water/oil and oil/glass interfaces did not shift compared with those of the bulk phases, indicating that adsorption did not occur at the hydrophilic interfaces. Promotion and inhibition of nuclear formation for crystal growth of the semiconductors were thus achieved at the air/oil and hydrophilic interfaces, respectively.

  14. Peers Influence Mathematics Strategy Use in Early Elementary School

    Science.gov (United States)

    Carr, Martha; Barned, Nicole; Otumfuor, Beryl

    2016-01-01

    This study examined the impact of performance goals on arithmetic strategy use, and how same-sex peer groups contributed to the selection of strategies used by first-graders. It was hypothesized that gender differences in strategy use are a function of performance goals and the influence of same-sex peers. Using a sample of 75 first grade…

  15. Reading Skills and Strategies: Assessing Primary School Students’ Awareness in L1 and EFL Strategy Use

    Directory of Open Access Journals (Sweden)

    Evdokimos Aivazoglou

    2014-09-01

    The present study was designed and conducted to assess primary school students' awareness of strategy use in GL1 (Greek as a first language) and EFL (English as a foreign language) and to investigate the relations between reported reading strategy use in the first (L1) and foreign language (FL). The sample (455 students attending the fifth and sixth grades of primary schools in Northern Greece) was first categorized into skilled and less skilled L1 and EFL readers through screening reading comprehension tests, one in L1 and one in FL, before filling in the reading strategy questionnaires. The findings revealed participants' preference for "problem solving" strategies, with "global strategies" coming next. Girls proved to be more aware of their reading strategy use, while boys reported more frequent use in both languages. Also, skilled readers were found to use reading strategies more effectively and appeared more flexible in transferring strategies from L1 to FL compared with less skilled readers.

  16. Message strategies in direct-to-consumer pharmaceutical advertising: a content analysis using Taylor's six-segment message strategy wheel.

    Science.gov (United States)

    Tsai, Wan-Hsiu Sunny; Lancaster, Alyse R

    2012-01-01

    This exploratory study applies Taylor's (1999) six-segment message strategy wheel to direct-to-consumer (DTC) pharmaceutical television commercials to understand message strategies adopted by pharmaceutical advertisers to persuade consumers. A convenience sample of 96 DTC commercial campaigns was analyzed. The results suggest that most DTC drug ads used a combination approach, providing consumers with medical and drug information while simultaneously appealing to the viewer's ego-related needs and desires. In contrast to ration and ego strategies, other approaches including routine, acute need, and social are relatively uncommon while sensory was the least common message strategy. Findings thus recognized the educational value of DTC commercials.

  17. Human Life History Strategies.

    Science.gov (United States)

    Chua, Kristine J; Lukaszewski, Aaron W; Grant, DeMond M; Sng, Oliver

    2017-01-01

    Human life history (LH) strategies are theoretically regulated by developmental exposure to environmental cues that ancestrally predicted LH-relevant world states (e.g., risk of morbidity-mortality). Recent modeling work has raised the question of whether the association of childhood family factors with adult LH variation arises via (i) direct sampling of external environmental cues during development and/or (ii) calibration of LH strategies to internal somatic condition (i.e., health), which itself reflects exposure to variably favorable environments. The present research tested between these possibilities through three online surveys involving a total of over 26,000 participants. Participants completed questionnaires assessing components of self-reported environmental harshness (i.e., socioeconomic status, family neglect, and neighborhood crime), health status, and various LH-related psychological and behavioral phenotypes (e.g., mating strategies, paranoia, and anxiety), modeled as a unidimensional latent variable. Structural equation models suggested that exposure to harsh ecologies had direct effects on latent LH strategy as well as indirect effects on latent LH strategy mediated via health status. These findings suggest that human LH strategies may be calibrated to both external and internal cues and that such calibrational effects manifest in a wide range of psychological and behavioral phenotypes.

  19. Surface reconstruction through poisson disk sampling.

    Directory of Open Access Journals (Sweden)

    Wenguang Hou

    This paper intends to generate the approximate Voronoi diagram in the geodesic metric for unbiased samples selected from the original points. The mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy works around the obstacle that geodesic distances among neighboring points are sensitive to the definition of nearest neighbors. The reconstructed model is thus a level-of-detail representation of the original points, and our main motivation is to deal with redundant scattered points. In the implementation, Poisson disk sampling is used to select seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy for selecting seeds. The behavior of the method is investigated and accuracy evaluations are performed. Experimental results show the proposed method is reliable and effective.
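    The seed-selection step can be sketched with the simplest form of Poisson disk sampling, greedy dart throwing; this is a generic illustration of the technique (Euclidean distance standing in for the geodesic metric used on real scan data), not the paper's implementation.

    ```python
    import random

    def poisson_disk(points, r):
        """Greedy dart throwing: accept a candidate only if it lies at
        least r from every previously accepted seed."""
        kept = []
        for p in points:
            if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r
                   for q in kept):
                kept.append(p)
        return kept

    random.seed(0)
    cloud = [(random.random(), random.random()) for _ in range(2000)]
    seeds = poisson_disk(cloud, r=0.1)

    # The seeds form a well-spaced subset: far fewer points, no two closer
    # than r - the redundancy reduction the abstract describes.
    print(len(seeds) < len(cloud))
    ```

    Adaptive reconstruction corresponds to letting r vary over the surface, so that feature-rich regions keep more seeds than flat ones.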

  20. The influence of marketing mix strategy on price in the home industry Jenang "Mirah" Ponorogo

    Directory of Open Access Journals (Sweden)

    Prasetiyani Ika Saputri

    2017-08-01

    The aims of this study were to determine the marketing mix strategy of the home industry Jenang "MIRAH" in Ponorogo and to determine the pricing of its products. Marketing mix strategy is one factor in price. The sample of 39 people was obtained through saturated (census) sampling. Simple linear regression gave Y = 43.477 + 0.558X, meaning that a one-unit increase in the marketing mix strategy score raises the predicted price score by 0.558, other factors held constant. The t-value obtained for the marketing mix strategy variable (X) is 9.440 with a significance level of 0.000. Because 9.440 > 1.68488 and 0.000 < 0.05, i.e., the t-value exceeds the table value and the significance level is below 0.05, the research hypothesis rejects Ho and accepts Ha. The R² of 0.670 indicates that 67% of the variation in price is explained by the marketing mix strategy, while the remaining 33% is influenced by other factors not examined. Conclusion: the marketing mix strategy affects price in the home industry Jenang "MIRAH" in Ponorogo.
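    As a quick arithmetic check on the fit reported in this record (reading its decimal commas as points: Y = 43.477 + 0.558·X), the slope is the change in predicted price score per one-point change in the marketing-mix score. The helper name below is ours, for illustration only.

    ```python
    # Reported simple linear regression (decimal commas read as points).
    def predict(x, intercept=43.477, slope=0.558):
        """Predicted price score for a given marketing-mix score x."""
        return intercept + slope * x

    # A one-point increase in X moves the prediction by the slope, 0.558.
    print(round(predict(10.0) - predict(9.0), 3))   # 0.558
    ```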