Geometric and Texture Inpainting by Gibbs Sampling
Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads
2007-01-01
In this paper we use the well-known FRAME (Filters, Random Fields and Maximum Entropy) model for inpainting. We introduce a temperature term in the learned FRAME Gibbs distribution. By sampling at different temperatures from the FRAME Gibbs distribution, different contents of the image are reconstructed. We propose...... a two-step method for inpainting using FRAME. First the geometric structure of the image is reconstructed by sampling from a cooled Gibbs distribution, then the stochastic component is reconstructed by sampling from a heated Gibbs distribution. Both steps in the reconstruction process are necessary...
Gibbs sampling on large lattice with GMRF
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions occur naturally when latent Gaussian fields are associated with category fields obtained by discrete simulation methods such as multipoint, sequential indicator and object-based simulation. The latent Gaussians are often used in data assimilation and history-matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distribution at any point without computing and inverting the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be computed efficiently by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range and the GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than in the finite case studied in the literature. In the truncated Gaussian case, however, we show that short-scale correlation is quickly restored, and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
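The coding-set updating described above can be sketched for the simplest case: a checkerboard partition of a toroidal Gaussian MRF, where no two same-colour sites are neighbours and each colour can therefore be updated simultaneously. The precision structure Q = kappa*I - beta*W (W the 4-neighbour adjacency) and the parameter values are illustrative assumptions, not the paper's model:

```python
import numpy as np

def checkerboard_gibbs_gmrf(n, beta, kappa, sweeps, rng):
    """Block Gibbs sampler for a zero-mean GMRF on an n x n torus with
    precision Q = kappa*I - beta*W; requires 4*|beta| < kappa so that Q
    is diagonally dominant.  Each conditional is
    x_ij | neighbours ~ N(beta * (sum of 4 neighbours) / kappa, 1/kappa)."""
    x = np.zeros((n, n))
    ii, jj = np.indices((n, n))
    black = (ii + jj) % 2 == 0            # coding sets: the two colours
    cond_sd = 1.0 / np.sqrt(kappa)        # conditional std dev of a site
    for _ in range(sweeps):
        for mask in (black, ~black):
            # Sum of the four neighbours via shifts (a convolution).
            nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                  + np.roll(x, 1, 1) + np.roll(x, -1, 1))
            mean = beta * nb / kappa
            noise = rng.standard_normal((n, n))
            # All sites of one colour are updated simultaneously.
            x = np.where(mask, mean + cond_sd * noise, x)
    return x
```

For example, `checkerboard_gibbs_gmrf(32, 0.2, 1.0, 200, np.random.default_rng(0))` returns one lattice realization with positive nearest-neighbour correlation.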
Inverse Gaussian model for small area estimation via Gibbs sampling
We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...
Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough
Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten
2013-01-01
We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different sizes of networks, we find that the mixing ability decreases drastically with the network size, clearly indicating a need...
Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou
2010-01-01
Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....
Near-Optimal Detection in MIMO Systems using Gibbs Sampling
Hansen, Morten; Hassibi, Babak; Dimakis, Georgios Alexandros
2009-01-01
In this paper we study a Markov Chain Monte Carlo (MCMC) Gibbs sampler for solving the integer least-squares problem. In digital communication the problem is equivalent to performing Maximum Likelihood (ML) detection in Multiple-Input Multiple-Output (MIMO) systems. While the use of MCMC methods...... sampler provides a computationally efficient way of achieving approximate ML detection in MIMO systems having a huge number of transmit and receive dimensions. In fact, they further suggest that the Markov chain is rapidly mixing. Thus, it has been observed that even in cases where ML detection using, e...
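A minimal sketch of a Gibbs sampler for the integer least-squares problem with BPSK symbols s in {-1,+1}^n, targeting a tempered distribution exp(-||y - Hs||^2 / (2*alpha2)). The temperature parameter and the best-candidate bookkeeping are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def mcmc_mimo_detect(H, y, alpha2, iters, rng):
    """Gibbs sampler over BPSK symbol vectors s in {-1,+1}^n targeting
    exp(-||y - H s||^2 / (2*alpha2)); keeps the best candidate visited."""
    n = H.shape[1]
    s = rng.choice([-1.0, 1.0], size=n)
    best = s.copy()
    best_cost = float(np.sum((y - H @ s) ** 2))
    for _ in range(iters):
        for i in range(n):                    # systematic scan over symbols
            cost = {}
            for v in (-1.0, 1.0):             # cost of each choice for s[i]
                s[i] = v
                cost[v] = float(np.sum((y - H @ s) ** 2))
            # Conditional probability of s[i] = +1 under the tempered target.
            p_plus = 1.0 / (1.0 + np.exp((cost[1.0] - cost[-1.0]) / (2.0 * alpha2)))
            s[i] = 1.0 if rng.random() < p_plus else -1.0
            if cost[s[i]] < best_cost:
                best, best_cost = s.copy(), cost[s[i]]
    return best, best_cost
```

At low temperature (small alpha2) the chain concentrates near the ML solution; on a trivial noiseless channel it recovers the transmitted symbols exactly.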
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree-structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.
Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach
Nielsen, Morten; Lundegaard, Claus; Worning, Peder
2004-01-01
Prediction of which peptides will bind a specific major histocompatibility complex (MHC) constitutes an important step in identifying potential T-cell epitopes suitable as vaccine candidates. MHC class II binding peptides have a broad length distribution complicating such predictions. Thus......, identifying the correct alignment is a crucial part of identifying the core of an MHC class II binding motif. In this context, we wish to describe a novel Gibbs motif sampler method ideally suited for recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates...
Priede, Imants G.; Billett, David S. M.; Brierley, Andrew S.; Hoelzel, A. Rus; Inall, Mark; Miller, Peter I.; Cousins, Nicola J.; Shields, Mark A.; Fujii, Toyonobu
2013-12-01
The ECOMAR project investigated photosynthetically-supported life on the North Mid-Atlantic Ridge (MAR) between the Azores and Iceland focussing on the Charlie-Gibbs Fracture Zone area in the vicinity of the sub-polar front where the North Atlantic Current crosses the MAR. Repeat visits were made to four stations at 2500 m depth on the flanks of the MAR in the years 2007-2010; a pair of northern stations at 54°N in cold water north of the sub-polar front and southern stations at 49°N in warmer water influenced by eddies from the North Atlantic Current. At each station an instrumented mooring was deployed with current meters and sediment traps (100 and 1000 m above the sea floor) to sample downward flux of particulate matter. The patterns of water flow, fronts, primary production and export flux in the region were studied by a combination of remote sensing and in situ measurements. Sonar, tow nets and profilers sampled pelagic fauna over the MAR. Swath bathymetry surveys across the ridge revealed sediment-covered flat terraces parallel to the axis of the MAR with intervening steep rocky slopes. Otter trawls, megacores, baited traps and a suite of tools carried by the R.O.V. Isis including push cores, grabs and a suction device collected benthic fauna. Video and photo surveys were also conducted using the SHRIMP towed vehicle and the R.O.V. Isis. Additional surveying and sampling by landers and R.O.V. focussed on the summit of a seamount (48°44′N, 28°10′W) on the western crest of the MAR between the two southern stations.
Inverse Gaussian model for small area estimation via Gibbs sampling
For example, MacGibbon and Tomberlin (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...
Global, exact cosmic microwave background data analysis using Gibbs sampling
Wandelt, Benjamin D.; Larson, David L.; Lakshminarayanan, Arun
2004-01-01
We describe an efficient and exact method that enables global Bayesian analysis of cosmic microwave background (CMB) data. The method reveals the joint posterior density (or likelihood for flat priors) of the power spectrum C_l and the CMB signal. Foregrounds and instrumental parameters can be simultaneously inferred from the data. The method allows the specification of a wide range of foreground priors. We explicitly show how to propagate the non-Gaussian dependency structure of the C_l posterior through to the posterior density of the parameters. If desired, the analysis can be coupled to theoretical (cosmological) priors and can yield the posterior density of cosmological parameter estimates directly from the time-ordered data. The method does not hinge on special assumptions about the survey geometry or noise properties. It is based on a Monte Carlo approach and hence parallelizes trivially. No trace or determinant evaluations are necessary. The feasibility of this approach rests on the ability to solve the systems of linear equations which arise. These are of the same size and computational complexity as the map-making equations. We describe a preconditioned conjugate gradient technique that solves this problem and demonstrate in a numerical example that the computational time required for each Monte Carlo sample scales as n_p^(3/2) with the number of pixels n_p. We use our method to analyze the data from the Differential Microwave Radiometer on the Cosmic Background Explorer and explore the non-Gaussian joint posterior density of the C_l in several projections.
Nan, Ning; Chen, Qi; Wang, Yu; Zhai, Xu; Yang, Chuan-Ce; Cao, Bin; Chong, Tie
2017-10-01
To explore the disturbed molecular functions and pathways in clear cell renal cell carcinoma (ccRCC) using Gibbs sampling. Gene expression data of ccRCC samples and adjacent non-tumor renal tissues were obtained from a publicly available database. Molecular functions of the genes with changed expression in ccRCC were then classified according to the Gene Ontology (GO) project, and these molecular functions were converted into Markov chains. A Markov chain Monte Carlo (MCMC) algorithm was implemented to perform posterior inference and identify probability distributions of molecular functions in Gibbs sampling. Differentially expressed molecular functions were selected at posterior values greater than 0.95, and genes appearing in at least 5 differentially expressed molecular functions were defined as pivotal genes. Functional analysis was employed to explore the pathways of the pivotal genes and their strongly co-regulated genes. In this work, we obtained 396 molecular functions, and 13 of them were differentially expressed. Oxidoreductase activity showed the highest posterior value. Gene composition analysis identified 79 pivotal genes, and survival analysis indicated that these pivotal genes could be used as a strong independent predictor of poor prognosis in patients with ccRCC. Pathway analysis identified one pivotal pathway: oxidative phosphorylation. We identified the differentially expressed molecular functions and the pivotal pathway in ccRCC using Gibbs sampling. The results could be considered as potential signatures for early detection and therapy of ccRCC. Copyright © 2017 Elsevier Ltd. All rights reserved.
Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling
Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus
2012-01-01
Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed form description of the prior...... is available, which is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be considered. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed form expression of the prior can be given....... We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed form description of the prior exists. First, we lay out the theoretical background...
Simultaneous alignment and clustering of peptide data using a Gibbs sampling approach
Andreatta, Massimo; Lund, Ole; Nielsen, Morten
2013-01-01
Motivation: Proteins recognizing short peptide fragments play a central role in cellular signaling. As a result of high-throughput technologies, peptide-binding protein specificities can be studied using large peptide libraries at dramatically lower cost and time. Interpretation of such large...... peptide datasets, however, is a complex task, especially when the data contain multiple receptor binding motifs, and/or the motifs are found at different locations within distinct peptides. Results: The algorithm presented in this article, based on Gibbs sampling, identifies multiple specificities...... of unaligned peptide datasets of variable length. Example applications described in this article include mixtures of binders to different MHC class I and class II alleles, distinct classes of ligands for SH3 domains and sub-specificities of the HLA-A*02:01 molecule. Availability: The Gibbs clustering method...
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much.
He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher
2016-01-01
Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance.
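The two scan orders can be illustrated on a toy target, a standard bivariate normal with correlation rho, whose full conditionals are available in closed form; the example model is ours, not the paper's:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, scan, rng):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    scan="systematic" sweeps coordinates 0,1 in a fixed order every step;
    scan="random" updates one uniformly chosen coordinate per step."""
    x = np.zeros(2)
    sd = np.sqrt(1.0 - rho ** 2)              # conditional std dev
    out = np.empty((n_samples, 2))
    for t in range(n_samples):
        order = (0, 1) if scan == "systematic" else (int(rng.integers(2)),)
        for i in order:
            # Full conditional: x_i | x_{1-i} ~ N(rho * x_{1-i}, 1 - rho^2).
            x[i] = rho * x[1 - i] + sd * rng.standard_normal()
        out[t] = x
    return out
```

Both scans leave the target invariant; the paper's point is that their mixing times can nonetheless differ by more than a logarithmic factor on adversarial models.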
Zohreh Yousefi
2016-11-01
Introduction: Small ruminants, especially native breed types, play an important role in the livelihoods of a considerable part of the human population in the tropics, from socio-economic aspects. Therefore, an integrated attempt in terms of management and genetic improvement to enhance production is of crucial importance. Knowledge of genetic variation and co-variation among traits is required both for the design of effective sheep breeding programs and for the accurate prediction of genetic progress from these programs. Body weight and growth traits are among the economically important traits in sheep production, especially in Iran, where lamb sale is the main source of income for sheep breeders while other products are of secondary importance. Although mutton is the most important source of protein in Iran, meat production from sheep does not cover the increasing consumer demand. On the other hand, increasing sheep numbers to increase meat production has been limited by the low quality and quantity of forage range. Therefore, enhancing meat production should be achieved by selecting the animals that have maximum genetic merit as next-generation parents. To design an efficient improvement program and genetic evaluation system for maximizing response to selection for economically important traits, accurate estimates of the genetic parameters and the genetic relationships between the traits are necessary. Studies of various sheep breeds have shown that both direct and maternal genetic influences are of importance for lamb growth. When growth traits are included in the breeding goal, both direct and maternal genetic effects should be taken into account in order to achieve optimum genetic progress. The objective of this study was to estimate the variance components and heritability for growth traits, by fitting six animal models in the Sangsari sheep using Gibbs sampling. Material and Method: Sangsari is a fat-tailed and relatively small-sized breed of sheep
Statistical sampling strategies
Andres, T.H.
1987-01-01
Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
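Of the three selection methods listed, Latin hypercube sampling is the least obvious; a minimal sketch of it (points uniform on [0,1]^d, with exactly one point in each of the n equal-width strata of every dimension) might look like:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Latin hypercube sample of n points in [0,1]^d: each of the n
    equal-width strata of every dimension contains exactly one point."""
    # One independent random permutation of stratum indices per dimension.
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    # Jitter each point uniformly within its assigned stratum.
    return (strata + rng.random((n, d))) / n
```

Compared with simple random sampling of the same size, this guarantees that every marginal range of each parameter is covered, which is why it is popular for probabilistic systems assessment runs.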
The younger Gibbs grew up in the liberal and academic atmosphere at Yale, where .... research in the premier European universities at the time when a similar culture ...tion in obscure journals, Gibbs' work did not receive wide recognition in ...
Spent nuclear fuel sampling strategy
Bergmann, D.W.
1995-01-01
This report proposes a strategy for sampling the spent nuclear fuel (SNF) stored in the 105-K Basins (105-K East and 105-K West). This strategy will support decisions concerning the path forward for SNF disposition efforts in the following areas: (1) SNF isolation activities such as repackaging/overpacking to a newly constructed staging facility; (2) conditioning processes for fuel stabilization; and (3) interim storage options. This strategy was developed without following the Data Quality Objective (DQO) methodology. It is, however, intended to augment the SNF project DQOs. The SNF sampling strategy is derived by evaluating the current storage condition of the SNF and the factors that affected SNF corrosion/degradation.
DARIUSZ Piwczynski
2013-03-01
The research was carried out on 4,030 Polish Merino ewes born in the years 1991-2001, kept in 15 flocks from the Pomorze and Kujawy region. Fertility of ewes in subsequent reproduction seasons was analysed with the use of multiple logistic regression. The research showed a statistically significant influence of flock, year of birth, age of dam, and the flock × year of birth interaction on ewe fertility. In order to estimate the genetic parameters, the Gibbs sampling method was applied, using univariate animal models, both linear and threshold. Heritability estimates of fertility, depending on the model, ranged from 0.067 to 0.104, whereas the estimates of repeatability were, respectively, 0.076 and 0.139. The obtained genetic parameters were then used to estimate the breeding values of the animals for the controlled trait (Best Linear Unbiased Prediction) using linear and threshold models. The obtained animal breeding value rankings in respect of the same trait with the use of linear and threshold models were strongly correlated with each other (rs = 0.972). Negative genetic trends of fertility (0.01-0.08% per year) were found.
Gibbs-non-Gibbs transitions and vector-valued integration
Zuijlen, van W.B.
2016-01-01
This thesis consists of two distinct topics. The first part of the thesis considers Gibbs-non-Gibbs transitions. Gibbs measures describe the macroscopic state of a system of a large number of components that is in equilibrium. It may happen that when the system is transformed, for example, by
Soil sampling strategies: Evaluation of different approaches
De Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia
2008-01-01
The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies
A logistic regression estimating function for spatial Gibbs point processes
Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege
We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...
Quantum Gibbs Samplers: The Commuting Case
Kastoryano, Michael J.; Brandão, Fernando G. S. L.
2016-06-01
We analyze the problem of preparing quantum Gibbs states of lattice spin Hamiltonians with local and commuting terms on a quantum computer and in nature. Our central result is an equivalence between the behavior of correlations in the Gibbs state and the mixing time of the semigroup which drives the system to thermal equilibrium (the Gibbs sampler). We introduce a framework for analyzing the correlation and mixing properties of quantum Gibbs states and quantum Gibbs samplers, which is rooted in the theory of non-commutative L_p spaces. We consider two distinct classes of Gibbs samplers, one of them being the well-studied Davies generator modelling the dynamics of a system due to weak coupling with a large Markovian environment. We show that their spectral gap is independent of system size if, and only if, a certain strong form of clustering of correlations holds in the Gibbs state. Therefore every Gibbs state of a commuting Hamiltonian that satisfies clustering of correlations in this strong sense can be prepared efficiently on a quantum computer. As concrete applications of our formalism, we show that for every one-dimensional lattice system, or for systems in lattices of any dimension at temperatures above a certain threshold, the Gibbs samplers of commuting Hamiltonians are always gapped, giving an efficient way of preparing the associated Gibbs states on a quantum computer.
Sampling strategies for indoor radon investigations
Prichard, H.M.
1983-01-01
Recent investigations prompted by concern about the environmental effects of residential energy conservation have produced many accounts of indoor radon concentrations far above background levels. In many instances time-normalized annual exposures exceeded the 4 WLM per year standard currently used for uranium mining. Further investigations of indoor radon exposures are necessary to judge the extent of the problem and to estimate the practicality of health effects studies. A number of trends can be discerned as more indoor surveys are reported. It is becoming increasingly clear that local geological factors play a major, if not dominant, role in determining the distribution of indoor radon concentrations in a given area. Within a given locale, indoor radon concentrations tend to be log-normally distributed, and sample means differ markedly from one region to another. The appreciation of geological factors and the general log-normality of radon distributions will improve the accuracy of population dose estimates and facilitate the design of preliminary health effects studies. The relative merits of grab samples, short and long term integrated samples, and more complicated dose assessment strategies are discussed in the context of several types of epidemiological investigations. A new passive radon sampler with a 24 hour integration time is described and evaluated as a tool for pilot investigations.
An Introduction to the DA-T Gibbs Sampler for the Two-Parameter Logistic (2PL) Model and Beyond
Gunter Maris
2005-01-01
The DA-T Gibbs sampler is proposed by Maris and Maris (2002) as a Bayesian estimation method for a wide variety of Item Response Theory (IRT) models. The present paper provides an expository account of the DA-T Gibbs sampler for the 2PL model. However, the scope is not limited to the 2PL model. It is demonstrated how the DA-T Gibbs sampler for the 2PL may be used to build, quite easily, Gibbs samplers for other IRT models. Furthermore, the paper contains a novel, intuitive derivation of the Gibbs sampler and could be read as part of a graduate course on sampling.
Enzyme Catalysis and the Gibbs Energy
Ault, Addison
2009-01-01
Gibbs-energy profiles are often introduced during the first semester of organic chemistry, but are less often presented in connection with enzyme-catalyzed reactions. In this article I show how the Gibbs-energy profile corresponds to the characteristic kinetics of a simple enzyme-catalyzed reaction. (Contains 1 figure and 1 note.)
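The link between observed kinetics and the Gibbs-energy profile can be made concrete with the Eyring equation, dG_act = R*T*ln(kB*T/(h*k)), assuming a transmission coefficient of 1. This is a standard textbook relation, not a formula taken from the article:

```python
import math

R = 8.314462618            # gas constant, J mol^-1 K^-1
KB = 1.380649e-23          # Boltzmann constant, J K^-1
H_PLANCK = 6.62607015e-34  # Planck constant, J s

def activation_gibbs_energy(k, T):
    """Eyring equation solved for the Gibbs energy of activation
    (transmission coefficient assumed to be 1):
        dG_act = R * T * ln(kB * T / (h * k)),  in J/mol.
    k is the first-order rate constant in s^-1, T in kelvin."""
    return R * T * math.log(KB * T / (H_PLANCK * k))
```

A slower rate constant corresponds to a higher barrier on the profile, which is exactly what a Gibbs-energy diagram for a catalyzed versus an uncatalyzed reaction conveys.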
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice and on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling and quota sampling. Random sampling (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
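As a sketch of the probability-sampling methods mentioned above, simple random and stratified random selection from a population list might look like this (the function names and stratification scheme are ours, for illustration):

```python
import numpy as np

def simple_random_sample(population, k, rng):
    """Probability sampling: every unit has the same inclusion probability."""
    idx = rng.choice(len(population), size=k, replace=False)
    return [population[i] for i in idx]

def stratified_random_sample(population, strata, k_per_stratum, rng):
    """Draw k_per_stratum units uniformly at random within each stratum,
    guaranteeing every stratum is represented in the sample."""
    out = []
    for label in sorted(set(strata)):
        members = [p for p, s in zip(population, strata) if s == label]
        idx = rng.choice(len(members), size=k_per_stratum, replace=False)
        out.extend(members[i] for i in idx)
    return out
```

Non-probability methods (purposive, convenience, quota) have no such chance mechanism, which is why the two families must not be conflated in a manuscript.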
Optimal sampling strategy for data mining
Ghaffar, A.; Shahbaz, M.; Mahmood, W.
2013-01-01
Modern technologies such as the Internet, corporate intranets, data warehouses, ERP systems, satellites, digital sensors, embedded systems and mobile networks all generate such massive amounts of data that it is becoming very difficult to analyze and understand all of it, even using data mining tools. Huge datasets are a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms become slower and analysis less interactive. Sampling can be a solution: using a fraction of the computing resources, sampling can often provide the same level of accuracy. The sampling process requires care because many factors are involved in determining the correct sample size. The approach proposed in this paper tries to solve this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the 'sufficient sample size', which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
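The abstract does not state which statistical formula the authors use; as a hedged illustration, Cochran's classic sample-size formula for a proportion, with a finite-population correction, is a common choice for this kind of calculation and can be sketched as:

```python
import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05):
    """Cochran's formula: sample size to estimate a proportion p within
    margin of error e, at the confidence level implied by z-score z."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

def finite_population_correction(n0, N):
    """Adjust the infinite-population size n0 for a finite population of N units."""
    return math.ceil(n0 / (1 + (n0 - 1) / N))

n0 = cochran_sample_size()                      # 385 at 95% confidence, 5% margin
n = finite_population_correction(n0, 10_000)    # 371 for a dataset of 10,000 rows
```

With p = 0.5 (the worst case) the formula gives the largest, most conservative sample size; the correction shrinks it when the dataset itself is not huge.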
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
Psychoanalytic Interpretation of Blueberries by Susan Gibb
Maya Zalbidea Paniagua
2014-06-01
Blueberries (2009) by Susan Gibb, published in the ELO (Electronic Literature Organization), invites the reader to travel inside the protagonist's mind to discover real and imaginary experiences, examining notions of gender, sex, body and identity of a traumatised woman. This article explores the verbal and visual modes in this digital short fiction, following semiotic patterns as well as interpreting the psychological states that are expressed through poetical and technological components. A comparative study of the consequences of trauma in the protagonist will be developed, including psychoanalytic theories by Sigmund Freud, Jacques Lacan and the feminist psychoanalysts Melanie Klein and Bracha Ettinger. The reactions of the protagonist will be studied: loss of reality, hallucinations and the Electra complex, as well as the rise of defence mechanisms and her use of artistic creativity as a healing therapy. The interactivity of the hypermedia, with its multiple paths and endings, will be analyzed as a literary strategy that increases the reader's capacity to empathize with the speaker.
Sampling strategies for millipedes (Diplopoda), centipedes ...
At present considerable effort is being made to document and describe invertebrate diversity as part of numerous biodiversity conservation research projects. In order to determine diversity, rapid and effective sampling and estimation procedures are required and these need to be standardized for a particular group of ...
Validated sampling strategy for assessing contaminants in soil stockpiles
Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel
2005-01-01
Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result, representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. At the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years. This resulted in a database containing the analytical results of 2570 soil stockpiles. At the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed resulted in a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
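The core idea above, that combining a large number of random increments lets a single composite sample represent a heterogeneous stockpile, can be illustrated with a small simulation; the contamination numbers below are invented for illustration, not taken from the study.

```python
import random
import statistics

def composite_sample_mean(stockpile, n_increments, seed=0):
    """Estimate the stockpile mean from one composite of random increments."""
    rng = random.Random(seed)
    increments = rng.sample(stockpile, n_increments)
    return statistics.mean(increments)

rng = random.Random(42)
# Hypothetical heterogeneous stockpile: mostly clean soil plus a contaminated fraction.
stockpile = ([rng.gauss(10, 2) for _ in range(950)]
             + [rng.gauss(200, 30) for _ in range(50)])
true_mean = statistics.mean(stockpile)
est = composite_sample_mean(stockpile, 100, seed=1)
```

A single 100-increment composite lands near the true mean, and averaging over repeated composites converges to it, which is the behaviour the validated strategy relies on.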
Inferring the Gibbs state of a small quantum system
Rau, Jochen
2011-01-01
Gibbs states are familiar from statistical mechanics, yet their use is not limited to that domain. For instance, they also feature in the maximum entropy reconstruction of quantum states from incomplete measurement data. Outside the macroscopic realm, however, estimating a Gibbs state is a nontrivial inference task, due to two complicating factors: the proper set of relevant observables might not be evident a priori; and whenever data are gathered from a small sample only, the best estimate for the Lagrange parameters is invariably affected by the experimenter's prior bias. I show how the two issues can be tackled with the help of Bayesian model selection and Bayesian interpolation, respectively, and illustrate the use of these Bayesian techniques with a number of simple examples.
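The maximum-entropy Gibbs form referred to in this abstract is standard and can be written explicitly. With relevant observables $G_i$ and Lagrange parameters $\lambda_i$, the state is

```latex
\rho \;=\; \frac{1}{Z(\lambda)}\,\exp\!\Big(-\sum_i \lambda_i G_i\Big),
\qquad
Z(\lambda) \;=\; \operatorname{tr}\exp\!\Big(-\sum_i \lambda_i G_i\Big),
```

where the $\lambda_i$ are fixed by matching the measured expectation values, $\operatorname{tr}(\rho\, G_i) = g_i$. The two inference problems discussed in the abstract correspond to choosing the set $\{G_i\}$ (Bayesian model selection) and estimating the $\lambda_i$ from small-sample data (Bayesian interpolation).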
Notes on the development of the Gibbs potential; Sur le developpement du potentiel de Gibbs
Bloch, C; Dominicis, C de [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires
1959-07-01
A short account is given of some recent work on the perturbation expansion of the Gibbs potential of quantum statistical mechanics. (author)
Evolution algebras generated by Gibbs measures
Rozikov, Utkir A.; Tian, Jianjun Paul
2009-03-01
In this article we study algebraic structures of function spaces defined by graphs and state spaces equipped with Gibbs measures, by associating evolution algebras. We give a constructive description of associating evolution algebras to the function spaces (cell spaces) defined by graphs and state spaces and a Gibbs measure μ. For finite graphs we find some evolution subalgebras and other useful properties of the algebras. We obtain a structure theorem for evolution algebras when graphs are finite and connected. We prove that for a fixed finite graph, the function spaces have a unique algebraic structure, since all evolution algebras are isomorphic to each other whichever Gibbs measures are assigned. When graphs are infinite, our construction allows a natural introduction of thermodynamics into the study of several systems of biology, physics and mathematics via the theory of evolution algebras. (author)
Effective sampling strategy to detect food and feed contamination
Bouzembrak, Yamine; Fels, van der Ine
2018-01-01
Sampling plans for food safety hazards are aimed to be used to determine whether a lot of food is contaminated (with microbiological or chemical hazards) or not. One of the components of sampling plans is the sampling strategy. The aim of this study was to compare the performance of three
Chan, M.T.; Herman, G.T.; Levitan, E.
1996-01-01
We demonstrate that (i) classical methods of image reconstruction from projections can be improved upon by considering the output of such a method as a distorted version of the original image and applying a Bayesian approach to estimate the original image from it (based on a model of distortion and on a Gibbs distribution as the prior), and (ii) by selecting an "image-modeling" prior distribution (i.e., one such that a random sample from it is likely to share important characteristics of the images of the application area), one can improve over another Gibbs prior formulated using only pairwise interactions. We illustrate our approach using simulated Positron Emission Tomography (PET) data from realistic brain phantoms. Since algorithm performance ultimately depends on the diagnostic task being performed, we examine a number of different medically relevant figures of merit to give a fair comparison. Based on a training-and-testing evaluation strategy, we demonstrate that statistically significant improvements can be obtained using the proposed approach.
Sampling strategy to develop a primary core collection of apple ...
2010-01-11
Jan 11, 2010 ... Physiology and Molecular Biology for Fruit, Tree, Beijing 100193, China. ... analyzed on genetic diversity to ensure their represen- .... strategy, cluster and random sampling. .... on isozyme data―A simulation study, Theor.
Generalization of Gibbs Entropy and Thermodynamic Relation
Park, Jun Chul
2010-01-01
In this paper, we extend Gibbs's approach of quasi-equilibrium thermodynamic processes, and calculate the microscopic expression of entropy for general non-equilibrium thermodynamic processes. Also, we analyze the formal structure of thermodynamic relation in non-equilibrium thermodynamic processes.
Finite Cycle Gibbs Measures on Permutations of
Armendáriz, Inés; Ferrari, Pablo A.; Groisman, Pablo; Leonardi, Florencia
2015-03-01
We consider Gibbs distributions on the set of permutations of associated to the Hamiltonian , where is a permutation and is a strictly convex potential. Call finite-cycle those permutations composed by finite cycles only. We give conditions on ensuring that for large enough temperature there exists a unique infinite volume ergodic Gibbs measure concentrating mass on finite-cycle permutations; this measure is equal to the thermodynamic limit of the specifications with identity boundary conditions. We construct as the unique invariant measure of a Markov process on the set of finite-cycle permutations that can be seen as a loss-network, a continuous-time birth and death process of cycles interacting by exclusion, an approach proposed by Fernández, Ferrari and Garcia. Define as the shift permutation . In the Gaussian case , we show that for each , given by is an ergodic Gibbs measure equal to the thermodynamic limit of the specifications with boundary conditions. For a general potential , we prove the existence of Gibbs measures when is bigger than some -dependent value.
Gibbs equilibrium averages and Bogolyubov measure
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose systems is considered. We show that Gibbs equilibrium averages of Bose operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure.
Illustrating Enzyme Inhibition Using Gibbs Energy Profiles
Bearne, Stephen L.
2012-01-01
Gibbs energy profiles have great utility as teaching and learning tools because they present students with a visual representation of the energy changes that occur during enzyme catalysis. Unfortunately, most textbooks divorce discussions of traditional kinetic topics, such as enzyme inhibition, from discussions of these same topics in terms of…
Evaluation of sampling strategies to estimate crown biomass
Krishna P Poudel
2015-01-01
Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and evaluate the effect of sample size on the estimates. Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results: Compared to all other methods, stratified sampling with probability-proportional-to-size estimation produced better results when three or six branches per tree were sampled. However, systematic sampling with ratio estimation was the best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced approximately similar results to simple random sampling, but it further decreased RMSE when information on branch diameter is used in the design and estimation phases. Conclusions: Use of
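Four of the six evaluation statistics named in the Methods are straightforward error summaries and can be sketched directly; the biomass-sampled quantities depend on the field data and are omitted, and the sign convention (predicted minus observed) is an assumption, not stated in the abstract.

```python
import math

def evaluation_stats(observed, predicted):
    """Bias, relative bias (%), RMSE and relative RMSE (%) of predictions."""
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    bias = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_obs = sum(observed) / n
    return {
        "bias": bias,
        "relative_bias_pct": 100 * bias / mean_obs,
        "rmse": rmse,
        "relative_rmse_pct": 100 * rmse / mean_obs,
    }

# Hypothetical branch biomass values (kg), not from the study.
obs = [12.0, 8.5, 15.2, 9.8]
pred = [11.5, 9.0, 14.8, 10.1]
stats = evaluation_stats(obs, pred)
```

Relative versions scale each statistic by the observed mean, which makes strategies comparable across trees of different sizes.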
User-driven sampling strategies in image exploitation
Harvey, Neal; Porter, Reid
2013-12-01
Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.
Efficient sampling of complex network with modified random walk strategies
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies, choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from the classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and path overlap, respectively. Three random walk samplings are applied in the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and the weighted USAir networks, respectively. Then, the major properties of sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. The similar conclusions can be reached with these three random walk strategies. Firstly, the networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of original networks with limited steps. And thirdly, all the degree distributions of the subnets are slightly biased to the high degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, some obvious characters like the larger clustering coefficient and the fluctuation of degree distribution are reproduced well by these random walk strategies.
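A minimal sketch of the no-retracing idea, under my own simplified reading of the NR strategy (never step straight back to the previous node unless the walker is at a dead end); the toy graph is invented for illustration.

```python
import random

def no_retracing_walk(adj, start, steps, seed=0):
    """NR-style random walk: avoid returning to the previous node
    whenever any other neighbour is available."""
    rng = random.Random(seed)
    walk = [start]
    prev = None
    for _ in range(steps):
        current = walk[-1]
        # Exclude the previous node unless it is the only neighbour.
        choices = [v for v in adj[current] if v != prev] or adj[current]
        nxt = rng.choice(choices)
        prev = current
        walk.append(nxt)
    return walk

# Small toy graph given as adjacency lists.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
walk = no_retracing_walk(adj, start=0, steps=20)
```

Compared with a classical random walk, this reduces path overlap, which is the effect the abstract credits for the better clustering-coefficient estimates.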
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
A Bayesian sampling strategy for hazardous waste site characterization
Skalski, J.R.
1987-12-01
Prior knowledge based on historical records or physical evidence often suggests the existence of a hazardous waste site. Initial surveys may provide additional or even conflicting evidence of site contamination. This article presents a Bayes sampling strategy that allocates sampling at a site using this prior knowledge. This sampling strategy minimizes the environmental risks of missing chemical or radionuclide hot spots at a waste site. The environmental risk is shown to be proportional to the size of the undetected hot spot or inversely proportional to the probability of hot spot detection. 12 refs., 2 figs
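The inverse relationship between environmental risk and detection probability can be illustrated with the textbook hit-probability calculation for random point sampling; this sketch deliberately omits the Bayesian prior allocation that is the article's actual contribution, and the areas used are invented.

```python
import math

def detection_probability(n_samples, hotspot_area, site_area):
    """Probability that at least one of n random point samples falls
    inside a hot spot of the given area (simple random sampling)."""
    p_miss_one = 1 - hotspot_area / site_area
    return 1 - p_miss_one ** n_samples

def samples_needed(target_prob, hotspot_area, site_area):
    """Smallest n achieving at least the target detection probability."""
    p_hit = hotspot_area / site_area
    return math.ceil(math.log(1 - target_prob) / math.log(1 - p_hit))

p = detection_probability(50, hotspot_area=10.0, site_area=1000.0)  # ~0.395
n = samples_needed(0.95, hotspot_area=10.0, site_area=1000.0)       # 299
```

Because risk is inversely proportional to detection probability, prior knowledge that raises the assumed hot-spot probability in part of the site justifies concentrating samples there, which is the allocation problem the Bayes strategy solves.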
An efficient estimator for Gibbs random fields
Janžura, Martin
2014-01-01
Vol. 50, No. 6 (2014), pp. 883-895. ISSN 0023-5954. R&D Projects: GA ČR (CZ) GBP402/12/G097. Institutional support: RVO:67985556. Keywords: Gibbs random field; efficient estimator; empirical estimator. Subject RIV: BA - General Mathematics. Impact factor: 0.541 (2014). http://library.utia.cas.cz/separaty/2015/SI/janzura-0441325.pdf
Mars Sample Return - Launch and Detection Strategies for Orbital Rendezvous
Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.
2011-01-01
This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/cache rover in 2018, an orbiter with an Earth return vehicle in 2022, and a fetch rover and ascent vehicle in 2024. Strategies are presented to launch the sample into a coplanar orbit with the Orbiter which facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which provide multiple launch opportunities with similar geometries for detection and rendezvous.
Mars Sample Return: Launch and Detection Strategies for Orbital Rendezvous
Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.
2011-01-01
This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/caching rover in 2018, an Earth return orbiter in 2022, and a fetch rover with ascent vehicle in 2024. Strategies are presented to launch the sample into a nearly coplanar orbit with the Orbiter which would facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which would provide multiple launch opportunities with similar geometries for detection and rendezvous.
Sampling strategy for estimating human exposure pathways to consumer chemicals
Papadopoulou, Eleni; Padilla-Sanchez, Juan A.; Collins, Chris D.; Cousins, Ian T.; Covaci, Adrian; de Wit, Cynthia A.; Leonards, Pim E.G.; Voorspoels, Stefan; Thomsen, Cathrine; Harrad, Stuart; Haug, Line S.
2016-01-01
Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway.
Chapter 2: Sampling strategies in forest hydrology and biogeochemistry
Roger C. Bales; Martha H. Conklin; Branko Kerkez; Steven Glaser; Jan W. Hopmans; Carolyn T. Hunsaker; Matt Meadows; Peter C. Hartsough
2011-01-01
Many aspects of forest hydrology have been based on accurate but not necessarily spatially representative measurements, reflecting the measurement capabilities that were traditionally available. Two developments are bringing about fundamental changes in sampling strategies in forest hydrology and biogeochemistry: (a) technical advances in measurement capability, as is...
Just Another Gibbs Additive Modeler: Interfacing JAGS and mgcv
Simon N. Wood
2016-12-01
The BUGS language offers a very flexible way of specifying complex statistical models for the purposes of Gibbs sampling, while its JAGS variant offers very convenient R integration via the rjags package. However, including smoothers in JAGS models can involve some quite tedious coding, especially for multivariate or adaptive smoothers. Further, if an additive smooth structure is required then some care is needed in order to centre smooths appropriately and to find appropriate starting values. The R package mgcv implements a wide range of smoothers, all in a manner appropriate for inclusion in JAGS code, and automates centring and other smooth setup tasks. The purpose of this note is to describe an interface between mgcv and JAGS, based around an R function, jagam, which takes a generalized additive model (GAM) as specified in mgcv and automatically generates the JAGS model code and data required for inference about the model via Gibbs sampling. Although the auto-generated JAGS code can be run as is, the expectation is that the user would wish to modify it in order to add complex stochastic model components readily specified in JAGS. A simple interface is also provided for visualisation and further inference about the estimated smooth components using standard mgcv functionality. The methods described here will be unnecessarily inefficient if all that is required is fully Bayesian inference about a standard GAM, rather than the full flexibility of JAGS. In that case the BayesX package would be more efficient.
Novel strategies for sample preparation in forensic toxicology.
Samanidou, Victoria; Kovatsi, Leda; Fragou, Domniki; Rentifis, Konstantinos
2011-09-01
This paper provides a review of novel strategies for sample preparation in forensic toxicology. The review initially outlines the principle of each technique, followed by sections addressing each class of abused drugs separately. The novel strategies currently reviewed focus on the preparation of various biological samples for the subsequent determination of opiates, benzodiazepines, amphetamines, cocaine, hallucinogens, tricyclic antidepressants, antipsychotics and cannabinoids. According to our experience, these analytes are the most frequently responsible for intoxications in Greece. The applications of techniques such as disposable pipette extraction, microextraction by packed sorbent, matrix solid-phase dispersion, solid-phase microextraction, polymer monolith microextraction, stir bar sorptive extraction and others, which are rapidly gaining acceptance in the field of toxicology, are currently reviewed.
Adaptive sampling strategies with high-throughput molecular dynamics
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) does not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the system's configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
A proposal of optimal sampling design using a modularity strategy
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations, in terms of spatial distribution and number, is called sampling design, and it has traditionally been addressed in the context of model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
Hampton, Jerrad; Doostan, Alireza
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
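The "natural sampling distribution" for the Hermite basis is the standard Gaussian. A quick Monte Carlo sanity check of orthonormality under that measure can be sketched as follows (the three-term recurrence and the 1/sqrt(n!) normalization are standard; the sample size is an arbitrary choice):

```python
import math
import random

def hermite_orthonormal(n, x):
    """Probabilists' Hermite polynomial He_n(x), normalized so that
    E[He_i(X) He_j(X)] = delta_ij for X ~ N(0, 1)."""
    if n == 0:
        return 1.0
    h0, h1 = 1.0, x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0   # He_{k+1} = x He_k - k He_{k-1}
    return h1 / math.sqrt(math.factorial(n))

random.seed(1)
N, P = 200_000, 4
# Empirical Gram matrix under the natural (Gaussian) sampling distribution.
G = [[0.0] * P for _ in range(P)]
for _ in range(N):
    x = random.gauss(0.0, 1.0)
    vals = [hermite_orthonormal(n, x) for n in range(P)]
    for i in range(P):
        for j in range(P):
            G[i][j] += vals[i] * vals[j] / N

# G should be close to the 4x4 identity: the basis is orthonormal in expectation.
```

Coherence-optimal sampling replaces the Gaussian draws above with an MCMC chain targeting a density proportional to the maximum of the squared basis functions; that machinery is beyond this sketch.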
Sampling and analyte enrichment strategies for ambient mass spectrometry.
Li, Xianjiang; Ma, Wen; Li, Hongmei; Ai, Wanpeng; Bai, Yu; Liu, Huwei
2018-01-01
Ambient mass spectrometry provides great convenience for fast screening, and has shown promising potential in analytical chemistry. However, its relatively low sensitivity seriously restricts its practical utility in trace compound analysis. In this review, we summarize the sampling and analyte enrichment strategies coupled with nine representative modes of ambient mass spectrometry (desorption electrospray ionization, paper spray ionization, wooden-tip spray ionization, probe electrospray ionization, coated blade spray ionization, direct analysis in real time, desorption corona beam ionization, dielectric barrier discharge ionization, and atmospheric-pressure solids analysis probe) that have dramatically increased the detection sensitivity. We believe that these advances will promote routine use of ambient mass spectrometry. Graphical abstract: Scheme of sampling strategies for ambient mass spectrometry.
A Geostatistical Approach to Indoor Surface Sampling Strategies
Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg
1990-01-01
Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical...... framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized and particulate surface...... contamination, sampled from small areas on a table, have been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method...
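Kriging, mentioned here as the minimum mean square error estimator, amounts to solving a small linear system for the interpolation weights. A minimal 1-D ordinary-kriging sketch, assuming an exponential covariance with made-up sill and range parameters (the unbiasedness constraint forces the weights to sum to 1):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging_weights(xs, x0, sill=1.0, rng=5.0):
    """Ordinary kriging weights at location x0 from 1-D sample locations xs,
    using an exponential covariance C(h) = sill * exp(-h / rng)."""
    n = len(xs)
    cov = lambda a, b_: sill * math.exp(-abs(a - b_) / rng)
    # Kriging system augmented with a Lagrange multiplier enforcing sum(w) = 1.
    A = [[cov(xs[i], xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(xs[i], x0) for i in range(n)] + [1.0]
    return solve(A, b)[:n]

w = ordinary_kriging_weights([0.0, 1.0, 4.0, 7.0], 2.5)
print(w, sum(w))  # unbiasedness: the weights sum to 1
```

The kriging estimate at x0 is then the weighted sum of the observed contamination values; in practice the covariance model and its parameters are fitted from the data, as the abstract describes.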
Sampling and analysis strategies to support waste form qualification
Westsik, J.H. Jr.; Pulsipher, B.A.; Eggett, D.L.; Kuhn, W.L.
1989-04-01
As part of the waste acceptance process, waste form producers will be required to (1) demonstrate that their glass waste form will meet minimum specifications, (2) show that the process can be controlled to consistently produce an acceptable waste form, and (3) provide documentation that the waste form produced meets specifications. Key to the success of these endeavors is adequate sampling and chemical and radiochemical analyses of the waste streams from the waste tanks through the process to the final glass product. This paper suggests sampling and analysis strategies for meeting specific statistical objectives of (1) detection of compositions outside specification limits, (2) prediction of final glass product composition, and (3) estimation of composition in process vessels for both reporting and guiding succeeding process steps. 2 refs., 1 fig., 3 tabs
Külske, C
2003-01-01
We derive useful general concentration inequalities for functions of Gibbs fields in the uniqueness regime. We also consider expectations of random Gibbs measures that depend on an additional disorder field, and prove concentration w.r.t. the disorder field. Both fields are assumed to be in the uniqueness regime, allowing in particular for non-independent disorder fields. The modification of the bounds compared to the case of an independent field can be expressed in terms of constants that resemble the Dobrushin contraction coefficient, and are explicitly computable. On the basis of these inequalities, we obtain bounds on the deviation of a diffraction pattern created by random scatterers located on a general discrete point set in the Euclidean space, restricted to a finite volume. Here we also allow for thermal dislocations of the scatterers around their equilibrium positions. Extending recent results for independent scatterers, we give a universal upper bound on the probability of a deviation of the random sc...
A brief critique of the Adam-Gibbs entropy model
Dyre, J. C.; Hecksher, Tina; Niss, Kristine
2009-01-01
This paper critically discusses the entropy model proposed by Adam and Gibbs in 1965 for the dramatic temperature dependence of glass-forming liquids' average relaxation time, one of the most influential models of the past four decades. We discuss the Adam-Gibbs model's theoretical...
Perspectives on land snails - sampling strategies for isotopic analyses
Kwiecien, Ola; Kalinowski, Annika; Kamp, Jessica; Pellmann, Anna
2017-04-01
Since the seminal works of Goodfriend (1992), several substantial studies confirmed a relation between the isotopic composition of land snail shells (d18O, d13C) and environmental parameters like precipitation amount, moisture source, temperature and vegetation type. This relation, however, is not straightforward and site dependent. The choice of sampling strategy (discrete or bulk sampling) and cleaning procedure (several methods can be used, but comparison of their effects in an individual shell has not yet been achieved) further complicate the shell analysis. The advantage of using snail shells as an environmental archive lies in the snails' limited mobility, and therefore an intrinsic aptitude of recording local and site-specific conditions. Also, snail shells are often found at dated archaeological sites. An obvious drawback is that shell assemblages rarely make up a continuous record, and a single shell is only a snapshot of the environmental setting at a given time. Shells from archaeological sites might represent a dietary component and cooking would presumably alter the isotopic signature of aragonite material. Consequently, a proper sampling strategy is of great importance and should be adjusted to the scientific question. Here, we compare and contrast different sampling approaches using modern shells collected in Morocco, Spain and Germany. The bulk shell approach (fine-ground material) yields information on mean environmental parameters within the life span of analyzed individuals. However, despite homogenization, replicate measurements of bulk shell material returned results with a variability greater than analytical precision (up to 2‰ for d18O, and up to 1‰ for d13C), calling for caution when analyzing only single individuals. Horizontal high-resolution sampling (single drill holes along growth lines) provides insights into the amplitude of seasonal variability, while vertical high-resolution sampling (multiple drill holes along the same growth line
Sampling strategy for estimating human exposure pathways to consumer chemicals
Eleni Papadopoulou
2016-03-01
Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway. The selected groups of chemicals to be studied are consumer chemicals whose production and use are currently in a state of transition: per- and polyfluorinated alkyl substances (PFASs), traditional and “emerging” brominated flame retardants (BFRs and EBFRs), organophosphate esters (OPEs) and phthalate esters (PEs). Information about human exposure to these contaminants is needed due to existing data gaps on human exposure intakes from multiple exposure pathways and relationships between internal and external exposure. Indoor environment, food and biological samples were collected from 61 participants and their households in the Oslo area (Norway) on two consecutive days, during winter 2013-14. Air, dust, hand wipes, and duplicate diet (food and drink) samples were collected as indicators of external exposure, and blood, urine, blood spots, hair, nails and saliva as indicators of internal exposure. A food diary, food frequency questionnaire (FFQ) and indoor environment questionnaire were also implemented. Approximately 2000 samples were collected in total and participant views on their experiences of this campaign were collected via questionnaire. While 91% of our participants were positive about future participation in a similar project, some tasks were viewed as problematic. Completing the food diary and collection of duplicate food/drink portions were the tasks most frequently reported as “hard”/“very hard”. Nevertheless, a strong positive correlation between the reported total mass of food/drinks in the food record and the total weight of the food/drinks in the collection bottles was observed, being an indication of accurate performance.
Evaluating sampling strategies for larval cisco (Coregonus artedi)
Myers, J.T.; Stockwell, J.D.; Yule, D.L.; Black, J.A.
2008-01-01
To improve our ability to assess larval cisco (Coregonus artedi) populations in Lake Superior, we conducted a study to compare several sampling strategies. First, we compared density estimates of larval cisco concurrently captured in surface waters with a 2 x 1-m paired neuston net and a 0.5-m (diameter) conical net. Density estimates obtained from the two gear types were not significantly different, suggesting that the conical net is a reasonable alternative to the more cumbersome and costly neuston net. Next, we assessed the effect of tow pattern (sinusoidal versus straight tows) to examine if propeller wash affected larval density. We found no effect of propeller wash on the catchability of larval cisco. Given the availability of global positioning systems, we recommend sampling larval cisco using straight tows to simplify protocols and facilitate straightforward measurements of volume filtered. Finally, we investigated potential trends in larval cisco density estimates by sampling four time periods during the light period of a day at individual sites. Our results indicate no significant trends in larval density estimates during the day. We conclude estimates of larval cisco density across space are not confounded by time at a daily timescale. Well-designed, cost effective surveys of larval cisco abundance will help to further our understanding of this important Great Lakes forage species.
Larsen, Erik Huusfeldt; Löschner, Katrin
2014-01-01
microscopy (TEM) proved to be necessary for troubleshooting of results obtained from AFFF-LS-ICP-MS. Aqueous and enzymatic extraction strategies were tested for thorough sample preparation, aiming at degrading the sample matrix and liberating the AgNPs from chicken meat into liquid suspension. The resulting...... AFFF-ICP-MS fractograms, which corresponded to the enzymatic digests, showed a major nano-peak (about 80 % recovery of AgNPs spiked to the meat) plus new smaller peaks that eluted close to the void volume of the fractograms. Small, but significant shifts in retention time of AFFF peaks were observed...... for the meat sample extracts and the corresponding neat AgNP suspension, and rendered sizing by way of calibration with AgNPs as sizing standards inaccurate. In order to gain further insight into the sizes of the separated AgNPs, or their possible dissolved state, fractions of the AFFF eluate were collected...
Unifying hydrotropy under Gibbs phase rule.
Shimizu, Seishi; Matubayasi, Nobuyuki
2017-09-13
The task of elucidating the mechanism of solubility enhancement using hydrotropes has been hampered by the wide variety of phase behaviour that hydrotropes can exhibit, encompassing near-ideal aqueous solution, self-association, micelle formation, and micro-emulsions. Instead of taking a field guide or encyclopedic approach to classify hydrotropes into different molecular classes, we take a rational approach aiming at constructing a unified theory of hydrotropy based upon the first principles of statistical thermodynamics. Achieving this aim can be facilitated by the two key concepts: (1) the Gibbs phase rule as the basis of classifying the hydrotropes in terms of the degrees of freedom and the number of variables to modulate the solvation free energy; (2) the Kirkwood-Buff integrals to quantify the interactions between the species and their relative contributions to the process of solubilization. We demonstrate that the application of the two key concepts can in principle be used to distinguish the different molecular scenarios at work under apparently similar solubility curves observed from experiments. In addition, a generalization of our previous approach to solutes beyond dilution reveals the unified mechanism of hydrotropy, driven by a strong solute-hydrotrope interaction which overcomes the apparent per-hydrotrope inefficiency due to hydrotrope self-clustering.
Inverse Gaussian model for small area estimation via Gibbs sampling
Department of Decision Sciences and MIS, Concordia University, Montréal, Québec ... method by application to household income survey data, comparing it against the usual lognormal ... pensions, superannuation and annuities and other ...
Hypothesis testing in genetic linkage analysis via Gibbs sampling
2010-12-06
Dec 6, 2010 ... The existing theory assumes asymptotic normality for score statistics, which is violated on the boundary ... A Monte Carlo approach is proposed to overcome this problem ... probability, that is, the probability that an individual with ...
Gibbs perturbations of a two-dimensional gauge field
Petrova, E.N.
1981-01-01
Small Gibbs perturbations of random fields have been investigated up to now for a few initial fields only. Among them there are independent fields, Gaussian fields and some others. The possibility for the investigation of Gibbs modifications of a random field depends essentially on the existence of good estimates for semiinvariants of this field. This is the reason why the class of random fields for which the investigation of Gibbs perturbations with arbitrary potential of bounded support is possible is rather small. The author takes as initial a well-known model: a two-dimensional gauge field. (Auth.)
A. de Verneil
2018-04-01
Research cruises to quantify biogeochemical fluxes in the ocean require taking measurements at stations lasting at least several days. A popular experimental design is the quasi-Lagrangian drifter, often mounted with in situ incubations or sediment traps that follow the flow of water over time. After initial drifter deployment, the ship tracks the drifter for continuing measurements that are supposed to represent the same water environment. An outstanding question is how to best determine whether this is true. During the Oligotrophy to UlTra-oligotrophy PACific Experiment (OUTPACE) cruise, from 18 February to 3 April 2015 in the western tropical South Pacific, three separate stations of long duration (five days) over the upper 500 m were conducted in this quasi-Lagrangian sampling scheme. Here we present physical data to provide context for these three stations and to assess whether the sampling strategy worked, i.e., that a single body of water was sampled. After analyzing tracer variability and local water circulation at each station, we identify water layers and times where the drifter risks encountering another body of water. While almost no realization of this sampling scheme will be truly Lagrangian, due to the presence of vertical shear, the depth-resolved observations during the three stations show that most layers sampled sufficiently homogeneous physical environments during OUTPACE. By directly addressing the concerns raised by these quasi-Lagrangian sampling platforms, a protocol of best practices can begin to be formulated so that future research campaigns include the complementary datasets and analyses presented here to verify the appropriate use of the drifter platform.
A sampling strategy to establish existing plant configuration baselines
Buchanan, L.P.
1995-01-01
The Department of Energy's Gaseous Diffusion Plants (DOEGDP) are undergoing a Safety Analysis Update Program. As part of this program, critical existing structures are being reevaluated for Natural Phenomena Hazards (NPH) based on the recommendations of UCRL-15910. The Department of Energy has specified that current plant configurations be used in the performance of these reevaluations. This paper presents the process and results of a walkdown program implemented at DOEGDP to establish the current configuration baseline for these existing critical structures for use in subsequent NPH evaluations. These structures are classified as moderate hazard facilities and were constructed in the early 1950s. The process involved a statistical sampling strategy to determine the validity of critical design information as represented on the original design drawings such as member sizes, orientation, connection details and anchorage. A floor load inventory of the dead load of the equipment, both permanently attached and spare, was also performed as well as a walkthrough inspection of the overall structure to identify any other significant anomalies.
Reflections on Gibbs: From Critical Phenomena to the Amistad
Kadanoff, Leo P.
2003-03-01
J. Willard Gibbs, the younger, was the first American theorist. He was one of the inventors of statistical physics. His introduction and development of the concepts of phase space, phase transitions, and thermodynamic surfaces was remarkably correct and elegant. These three concepts form the basis of different but related areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. I shall talk about these connections by using concepts suggested by the work of Michael Berry and explicitly put forward by the philosopher Robert Batterman. This viewpoint relates theory-connection to the applied mathematics concepts of asymptotic analysis and singular perturbations. J. Willard Gibbs, the younger, had all his achievements concentrated in science. His father, also J. Willard Gibbs, also a Professor at Yale, had one great achievement that remains unmatched in our day. I shall describe it.
Boltzmann, Gibbs and Darwin-Fowler approaches in parastatistics
Ponczek, R.L.; Yan, C.C.
1976-01-01
Derivations of the equilibrium values of occupation numbers are made using three approaches, namely the 'elementary' approach of Boltzmann, the ensemble method of Gibbs, and that of Darwin and Fowler.
Gibbs phenomenon for dispersive PDEs on the line
Biondini, Gino; Trogdon, Thomas
2014-01-01
We investigate the Cauchy problem for linear, constant-coefficient evolution PDEs on the real line with discontinuous initial conditions (ICs) in the small-time limit. The small-time behavior of the solution near discontinuities is expressed in terms of universal, computable special functions. We show that the leading-order behavior of the solution of dispersive PDEs near a discontinuity of the ICs is characterized by Gibbs-type oscillations and gives exactly the Wilbraham-Gibbs constant.
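The Wilbraham-Gibbs constant referred to here is the sine integral Si(π) = ∫₀^π (sin t)/t dt ≈ 1.851937. A quick numerical check via composite Simpson's rule:

```python
import math

def sine_integral_pi(n=10_000):
    """Wilbraham-Gibbs constant Si(pi) = integral of sin(t)/t over [0, pi],
    computed by composite Simpson's rule (the integrand extends to 1 at t = 0)."""
    f = lambda t: math.sin(t) / t if t else 1.0
    h = math.pi / n
    s = f(0.0) + f(math.pi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0

G = sine_integral_pi()
print(round(G, 6))                 # ~ 1.851937
print(round(2 * G / math.pi, 5))   # ~ 1.17898: relative height of the overshoot peak
```

The ratio 2 Si(π)/π ≈ 1.17898 is the universal factor by which partial reconstructions overshoot a unit jump, independent of the particular dispersive equation.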
Takehisa Yamamoto
Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
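The paper's bootstrap comparison can be mimicked on synthetic clustered data. The sketch below assumes made-up farm-level prevalences (the real study resampled observed isolates) and contrasts the 1-per-farm and 6-per-farm strategies at the same fixed total of 12 samples:

```python
import random

def bootstrap_sd(farm_prev, farms_per_rep, animals_per_farm, reps=3000):
    """SD of the estimated resistance prevalence when sampling
    `farms_per_rep` farms with `animals_per_farm` animals each
    (total sample size is held fixed across strategies)."""
    ests = []
    for _ in range(reps):
        pos = total = 0
        for _ in range(farms_per_rep):
            p = random.choice(farm_prev)       # resample a farm with replacement
            for _ in range(animals_per_farm):  # resample its animals
                pos += random.random() < p
                total += 1
        ests.append(pos / total)
    m = sum(ests) / reps
    return (sum((e - m) ** 2 for e in ests) / reps) ** 0.5

random.seed(42)
# Strongly clustered toy data: farm-level prevalences vary a lot between farms.
farm_prev = [0.05, 0.1, 0.8, 0.9, 0.05, 0.85, 0.1, 0.9, 0.05, 0.8]
sd_1 = bootstrap_sd(farm_prev, farms_per_rep=12, animals_per_farm=1)
sd_6 = bootstrap_sd(farm_prev, farms_per_rep=2, animals_per_farm=6)
print(sd_1 < sd_6)  # spreading 12 samples over 12 farms is more precise
```

The effect is exactly the clustering (design-effect) argument in the abstract: with strong between-farm heterogeneity, concentrating samples on few farms inflates the variance of the prevalence estimate.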
SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS
Sampath Sundaram; Ammani Sivaraman
2010-01-01
In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) for various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic sampling.
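Linear systematic sampling with multiple random starts, as in Gautschi (1957), can be sketched as follows (assuming for simplicity that the interval k = mN/n is an integer; the balanced and modified variants change only how the within-period positions are chosen):

```python
import random

def systematic_sample(N, n, m, rng=random):
    """Linear systematic sampling of n units from population 0..N-1
    using m independent random starts.

    The interval is k = m*N/n; m distinct starts are drawn from the
    first k units and each start is propagated with step k."""
    k = m * N // n
    starts = rng.sample(range(k), m)
    return sorted(s + j * k for s in starts for j in range(N // k))

random.seed(7)
sample = systematic_sample(N=60, n=12, m=3)
print(len(sample))  # 12 units: 3 starts x 4 systematic draws each
```

With m = 1 this reduces to ordinary linear systematic sampling; multiple starts make an unbiased variance estimator possible, which is the motivation for Gautschi's scheme.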
Brus, D.J.; Gruijter, de J.J.
1997-01-01
Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based
Sampling strategies for the analysis of reactive low-molecular weight compounds in air
Henneken, H.
2006-01-01
Within this thesis, new sampling and analysis strategies for the determination of airborne workplace contaminants have been developed. Special focus has been directed towards the development of air sampling methods that involve diffusive sampling. In an introductory overview, the current
Sampling strategies to capture single-cell heterogeneity
Satwik Rajaram; Louise E. Heinrich; John D. Gordan; Jayant Avva; Kathy M. Bonness; Agnieszka K. Witkiewicz; James S. Malter; Chloe E. Atreya; Robert S. Warren; Lani F. Wu; Steven J. Altschuler
2017-01-01
Advances in single-cell technologies have highlighted the prevalence and biological significance of cellular heterogeneity. A critical question is how to design experiments that faithfully capture the true range of heterogeneity from samples of cellular populations. Here, we develop a data-driven approach, illustrated in the context of image data, that estimates the sampling depth required for prospective investigations of single-cell heterogeneity from an existing collection of samples. ...
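The core idea, estimating required sampling depth by repeated subsampling from an existing collection, can be illustrated with a toy version (bimodal synthetic measurements, and the SD of the subsample mean as the stand-in heterogeneity statistic; the authors' actual pipeline works on image-derived single-cell features):

```python
import random

def spread_of_subsample_means(population, n, reps=2000, rng=random):
    """Spread (SD) of the subsample mean at sampling depth n, estimated
    by repeated subsampling without replacement from existing samples."""
    means = []
    for _ in range(reps):
        sub = rng.sample(population, n)
        means.append(sum(sub) / n)
    m = sum(means) / reps
    return (sum((x - m) ** 2 for x in means) / reps) ** 0.5

random.seed(3)
# Heterogeneous 'single-cell' measurements: a bimodal toy population.
population = [random.gauss(1.0, 0.3) for _ in range(500)] + \
             [random.gauss(4.0, 0.5) for _ in range(500)]
# The required depth is the smallest n whose spread dips below a chosen tolerance.
for n in (10, 50, 100, 200, 400):
    print(n, round(spread_of_subsample_means(population, n), 3))
```

Reading off the depth at which the curve crosses a pre-set tolerance gives a prospective sample-size recommendation, which is the data-driven estimate the abstract describes.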
Evaluation of sampling strategies to estimate crown biomass
Krishna P Poudel; Hailemariam Temesgen; Andrew N Gray
2015-01-01
Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire...
Sampling strategies for efficient estimation of tree foliage biomass
Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson
2011-01-01
Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...
Time-dependent generalized Gibbs ensembles in open quantum systems
Lange, Florian; Lenarčič, Zala; Rosch, Achim
2018-04-01
Generalized Gibbs ensembles have been used as powerful tools to describe the steady state of integrable many-particle quantum systems after a sudden change of the Hamiltonian. Here, we demonstrate numerically that they can be used for a much broader class of problems. We consider integrable systems in the presence of weak perturbations which break both integrability and drive the system to a state far from equilibrium. Under these conditions, we show that the steady state and the time evolution on long timescales can be accurately described by a (truncated) generalized Gibbs ensemble with time-dependent Lagrange parameters, determined from simple rate equations. We compare the numerically exact time evolutions of density matrices for small systems with a theory based on block-diagonal density matrices (diagonal ensemble) and a time-dependent generalized Gibbs ensemble containing only a small number of approximately conserved quantities, using the one-dimensional Heisenberg model with perturbations described by Lindblad operators as an example.
WRAP Module 1 sampling strategy and waste characterization alternatives study
Bergeson, C.L.
1994-09-30
The Waste Receiving and Processing Module 1 Facility is designed to examine, process, certify, and ship drums and boxes of solid wastes that have a surface dose equivalent of less than 200 mrem/h. These wastes will include low-level and transuranic wastes that are retrievably stored in the 200 Area burial grounds and facilities in addition to newly generated wastes. Certification of retrievably stored wastes processing in WRAP 1 is required to meet the waste acceptance criteria for onsite treatment and disposal of low-level waste and mixed low-level waste and the Waste Isolation Pilot Plant Waste Acceptance Criteria for the disposal of TRU waste. In addition, these wastes will need to be certified for packaging in TRUPACT-II shipping containers. Characterization of the retrievably stored waste is needed to support the certification process. Characterization data will be obtained from historical records, process knowledge, nondestructive examination nondestructive assay, visual inspection of the waste, head-gas sampling, and analysis of samples taken from the waste containers. Sample characterization refers to the method or methods that are used to test waste samples for specific analytes. The focus of this study is the sample characterization needed to accurately identify the hazardous and radioactive constituents present in the retrieved wastes that will be processed in WRAP 1. In addition, some sampling and characterization will be required to support NDA calculations and to provide an over-check for the characterization of newly generated wastes. This study results in the baseline definition of WRAP 1 sampling and analysis requirements and identifies alternative methods to meet these requirements in an efficient and economical manner.
Statistical sampling strategies for survey of soil contamination
Brus, D.J.
2011-01-01
This chapter reviews methods for selecting sampling locations in contaminated soils for three situations. In the first situation a global estimate of the soil contamination in an area is required. The result of the survey is a number or a series of numbers per contaminant, e.g. the estimated mean
First-Year University Chemistry Textbooks' Misrepresentation of Gibbs Energy
Quilez, Juan
2012-01-01
This study analyzes the misrepresentation of Gibbs energy by college chemistry textbooks. The article reports the way first-year university chemistry textbooks handle the concepts of spontaneity and equilibrium. Problems with terminology are found; confusion arises in the meaning given to ΔG, ΔrG, ΔG°, and…
Virial theorem and Gibbs thermodynamic potential for Coulomb systems
Bobrov, V. B.; Trigger, S. A.
2014-01-01
Using the grand canonical ensemble and the virial theorem, we show that the Gibbs thermodynamic potential of the non-relativistic system of charged particles is uniquely defined by single-particle Green functions of electrons and nuclei. This result is valid beyond the perturbation theory with respect to the interparticle interaction.
Virial theorem and Gibbs thermodynamic potential for Coulomb systems
Bobrov, V. B.; Trigger, S. A.
2013-01-01
Using the grand canonical ensemble and the virial theorem, we show that the Gibbs thermodynamic potential of the non-relativistic system of charged particles is uniquely defined by single-particle Green functions of electrons and nuclei. This result is valid beyond the perturbation theory with respect to the interparticle interaction.
Exploring Fourier Series and Gibbs Phenomenon Using Mathematica
Ghosh, Jonaki B.
2011-01-01
This article describes a laboratory module on Fourier series and Gibbs phenomenon which was undertaken by 32 Year 12 students. It shows how the use of CAS played the role of an "amplifier" by making higher level mathematical concepts accessible to students of year 12. Using Mathematica students were able to visualise Fourier series of…
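The overshoot the article visualizes is easy to reproduce numerically. The sketch below (NumPy rather than the article's Mathematica worksheets) sums the odd-harmonic Fourier series of a unit square wave and records the peak value near the jump: adding terms narrows the overshoot spike but does not reduce its height, which approaches the classical Wilbraham-Gibbs value of about 1.179 (roughly a 9% overshoot of the jump).

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of a unit square wave: (4/pi) * sum sin((2k+1)x)/(2k+1)."""
    s = np.zeros_like(x)
    for k in range(n_terms):
        n = 2 * k + 1
        s += np.sin(n * x) / n
    return 4.0 / np.pi * s

# Evaluate on (0, pi), where the square wave equals +1; the jump is at x = 0.
x = np.linspace(0.001, np.pi - 0.001, 20000)
overshoots = {n: square_wave_partial_sum(x, n).max() for n in (10, 50, 250)}
# The maximum near the discontinuity stays at ~1.179 however many terms are used.
```

Plotting the partial sums over `x` makes the fixed-height, narrowing spike visually obvious, which is the effect the students explored with CAS.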
Thermodynamic fluctuations within the Gibbs and Einstein approaches
Rudoi, Yurii G; Sukhanov, Alexander D
2000-01-01
A comparative analysis of the descriptions of fluctuations in statistical mechanics (the Gibbs approach) and in statistical thermodynamics (the Einstein approach) is given. On this basis solutions are obtained for the Gibbs and Einstein problems that arise in pressure fluctuation calculations for a spatially limited equilibrium (or slightly nonequilibrium) macroscopic system. A modern formulation of the Gibbs approach which allows one to calculate equilibrium pressure fluctuations without making any additional assumptions is presented; to this end the generalized Bogolyubov - Zubarev and Hellmann - Feynman theorems are proved for the classical and quantum descriptions of a macrosystem. A statistical version of the Einstein approach is developed which shows a fundamental difference in pressure fluctuation results obtained within the context of two approaches. Both the 'genetic' relation between the Gibbs and Einstein approaches and the conceptual distinction between their physical grounds are demonstrated. To illustrate the results, which are valid for any thermodynamic system, an ideal nondegenerate gas of microparticles is considered, both classically and quantum mechanically. Based on the results obtained, the correspondence between the micro- and macroscopic descriptions is considered and the prospects of statistical thermodynamics are discussed. (reviews of topical problems)
2007-01-01
This part of ISO 18589 specifies the general requirements, based on ISO 11074 and ISO/IEC 17025, for all steps in the planning (desk study and area reconnaissance) of the sampling and the preparation of samples for testing. It includes the selection of the sampling strategy, the outline of the sampling plan, the presentation of general sampling methods and equipment, as well as the methodology of the pre-treatment of samples adapted to the measurements of the activity of radionuclides in soil. This part of ISO 18589 is addressed to the people responsible for determining the radioactivity present in soil for the purpose of radiation protection. It is applicable to soil from gardens, farmland, urban or industrial sites, as well as soil not affected by human activities. This part of ISO 18589 is applicable to all laboratories regardless of the number of personnel or the range of the testing performed. When a laboratory does not undertake one or more of the activities covered by this part of ISO 18589, such as planning, sampling or testing, the corresponding requirements do not apply. Information is provided on scope, normative references, terms and definitions and symbols, principle, sampling strategy, sampling plan, sampling process, pre-treatment of samples and recorded information. Five annexes inform about selection of the sampling strategy according to the objectives and the radiological characterization of the site and sampling areas, diagram of the evolution of the sample characteristics from the sampling site to the laboratory, example of sampling plan for a site divided in three sampling areas, example of a sampling record for a single/composite sample and example for a sample record for a soil profile with soil description. A bibliography is provided
A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils
Brus, D.J.; Gruijter, de J.J.; Vries, de W.
2010-01-01
A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by
Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.
Blutke, Andreas; Wanke, Rüdiger
2018-03-06
In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical
Baird, A. K.; Castro, A. J.; Clark, B. C.; Toulmin, P., III; Rose, H., Jr.; Keil, K.; Gooding, J. L.
1977-01-01
Ten samples of Mars regolith material (six on Viking Lander 1 and four on Viking Lander 2) have been delivered to the X ray fluorescence spectrometers as of March 31, 1977. An additional six samples at least are planned for acquisition in the remaining Extended Mission (to January 1979) for each lander. All samples acquired are Martian fines from the near surface (less than 6-cm depth) of the landing sites except the latest on Viking Lander 1, which is fine material from the bottom of a trench dug to a depth of 25 cm. Several attempts on each lander to acquire fresh rock material (in pebble sizes) for analysis have yielded only cemented surface crustal material (duricrust). Laboratory simulation and experimentation are required both for mission planning of sampling and for interpretation of data returned from Mars. This paper is concerned with the rationale for sample site selections, surface sampler operations, and the supportive laboratory studies needed to interpret X ray results from Mars.
On the Tsallis Entropy for Gibbs Random Fields
Janžura, Martin
2014-01-01
Vol. 21, No. 33 (2014), pp. 59-69, ISSN 1212-074X. R&D Projects: GA ČR(CZ) GBP402/12/G097. Institutional research plan: CEZ:AV0Z1075907. Keywords: Tsallis entropy, Gibbs random fields, phase transitions, Tsallis entropy rate. Subject RIV: BB - Applied Statistics, Operational Research. http://library.utia.cas.cz/separaty/2014/SI/janzura-0441885.pdf
Extensitivity of entropy and modern form of Gibbs paradox
Home, D.; Sengupta, S.
1981-01-01
The extensivity property of entropy is clarified in the light of a critical examination of the entropy formula based on quantum statistics and the relevant thermodynamic requirement. The modern form of the Gibbs paradox, related to the discontinuous jump in entropy due to identity or non-identity of particles, is critically investigated. Qualitative framework of a new resolution of this paradox, which analyses the general effect of distinction mark on the Hamiltonian of a system of identical particles, is outlined. (author)
Gibbs' theorem for open systems with incomplete statistics
Bagci, G.B.
2009-01-01
Gibbs' theorem, which was originally intended for canonical ensembles with complete statistics, has been generalized to open systems with incomplete statistics. As a result of this generalization, it is shown that the stationary equilibrium distribution of inverse power law form associated with the incomplete statistics has maximum entropy even for open systems with energy or matter influx. The renormalized entropy definition given in this paper can also serve as a measure of self-organization in open systems described by incomplete statistics.
GibbsCluster: unsupervised clustering and alignment of peptide sequences
Andreatta, Massimo; Alvarez, Bruno; Nielsen, Morten
2017-01-01
... motif characterizing each cluster. Several parameters are available to customize cluster analysis, including adjustable penalties for small clusters and overlapping groups and a trash cluster to remove outliers. As an example application, we used the server to deconvolute multiple specificities in large-scale peptidome data generated by mass spectrometry. The server is available at http://www.cbs.dtu.dk/services/GibbsCluster-2.0.
Consistent estimation of Gibbs energy using component contributions.
Elad Noor
Full Text Available Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism.
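The prefer-reactant-contributions, fall-back-to-group-contributions idea can be sketched in a few lines. This is a hedged toy illustration only: all energies and group decompositions below are invented, and the real component-contribution method operates on stoichiometric matrices with regression-derived contributions rather than a simple lookup.

```python
# Hypothetical data: "measured" reactant-contribution formation energies (kJ/mol)
# and group-contribution energies per functional group (also kJ/mol).
RC_FORMATION = {"A": -120.0, "B": -95.5}
GROUP_ENERGY = {"hydroxyl": -40.0, "carbonyl": -30.0}
GROUPS = {"C": {"hydroxyl": 2, "carbonyl": 1}}  # group decomposition of compound C

def formation_energy(compound):
    """Use the reactant-contribution value when available, else a group estimate."""
    if compound in RC_FORMATION:
        return RC_FORMATION[compound]
    return sum(GROUP_ENERGY[g] * n for g, n in GROUPS[compound].items())

def reaction_gibbs_energy(stoichiometry):
    """Standard reaction Gibbs energy from signed stoichiometric coefficients."""
    return sum(coef * formation_energy(c) for c, coef in stoichiometry.items())

# A + B -> C, mixing one GC-estimated compound with two RC-estimated ones.
dG = reaction_gibbs_energy({"A": -1, "B": -1, "C": 1})
```

Because every reaction energy is derived from one consistent set of per-compound values, no cycle of reactions can yield a nonzero net energy, which is the first-law consistency property the abstract emphasizes.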
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
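A miniature version of such a simulation design is easy to write. The stratum sizes and density rates below are hypothetical placeholders, not the NCSP figures; the point is that proportionally allocated stratified random sampling recovers the population rate closely at a total sample size of 4,000.

```python
import random

random.seed(0)

# Hypothetical strata: (population size, proportion with dense breasts).
STRATA = {"metropolitan": (600000, 0.45), "urban": (500000, 0.40), "rural": (240000, 0.30)}
TOTAL = sum(n for n, _ in STRATA.values())
TRUE_RATE = sum(n * p for n, p in STRATA.values()) / TOTAL

def stratified_estimate(sample_size):
    """Allocate the sample proportionally, then combine stratum means by weight."""
    est = 0.0
    for n, p in STRATA.values():
        k = round(sample_size * n / TOTAL)          # proportional allocation
        hits = sum(random.random() < p for _ in range(k))
        est += (n / TOTAL) * hits / k               # weighted stratum estimate
    return est

# Repeat the survey many times to see the sampling distribution of the estimator.
estimates = [stratified_estimate(4000) for _ in range(200)]
mean_est = sum(estimates) / len(estimates)
```

Repeating this with many more replications (the paper used 1,000) and several candidate allocations is how one verifies a tolerance level for the estimated distribution before fielding the survey.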
Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo
2008-02-01
Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem mass spectrometry have been developed for monitoring tacrolimus levels. Our center favors the use of a limited sampling strategy and an abbreviated formula to estimate the area under the concentration-time curve (AUC(0-12)). However, it is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of the DBS method in tacrolimus level monitoring using a limited sampling strategy and an abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty-six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12) estimated from venous blood samples and from fingerprick blood samples (r² = 0.96, P AUC(0-12) strategy as drug monitoring.
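Operationally, a limited sampling strategy reduces to evaluating a validated regression formula at a few time points. The sketch below uses hypothetical coefficients (the record does not reproduce the center's validated formula), mapping C0, C2 and C4 tacrolimus levels to an AUC(0-12) estimate.

```python
# Hypothetical regression coefficients for an abbreviated AUC(0-12) formula.
# These are illustrative placeholders, not validated clinical values.
COEFFS = {"intercept": 10.0, "C0": 1.5, "C2": 2.0, "C4": 3.0}

def abbreviated_auc(c0, c2, c4):
    """AUC(0-12) estimate (ng·h/mL) from three tacrolimus levels (ng/mL)."""
    return (COEFFS["intercept"] + COEFFS["C0"] * c0
            + COEFFS["C2"] * c2 + COEFFS["C4"] * c4)

auc = abbreviated_auc(c0=5.0, c2=12.0, c4=9.0)
```

The study's contribution is showing that the C0/C2/C4 inputs to such a formula can come from dried blood spots collected at home rather than from venous draws at the center.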
Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A
2010-01-01
In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.
Toher, Cormac; Oses, Corey; Plata, Jose J.; Hicks, David; Rose, Frisco; Levy, Ohad; de Jong, Maarten; Asta, Mark; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano
2017-06-01
Thorough characterization of the thermomechanical properties of materials requires difficult and time-consuming experiments. This severely limits the availability of data and is one of the main obstacles for the development of effective accelerated materials design strategies. The rapid screening of new potential materials requires highly integrated, sophisticated, and robust computational approaches. We tackled the challenge by developing an automated, integrated workflow with robust error-correction within the AFLOW framework which combines the newly developed "Automatic Elasticity Library" with the previously implemented GIBBS method. The first extracts the mechanical properties from automatic self-consistent stress-strain calculations, while the latter employs those mechanical properties to evaluate the thermodynamics within the Debye model. This new thermoelastic workflow is benchmarked against a set of 74 experimentally characterized systems to pinpoint a robust computational methodology for the evaluation of bulk and shear moduli, Poisson ratios, Debye temperatures, Grüneisen parameters, and thermal conductivities of a wide variety of materials. The effect of different choices of equations of state and exchange-correlation functionals is examined and the optimum combination of properties for the Leibfried-Schlömann prediction of thermal conductivity is identified, leading to better agreement with experimental results than the GIBBS-only approach. The framework has been applied to the AFLOW.org data repositories to compute the thermoelastic properties of over 3500 unique materials. The results are now available online by using an expanded version of the REST-API described in the Appendix.
Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga
2015-01-01
Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively, we discuss how
Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.
Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith
2010-09-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.
La Iglesia, A.
1989-01-01
The effect of grinding on the crystallinity, particle size and solubility of two samples of kaolinite was studied. The standard Gibbs free energies of formation of different ground samples were calculated from solubility measurements, and show a direct relationship between Gibbs free energy and particle size-crystallinity variation. Values of -3752.2 and -3776.4 kJ/mol were determined for ΔG°f(am) and ΔG°f(crys) of kaolinite, respectively. A new th...
Xun-Ping, W; An, Z
2017-07-27
Objective: To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods: A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results: The SOPA method proposed in this study had the minimal absolute error, 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion: The snail sampling strategy (SOPA) proposed in this study achieves higher estimation accuracy than the other four methods.
Impact of sampling strategy on stream load estimates in till landscape of the Midwest
Vidon, P.; Hubbard, L.E.; Soyeux, E.
2009-01-01
Accurately estimating solute loads in streams during storms is critical to determining maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment, so that error on solute load calculations can be taken into account by landscape managers and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
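A toy simulation shows why grab-sampling frequency matters for storm-driven loads. The synthetic hydrograph and flow-concentration relation below are entirely invented for illustration (not the study's data): concentration rises with flow, so sparse sampling that over- or under-represents storm peaks mis-estimates the load, while sub-daily sampling tracks it closely.

```python
import math

HOURS = 24 * 60  # a 60-day record at hourly resolution

# Synthetic periodic storms: baseflow 5 plus a Gaussian storm pulse every 360 h.
flow = [5.0 + 20.0 * math.exp(-(((t % 360) - 100) ** 2) / 2000.0) for t in range(HOURS)]
conc = [2.0 + 0.1 * q for q in flow]  # concentration increases with flow

# "True" load: instantaneous flux integrated at hourly resolution.
true_load = sum(c * q for c, q in zip(conc, flow))

def grab_sample_load(interval_hours):
    """Scale the mean of sampled instantaneous fluxes up to the full period."""
    sampled = [conc[t] * flow[t] for t in range(0, HOURS, interval_hours)]
    return sum(sampled) / len(sampled) * HOURS

# Relative load error for 6-hourly, daily, and weekly grab sampling.
errors = {h: abs(grab_sample_load(h) - true_load) / true_load for h in (6, 24, 168)}
```

In this synthetic setting weekly sampling happens to oversample the storm peaks and badly overestimates the load, while 6-hourly and daily sampling stay within a fraction of a percent; real records add the extra scatter that limits how much accuracy more frequent sampling buys.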
Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality
Ayala, Mario; Carinci, Gioia; Redig, Frank
2018-06-01
We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, including also fluctuation fields in non-stationary context (local equilibrium). For other interacting particle systems with duality such as the symmetric exclusion process, similar results can be obtained, under precise conditions on the n particle dynamics.
Excess Gibbs Energy for Ternary Lattice Solutions of Nonrandom Mixing
Jung, Hae Young [DukSung Womens University, Seoul (Korea, Republic of)
2008-12-15
It is assumed for a three-component lattice solution that the number of ways of arranging particles randomly on the lattice follows a normal distribution in a linear combination of N₁₂, N₂₃ and N₁₃, the numbers of nearest-neighbor interactions between unlike molecules. Random number simulations show that this assumption is reasonable. From this distribution, an approximate equation for the excess Gibbs energy of a three-component lattice solution is derived. Using this equation, several liquid-vapor equilibria are calculated and compared with the results from other equations.
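The normality assumption can be probed with a small random-number simulation of the kind the abstract describes. The sketch below is a simplification (a one-dimensional ring lattice with an equimolar three-component mixture, which is an assumption, not the paper's setup): it repeatedly shuffles the particles and tallies N₁₂, the number of nearest-neighbor 1-2 contacts, whose distribution should be close to normal.

```python
import random

random.seed(42)

L = 300                              # ring lattice sites
labels = [1] * 100 + [2] * 100 + [3] * 100   # equimolar three-component mixture

def unlike_pair_count():
    """Shuffle particles on the ring and count nearest-neighbor 1-2 pairs."""
    random.shuffle(labels)
    return sum({labels[i], labels[(i + 1) % L]} == {1, 2} for i in range(L))

counts = [unlike_pair_count() for _ in range(2000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
skew = sum((c - mean) ** 3 for c in counts) / len(counts) / var ** 1.5
# For random mixing E[N12] = 2*n1*n2/(L-1) ~ 66.9, and the near-zero skewness
# is consistent with the normal-distribution assumption.
```

Extending the tally to N₂₃ and N₁₃ and checking a linear combination of the three counts reproduces the multivariate version of the check.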
Sample preparation composite and replicate strategy for assay of solid oral drug products.
Harrington, Brent; Nickerson, Beverly; Guo, Michele Xuemei; Barber, Marc; Giamalva, David; Lee, Carlos; Scrivens, Garry
2014-12-16
In pharmaceutical analysis, the results of drug product assay testing are used to make decisions regarding the quality, efficacy, and stability of the drug product. In order to make sound risk-based decisions concerning drug product potency, an understanding of the uncertainty of the reportable assay value is required. Utilizing the most restrictive criteria in current regulatory documentation, a maximum variability attributed to method repeatability is defined for a drug product potency assay. A sampling strategy that reduces the repeatability component of the assay variability below this predefined maximum is demonstrated. The sampling strategy consists of determining the number of dosage units (k) to be prepared in a composite sample, of which there may be a number of equivalent replicate (r) sample preparations. The variability, as measured by the standard error (SE), of a potency assay consists of several sources, such as sample preparation and dosage unit variability. A sampling scheme that increases the number of sample preparations (r) and/or the number of dosage units (k) per sample preparation will reduce the assay variability and thus decrease the uncertainty around decisions made concerning the potency of the drug product. A maximum allowable repeatability component of the standard error (SE) for the potency assay is derived using material in current regulatory documents. A table of solutions for the number of dosage units per sample preparation (k) and the number of replicate sample preparations (r) is presented for any ratio of sample preparation and dosage unit variability.
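The composite/replicate arithmetic can be made concrete with a standard two-component variance model; the exact model and SE limit in the paper may differ, and the %RSD values below are illustrative assumptions. Each preparation composites k dosage units, and the reportable value averages r preparations, so the repeatability SE is sqrt((σ_unit²/k + σ_prep²)/r).

```python
import math

def assay_se(k, r, sd_unit=3.0, sd_prep=1.0):
    """Repeatability SE (%) of the reportable value: k units per prep, r preps.

    sd_unit and sd_prep are illustrative dosage-unit and preparation %RSDs.
    """
    return math.sqrt((sd_unit ** 2 / k + sd_prep ** 2) / r)

MAX_SE = 1.0  # hypothetical maximum allowable repeatability SE (%)

# Tabulate (k, r) schemes whose repeatability SE meets the limit.
solutions = [(k, r) for k in range(1, 11) for r in range(1, 6)
             if assay_se(k, r) <= MAX_SE]
```

Such a table lets an analyst trade off more units per composite against more replicate preparations, depending on which variance source dominates.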
Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms
Shangguan Danhua; Bao Jingdong
2010-01-01
We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
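A minimal sketch of the idea, under stated assumptions (this is a generic modify-then-reject construction in the spirit of PDS, not the paper's exact algorithm): the target is a double-well density exp(-U/T); the modified potential caps the central barrier so the Metropolis chain diffuses freely between wells; trial samples are then thinned with acceptance probability exp(-(U - U')/T), which is a valid probability because U ≥ U' everywhere, leaving exactly the target distribution.

```python
import math
import random

random.seed(1)

T = 0.3  # temperature

def U(x):
    """Target double-well potential with a barrier of height 1 at x = 0."""
    return (x * x - 1.0) ** 2

def U_mod(x):
    """Modified potential: cap only the central barrier, keep confining tails."""
    return min(U(x), 0.2) if x * x < 1.0 else U(x)

def metropolis(potential, steps, x0=1.0, width=0.5):
    """Plain Metropolis chain in the given potential."""
    xs, x = [], x0
    for _ in range(steps):
        y = x + random.uniform(-width, width)
        if random.random() < math.exp(-(potential(y) - potential(x)) / T):
            x = y
        xs.append(x)
    return xs

# Sample the easy (flattened) distribution, then reject to recover the target.
trial = metropolis(U_mod, 50000)
samples = [x for x in trial if random.random() < math.exp(-(U(x) - U_mod(x)) / T)]
frac_left = sum(x < 0 for x in samples) / len(samples)
```

Because the symmetric wells should carry equal weight, `frac_left` near 0.5 is the signature of fast mixing; a plain Metropolis chain in `U` at this temperature would cross the full barrier far more rarely.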
Chaeyoung Lee
2012-11-01
Epistasis, which may explain a large portion of the phenotypic variation for complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs by using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example for obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
Inference with minimal Gibbs free energy in information field theory
Ensslin, Torsten A.; Weig, Cornelius
2010-01-01
Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the Gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.
Reflections on Gibbs: From Statistical Physics to the Amistad V3.0
Kadanoff, Leo P.
2014-07-01
This note is based upon a talk given at an APS meeting in celebration of the achievements of J. Willard Gibbs. J. Willard Gibbs, the younger, was the first American physical sciences theorist. He was one of the inventors of statistical physics. He introduced and developed the concepts of phase space, phase transitions, and thermodynamic surfaces in a remarkably correct and elegant manner. These three concepts form the basis of different areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. This talk therefore celebrated Gibbs by describing modern ideas about how different parts of physics fit together. I finished with a more personal note. Our own J. Willard Gibbs had all his many achievements concentrated in science. His father, also J. Willard Gibbs, also a Professor at Yale, had one great non-academic achievement that remains unmatched in our day. I describe it.
Sampling strategy for a large scale indoor radiation survey - a pilot project
Strand, T.; Stranden, E.
1986-01-01
Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
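The optimisation sketched above (fix a precision level for the regional mean dose rate, then derive the number of measurements from the within-category variances) corresponds to the standard Neyman allocation for stratified random sampling. A minimal sketch follows; the 95% z-value and function names are illustrative assumptions, not taken from the survey design.

```python
import math

def neyman_allocation(weights, sds, d, z=1.96):
    """Total sample size and per-stratum allocation so the stratified mean
    is estimated within +/- d at ~95% confidence (Neyman allocation).

    weights: stratum population shares (sum to 1); sds: within-stratum SDs.
    """
    s = sum(w * sd for w, sd in zip(weights, sds))
    n = math.ceil((z * s / d) ** 2)          # total measurements needed
    return n, [math.ceil(n * w * sd / s) for w, sd in zip(weights, sds)]
```

Strata with larger variance or larger population share receive proportionally more measurements, which is exactly why a pilot project estimating the within-category variances is needed before the large-scale survey.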
Daria Sanna
2011-01-01
We report a sampling strategy based on Mendelian Breeding Units (MBUs), an MBU representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. To this end, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish an MBU does not alter the original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits.
F. Raicich
2003-01-01
For the first time in the Mediterranean Sea, various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied. The data impact is quantified by the error reduction relative to the free run. The most effective sampling strategies determine 25–40% error reduction, depending on the season, the geographic area and the depth range. A qualitative relationship can be recognized, in terms of the spread of information from the data positions, between basin circulation features and spatial patterns of the error reduction fields, as a function of different spatial and seasonal characteristics of the dynamics. The largest error reductions are observed when samplings are characterized by extensive spatial coverages, as in the cases of AXBTs and the combination of XBTs and surface temperatures. The sampling strategy adopted during the TOP is characterized by little impact, as a consequence of a sampling frequency that is too low. Key words. Oceanography: general (marginal and semi-enclosed seas); numerical modelling
A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology
Slutsker Laurence
2008-02-01
Background: Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods: Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of a child aged … Results: 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion: This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than
McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana
2018-02-01
The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. 17 different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method for reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences between taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in investing in collection of more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments are needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated with the 90% confidence interval ratio of the estimated/measured SMCs (R² = 0.49; P …), which can be used to gauge the sampling frequency needed to accurately estimate SMCs of pollutants.
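The EMC and SMC definitions used above are simple weighted averages, and a random grab-sampling strategy can be emulated on top of them. The event-tuple layout and function names below are hypothetical conveniences for illustration, not the study's code.

```python
import random

def emc(concs, flows):
    """Event mean concentration: flow-weighted average of within-event samples."""
    return sum(c * q for c, q in zip(concs, flows)) / sum(flows)

def smc(emcs, volumes):
    """Site mean concentration: volume-weighted average of event EMCs."""
    return sum(c * v for c, v in zip(emcs, volumes)) / sum(volumes)

def random_strategy_smc(events, n_per_event=1, seed=0):
    """Estimate the SMC by drawing n random grab samples per event.

    events: list of (concs, flows, event_volume) tuples (assumed layout)."""
    rng = random.Random(seed)
    est_emcs, vols = [], []
    for concs, flows, vol in events:
        picks = rng.sample(range(len(concs)), n_per_event)
        est_emcs.append(sum(concs[i] for i in picks) / n_per_event)
        vols.append(vol)
    return smc(est_emcs, vols)
```

Comparing `random_strategy_smc(...)` against the fully sampled `smc(...)` over many seeds is one way to reproduce the kind of estimated/measured SMC ratios analysed in the abstract.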
Limited sampling strategy for determining metformin area under the plasma concentration-time curve
Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José
2016-01-01
AIM: The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. METHODS: Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all su...
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes
2017-01-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...
Pet, J.S.; Densen, van W.L.T.; Machiels, M.A.M.; Sukkel, M.; Setyohady, D.; Tumuljadi, A.
1997-01-01
Temporal and spatial patterns in the fishery for Sardinella spp. around East Java, Indonesia, were studied in an attempt to develop an efficient catch and effort sampling strategy for this highly variable fishery. The inter-annual and monthly variation in catch, effort and catch per unit of effort
Zheng, Naiyu; Jiang, Hao; Zeng, Jianing
2014-09-01
Robotic liquid handlers (RLHs) have been widely used in automated sample preparation for liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. Automated sample preparation for regulated bioanalysis offers significantly higher assay efficiency, better data quality and potential bioanalytical cost-savings. For RLHs that are used for regulated bioanalysis, there are additional requirements, including 21 CFR Part 11 compliance, software validation, system qualification, calibration verification and proper maintenance. This article reviews recent advances in automated sample preparation for regulated bioanalysis in the last 5 years. Specifically, it covers the following aspects: regulated bioanalysis requirements, recent advances in automation hardware and software development, sample extraction workflow simplification, strategies towards fully automated sample extraction, and best practices in automated sample preparation for regulated bioanalysis.
Gibbs free energy of formation of UPb(s) compound
Samui, Pradeep; Agarwal, Renu; Mishra, Ratikanta
2012-01-01
Liquid lead and lead-bismuth eutectic (LBE) are being explored as primary candidates for coolants in accelerator driven systems and in advanced nuclear reactors due to their favorable thermo-physical and chemical properties. They are also proposed for use as a spallation neutron source in ADS reactor systems. However, corrosion of structural materials (i.e. steel) presents a critical challenge for the use of liquid lead or LBE in advanced nuclear reactors. The interaction of liquid lead or LBE with clad and fuel is of great scientific and technological importance in the development of advanced nuclear reactors. Clad failure/breach can lead to reaction of coolant elements with fuel components; thus the study of the fuel-coolant interaction of U with Pb/Bi is important. This paper deals with the determination of the Gibbs free energy of formation of the U-rich phase UPb in the Pb-U system, employing the Knudsen effusion mass loss technique.
Work and entropy production in generalised Gibbs ensembles
Perarnau-Llobet, Martí; Riera, Arnau; Gallego, Rodrigo; Wilming, Henrik; Eisert, Jens
2016-01-01
Recent years have seen an enormously revived interest in the study of thermodynamic notions in the quantum regime. This applies both to the study of notions of work extraction in thermal machines in the quantum regime, as well as to questions of equilibration and thermalisation of interacting quantum many-body systems as such. In this work we bring together these two lines of research by studying work extraction in a closed system that undergoes a sequence of quenches and equilibration steps concomitant with free evolutions. In this way, we incorporate an important insight from the study of the dynamics of quantum many body systems: the evolution of closed systems is expected to be well described, for relevant observables and most times, by a suitable equilibrium state. We will consider three kinds of equilibration, namely to (i) the time averaged state, (ii) the Gibbs ensemble and (iii) the generalised Gibbs ensemble, reflecting further constants of motion in integrable models. For each effective description, we investigate notions of entropy production, the validity of the minimal work principle and properties of optimal work extraction protocols. While we keep the discussion general, much room is dedicated to the discussion of paradigmatic non-interacting fermionic quantum many-body systems, for which we identify significant differences with respect to the role of the minimal work principle. Our work not only has implications for experiments with cold atoms, but also can be viewed as suggesting a mindset for quantum thermodynamics where the role of the external heat baths is instead played by the system itself, with its internal degrees of freedom bringing coarse-grained observables to equilibrium. (paper)
Gu, Jinghua; Xuan, Jianhua; Riggins, Rebecca B; Chen, Li; Wang, Yue; Clarke, Robert
2012-08-01
Identification of transcriptional regulatory networks (TRNs) is of significant importance in computational biology for cancer research, providing a critical building block to unravel disease pathways. However, existing methods for TRN identification suffer from the inclusion of excessive 'noise' in microarray data and false-positives in binding data, especially when applied to human tumor-derived cell line studies. More robust methods that can counteract the imperfection of data sources are therefore needed for reliable identification of TRNs in this context. In this article, we propose to establish a link between the quality of one target gene to represent its regulator and the uncertainty of its expression to represent other target genes. Specifically, an outlier sum statistic was used to measure the aggregated evidence for regulation events between target genes and their corresponding transcription factors. A Gibbs sampling method was then developed to estimate the marginal distribution of the outlier sum statistic, hence, to uncover underlying regulatory relationships. To evaluate the effectiveness of our proposed method, we compared its performance with that of an existing sampling-based method using both simulation data and yeast cell cycle data. The experimental results show that our method consistently outperforms the competing method in different settings of signal-to-noise ratio and network topology, indicating its robustness for biological applications. Finally, we applied our method to breast cancer cell line data and demonstrated its ability to extract biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. Availability: The Gibbs sampler MATLAB package is freely available at http://www.cbil.ece.vt.edu/software.htm. Contact: xuan@vt.edu. Supplementary data are available at Bioinformatics online.
Assoumani, Azziz; Margoum, Christelle; Guillemain, Céline; Coquery, Marina
2014-05-01
The monitoring of water bodies regarding organic contaminants, and the determination of reliable estimates of concentrations are challenging issues, in particular for the implementation of the Water Framework Directive. Several strategies can be applied to collect water samples for the determination of their contamination level. Grab sampling is fast, easy, and requires little logistical and analytical needs in case of low frequency sampling campaigns. However, this technique lacks of representativeness for streams with high variations of contaminant concentrations, such as pesticides in rivers located in small agricultural watersheds. Increasing the representativeness of this sampling strategy implies greater logistical needs and higher analytical costs. Average automated sampling is therefore a solution as it allows, in a single analysis, the determination of more accurate and more relevant estimates of concentrations. Two types of automatic samplings can be performed: time-related sampling allows the assessment of average concentrations, whereas flow-dependent sampling leads to average flux concentrations. However, the purchase and the maintenance of automatic samplers are quite expensive. Passive sampling has recently been developed as an alternative to grab or average automated sampling, to obtain at lower cost, more realistic estimates of the average concentrations of contaminants in streams. These devices allow the passive accumulation of contaminants from large volumes of water, resulting in ultratrace level detection and smoothed integrative sampling over periods ranging from days to weeks. They allow the determination of time-weighted average (TWA) concentrations of the dissolved fraction of target contaminants, but they need to be calibrated in controlled conditions prior to field applications. In other words, the kinetics of the uptake of the target contaminants into the sampler must be studied in order to determine the corresponding sampling rate
Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.
Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge
2017-02-22
Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality.
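Of the normalization steps listed in (iii), probabilistic quotient normalization (PQN) is straightforward to sketch: each sample is divided by the median of its feature-wise ratios to a reference profile. Using the median profile across samples as the reference is an assumption here, as are the function names.

```python
import statistics

def pqn_normalise(samples, reference=None):
    """Probabilistic quotient normalisation of a list of feature vectors."""
    if reference is None:
        # assumed choice: median profile across samples as reference spectrum
        reference = [statistics.median(col) for col in zip(*samples)]
    out = []
    for s in samples:
        # most-probable dilution factor = median of feature-wise quotients
        q = statistics.median(x / r for x, r in zip(s, reference) if r > 0)
        out.append([x / q for x in s])
    return out
```

A twofold-diluted copy of a sample is mapped onto the same profile as the original, which is the behaviour PQN is designed to provide for urine samples of varying concentration.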
Vargas, Francisco M.
2014-01-01
The temperature dependence of the Gibbs energy and of important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although this is a well-known approach, traditionally covered as part of any physical chemistry course, the required…
Continuous spin mean-field models : Limiting kernels and Gibbs properties of local transforms
Kulske, Christof; Opoku, Alex A.
2008-01-01
We extend the notion of Gibbsianness for mean-field systems to the setup of general (possibly continuous) local state spaces. We investigate the Gibbs properties of systems arising from an initial mean-field Gibbs measure by application of given local transition kernels. This generalizes previous
One of Gibbs's ideas that has gone unnoticed (comment on chapter IX of his classic book)
Sukhanov, Alexander D; Rudoi, Yurii G
2006-01-01
We show that contrary to the commonly accepted view, Chapter IX of Gibbs's book [1] contains the prolegomena to a macroscopic statistical theory that is qualitatively different from his own microscopic statistical mechanics. The formulas obtained by Gibbs were the first results in the history of physics related to the theory of fluctuations in any macroparameters, including temperature. (from the history of physics)
Askerov, Bahram M
2010-01-01
This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and non-ideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi-Dirac and Bose-Einstein quantum statistics and its application to different quantum gases, and electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.
Sampling strategies to measure the prevalence of common recurrent infections in longitudinal studies
Luby Stephen P
2010-08-01
Background: Measuring recurrent infections such as diarrhoea or respiratory infections in epidemiological studies is a methodological challenge. Problems in measuring the incidence of recurrent infections include the episode definition, recall error, and the logistics of close follow up. Longitudinal prevalence (LP), the proportion of time ill estimated by repeated prevalence measurements, is an alternative measure to incidence of recurrent infections. In contrast to incidence, which usually requires continuous sampling, LP can be measured at intervals. This study explored how many more participants are needed for infrequent sampling to achieve the same study power as frequent sampling. Methods: We developed a set of four empirical simulation models representing low and high risk settings with short or long episode durations. The model was used to evaluate different sampling strategies with different assumptions on recall period and recall error. Results: The model identified three major factors that influence sampling strategies: (1) the clustering of episodes in individuals; (2) the duration of episodes; (3) the positive correlation between an individual's disease incidence and episode duration. Intermittent sampling (e.g. 12 times per year) often requires only a slightly larger sample size compared to continuous sampling, especially in cluster-randomized trials. The collection of period prevalence data can lead to highly biased effect estimates if the exposure variable is associated with episode duration. To maximize study power, recall periods of 3 to 7 days may be preferable over shorter periods, even if this leads to inaccuracy in the prevalence estimates. Conclusion: Choosing the optimal approach to measure recurrent infections in epidemiological studies depends on the setting, the study objectives, study design and budget constraints. Sampling at intervals can contribute to making epidemiological studies and trials more efficient, valid
Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.
2014-01-01
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes
Limited-sampling strategies for anti-infective agents: systematic review.
Sprague, Denise A; Ensom, Mary H H
2009-09-01
Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or
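A limited-sampling strategy of the kind reviewed here typically replaces the fully sampled trapezoidal AUC with a regression on a few concentrations, AUC ≈ b0 + Σ b_i·C(t_i), with coefficients fitted beforehand on an index group. The sketch below shows both pieces; the coefficient values in the usage test are placeholders, not from any of the reviewed studies.

```python
def trapezoid_auc(times, concs):
    """Full AUC by the linear trapezoidal rule (reference method)."""
    return sum((concs[i] + concs[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def lss_predict(coefs, intercept, concs_at_limited_times):
    """Limited-sampling estimate: AUC ~ b0 + sum(b_i * C(t_i)).

    coefs/intercept are assumed to come from a regression fitted on an
    index data set, then validated on a separate validation group."""
    return intercept + sum(b * c for b, c in zip(coefs, concs_at_limited_times))
```

The clinical appeal is that `lss_predict` needs only one or two blood draws per patient, while `trapezoid_auc` requires the full, costly sampling schedule.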
Dynamical predictive power of the generalized Gibbs ensemble revealed in a second quench.
Zhang, J M; Cui, F C; Hu, Jiangping
2012-04-01
We show that a quenched and relaxed completely integrable system is hardly distinguishable from the corresponding generalized Gibbs ensemble in a dynamical sense. To be specific, the response of the quenched and relaxed system to a second quench can be accurately reproduced by using the generalized Gibbs ensemble as a substitute. Remarkably, as demonstrated with the transverse Ising model and the hard-core bosons in one dimension, not only the steady values but even the transient, relaxation dynamics of the physical variables can be accurately reproduced by using the generalized Gibbs ensemble as a pseudoinitial state. This result is an important complement to the previously established result that a quenched and relaxed system is hardly distinguishable from the generalized Gibbs ensemble in a static sense. The relevance of the generalized Gibbs ensemble in the nonequilibrium dynamics of completely integrable systems is then greatly strengthened.
Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey
Mirel Lisa B.
2017-06-01
Full Text Available Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS). MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year’s NHIS. The main objective of this article is to examine how paradata from a prior survey can be used in developing a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce the data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder-to-reach population.
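The substratum allocation framework described in the abstract above can be illustrated with a standard Neyman-allocation sketch. All stratum sizes and variability values below are hypothetical placeholders, not MEPS figures:

```python
def neyman_allocation(N_h, S_h, n_total):
    """Allocate a fixed total sample size n_total across strata
    proportionally to N_h * S_h (Neyman allocation)."""
    weights = [N * S for N, S in zip(N_h, S_h)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical substrata built from prior-survey paradata:
# stratum sizes and per-stratum standard deviations of the
# quantity of interest (e.g., a response-propensity score).
N_h = [5000, 3000, 2000]
S_h = [0.2, 0.5, 0.8]
n_h = neyman_allocation(N_h, S_h, n_total=1000)
print(n_h)  # more variable (harder-to-reach) strata receive more sample
```

Cost differences between strata, which the article also considers, would enter by dividing each weight by the square root of the per-unit cost in that stratum.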
A cache-friendly sampling strategy for texture-based volume rendering on GPU
Junpeng Wang
2017-06-01
Full Text Available The texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color-blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty space skipping techniques in scenarios that need to render large dynamic volumes at a low image resolution. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.
Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.
Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan
2013-06-01
The aim of this work is to reduce the cost of required sampling for the estimation of the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). The limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of Jack-Knife validation showed that 10 (25.0 %) of the 40 LSS based on the regression analysis were not within an APE of 15 % using one concentration-time point. 90.2, 91.5 and 92.4 % of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model. This study shows that 12, 6, 4, 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical application according to requirement.
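A minimal sketch of how such an LSS regression is fitted and then checked against APE/RMSE criteria. All data here are simulated, and the fitted coefficients are not the published gliclazide model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: full AUC0-60 and concentrations at two key
# sampling times (12 h and 2 h) for 20 virtual subjects.
c12 = rng.uniform(1.0, 5.0, 20)
c2 = rng.uniform(2.0, 8.0, 20)
auc = 30.0 * c12 + 10.0 * c2 + rng.normal(0.0, 2.0, 20)

# Fit the limited-sampling model AUC = b0 + b1*C(12h) + b2*C(2h).
X = np.column_stack([np.ones_like(c12), c12, c2])
coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
pred = X @ coef

# Absolute prediction error (APE, %) and RMSE, as used in the study.
ape = np.abs(pred - auc) / auc * 100.0
rmse = np.sqrt(np.mean((pred - auc) ** 2))
print(f"max APE {ape.max():.1f}%, RMSE {rmse:.2f}")
```

A proper validation would, as in the study, refit on an index set and evaluate APE/RMSE on a separate validation set (Jack-Knife in the paper).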
Strategy for thermo-gravimetric analysis of K East fuel samples
Lawrence, L.A.
1997-01-01
A strategy was developed for the Thermo-Gravimetric Analysis (TGA) testing of K East fuel samples for oxidation rate determinations. Tests will first establish if there are any differences for dry air oxidation between the K West and K East fuel. These tests will be followed by moist inert gas oxidation rate measurements. The final series of tests will consider pure water vapor, i.e., steam.
Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A
2013-05-01
Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model, AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
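As a simplified illustration of how a sparse sampling schedule can still recover exposure, the sketch below compares trapezoidal AUC(0-24) computed on the seven-point OSS schedule quoted above against a dense time grid. The one-compartment model and all parameter values are hypothetical stand-ins, not the paper's two-compartment EHC model:

```python
import numpy as np

def conc(t, ka=1.5, ke=0.2, dose_vf=10.0):
    """Hypothetical one-compartment oral-absorption model (NOT the
    paper's 2-CMT EHC model), used only to generate a plausible
    concentration-time curve."""
    return dose_vf * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

def trapezoid(y, x):
    """Trapezoidal-rule AUC over sampling times x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# "Rich" sampling: a dense grid over 0-24 h.
rich = np.linspace(0.0, 24.0, 2401)
auc_rich = trapezoid(conc(rich), rich)

# The seven-point optimal sampling schedule from the abstract,
# padded with t=0 and the 24 h endpoint so the trapezoid spans 0-24 h.
oss = np.array([0.0, 0.25, 1.0, 2.0, 4.0, 5.0, 6.0, 12.0, 24.0])
auc_sparse = trapezoid(conc(oss), oss)

print(f"rich AUC {auc_rich:.1f}, sparse AUC {auc_sparse:.1f}")
```

In the paper the sparse design is used to refit a population model rather than a raw trapezoid, but the sketch shows why well-placed points carry most of the exposure information.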
Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥ 4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5% and ICC >0.995; 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3.
Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B
2018-04-01
Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5% and ICC >0.995; 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance in Medicine.
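The combinatorial ROI-subset evaluation described in the abstract can be sketched directly. The segmental PDFF values below are hypothetical for a single patient, but the enumeration shows where the study's 126 four-ROI strategies come from (C(9,4) = 126):

```python
from itertools import combinations
from statistics import mean

# Hypothetical segmental PDFF values (%) for one patient; the study
# placed one ROI in each of the nine hepatic segments.
segments = [8.1, 9.4, 10.2, 11.0, 9.8, 10.5, 12.1, 9.0, 10.9]
nine_roi = mean(segments)

# Evaluate every 4-ROI subset against the nine-ROI average.
errors = [abs(mean(c) - nine_roi) for c in combinations(segments, 4)]
within = sum(e < 1.5 for e in errors)
print(f"{within}/{len(errors)} four-ROI subsets within 1.5 PDFF points")
```

The study aggregated such per-patient comparisons across 391 subjects using ICCs and Bland-Altman limits of agreement rather than a single-patient error count.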
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility in protein identification results across samples. In the present study, a method combining the 18O-reference strategy and a quantitation-and-identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than other previously used comparison methods based on transferring comparison or label-free strategies. By the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed for optimizing peptide identification separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
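The reference-cancelling arithmetic behind the pooled-reference strategy is easy to sketch. The intensities below are hypothetical values for a single protein:

```python
# Hypothetical peak intensities for one protein in three samples.
samples = {"A": 120.0, "B": 60.0, "C": 90.0}

# The 18O-labeled reference is a pool of equal amounts of all samples.
reference = sum(samples.values()) / len(samples)

# Each sample is measured against the common reference...
ratios = {k: v / reference for k, v in samples.items()}

# ...so any two samples compare through the reference, which cancels:
fold_a_vs_b = ratios["A"] / ratios["B"]
print(fold_a_vs_b)  # equals the direct ratio 120/60 = 2.0
```

Because every sample is compared to the same pool, pairwise fold changes are recovered without ever measuring two biological samples in the same run.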
Edelbring, Samuel
2012-08-15
The degree of learners' self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in settings other than those in which they are used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach's alpha: 0.82, 0.72, and 0.65 for self-regulation, external regulation and lack of regulation scales respectively. The dimensionalities in scales were adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students' regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
Edelbring Samuel
2012-08-01
Full Text Available Abstract Background The degree of learners’ self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in settings other than those in which they are used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). Methods The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Results Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach’s alpha: 0.82, 0.72, and 0.65 for self-regulation, external regulation and lack of regulation scales respectively. The dimensionalities in scales were adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. Discussion The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students’ regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
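Since the reliabilities reported above are Cronbach's alpha values, a minimal sketch of how alpha is computed from an item-by-respondent score matrix may help. The scores below are hypothetical, not ILS data:

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(items)
    item_vars = sum(statistics.pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]
    total_var = statistics.pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses (rows = items, columns = respondents).
items = [
    [3, 4, 2, 5, 4, 3],
    [2, 4, 2, 5, 3, 3],
    [3, 5, 1, 4, 4, 2],
]
print(round(cronbach_alpha(items), 2))
```

Mokken scale analysis, the study's primary tool, assesses dimensionality rather than internal consistency, so alpha is only one part of the evaluation reported.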
Locatello, Lisa; Rasotto, Maria B.
2017-08-01
Emerging evidence suggests the occurrence of comparative decision-making processes in mate choice, questioning the traditional idea of female choice based on rules of absolute preference. In such a scenario, females are expected to use a typical best-of-n sampling strategy, being able to recall previously sampled males based on memory of their quality and location. Accordingly, the quality of the preferred mate is expected to be unrelated to both the number and the sequence of female visits. We found support for these predictions in the peacock blenny, Salaria pavo, a fish where females have the opportunity to evaluate the attractiveness of many males in a short time period and in a restricted spatial range. Indeed, even considering the variability in preference among females, most of them returned to previously sampled males for further evaluations; thus, the preferred male did not represent the last one in the sequence of visited males. Moreover, there was no relationship between the attractiveness of the preferred male and the number of further visits assigned to the other males. Our results suggest the occurrence of a best-of-n mate sampling strategy in the peacock blenny.
Elena García-González
2016-04-01
Full Text Available Objectives: Endogenous antibodies (EA may interfere with immunoassays, causing erroneous results for hormone analyses. As (in most cases) this interference arises from the assay format, and most immunoassays, even from different manufacturers, are constructed in a similar way, it is possible for a single type of EA to interfere with different immunoassays. Here we describe the case of a patient whose serum sample contained EA that interfered with several hormone tests. We also discuss the strategies deployed to detect interference. Subjects and methods: Over a period of four years, a 30-year-old man was subjected to a plethora of laboratory and imaging diagnostic procedures as a consequence of elevated hormone results, mainly of pituitary origin, which did not correlate with the overall clinical picture. Results: Once analytical interference was suspected, the best laboratory approaches to investigate it were sample reanalysis on an alternative platform and sample incubation with antibody blocking tubes. Construction of an in-house "nonsense" sandwich assay was also a valuable strategy to confirm interference. In contrast, serial sample dilutions were of no value in our case, while polyethylene glycol (PEG precipitation gave inconclusive results, probably due to the use of inappropriate PEG concentrations for several of the tests assayed. Conclusions: Clinicians and laboratorians must be aware of the drawbacks of immunometric assays, and alert to the possibility of EA interference when results do not fit the clinical pattern. Keywords: Endogenous antibodies, Immunoassay, Interference, Pituitary hormones, Case report
Nickerson, Beverly; Harrington, Brent; Li, Fasheng; Guo, Michele Xuemei
2017-11-30
Drug product assay is one of several tests required for new drug products to ensure the quality of the product at release and throughout the life cycle of the product. Drug product assay testing is typically performed by preparing a composite sample of multiple dosage units to obtain an assay value representative of the batch. In some cases replicate composite samples may be prepared and the reportable assay value is the average value of all the replicates. In previously published work by Harrington et al. (2014) [5], a sample preparation composite and replicate strategy for assay was developed to provide a systematic approach which accounts for variability due to the analytical method and dosage form, with a standard error of the potency assay that meets criteria based on compendia and regulatory requirements. In this work, this sample preparation composite and replicate strategy for assay is applied to several case studies to demonstrate the utility of this approach and its application at various stages of pharmaceutical drug product development. Copyright © 2017 Elsevier B.V. All rights reserved.
Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.
Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh
2012-02-28
Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.
Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.
Niioka, Takenori
2011-03-01
Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotypes had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples to improve patient acceptance. This article reviewed the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ)=9.24 x C(6h)+2638.03, AUC(LPZ)=12.32 x C(6h)+3276.09 and AUC(RPZ)=1.39 x C(3h)+7.17 x C(6h)+344.14, respectively. In order to optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetics of its enantiomers. The best prediction formula using the fewest sampling points (one point) was AUC(racemic LPZ)=6.5 x C(3h) of (R)-LPZ+13.7 x C(3h) of (S)-LPZ-9917.3 x G1-14387.2 x G2+7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). These strategies, based on plasma concentration monitoring at one or two time points, may be more suitable for AUC estimation than reference to CYP2C19 genotypes, particularly in the case of coadministration of CYP mediators.
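The prediction formulas quoted above are plain linear equations and can be applied directly once the limited samples are assayed. The concentration values in the example call are hypothetical:

```python
# The LSS prediction formulas quoted in the abstract (concentration
# and AUC units as in the original study).
def auc_opz(c6h):
    """Omeprazole: AUC = 9.24 * C(6h) + 2638.03"""
    return 9.24 * c6h + 2638.03

def auc_lpz(c6h):
    """Lansoprazole: AUC = 12.32 * C(6h) + 3276.09"""
    return 12.32 * c6h + 3276.09

def auc_rpz(c3h, c6h):
    """Rabeprazole: AUC = 1.39 * C(3h) + 7.17 * C(6h) + 344.14"""
    return 1.39 * c3h + 7.17 * c6h + 344.14

# Hypothetical 6 h omeprazole concentration:
print(round(auc_opz(500.0), 2))  # 9.24*500 + 2638.03 = 7258.03
```

Such one- or two-point formulas are what make therapeutic drug monitoring practical: only the listed time points need to be drawn.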
A comparison of various Gibbs energy dissipation correlations for predicting microbial growth yields
Liu, J.-S. [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland); Vojinovic, V. [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland); Patino, R. [Cinvestav-Merida, Departamento de Fisica Aplicada, Km. 6 carretera antigua a Progreso, AP 73 Cordemex, 97310 Merida, Yucatan (Mexico); Maskow, Th. [UFZ Centre for Environmental Research, Department of Environmental Microbiology, Permoserstrasse 15, D-04318 Leipzig (Germany); Stockar, U. von [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland)]. E-mail: urs.vonStockar@epfl.ch
2007-06-25
Thermodynamic analysis may be applied in order to predict microbial growth yields roughly, based on an empirical correlation of the Gibbs energy of the overall growth reaction or Gibbs energy dissipation. Due to the well-known trade-off between high biomass yield and high Gibbs energy dissipation necessary for fast growth, an optimal range of Gibbs energy dissipation exists and it can be correlated to physical characteristics of the growth substrates. A database previously available in the literature has been extended significantly in order to test such correlations. An analysis of the relationship between biomass yield and Gibbs energy dissipation reveals that one does not need a very precise estimation of the latter to predict the former roughly. Approximating the Gibbs energy dissipation with a constant universal value of -500 kJ C-mol⁻¹ of dry biomass grown predicts many experimental growth yields nearly as well as a carefully designed, complex correlation available from the literature, even though a number of predictions are grossly out of range. A new correlation for Gibbs energy dissipation is proposed which is just as accurate as the complex literature correlation despite its dramatically simpler structure.
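A rough sketch of how a constant dissipation value translates into a growth-yield prediction. This is a simplified carbon-and-energy balance for illustration only (it ignores the Gibbs energy stored in biomass and is not the paper's correlation), and the glucose figure is a round hypothetical value:

```python
def predicted_yield(dg_cat_per_cmol, dissipation=500.0):
    """Simplified balance: substrate carbon splits between anabolism
    (fraction Y into biomass) and catabolism (fraction 1-Y), and the
    catabolic Gibbs energy covers a fixed dissipation of ~500 kJ per
    C-mol of biomass formed:
        (1 - Y) * dG_cat = Y * dissipation  =>  Y = dG_cat/(dG_cat + D)
    """
    return dg_cat_per_cmol / (dg_cat_per_cmol + dissipation)

# Aerobic growth on glucose: roughly 2843 kJ/mol of combustion Gibbs
# energy, i.e. about 474 kJ per C-mol of substrate.
y = predicted_yield(2843.0 / 6.0)
print(round(y, 2))  # ~0.49 C-mol biomass per C-mol substrate
```

The resulting yield of about 0.5 C-mol/C-mol is in the typical range for aerobic heterotrophic growth, which is the kind of rough prediction the abstract describes.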
A comparison of various Gibbs energy dissipation correlations for predicting microbial growth yields
Liu, J.-S.; Vojinovic, V.; Patino, R.; Maskow, Th.; Stockar, U. von
2007-01-01
Thermodynamic analysis may be applied in order to predict microbial growth yields roughly, based on an empirical correlation of the Gibbs energy of the overall growth reaction or Gibbs energy dissipation. Due to the well-known trade-off between high biomass yield and high Gibbs energy dissipation necessary for fast growth, an optimal range of Gibbs energy dissipation exists and it can be correlated to physical characteristics of the growth substrates. A database previously available in the literature has been extended significantly in order to test such correlations. An analysis of the relationship between biomass yield and Gibbs energy dissipation reveals that one does not need a very precise estimation of the latter to predict the former roughly. Approximating the Gibbs energy dissipation with a constant universal value of -500 kJ C-mol⁻¹ of dry biomass grown predicts many experimental growth yields nearly as well as a carefully designed, complex correlation available from the literature, even though a number of predictions are grossly out of range. A new correlation for Gibbs energy dissipation is proposed which is just as accurate as the complex literature correlation despite its dramatically simpler structure.
Söderström, Hanna; Lindberg, Richard H; Fick, Jerker
2009-01-16
Although polar organic contaminants (POCs) such as pharmaceuticals are considered among today's most pressing emerging contaminants, few of them are regulated or included in ongoing monitoring programs. However, the growing concern among the public and researchers, together with the new legislation within the European Union, the Registration, Evaluation and Authorisation of Chemicals (REACH) system, will increase the future need for simple, low-cost strategies for monitoring and risk assessment of POCs in aquatic environments. In this article, we review the advantages and shortcomings of traditional and novel sampling techniques available for monitoring emerging POCs in water. The benefits and drawbacks of using active and biological sampling are discussed and the principles of organic passive samplers (PS) presented. A detailed overview is given of the types of polar organic PS available, their classes of target compounds and fields of application, and the considerations involved in using them, such as environmental effects and quality control. The usefulness of biological sampling of POCs in water was found to be limited. Polar organic PS were considered to be the only available, but nevertheless efficient, alternative to active water sampling due to their simplicity, low cost, lack of need for a power supply or maintenance, and ability to collect time-integrative samples with a single sample collection. However, polar organic PS need to be further developed before they can be used as standard in water quality monitoring programs.
Benz-de Bretagne, I; Le Guellec, C; Halimi, J M; Gatault, P; Barbet, C; Alnajjar, A; Büchler, M; Lebranchu, Y; Andres, Christian Robert; Vourcʼh, P; Blasco, H
2012-06-01
Glomerular filtration rate (GFR) measurement is a major issue for clinicians managing kidney transplant recipients. GFR can be determined by estimating the plasma clearance of iohexol, a nonradiolabeled compound. For practical and convenient application for patients and caregivers, it is important that a minimal number of samples be drawn. The aim of this study was to develop and validate a Bayesian model with fewer samples for reliable prediction of GFR in kidney transplant recipients. Iohexol plasma concentration-time curves from 95 patients were divided into an index (n = 63) and a validation set (n = 32). Samples (n = 4-6 per patient) were obtained during the elimination phase, that is, between 120 and 270 minutes. Individual reference values of iohexol clearance (CL(iohexol)) were calculated from k (elimination slope) and V (volume of distribution from intercept). Individual CL(iohexol) values were then introduced into the Bröchner-Mortensen equation to obtain the GFR (reference value). A population pharmacokinetic model was developed from the index set and validated using standard methods. For the validation set, we tested various combinations of 1, 2, or 3 sampling times to estimate CL(iohexol). For each combination tested, a maximum a posteriori Bayesian estimation of CL(iohexol) was obtained from population parameters. Individual estimates of GFR were compared with individual reference values through analysis of bias and precision. A capability analysis allowed us to determine the best sampling strategy for Bayesian estimation. A 1-compartment model best described our data. Covariate analysis showed that uremia, serum creatinine, and age were significantly associated with k(e), and weight with V. The strategy including samples drawn at 120 and 270 minutes allowed accurate prediction of GFR (mean bias: -3.71%, mean imprecision: 7.77%). With this strategy, about 20% of individual predictions were outside the bounds of acceptance set at ± 10
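The slope-intercept clearance calculation underlying the two-sample reference method can be sketched as follows. The dose, concentrations, and the use of the commonly quoted adult Bröchner-Mortensen coefficients are illustrative assumptions, not values from the study:

```python
import math

def one_pool_clearance(dose, t1, c1, t2, c2):
    """Slope-intercept (one-compartment) iohexol clearance from two
    late samples: k from the log-linear slope, V from the
    back-extrapolated intercept, CL = k * V."""
    k = (math.log(c1) - math.log(c2)) / (t2 - t1)
    c0 = c1 * math.exp(k * t1)   # concentration extrapolated to t = 0
    v = dose / c0                # apparent volume of distribution
    return k * v

# Hypothetical 120 and 270 min samples (dose in mg, conc in mg/L,
# times in min, so clearance comes out in L/min).
cl1 = one_pool_clearance(dose=3235.0, t1=120.0, c1=60.0, t2=270.0, c2=30.0)
cl1_ml = cl1 * 1000.0  # mL/min

# Adult Brochner-Mortensen correction for the one-pool overestimate
# (coefficients as commonly quoted; an assumption here).
gfr = 0.990778 * cl1_ml - 0.001218 * cl1_ml ** 2
print(round(gfr, 1))
```

The study's contribution is to replace the individually fitted slope and intercept with a maximum a posteriori Bayesian estimate from a population model, so that even one or two samples suffice.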
Yi, Amelia Lee Zhi; Dercon, Gerd
2017-01-01
In the event of a severe nuclear or radiological accident, the release of radionuclides results in contamination of land surfaces, affecting agricultural and food resources. Rapid accumulation of information and guidance on decision making is essential to enhance the ability of stakeholders to strategize immediate countermeasures. Support tools such as decision trees and sampling protocols allow for swift response by governmental bodies and assist in proper management of the situation. While such tools exist, they focus mainly on protecting public well-being rather than on food safety management strategies. Consideration of the latter is necessary, as it has long-term implications, especially for agriculturally dependent Member States. However, it is a research gap that remains to be filled.
A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste
Bilodeau, M.; Lastra, R.; Bouzoubaa, N. [Natural Resources Canada, Ottawa, ON (Canada); Chapman, M. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)
2011-07-01
Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while
Oxidation potentials, Gibbs energies, enthalpies and entropies of actinide ions in aqueous solutions
1977-01-01
The values of the Gibbs energy, enthalpy, and entropy of different actinide ions, the thermodynamic characteristics of the hydration of these ions, and the presently known ionization potentials of the actinides are given. The enthalpy and entropy components of the oxidation potentials of the actinide elements are considered. The curves of the dependence of the Gibbs energy of ion formation on the atomic number of the element and the Frost diagrams are analyzed. The diagram proposed by Frost represents the graphical dependence of the Gibbs energy of hydrated ions on the degree of oxidation of the element. Using the Frost diagram it is easy to establish whether a given ion is stable against disproportionation.
Modeling adsorption of cationic surfactants at air/water interface without using the Gibbs equation.
Phan, Chi M; Le, Thu N; Nguyen, Cuong V; Yusa, Shin-ichi
2013-04-16
The Gibbs adsorption equation has been indispensable in predicting the surfactant adsorption at the interfaces, with many applications in industrial and natural processes. This study uses a new theoretical framework to model surfactant adsorption at the air/water interface without the Gibbs equation. The model was applied to two surfactants, C14TAB and C16TAB, to determine the maximum surface excesses. The obtained values demonstrated a fundamental change, which was verified by simulations, in the molecular arrangement at the interface. The new insights, in combination with recent discoveries in the field, expose the limitations of applying the Gibbs adsorption equation to cationic surfactants at the air/water interface.
Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.
1995-01-01
Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) must face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on the ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating the gridded TCO2 fields needed to constrain geochemical models.
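The redundancy test described can be mimicked in a few lines: subsample a dense profile, interpolate back, and check the worst-case error against the +/- 4 micromol/kg accuracy. The profile below is synthetic, not WOCE data:

```python
import numpy as np

# Synthetic deep-ocean TCO2 profile (micromol/kg) on a dense depth grid (m).
depth = np.linspace(1000, 5000, 81)
tco2 = 2150 + 0.02 * (depth - 1000) - 1e-6 * (depth - 1000) ** 2

def max_interp_error(step):
    """Keep every `step`-th sample, linearly interpolate back, return the worst error."""
    kept = slice(None, None, step)
    est = np.interp(depth, depth[kept], tco2[kept])
    return float(np.abs(est - tco2).max())

# A line is oversampled wherever coarsening still stays within the
# experimental accuracy of 4 micromol/kg.
```

For smooth deep-ocean profiles like this one, even a tenfold coarsening stays far inside the accuracy bound, which is the paper's sense of "oversampled".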
Jomaa, Seifeddine; Jiang, Sanyuan; Yang, Xiaoqiang; Rode, Michael
2016-04-01
It is known that good evaluation and prediction of surface water pollution is mainly limited by the monitoring strategy and by the capability of the hydrological water quality model to reproduce the internal processes. To this end, a compromise sampling frequency, which can reflect the dynamical behaviour of leached nutrient fluxes responding to changes in land use, agricultural practices and point sources, and an appropriate process-based water quality model are required. The objective of this study was to test the identification of hydrological water quality model parameters (nitrogen and phosphorus) under two different monitoring strategies: (1) a regular grab-sampling approach and (2) regular grab-sampling with additional monitoring during hydrological events using automatic samplers. First, the semi-distributed hydrological water quality model HYPE (Hydrological Predictions for the Environment) was successfully calibrated for discharge (NSE = 0.86), nitrate-N (lowest NSE for nitrate-N load = 0.69), particulate phosphorus and soluble phosphorus in the Selke catchment (463 km2, central Germany) for the period 1994-1998 using the regular grab-sampling approach (biweekly to monthly for nitrogen and phosphorus concentrations). Second, the model was successfully validated for the period 1999-2010 for discharge, nitrate-N, particulate phosphorus and soluble phosphorus (lowest NSE for soluble phosphorus load = 0.54). Results showed that with the random grab-sampling approach (period 2011-2013), the hydrological model could reproduce only the nitrate-N and soluble phosphorus concentrations reasonably well. However, when additional sampling during the hydrological events was considered, the HYPE model could not represent the measured particulate phosphorus. This reflects the importance of suspended sediment during the hydrological events in increasing the concentrations of particulate phosphorus. The HYPE model could
Chemical Disequilibria and Sources of Gibbs Free Energy Inside Enceladus
Zolotov, M. Y.
2010-12-01
Non-photosynthetic organisms use chemical disequilibria in the environment to gain metabolic energy from enzyme-catalyzed oxidation-reduction (redox) reactions. The presence of carbon dioxide, ammonia, formaldehyde, methanol, methane and other hydrocarbons in the eruptive plume of Enceladus [1] implies diverse redox disequilibria in the interior. In the history of the moon, redox disequilibria could have been activated through melting of a volatile-rich ice and subsequent water-rock-organic interactions. Previous and/or present aqueous processes are consistent with the detection of NaCl and Na2CO3/NaHCO3-bearing grains emitted from Enceladus [2]. A low K/Na ratio in the grains [2] and a low upper limit for N2 in the plume [3] indicate low-temperature aqueous environments, in which sluggish redox reactions could have been catalyzed by enzymes if organisms were (are) present. The redox conditions in aqueous systems and the amounts of available Gibbs free energy should have been affected by the production, consumption and escape of hydrogen. Aqueous oxidation of minerals (Fe-Ni metal, Fe-Ni phosphides, etc.) accreted on Enceladus should have led to H2 production, which is consistent with H2 detection in the plume [1]. Numerical evaluations based on concentrations of plume gases [1] reveal sufficient energy sources available to support metabolically diverse life at a wide range of activities (a) of dissolved H2 (log aH2 from 0 to -10). Formaldehyde, carbon dioxide [c.f. 4], HCN (if it is present), methanol, acetylene and other hydrocarbons have the potential to react with H2 to form methane. Aqueous hydrogenations of acetylene, HCN and formaldehyde to produce methanol are energetically favorable as well. Both favorable hydrogenation and hydration of HCN lead to the formation of ammonia. Condensed organic species could also participate in redox reactions. Methane and ammonia are the final products of these putative redox transformations. Sulfates may not have formed in cold and/or short-term aqueous environments with limited H2 escape. In contrast to
Compressed sensing of roller bearing fault based on multiple down-sampling strategy
Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang
2016-01-01
Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is firstly reduced by a down-sample strategy while preserving the fault features by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly-used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can enhance the signal and further reduce the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on an orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sample algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
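The recovery step can be illustrated with a bare-bones orthogonal matching pursuit on a synthetic sparse signal. This is a generic OMP sketch under a random measurement matrix, not the authors' pipeline (their peak-preserving down-sampling stage is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Greedily pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        cols = A[:, support]
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        residual = y - cols @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# A sparse spectrum (two fault lines) observed through a random projection:
# 64 compressed measurements of a length-256 signal.
n, m, k = 256, 64, 2
x_true = np.zeros(n)
x_true[[17, 100]] = [1.5, -2.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)
```

In the noiseless 2-sparse case the support is recovered exactly, which is the property the abstract relies on for detecting fault features from few samples.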
Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.
Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping
2016-01-01
Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin-layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin-layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more significant at smaller thicknesses. We proposed a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin-layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.
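For reference, the bulk (thick-sample) conversion that SWE devices apply, and that becomes biased in thin layers, is the standard relation E = 3 rho c^2 under the usual incompressibility assumption. The abstract does not reproduce the authors' empirical thin-layer correction formula, so only the bulk conversion is shown:

```python
def youngs_modulus_bulk(c_shear_m_s, rho_kg_m3=1000.0):
    """Bulk-medium SWE conversion: E = 3 * rho * c^2 (incompressible, isotropic).
    Valid only when the sample is thick relative to the shear wavelength;
    in thin layers the measured wave is a guided (Lamb-type) wave and
    this formula underestimates the true modulus."""
    return 3.0 * rho_kg_m3 * c_shear_m_s ** 2

# A 2 m/s shear wave in a tissue-like phantom maps to 12 kPa.
e_demo = youngs_modulus_bulk(2.0)
```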
GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.
Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain
2015-01-01
Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.
Determination of Gibbs energies of formation in aqueous solution using chemical engineering tools.
Toure, Oumar; Dussap, Claude-Gilles
2016-08-01
Standard Gibbs energies of formation are of primary importance in the field of biothermodynamics. In the absence of directly measured values, thermodynamic calculations are required to determine the missing data. For several biochemical species, this study shows that knowledge of the standard Gibbs energy of formation of the pure compounds (in the gaseous, solid or liquid states) makes it possible to determine the corresponding standard Gibbs energies of formation in aqueous solutions. To do so, using chemical engineering tools (thermodynamic tables and a model that predicts activity coefficients, solvation Gibbs energies and pKa data), it becomes possible to determine the partial chemical potential of neutral and charged components in real metabolic conditions, even in concentrated mixtures.
Extension of Gibbs-Duhem equation including influences of external fields
Guangze, Han; Jianjia, Meng
2018-03-01
The Gibbs-Duhem equation is one of the fundamental equations of thermodynamics, describing the relation among changes in temperature, pressure and chemical potential. A thermodynamic system can be affected by external fields, and this effect should be reflected in the thermodynamic equations. Based on the energy postulate and the first law of thermodynamics, the differential equation for internal energy is extended to include the properties of external fields. Then, with the homogeneous function theorem and a redefinition of the Gibbs energy, a generalized Gibbs-Duhem equation including the influences of external fields is derived. As a demonstration of the application of this generalized equation, the influences of temperature and external electric field on surface tension, surface adsorption controlled by an external electric field, and the derivation of a generalized chemical potential expression are discussed. These examples show that the extended Gibbs-Duhem equation developed in this paper is capable of capturing the influences of external fields on a thermodynamic system.
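For orientation, the classical relation being extended is the zero-field Gibbs-Duhem equation. The second line below is a schematic sketch of the kind of extension described, obtained by adding a field work term $Y\,\mathrm{d}X$ to the fundamental relation and applying Euler's theorem; here $Y$ is the field intensity and $X$ its conjugate extensive quantity (e.g. total dipole moment for an electric field). The paper's exact form may differ:

```latex
% Classical Gibbs-Duhem relation (no external field):
S\,\mathrm{d}T \;-\; V\,\mathrm{d}p \;+\; \sum_i N_i\,\mathrm{d}\mu_i \;=\; 0
% Schematic extension when dU gains a field work term Y dX,
% so that U = TS - pV + \sum_i \mu_i N_i + YX by Euler's theorem:
S\,\mathrm{d}T \;-\; V\,\mathrm{d}p \;+\; \sum_i N_i\,\mathrm{d}\mu_i \;+\; X\,\mathrm{d}Y \;=\; 0
```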
Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples
Simon, Stéphanie; Fiebig, Uwe; Liu, Yvonne; Tierney, Rob; Dano, Julie; Worbs, Sylvia; Endermann, Tanja; Nevers, Marie-Claire; Volland, Hervé; Sesardic, Dorothea; Dorner, Martin B
2015-11-01
Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A-G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as "category A" bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. To analyze the samples qualitatively and quantitatively, the first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype-specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.
A Cost-Constrained Sampling Strategy in Support of LAI Product Validation in Mountainous Areas
Gaofei Yin
2016-08-01
Increasing attention is being paid to leaf area index (LAI) retrieval in mountainous areas. Mountainous areas present extreme topographic variability, and are characterized by more spatial heterogeneity and inaccessibility compared with flat terrain. It is difficult to collect representative ground-truth measurements, and the validation of LAI in mountainous areas is still problematic. A cost-constrained sampling strategy (CSS) in support of LAI validation is presented in this study. To account for the influence of rugged terrain on implementation cost, a cost-objective function was incorporated into the traditional conditioned Latin hypercube (CLH) sampling strategy. A case study in Hailuogou, Sichuan province, China was used to assess the efficiency of CSS. Normalized difference vegetation index (NDVI), land cover type, and slope were selected as auxiliary variables to represent the variability of LAI in the study area. Results show that CSS can satisfactorily capture the variability across the site extent while minimizing field efforts. One appealing feature of CSS is that the compromise between representativeness and implementation cost can be regulated according to actual surface heterogeneity and budget constraints, which makes CSS flexible. Although the proposed method was only validated for the auxiliary variables rather than for LAI measurements, it serves as a starting point for establishing the locations of field plots and facilitates the preparation of field campaigns in mountainous areas.
A systematic examination of a random sampling strategy for source apportionment calculations.
Andersson, August
2011-12-15
Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation in which source variability is also included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions.
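The bias mechanism is easy to reproduce: propagate the source-profile variability through the algebraic two-source solution by random sampling. The marker values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-source mixing of a single marker (e.g. an isotopic signature):
#   c_mix = f*s1 + (1-f)*s2   =>   f = (c_mix - s2) / (s1 - s2)
c_mix = -25.0
s1 = rng.normal(-28.0, 1.0, 100_000)   # source-1 signature, with variability
s2 = rng.normal(-20.0, 1.0, 100_000)   # source-2 signature, with variability
f = (c_mix - s2) / (s1 - s2)
f = f[(f >= 0.0) & (f <= 1.0)]         # keep physically meaningful fractions

f_algebraic = (c_mix - (-20.0)) / (-28.0 - (-20.0))  # 0.625, ignores variability
f_mean = f.mean()   # RS estimate: generally shifted relative to f_algebraic
```

Because f is a nonlinear (ratio) function of the source signatures, its distribution under source variability is skewed, so the RS mean and median need not coincide with the plug-in algebraic value; this is the statistical bias the abstract describes.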
The MaxEnt extension of a quantum Gibbs family, convex geometry and geodesics
Weis, Stephan
2015-01-01
We discuss methods to analyze a quantum Gibbs family in the ultra-cold regime, where the norm closure of the Gibbs family fails due to discontinuities of the maximum-entropy inference. The current discussion of maximum-entropy inference and irreducible correlation in the area of quantum phase transitions is a major motivation for this research. We extend a representation of the irreducible correlation from finite temperatures to absolute zero.
Uniqueness of Gibbs Measure for Models with Uncountable Set of Spin Values on a Cayley Tree
Eshkabilov, Yu. Kh.; Haydarov, F. H.; Rozikov, U. A.
2013-01-01
We consider models with nearest-neighbor interactions and with the set [0, 1] of spin values, on a Cayley tree of order k ≥ 1. It is known that the ‘splitting Gibbs measures’ of the model can be described by solutions of a nonlinear integral equation. For arbitrary k ≥ 2 we find a sufficient condition under which the integral equation has a unique solution; under this condition the corresponding model therefore has a unique splitting Gibbs measure.
Efficiency of alternative MCMC strategies illustrated using the reaction norm model
Shariati, Mohammad Mahdi; Sørensen, D.
2008-01-01
The Markov chain Monte Carlo (MCMC) strategy provides remarkable flexibility for fitting complex hierarchical models. However, when parameters are highly correlated in their posterior distributions and their number is large, a particular MCMC algorithm may perform poorly and the resulting...... in the low correlation scenario where SG was the best strategy. The two LH proposals could not compete with any of the Gibbs sampling algorithms. In this study it was not possible to find an MCMC strategy that performs optimally across the range of target distributions and across all possible values...
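The central issue, poor mixing of single-site Gibbs sampling when parameters are highly correlated in the posterior, can be demonstrated on a toy bivariate Gaussian target. This is a generic illustration, not the reaction norm model of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_bvn(rho, n=20_000):
    """Single-site Gibbs sampler for a bivariate N(0, 1) pair with correlation rho.
    Each full-conditional is Gaussian: x | y ~ N(rho*y, 1 - rho^2), and vice versa."""
    x = y = 0.0
    xs = np.empty(n)
    sd = np.sqrt(1.0 - rho ** 2)
    for i in range(n):
        x = rng.normal(rho * y, sd)   # update x | y
        y = rng.normal(rho * x, sd)   # update y | x
        xs[i] = x
    return xs

def lag1_autocorr(xs):
    xs = xs - xs.mean()
    return float((xs[:-1] * xs[1:]).sum() / (xs * xs).sum())

# The x-subchain is AR(1) with coefficient rho**2, so mixing degrades
# sharply as the target correlation approaches 1.
a_lo = lag1_autocorr(gibbs_bvn(0.10))   # near-independent draws
a_hi = lag1_autocorr(gibbs_bvn(0.95))   # sticky chain, autocorrelation near 0.9
```

Blocked or joint proposals (as in the single-site vs. blocked Gibbs comparisons of the study) are the usual remedy for the high-correlation regime.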
Sadler, Georgia Robins; Lee, Hau-Chen; Seung-Hwan Lim, Rod; Fullerton, Judith
2011-01-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Nay, S. M.; D'Amore, D. V.
2009-12-01
The coastal temperate rainforest (CTR) along the northwest coast of North America is a large and complex mosaic of forests and wetlands located on an undulating terrain ranging from sea level to thousands of meters in elevation. This biome stores a dynamic portion of the total carbon stock of North America. The fate of the terrestrial carbon stock is of concern due to the potential for mobilization and export of this store to both the atmosphere, as carbon respiration flux, and the ocean, as dissolved organic and inorganic carbon flux. Soil respiration is the largest export vector in the system and must be accurately measured to gain any comprehensive understanding of how carbon moves through this system. Suitable monitoring tools capable of measuring carbon fluxes at small spatial scales are essential for our understanding of carbon dynamics at larger spatial scales within this complex assemblage of ecosystems. We have adapted instrumentation and developed a sampling strategy for optimizing replication of soil respiration measurements to quantify differences among spatially complex landscape units of the CTR. We start with the design of the instrument to ease the technological, ergonomic and financial barriers that technicians encounter in monitoring the efflux of CO2 from the soil. Our sampling strategy optimizes the physical efforts of the field work and manages for the high variation of flux measurements encountered in this difficult environment of rough terrain, dense vegetation and wet climate. Our soil respirometer incorporates an infra-red gas analyzer (LiCor Inc. LI-820) and an 8300 cm3 soil respiration chamber; the device is durable, lightweight, easy to operate and can be built for under $5000 per unit. The modest unit price allows for a multiple-unit fleet to be deployed and operated in an intensive field monitoring campaign. We use a large 346 cm2 collar to accommodate as much micro-spatial variation as feasible and to facilitate repeated measures for tracking
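As a concrete anchor for the instrument description, the closed-chamber efflux calculation such a respirometer performs is the standard ideal-gas conversion from the CO2 concentration ramp. The chamber volume and collar area below come from the text; the pressure and temperature are assumed field conditions:

```python
def soil_co2_flux(dcdt_ppm_s, vol_cm3=8300.0, area_cm2=346.0,
                  pressure_pa=101_325.0, temp_k=288.15):
    """Closed-chamber soil CO2 efflux (micromol m^-2 s^-1) from the
    concentration ramp dC/dt observed after the chamber is sealed:
        F = (dC/dt) * V * P / (R * T * A)."""
    R = 8.314                      # J mol^-1 K^-1
    v_m3 = vol_cm3 * 1e-6
    a_m2 = area_cm2 * 1e-4
    # ppm/s -> mole-fraction/s, then mol m^-2 s^-1 -> micromol m^-2 s^-1
    return dcdt_ppm_s * 1e-6 * v_m3 * pressure_pa / (R * temp_k * a_m2) * 1e6

# A 1 ppm/s ramp in this chamber corresponds to roughly 10 micromol m^-2 s^-1.
flux_demo = soil_co2_flux(1.0)
```

The small V/A ratio of this chamber keeps the required enclosure time short, which matters when many collars must be visited per day.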
R Drew Carleton
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with "pre-sampling" data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n ∼ 100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n ∼ 25-40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods.
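The pre-sampling simulation idea above can be sketched in a few lines: draw clumped per-tree counts from a gamma-Poisson (negative binomial) mixture to mimic an aggregated pest such as P. tumifex, then check how fast the sample mean converges for preset sample sizes. All parameter values and helper names here are illustrative assumptions, not the authors' software.

```python
import math
import random

def neg_binomial_counts(n, mean, k, rng):
    """Clumped per-tree counts via a gamma-Poisson mixture (negative binomial)."""
    counts = []
    for _ in range(n):
        lam = rng.gammavariate(k, mean / k)   # gamma-distributed local intensity
        # Poisson draw via Knuth's multiplication method (fine for small lam)
        limit, prod, x = math.exp(-lam), 1.0, 0
        while True:
            prod *= rng.random()
            if prod <= limit:
                break
            x += 1
        counts.append(x)
    return counts

def sample_mean(population, n, rng):
    return sum(rng.sample(population, n)) / n

rng = random.Random(42)
stand = neg_binomial_counts(500, mean=3.0, k=0.8, rng=rng)  # one simulated stand
true_mean = sum(stand) / len(stand)

# Mean absolute error of the sample mean for increasing preset sample sizes
errors = {}
for n in (5, 25, 100):
    trials = [abs(sample_mean(stand, n, rng) - true_mean) for _ in range(400)]
    errors[n] = sum(trials) / len(trials)
```

With this setup the error shrinks roughly as 1/√n, which is the behaviour behind belt-transect means converging by n ∼ 25-40 trees.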
Antipas, A.; Hopkins, A.M.; Wasemiller, M.A.; McCain, R.G.
1996-01-01
This document provides a sampling/analytical strategy and methodology for Resource Conservation and Recovery Act (RCRA) closure of the 183-H Solar Evaporation Basins within the boundaries and requirements identified in the initial Phase II Sampling and Analysis Plan for RCRA Closure of the 183-H Solar Evaporation Basins
Casas, Monica Escolà; Hansen, Martin; Krogh, Kristine A
2014-01-01
the available sample preparation strategies combined with liquid chromatographic (LC) analysis to determine antimalarials in whole blood, plasma and urine published over the last decade. Sample preparation can be done by protein precipitation, solid-phase extraction, liquid-liquid extraction or dilution. After...
A hybrid computational strategy to address WGS variant analysis in >5000 samples.
Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli
2016-09-10
The decreasing costs of sequencing are driving the need for cost-effective and real-time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of the typical computing resources available to most research labs, and other infrastructures, such as the AWS cloud environment and supercomputers, have limitations of their own: infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling altogether. We present a high-throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of the AWS cloud, supercomputers and local high-performance computing infrastructures. We present a novel binning approach for large-scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis on the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset, in which joint calling, imputation and phasing of over 5300 whole-genome samples was produced in under 6 weeks using four state-of-the-art callers: SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4000-core in-house cluster at Baylor College of Medicine, the IBM PowerPC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and the ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and transferred only a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low
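The binning approach can be illustrated with a toy sketch: partition chromosomes into fixed-size region bins that can be joint-called independently, then distribute the bins across heterogeneous compute pools. The bin size, pool names and round-robin assignment below are illustrative assumptions, not the CHARGE pipeline's actual configuration.

```python
# GRCh37 lengths for two chromosomes; bin size and pool names are illustrative.
CHROM_LENGTHS = {"chr20": 63_025_520, "chr21": 48_129_895}
BIN_SIZE = 5_000_000

def make_bins(chrom_lengths, bin_size):
    """Split each chromosome into half-open (chrom, start, end) region bins."""
    bins = []
    for chrom, length in chrom_lengths.items():
        start = 0
        while start < length:
            end = min(start + bin_size, length)
            bins.append((chrom, start, end))
            start = end
    return bins

def assign_round_robin(bins, pools):
    """Spread bins across compute pools so each pool joint-calls its own bins."""
    plan = {pool: [] for pool in pools}
    for i, region in enumerate(bins):
        plan[pools[i % len(pools)]].append(region)
    return plan

bins = make_bins(CHROM_LENGTHS, BIN_SIZE)
plan = assign_round_robin(bins, ["aws", "supercomputer", "local_cluster"])
```

Because each bin is processed independently, the per-bin callsets can later be concatenated, which is what makes a hybrid cloud/supercomputer split workable.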
Analytical strategies for uranium determination in natural water and industrial effluents samples
Santos, Juracir Silva
2011-01-01
The work was developed under project 993/2007 - 'Development of analytical strategies for uranium determination in environmental and industrial samples - Environmental monitoring in the Caetite city, Bahia, Brazil' and made possible through a partnership established between Universidade Federal da Bahia and the Comissao Nacional de Energia Nuclear. Strategies were developed for uranium determination in natural water and in effluents from a uranium mine. The first was a critical evaluation of the determination of uranium by inductively coupled plasma optical emission spectrometry (ICP OES), performed using factorial and Doehlert designs involving the factors acid concentration, radio-frequency power and nebuliser gas flow rate. Five emission lines were studied simultaneously (namely 367.007, 385.464, 385.957, 386.592 and 409.013 nm), in the presence of HNO3, CH3COOH or HCl. The determinations in HNO3 medium were the most sensitive. Among the factors studied, the gas flow rate was the most significant for all five emission lines. Calcium caused interference in the emission intensity for some lines, whereas iron did not interfere (at least up to 10 mg L-1) in the five lines studied. The presence of 13 other elements did not affect the emission intensity of uranium for the lines chosen. The optimized method, using the line at 385.957 nm, allows the determination of uranium with a limit of quantification of 30 μg L-1 and precision, expressed as RSD, lower than 2.2% for uranium concentrations of either 500 or 1000 μg L-1. In the second strategy, a highly sensitive flow-based procedure for uranium determination in natural waters is described. A 100-cm optical path flow cell based on a liquid-core waveguide (LCW) was exploited to increase the sensitivity of the arsenazo III method, aiming to achieve the limits established by environmental regulations. The flow system was designed with solenoid micro-pumps in order to improve mixing and minimize reagent consumption, as well as
Sampling strategies for tropical forest nutrient cycling studies: a case study in São Paulo, Brazil
G. Sparovek
1997-12-01
The precise sampling of soil, biological or microclimatic attributes in tropical forests, which are characterized by a high diversity of species and complex spatial variability, is a difficult task. We found few basic studies to guide sampling procedures. The objective of this study was to define a sampling strategy and data analysis for some parameters frequently used in nutrient cycling studies, i.e., litter amount, total nutrient amounts in litter and its composition (Ca, Mg, K, N and P), and soil attributes at three depths (organic matter, P content, cation exchange capacity and base saturation). A natural remnant forest in the West of São Paulo State (Brazil) was selected as study area and samples were collected in July, 1989. The total amount of litter and its total nutrient amounts had a high spatially independent variance. Conversely, the variance of litter composition was lower and the spatial dependency was peculiar to each nutrient. The sampling strategy for the estimation of litter amounts and the amount of nutrients in litter should differ from the sampling strategy for nutrient composition. For the estimation of litter amounts and the amount of nutrients in litter (related to quantity) a large number of randomly distributed determinations are needed. Otherwise, for the estimation of litter nutrient composition (related to quality) a smaller number of spatially located samples should be analyzed. The sampling design for soil attributes differed with depth. Overall, surface samples (0-5 cm) showed high short-distance spatially dependent variance, whereas subsurface samples exhibited spatial dependency over longer distances. Short transects with a sampling interval of 5-10 m are recommended for surface sampling. Subsurface samples must also be spatially located, but with transects or grids with longer distances between sampling points over the entire area. Composite soil samples would not provide a complete
Tuck, Adrian F
2017-09-07
There is no widely agreed definition of entropy, and consequently Gibbs energy, in open systems far from equilibrium. One recent approach has sought to formulate an entropy and Gibbs energy based on observed scale invariances in geophysical variables, particularly in atmospheric quantities, including the molecules constituting stratospheric chemistry. The Hamiltonian flux dynamics of energy in macroscopic open nonequilibrium systems maps to energy in equilibrium statistical thermodynamics, and the corresponding equivalences of scale invariant variables with other relevant statistical mechanical variables such as entropy, Gibbs energy, and 1/(k_B T) are not just formally analogous but are also mappings. Three proof-of-concept representative examples from available adequate stratospheric chemistry observations (temperature, wind speed and ozone) are calculated, with the aim of applying these mappings and equivalences. Potential applications of the approach to scale invariant observations from the literature, involving scales from molecular through laboratory to astronomical, are considered. Theoretical support for the approach from the literature is discussed.
R. Feistel
2005-01-01
The 2003 Gibbs thermodynamic potential function represents a very accurate, compact, consistent and comprehensive formulation of equilibrium properties of seawater. It is expressed in the International Temperature Scale ITS-90 and is fully consistent with the current scientific pure water standard, IAPWS-95. Source code examples in FORTRAN, C++ and Visual Basic are presented for the numerical implementation of the potential function and its partial derivatives, as well as for potential temperature. A collection of thermodynamic formulas and relations is given for possible applications in oceanography, from density and chemical potential over entropy and potential density to mixing heat and entropy production. For colligative properties like vapour pressure, freezing points, and for a Gibbs potential of sea ice, the equations relating the Gibbs function of seawater to those of vapour and ice are presented.
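As a minimal illustration of how such a potential function is used, the sketch below differentiates a toy polynomial Gibbs function g(T, p) to obtain entropy s = -(∂g/∂T)_p and specific volume v = (∂g/∂p)_T by central finite differences. The coefficients are arbitrary placeholders, not the 2003 seawater coefficients, and Python is used here in place of the paper's FORTRAN/C++/Visual Basic.

```python
# Toy polynomial Gibbs function g(T, p); coefficients are arbitrary
# placeholders, NOT the 2003 seawater formulation's coefficients.
A = (1.0e2, -3.9, -6.0e-3, 1.0e-4, 2.0e-9)

def g(T, p):
    a0, a1, a2, a3, a4 = A
    return a0 + a1 * T + a2 * T * T + a3 * p + a4 * p * p

def entropy(T, p, h=1e-4):
    """s = -(dg/dT)_p via a central finite difference."""
    return -(g(T + h, p) - g(T - h, p)) / (2 * h)

def specific_volume(T, p, h=1e-1):
    """v = (dg/dp)_T via a central finite difference."""
    return (g(T, p + h) - g(T, p - h)) / (2 * h)
```

For this quadratic toy the central differences are exact up to rounding: at T = 300, p = 1e5 the analytic values are s = -(a1 + 2·a2·T) = 7.5 and v = a3 + 2·a4·p = 5e-4.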
Numerical implementation and oceanographic application of the Gibbs potential of ice
R. Feistel
2005-01-01
The 2004 Gibbs thermodynamic potential function of naturally abundant water ice is based on much more experimental data than its predecessors, is therefore significantly more accurate and reliable, and for the first time describes the entire temperature and pressure range of existence of this ice phase. It is expressed in the ITS-90 temperature scale and is consistent with the current scientific pure water standard, IAPWS-95, and the 2003 Gibbs potential of seawater. The combination of these formulations provides sublimation pressures, freezing points, and sea ice properties covering the parameter ranges of oceanographic interest. This paper provides source code examples in Visual Basic, Fortran and C++ for the computation of the Gibbs function of ice and its partial derivatives. It reports the most important related thermodynamic equations for ice and sea ice properties.
Gibbs Energy Modeling of Digenite and Adjacent Solid-State Phases
Waldner, Peter
2017-08-01
All sulfur potential and phase diagram data available in the literature for solid-state equilibria related to digenite have been assessed. A thorough thermodynamic analysis at 1 bar total pressure has been performed. A three-sublattice approach has been developed to model the Gibbs energy of digenite as a function of composition and temperature using the compound energy formalism. The Gibbs energies of the adjacent solid-state phases covellite and high-temperature chalcocite are also modeled, treating both sulfides as stoichiometric compounds. The novel model for digenite offers a new interpretation of experimental data, may contribute from a thermodynamic point of view to the elucidation of the role of copper species within the crystal structure, and allows extrapolation to composition regimes richer in copper than stoichiometric digenite Cu2S. Preliminary predictions for the ternary Cu-Fe-S system at 1273 K (1000 °C), using the Gibbs energy model of digenite to calculate its iron solubility, are promising.
Existence and uniqueness of Gibbs states for a statistical mechanical polyacetylene model
Park, Y.M.
1987-01-01
One-dimensional polyacetylene is studied as a model of statistical mechanics. In a semiclassical approximation the system is equivalent to a quantum XY model interacting with unbounded classical spins on the one-dimensional lattice Z. By establishing uniform estimates, an infinite-volume-limit Hilbert space, a strongly continuous time evolution group of unitary operators, and an invariant vector are constructed. Moreover, it is proven that any infinite-volume-limit state satisfies the Gibbs conditions. Finally, a modification of Araki's relative entropy method is used to establish the uniqueness of the Gibbs states.
Determination of standard molar Gibbs energy of formation of Sm6UO12(s)
Sahu, Manjulata; Dash, Smruti
2015-01-01
The standard molar Gibbs energies of formation of Sm6UO12(s) have been measured using an oxygen concentration cell with yttria-stabilized zirconia as the solid electrolyte. ΔfG°m(T) for Sm6UO12(s) has been calculated using the measured values together with the required thermodynamic data from the literature. The calculated Gibbs energy expression in the temperature range 899 to 1127 K can be given as: ΔfG°m(Sm6UO12, s, T)/(kJ·mol-1) (±2.3) = -6681 + 1.099 (T/K), for 899 ≤ T/K ≤ 1127. (author)
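A minimal sketch of evaluating such a fitted expression, assuming the linear form ΔfG°m/(kJ·mol-1) = -6681 + 1.099 (T/K) over 899-1127 K; the range check mirrors the stated validity interval.

```python
def delta_f_G_Sm6UO12(T_K):
    """ΔfG°m(Sm6UO12, s) in kJ/mol; fitted range 899-1127 K, stated uncertainty ±2.3."""
    if not 899 <= T_K <= 1127:
        raise ValueError("T outside the fitted range 899-1127 K")
    # Linear fit as reported: intercept in kJ/mol, slope in kJ/(mol·K)
    return -6681.0 + 1.099 * T_K
```

The positive slope reflects the usual behaviour that |ΔfG°| of a solid oxide phase decreases with temperature.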
Gibbs free energy of formation of liquid lanthanide-bismuth alloys
Sheng Jiawei; Yamana, Hajimu; Moriyama, Hirotake
2001-01-01
The linear free energy relationship developed by Sverjensky and Molling provides a way to predict Gibbs free energies of formation of liquid Ln-Bi alloys from the known thermodynamic properties of the aqueous trivalent lanthanides (Ln3+). The Ln-Bi alloys are divided into two isostructural families, LnBi2 (Ln = La, Ce, Pr, Nd and Pm) and LnBi (Ln = Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm and Yb). The calculated Gibbs free energy values agree well with experimental data
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling with ratio reweighting, and random sampling with regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, the auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two reweighting methods, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of the stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
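The benefit of stratifying on a gene-flow auxiliary variable can be sketched as follows: a toy field is simulated in which the cross-pollination rate decays with distance to the GM field, a noisy stand-in plays the role of the gene-flow model output, and the RMSE of a stratified estimator is compared with simple random sampling. The exponential-decay form, noise level and all numbers are illustrative assumptions, not the paper's model.

```python
import math
import random

rng = random.Random(7)

# Toy field: the true cross-pollination rate decays with distance to the
# GM field edge; "aux" is a noisy stand-in for the gene-flow model output.
N = 2000
dist = [rng.uniform(0.0, 200.0) for _ in range(N)]
true_rate = [0.05 * math.exp(-d / 30.0) for d in dist]
aux = [r * rng.uniform(0.8, 1.2) for r in true_rate]
true_mean = sum(true_rate) / N

def srs_estimate(n):
    """Simple random sampling estimator of the field-mean transgene rate."""
    picks = rng.sample(range(N), n)
    return sum(true_rate[i] for i in picks) / n

def stratified_estimate(n, n_strata=4):
    """Stratify on the auxiliary variable, then sample equally per stratum."""
    order = sorted(range(N), key=lambda i: aux[i])
    size = N // n_strata                      # N chosen divisible by n_strata
    estimate = 0.0
    for s in range(n_strata):
        stratum = order[s * size:(s + 1) * size]
        picks = rng.sample(stratum, n // n_strata)
        estimate += (len(stratum) / N) * sum(true_rate[i] for i in picks) / len(picks)
    return estimate

def rmse(estimator, n=40, reps=300):
    sq = [(estimator(n) - true_mean) ** 2 for _ in range(reps)]
    return (sum(sq) / reps) ** 0.5
```

Because the auxiliary variable tracks the strongly skewed rate field, most of the variance is between strata, and the stratified estimator needs markedly fewer grains for the same precision.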
Solving Person Re-identification in Non-overlapping Camera using Efficient Gibbs Sampling
John, V.; Englebienne, G.; Krose, B.; Burghardt, T.; Damen, D.; Mayol-Cuevas, W.; Mirmehdi, M.
2013-01-01
This paper proposes a novel probabilistic approach for appearance-based person reidentification in non-overlapping camera networks. It accounts for varying illumination, varying camera gain and has low computational complexity. More specifically, we present a graphical model where we model the
Sampling strategies to improve passive optical remote sensing of river bathymetry
Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.
2018-01-01
Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth ($d_{max}$) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that $d_{max}$ was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of $d_{max}$ consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than $d_{max}$, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
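The progressive-truncation idea can be sketched with synthetic data: simulate two bands whose ln ratio increases with depth until optical saturation, then compare a regression calibrated on all depths against one calibrated only on depths below a cutoff. The attenuation coefficients, noise levels and cutoff are illustrative assumptions, not OBRA's actual calibration.

```python
import math
import random

rng = random.Random(3)

# Synthetic two-band reflectances: the bottom signal decays exponentially
# with depth and saturates against a deep-water background plus sensor noise.
def band_pair(depth):
    b1 = 0.08 * math.exp(-0.4 * depth) + 0.020 + rng.gauss(0.0, 0.001)
    b2 = 0.08 * math.exp(-1.1 * depth) + 0.004 + rng.gauss(0.0, 0.001)
    return b1, b2

def x_value(depth):
    b1, b2 = band_pair(depth)
    return math.log(max(b1, 1e-6) / max(b2, 1e-6))  # OBRA-style ln band ratio

depths = [rng.uniform(0.1, 6.0) for _ in range(400)]
pairs = [(x_value(d), d) for d in depths]

def ols(data):
    """Least-squares fit depth = a + b * x over (x, depth) pairs; returns (a, b)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data)
    sxx = sum((x - mx) ** 2 for x, _ in data)
    b = sxy / sxx
    return my - b * mx, b

def rmse(model, data):
    a, b = model
    return (sum((a + b * x - y) ** 2 for x, y in data) / len(data)) ** 0.5

cutoff = 3.0                                   # candidate d_max
shallow = [(x, d) for x, d in pairs if d <= cutoff]
full_model = ols(pairs)                        # calibrated on all depths
trunc_model = ols(shallow)                     # deeper observations excluded
```

Because the ln ratio flattens once the bottom signal vanishes, the truncated calibration predicts the sub-cutoff depths markedly better than the full one, which is the mechanism behind inferring d_max from the truncation series.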
Lee, Joseph G L; Shook-Sa, Bonnie E; Bowling, J Michael; Ribisl, Kurt M
2017-06-23
In the United States, tens of thousands of inspections of tobacco retailers are conducted each year. Various sampling choices can reduce travel costs, emphasize enforcement in areas with greater non-compliance, and allow for comparability between states and over time. We sought to develop a model sampling strategy for state tobacco retailer inspections. Using a 2014 list of 10,161 North Carolina tobacco retailers, we compared results from simple random sampling; stratified sampling clustered at the ZIP code level; and stratified sampling clustered at the census tract level. We conducted a simulation of repeated sampling and compared the approaches on their relative precision, coverage, and retailer dispersion. While maintaining an adequate design effect and statistical precision appropriate for a public health enforcement program, both the stratified, clustered ZIP- and tract-based approaches were feasible. Both ZIP and tract strategies yielded improvements over simple random sampling, with relative improvements, respectively, of average distance between retailers (reduced 5.0% and 1.9%), percent Black residents in sampled neighborhoods (increased 17.2% and 32.6%), percent Hispanic residents in sampled neighborhoods (reduced 2.2% and increased 18.3%), percentage of sampled retailers located near schools (increased 61.3% and 37.5%), and poverty rate in sampled neighborhoods (increased 14.0% and 38.2%). States can make retailer inspections more efficient and targeted with stratified, clustered sampling. Use of statistically appropriate sampling strategies like these should be considered by states, researchers, and the Food and Drug Administration to improve program impact and allow for comparisons over time and across states. The authors present a model tobacco retailer sampling strategy for promoting compliance and reducing costs that could be used by U.S. states and the Food and Drug Administration (FDA). The design is feasible to implement in North Carolina. Use of
Khosro Mehdi Khanlou
2011-01-01
Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants such as this, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess of replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 samples per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of the total intra-cultivar genetic variation was covered. Based on AMOVA, a sample of 20 per cultivar was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.
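The underestimation of He at small sample sizes can be reproduced with a toy simulation: the plug-in gene-diversity estimator is biased downward by a factor of roughly (1 - 1/n), so small samples understate diversity. The marker frequencies and haploid scoring below are illustrative assumptions, not the AFLP data.

```python
import random

rng = random.Random(11)

# Illustrative biallelic marker frequencies for one cultivar (not the AFLP data).
TRUE_P = (0.1, 0.25, 0.4, 0.5, 0.6, 0.75, 0.9)
TRUE_HE = sum(2 * p * (1 - p) for p in TRUE_P) / len(TRUE_P)

def plugin_he(n):
    """Plug-in gene diversity 2*p_hat*(1-p_hat), averaged over loci,
    from n haploid-scored samples of one cultivar."""
    total = 0.0
    for p in TRUE_P:
        hits = sum(1 for _ in range(n) if rng.random() < p)
        p_hat = hits / n
        total += 2 * p_hat * (1 - p_hat)
    return total / len(TRUE_P)

def mean_estimate(n, reps=2000):
    """Monte Carlo expectation of the plug-in He for sample size n."""
    return sum(plugin_he(n) for _ in range(reps)) / reps
```

Since E[2·p̂(1-p̂)] = 2p(1-p)(1 - 1/n), n = 5 understates He by about 20% while n = 40 is within a few percent, mirroring the observation that fewer than 15 samples greatly underestimates He.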
Rodek, L.; Knudsen, E.; Poulsen, H.F.
2005-01-01
discrete tomographic algorithm, applying image-modelling Gibbs priors and a homogeneity condition. The optimization of the objective function is accomplished via the Gibbs Sampler in conjunction with simulated annealing. In order to express the structure of the orientation map, the similarity...
Experimental Determination of Third Derivative of the Gibbs Free Energy, G II
Koga, Yoshikata; Westh, Peter; Inaba, Akira
2010-01-01
We have been evaluating third derivative quantities of the Gibbs free energy, G, by graphically differentiating the second derivatives that are accessible experimentally, and demonstrated their power in elucidating the mixing schemes in aqueous solutions. Here we determine directly one of the third...
Del' Pino, Kh.; Chukurov, P.M.; Drakin, S.I.
1980-01-01
The results of experimental determination of the enthalpies of formation of the anhydrous nitrates of lanthanum, cerium, praseodymium, neodymium and samarium are analyzed. Using the method of comparative calculation, the enthalpies of formation of the anhydrous lanthanide and yttrium nitrates are computed. The calculated values of the enthalpies and Gibbs energies of formation of the anhydrous lanthanide nitrates are tabulated
Experimental Pragmatics and What Is Said: A Response to Gibbs and Moise.
Nicolle, Steve; Clark, Billy
1999-01-01
Attempted replication of Gibbs and Moise (1997) experiments regarding the recognition of a distinction between what is said and what is implicated. Results showed that, under certain conditions, subjects selected implicatures when asked to select the paraphrase best reflecting what a speaker has said. Suggests that the results can be explained with the…
Uniqueness of Gibbs states and global Markov property for Euclidean fields
Albeverio, S.; Høegh-Krohn, R.
1981-01-01
The authors briefly discuss the proof of the uniqueness of solutions of the DLR equations (uniqueness of Gibbs states) in the class of regular generalized random fields (in the sense of having second moments bounded by those of some Euclidean field), for the Euclidean fields with trigonometric interaction. (Auth.)
Estimates of Gibbs free energies of formation of chlorinated aliphatic compounds
Dolfing, Jan; Janssen, Dick B.
1994-01-01
The Gibbs free energy of formation of chlorinated aliphatic compounds was estimated with Mavrovouniotis' group contribution method. The group contribution of chlorine was estimated from the scarce data available on chlorinated aliphatics in the literature, and found to vary somewhat according to the
The Gibbs Energy Basis and Construction of Boiling Point Diagrams in Binary Systems
Smith, Norman O.
2004-01-01
An illustration of how excess Gibbs energies of the components in binary systems can be used to construct boiling point diagrams is given. The underlying causes of the various types of behavior of the systems in terms of intermolecular forces and the method of calculating the coexisting liquid and vapor compositions in boiling point diagrams with…
Fast covariance estimation for innovations computed from a spatial Gibbs point process
Coeurjolly, Jean-Francois; Rubak, Ege
In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo...
Jo, Jungmin [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Cheon, Mun Seong [ITER Korea, National Fusion Research Institute, Daejeon (Korea, Republic of); Chung, Kyoung-Jae, E-mail: jkjlsh1@snu.ac.kr [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Hwang, Y.S. [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of)
2016-11-01
Highlights: • Sample design for triton burnup ratio measurement is carried out. • Samples for 14.1 MeV neutron measurements are selected for KSTAR. • Si and Cu are the most suitable materials for d-t neutron measurements. • Appropriate γ-ray counting strategies for each selected sample are established. - Abstract: For the purpose of triton burnup measurements in Korea Superconducting Tokamak Advanced Research (KSTAR) deuterium plasmas, appropriate neutron activation system (NAS) samples for 14.1 MeV d-t neutron measurements have been designed and a gamma-ray counting strategy has been established. Neutronics calculations are performed with the MCNP5 neutron transport code for the KSTAR neutral beam heated deuterium plasma discharges. Based on those calculations and the assumed d-t neutron yield, the activities induced by d-t neutrons are estimated with the inventory code FISPACT-2007 for candidate sample materials: Si, Cu, Al, Fe, Nb, Co, Ti, and Ni. It is found that Si, Cu, Al, and Fe are suitable for the KSTAR NAS in terms of the minimum detectable activity (MDA) calculated based on the standard deviation of blank measurements. Considering background gamma-rays radiated from surrounding structures activated by thermalized fusion neutrons, an appropriate gamma-ray counting strategy for each selected sample is established.
On P-Adic Quasi Gibbs Measures for Q + 1-State Potts Model on the Cayley Tree
Mukhamedov, Farrukh
2010-06-01
In the present paper we introduce a new class of p-adic measures, associated with the q+1-state Potts model, called p-adic quasi Gibbs measures, which are totally different from the p-adic Gibbs measure. We establish the existence of p-adic quasi Gibbs measures for the model on a Cayley tree. If q is divisible by p, then we prove the occurrence of a strong phase transition. If q and p are relatively prime, then there is a quasi phase transition. These results are totally different from the results of [F.M. Mukhamedov, U.A. Rozikov, Indag. Math. N.S. 15 (2005) 85-100], since q is divisible by p, which means that q + 1 is not divisible by p, so according to a main result of the mentioned paper, there is a unique and bounded p-adic Gibbs measure (different from the p-adic quasi Gibbs measure). (author)
Genderen, A.C.G. van; Weijden, C.H. van der
1984-01-01
For a group of minerals containing a common anion there exists a linear relationship between two parameters called ΔO and ΔF. ΔO is defined as the difference between the Gibbs energy of formation of a solid oxide and the Gibbs energy of formation of its aqueous cation, while ΔF is defined as the Gibbs energy of reaction of the formation of a mineral from the constituting oxide(s) and the acid. Using the Gibbs energies of formation of a number of known minerals, the corresponding ΔO's and ΔF's were calculated, and with the resulting regression equation it is possible to predict values for the Gibbs energies of formation of other minerals containing the same anion. This was done for 29 minerals containing the uranyl ion together with phosphate, vanadate, arsenate or carbonate. (orig.)
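The ΔO-ΔF regression step can be sketched as an ordinary least-squares fit; the (ΔO, ΔF) pairs below are invented for illustration and are not the paper's mineral data.

```python
# Hypothetical (ΔO, ΔF) pairs in kJ/mol for minerals sharing one anion;
# the numbers are invented for illustration, not the paper's data.
KNOWN = [(-850.0, -120.5), (-910.0, -131.0), (-760.0, -104.2),
         (-990.0, -145.9), (-700.0, -93.8)]

def fit_line(points):
    """Ordinary least squares: ΔF = intercept + slope * ΔO."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    slope = sxy / sxx
    return my - slope * mx, slope

INTERCEPT, SLOPE = fit_line(KNOWN)

def predict_delta_f(delta_o):
    """Predicted ΔF for a mineral with known ΔO; the mineral's ΔfG° then
    follows by adding the Gibbs energies of the constituent oxide(s) and acid."""
    return INTERCEPT + SLOPE * delta_o
```

A new mineral's ΔO, which needs only the oxide and aqueous-cation formation energies, thus yields a predicted ΔF and hence a predicted ΔfG° of the mineral itself.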
Sandnes, Frode Eika
2010-01-01
A simple and low-cost strategy for implementing pervasive objects that identify and track their own geographical location is proposed. The strategy, which is not reliant on any GIS infrastructure such as GPS, is realized using an electronic artifact with a built-in clock, a light sensor or low-cost digital camera, persistent storage such as flash, and sufficient computational circuitry to make elementary trigonometric computations. The object monitors the lighting conditions and thereby detec...
Van Kuilenburg, A.; Hausler, P.; Schalhorn, A.; Tanck, M.; Proost, J.H.; Terborg, C.; Behnke, D.; Schwabe, W.; Jabschinsky, K.; Maring, J.G.
Aims: Dihydropyrimidine dehydrogenase (DPD) is the initial enzyme in the catabolism of 5-fluorouracil (5FU) and DPD deficiency is an important pharmacogenetic syndrome. The main purpose of this study was to develop a limited sampling strategy to evaluate the pharmacokinetics of 5FU and to detect
Integrating the Theory of Sampling into Underground Mine Grade Control Strategies
Simon C. Dominy
2018-05-01
Grade control in underground mines aims to deliver quality tonnes to the process plant via the accurate definition of ore and waste. It comprises a decision-making process including data collection and interpretation; local estimation; development and mining supervision; ore and waste destination tracking; and stockpile management. The foundation of any grade control programme is that of high-quality samples collected in a geological context. The requirement for quality samples has long been recognised, where they should be representative and fit-for-purpose. Once a sampling error is introduced, it propagates through all subsequent processes contributing to data uncertainty, which leads to poor decisions and financial loss. Proper application of the Theory of Sampling reduces errors during sample collection, preparation, and assaying. To achieve quality, sampling techniques must minimise delimitation, extraction, and preparation errors. Underground sampling methods include linear (chip and channel, grab (broken rock, and drill-based samples. Grade control staff should be well-trained and motivated, and operating staff should understand the critical need for grade control. Sampling must always be undertaken with a strong focus on safety and alternatives sought if the risk to humans is high. A quality control/quality assurance programme must be implemented, particularly when samples contribute to a reserve estimate. This paper assesses grade control sampling with emphasis on underground gold operations and presents recommendations for optimal practice through the application of the Theory of Sampling.
Bonta, Maximilian; Török, Szilvia; Hegedus, Balazs; Döme, Balazs; Limbeck, Andreas
2017-03-01
Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) is one of the most commonly applied methods for lateral trace element distribution analysis in medical studies. Many improvements of the technique regarding quantification and achievable lateral resolution have been made in recent years. Nevertheless, sample preparation is also of major importance, and the optimal sample preparation strategy has still not been defined. While a number of sample pre-treatment strategies are established in conventional histology, little is known about the effect of these approaches on the lateral distributions of elements and/or their quantities in tissues. The technique of formalin fixation and paraffin embedding (FFPE) has emerged as the gold standard in tissue preparation. However, its potential use for elemental distribution studies is questionable due to the large number of sample preparation steps. In this work, LA-ICP-MS was used to examine the applicability of the FFPE sample preparation approach for elemental distribution studies. Qualitative elemental distributions as well as quantitative concentrations in cryo-cut tissues and in FFPE samples were compared. Results showed that some metals (especially Na and K) are severely affected by the FFPE process, whereas others (e.g., Mn, Ni) are less influenced. Based on these results, a general recommendation can be given: FFPE samples are completely unsuitable for the analysis of alkali metals. When analyzing transition metals, FFPE samples can give results comparable to snap-frozen tissues. Graphical abstract: Sample preparation strategies for biological tissues are compared with regard to elemental distributions and average trace element concentrations.
Peng-Cheng Yao
Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcodes of globally distributed species should be increased to 11-15.
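The stabilisation of average intraspecific distance with growing sample size can be checked with a simple resampling sketch. The synthetic trait values below stand in for barcode genetic distances; they are not the study's data.

```python
import random

def mean_pairwise_distance(sample):
    """Average pairwise distance within a sample of 1-D trait values
    (a stand-in for pairwise barcode genetic distances)."""
    n = len(sample)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(abs(sample[i] - sample[j]) for i, j in pairs) / len(pairs)

random.seed(0)
# Synthetic "species": 10,000 individuals with normally distributed traits.
population = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Run-to-run variability of the estimate for increasing sample sizes:
for n in (5, 11, 15, 50):
    estimates = [mean_pairwise_distance(random.sample(population, n))
                 for _ in range(200)]
    mean_est = sum(estimates) / len(estimates)
    spread = (sum((e - mean_est) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"n={n:3d}  mean distance={mean_est:.3f}  run-to-run SD={spread:.3f}")
```

The run-to-run spread shrinks markedly between n = 5 and n = 11-15, which is consistent with the paper's recommended sample sizes.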
Nischkauer, Winfried; Vanhaecke, Frank; Bernacchi, Sébastien; Herwig, Christoph; Limbeck, Andreas
2014-11-01
Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL- 1 with a reproducibility of 10 % relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, if future analytical tasks would require it. Trueness of the proposed method was investigated by cross-validation with
Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing
2017-01-01
Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). The results of random sampling showed that when sample size reached 11-15, for example 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcodes of globally distributed species should be increased to 11-15.
Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.
2017-01-01
Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. Spatial autocorrelation, the number of desired classes, and the form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between the improved speed of the sampling approaches and the loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
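The idea of computing class breaks on a sample and applying them to the full attribute vector can be sketched with a quantile classifier; quantiles stand in here for an optimal classifier such as Fisher-Jenks, which is an assumption, not the paper's specific method.

```python
import bisect
import random

def sample_quantile_breaks(values, k, sample_size, rng):
    """Compute k-class quantile break points from a random sample of the data."""
    sample = sorted(rng.sample(values, min(sample_size, len(values))))
    return [sample[int(len(sample) * q / k)] for q in range(1, k)]

def classify(value, breaks):
    """Assign a class index 0..k-1 given sorted break points."""
    return bisect.bisect_right(breaks, value)

rng = random.Random(42)
# Synthetic skewed attribute, typical of choropleth variables:
attribute = [rng.lognormvariate(0.0, 1.0) for _ in range(100_000)]

# Breaks from a 2% sample, applied to all 100,000 observations:
breaks = sample_quantile_breaks(attribute, k=5, sample_size=2_000, rng=rng)
classes = [classify(v, breaks) for v in attribute]
print("breaks:", [round(b, 3) for b in breaks])
print("class shares:", [round(classes.count(c) / len(classes), 3) for c in range(5)])
```

The class shares come out near 0.2 each, showing that sample-derived breaks approximate the full-data classification at a fraction of the cost.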
Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.
2013-12-01
Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used in an OSI scenario, on-site sampling conditions, required sampling volumes and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion (UNE) site located in welded volcanic tuff. A mixture of SF6, 127Xe and 37Ar was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. The effectiveness of the various sampling approaches and the results of tracer gas measurements will be presented.
Spatial scan statistics to assess sampling strategy of antimicrobial resistance monitoring programme
Vieira, Antonio; Houe, Hans; Wegener, Henrik Caspar
2009-01-01
The collection and analysis of data on antimicrobial resistance in human and animal populations are important for establishing a baseline of the occurrence of resistance and for determining trends over time. In animals, targeted monitoring with a stratified sampling plan is normally used. However...... sampled by the Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), by identifying spatial clusters of samples and detecting areas with significantly high or low sampling rates. These analyses were performed for each year and for the total 5-year study period for all...... by an antimicrobial monitoring program....
Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas
2016-09-01
Vegetation monitoring is becoming a major issue in the urban environment due to the services vegetation provides, and it necessitates accurate and up-to-date mapping. Very High Resolution satellite images enable a detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation, but they require a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratification is proposed. The efficiency of these strategies is evaluated on two medium-sized French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods, and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work make it possible to reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
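The stratified random sampling baseline can be sketched in a few lines: draw an equal number of training examples per class. The class names and counts below are invented for illustration.

```python
import random
from collections import defaultdict

def stratified_sample(items, labels, per_stratum, seed=0):
    """Draw the same number of training examples from each class (stratum)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in zip(items, labels):
        by_label[label].append(item)
    selection = []
    for label, members in sorted(by_label.items()):
        selection.extend(rng.sample(members, min(per_stratum, len(members))))
    return selection

# Hypothetical image windows labelled tree / herbaceous / other:
windows = list(range(600))
labels = ["tree"] * 200 + ["herbaceous"] * 200 + ["other"] * 200
training = stratified_sample(windows, labels, per_stratum=20)
print(len(training))  # 60 windows, 20 per class
```

An active learning strategy would replace the random draw inside each stratum with a selection of the windows the current classifier is least certain about.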
Nischkauer, Winfried [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria); Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Vanhaecke, Frank [Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Bernacchi, Sébastien; Herwig, Christoph [Institute of Chemical Engineering, Vienna University of Technology, Vienna (Austria); Limbeck, Andreas, E-mail: Andreas.Limbeck@tuwien.ac.at [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria)
2014-11-01
Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL−1 with a reproducibility of 10 % relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, if future analytical tasks would require it. Trueness of the proposed method was investigated by cross-validation with
Technical Note: Comparison of storage strategies of sea surface microlayer samples
K. Schneider-Zapp
2013-07-01
The sea surface microlayer (SML) is an important biogeochemical system whose physico-chemical analysis often necessitates some degree of sample storage. However, many SML components degrade with time, so the development of optimal storage protocols is paramount. We here briefly review some commonly used treatment and storage protocols. Using freshwater and saline SML samples from a river estuary, we investigated temporal changes in surfactant activity (SA) and the absorbance and fluorescence of chromophoric dissolved organic matter (CDOM) over four weeks, following selected sample treatment and storage protocols. Some variability in the effectiveness of individual protocols most likely reflects sample provenance. None of the various protocols examined performed any better than dark storage at 4 °C without pre-treatment. We therefore recommend storing samples refrigerated in the dark.
LC-MS analysis of the plasma metabolome–a novel sample preparation strategy
Skov, Kasper; Hadrup, Niels; Smedsgaard, Jørn
2015-01-01
Blood plasma is a well-known body fluid often analyzed in studies on the effects of toxic compounds, as physiological or chemically induced changes in the mammalian body are reflected in the plasma metabolome. Sample preparation prior to LC-MS based analysis of the plasma metabolome is a challenge...... as plasma contains compounds with very different properties. Besides proteins, which usually are precipitated with organic solvent, phospholipids are known to cause ion suppression in electrospray mass spectrometry. We have compared two different sample preparation techniques prior to LC-qTOF analysis...... of plasma samples: the first is protein precipitation; the second is protein precipitation followed by solid phase extraction with sub-fractionation into three sub-samples: a phospholipid, a lipid and a polar sub-fraction. Molecular feature extraction of the data files from LC-qTOF analysis of the samples...
Arnaud, M.; Charmasson, S.; Calmet, D.; Fernandez, J.M.
1992-01-01
This paper describes the methods used for water and sediment sampling in rivers and the sea. The purpose is the study of radionuclide migration (cesium-134, cesium-137) in the Mediterranean Sea (Gulf of Lion). 20 refs., 11 figs., 1 tab.
Groves, K E M; Sketris, I; Tett, S E
2003-08-01
Prescription drug samples, as used by the pharmaceutical industry to market their products, are of current interest because of their influence on prescribing, and their potential impact on consumer safety. Very little research has been conducted into the use and misuse of prescription drug samples, and the influence of samples on health policies designed to improve the rational use of medicines. This is a topical issue in the prescription drug debate, with increasing costs and increasing concerns about optimizing use of medicines. This manuscript critically evaluates the research that has been conducted to date about prescription drug samples, discusses the issues raised in the context of traditional marketing theory, and suggests possible alternatives for the future.
The new strategy for particle identification samples in Run 2 at LHCb
Mathad, Abhijit
2017-01-01
For Run 2 of LHCb data taking, the selection of PID calibration samples is implemented in the high level trigger. Further processing is needed to provide background-subtracted samples to determine the PID performance, or to develop new algorithms for the evaluation of the detector performance in upgrade scenarios. This is achieved through a centralised production which makes efficient use of LHCb computing resources. This poster presents the major steps of the implementation.
Marine anthropogenic radiotracers in the Southern Hemisphere: New sampling and analytical strategies
Levy, I.; Povinec, P.P.; Aoyama, M.
2011-01-01
The Japan Agency for Marine-Earth Science and Technology conducted the Blue Earth Global Expedition (BEAGLE2003) around the Southern Hemisphere oceans in 2003-2004, which was a rare opportunity to collect many seawater samples for anthropogenic radionuclide studies. We describe here sampling...... showed a reasonable agreement between the participating laboratories. The obtained data on the distribution of 137Cs and plutonium isotopes in seawater represent the most comprehensive results available for the Southern Hemisphere Oceans....
Jongenburger, I; Reij, M W; Boer, E P J; Gorris, L G M; Zwietering, M H
2011-11-15
The actual spatial distribution of microorganisms within a batch of food influences the results of sampling for microbiological testing when this distribution is non-homogeneous. In the case of pathogens being non-homogeneously distributed, it markedly influences public health risk. This study investigated the spatial distribution of Cronobacter spp. in powdered infant formula (PIF) on industrial batch scale for both a recalled batch as well as a reference batch. Additionally, the local spatial occurrence of clusters of Cronobacter cells was assessed, as well as the performance of typical sampling strategies to determine the presence of the microorganisms. The concentration of Cronobacter spp. was assessed over the course of the filling time of each batch, by taking samples of 333 g using the most probable number (MPN) enrichment technique. The occurrence of clusters of Cronobacter spp. cells was investigated by plate counting. From the recalled batch, 415 MPN samples were drawn. The expected heterogeneous distribution of Cronobacter spp. could be quantified from these samples, which showed no detectable level (detection limit of -2.52 log CFU/g) in 58% of samples, whilst in the remainder concentrations were found to be between -2.52 and 2.75 log CFU/g. The estimated average concentration in the recalled batch was -2.78 log CFU/g, with a standard deviation of 1.10 log CFU/g. The estimated average concentration in the reference batch was -4.41 log CFU/g, with 99% of the 93 samples being below the detection limit. In the recalled batch, clusters of cells occurred sporadically in 8 out of 2290 samples of 1 g taken. The two largest clusters contained 123 (2.09 log CFU/g) and 560 (2.75 log CFU/g) cells. Various sampling strategies were evaluated for the recalled batch. Taking more and smaller samples while keeping the total sampling weight constant considerably improved the performance of the sampling plans to detect such a contaminated batch. Compared to random sampling
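The finding that more, smaller samples at constant total weight detect a heterogeneously contaminated batch better can be reproduced with a toy Monte Carlo. The one-dimensional batch geometry and cluster size below are assumptions for illustration, not the study's batch data.

```python
import random

def detection_probability(n_samples, sample_fraction, contaminated_fraction,
                          trials=20_000, seed=1):
    """Probability that at least one of n_samples randomly placed samples,
    each covering sample_fraction of a (1-D) batch, overlaps a single
    contaminated region covering contaminated_fraction of the batch."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        start = rng.random() * (1.0 - contaminated_fraction)
        end = start + contaminated_fraction
        for _ in range(n_samples):
            pos = rng.random() * (1.0 - sample_fraction)
            # Two intervals overlap iff each starts before the other ends.
            if pos < end and pos + sample_fraction > start:
                hits += 1
                break
    return hits / trials

total = 0.02  # total sampled fraction of the batch is held constant
p_one = detection_probability(1, total, contaminated_fraction=0.01)
p_ten = detection_probability(10, total / 10, contaminated_fraction=0.01)
print(f"1 large sample: {p_one:.2f}   10 small samples: {p_ten:.2f}")
```

With the contamination confined to 1% of the batch, splitting the same sampling weight over ten locations multiplies the chance of intersecting the cluster, which matches the paper's conclusion.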
Evaluation of active sampling strategies for the determination of 1,3-butadiene in air
Vallecillos, Laura; Maceira, Alba; Marcé, Rosa Maria; Borrull, Francesc
2018-03-01
Two analytical methods for determining levels of 1,3-butadiene in urban and industrial atmospheres were evaluated in this study. Both methods are extensively used for determining the concentration of volatile organic compounds in the atmosphere and involve collecting samples by active adsorptive enrichment on solid sorbents. The first method uses activated charcoal as the sorbent and involves liquid desorption with carbon disulfide. The second involves the use of a multi-sorbent bed with two graphitised carbons and a carbon molecular sieve as the sorbent, with thermal desorption. Special attention was paid to the optimization of the sampling procedure through the study of sample volume, the stability of 1,3-butadiene once inside the sampling tube, and the humidity effect. In the end, the thermal desorption method showed better repeatability and limits of detection and quantification for 1,3-butadiene than the liquid desorption method, which makes the thermal desorption method more suitable for analysing air samples from both industrial and urban atmospheres. However, sampling must be performed with a pre-tube filled with a drying agent to prevent the loss of adsorption capacity of the solid adsorbent caused by water vapour. The thermal desorption method has successfully been applied to determine 1,3-butadiene inside a 1,3-butadiene production plant and at three locations in the vicinity of the same plant.
Gibbs free energy of formation of lanthanum rhodate by quadrupole mass spectrometer
Prasad, R.; Banerjee, Aparna; Venugopal, V.
2003-01-01
The ternary oxide in the system La-Rh-O is of considerable importance because of its application in catalysis. Phase equilibria in the pseudo-binary system La2O3-Rh2O3 have been investigated by Shevyakov et al. The Gibbs free energy of LaRhO3(s) was determined by Jacob et al. using a solid-state galvanic cell in the temperature range 890 to 1310 K. No other thermodynamic data were available in the literature. Hence it was decided to determine the Gibbs free energy of formation of LaRhO3(s) by an independent technique, viz. a quadrupole mass spectrometer (QMS) coupled with a Knudsen effusion cell, and the results are presented.
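In a Knudsen-effusion QMS measurement, the reaction Gibbs energy follows from the measured equilibrium partial pressures via the standard relation ΔrG° = −RT ln K. A generic sketch; the pressure and temperature below are invented, not the paper's measurements.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def delta_g_reaction(partial_pressures_bar, stoichiometry, T):
    """ΔrG° = -RT ln K, with K built from partial pressures in bar
    (so the standard state p° = 1 bar drops out) raised to their
    stoichiometric coefficients (products positive, reactants negative)."""
    ln_k = sum(nu * math.log(p) for p, nu in
               zip(partial_pressures_bar, stoichiometry))
    return -R * T * ln_k

# Hypothetical example: a single gaseous product at p = 1e-6 bar, 1200 K.
dG = delta_g_reaction([1e-6], [1], T=1200.0)
print(f"ΔrG° = {dG / 1000:.1f} kJ/mol")
```

Repeating the measurement over a temperature range and fitting ΔrG°(T) linearly yields the reaction enthalpy and entropy (second-law method).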
Molar Surface Gibbs Energy of the Aqueous Solution of Ionic Liquid [C4mim][OAc]
TONG Jing; ZHENG Xu; TONG Jian; QU Ye; LIU Lu; LI Hui
2017-01-01
The values of density and surface tension for aqueous solutions of the ionic liquid (IL) 1-butyl-3-methylimidazolium acetate ([C4mim][OAc]) with various molalities were measured in the range of 288.15-318.15 K at intervals of 5 K. On the basis of thermodynamics, a semi-empirical model, the molar surface Gibbs energy model of the ionic liquid solution, was put forward; it can be used to predict the surface tension or molar volume of solutions. The predicted values of the surface tension for aqueous [C4mim][OAc] and the corresponding experimental ones were highly correlated and extremely similar. In terms of the concept of the molar surface Gibbs energy, a new Eötvös equation was obtained, and each parameter of the new equation has a clear physical meaning.
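For context, the textbook Eötvös relation that such models generalise, and the usual definition of the molar surface Gibbs energy, can be written as follows (these are the classical forms, not the paper's modified equation):

```latex
% Classical Eötvös relation between surface tension and temperature:
\gamma \, V_m^{2/3} = k \, (T_c - T)
% where V_m is the molar volume, T_c the critical temperature, and
% k \approx 2.1 \times 10^{-7}\ \mathrm{J\,K^{-1}\,mol^{-2/3}}
% for many non-associated liquids.

% Molar surface Gibbs energy: surface tension times the area
% occupied by one mole of surface molecules,
g_s = \gamma \, N_A^{1/3} V_m^{2/3}
```

Plotting g_s against T for each molality then gives the linear behaviour from which the paper's new Eötvös-type parameters are read off.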
Uniqueness of Gibbs measure for Potts model with countable set of spin values
Ganikhodjaev, N.N.; Rozikov, U.A.
2004-11-01
We consider a nearest-neighbor Potts model with countable spin values 0, 1, ..., and nonzero external field, on a Cayley tree of order k (with k+1 neighbors). We study translation-invariant 'splitting' Gibbs measures. We reduce the problem to the description of the solutions of some infinite system of equations. For any k ≥ 1 and any fixed probability measure ν with ν(i) > 0 on the set of all non-negative integers Φ = {0, 1, ...}, we show that the set of translation-invariant splitting Gibbs measures contains at most one point, independently of the parameters of the Potts model with countable set of spin values on the Cayley tree. We also give a full description of the class of measures ν on Φ such that, with respect to each element of this class, our infinite system of equations has a unique solution {a_i, i = 1, 2, ...}, where each a_i lies in (0, 1). (author)
The thermodynamic properties of the upper continental crust: Exergy, Gibbs free energy and enthalpy
Valero, Alicia; Valero, Antonio; Vieillard, Philippe
2012-01-01
This paper shows a comprehensive database of the thermodynamic properties of the most abundant minerals of the upper continental crust. For those substances whose thermodynamic properties are not listed in the literature, their enthalpy and Gibbs free energy are calculated with 11 different estimation methods described in this study, with associated errors of up to 10% with respect to values published in the literature. Thanks to this procedure we have been able to make a first estimation of the enthalpy, Gibbs free energy and exergy of the bulk upper continental crust and of each of the nearly 300 most abundant minerals contained in it. Finally, the chemical exergy of the continental crust is compared to the exergy of the concentrated mineral resources. The numbers obtained indicate the huge chemical exergy wealth of the crust: 6 × 10^6 Gtoe. However, this study shows that approximately only 0.01% of that amount can be effectively used by man.
Meade, R.H.; Stevens, H.H.
1990-01-01
A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.
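Discharge-weighting of cross-section samples reduces to a weighted mean over the sampling verticals; a minimal sketch with invented numbers:

```python
def discharge_weighted(concentrations, discharges):
    """Discharge-weighted mean concentration and constituent flux.

    concentrations: c_i at each sampling vertical (mg/L, i.e. g/m^3)
    discharges:     q_i, partial discharge through each vertical (m^3/s)
    """
    flux = sum(c * q for c, q in zip(concentrations, discharges))  # g/s
    mean_c = flux / sum(discharges)                                # g/m^3
    return mean_c, flux

# Hypothetical 4-vertical cross-section:
c = [10.0, 12.0, 15.0, 11.0]      # suspended-sediment concentration, mg/L
q = [500.0, 800.0, 900.0, 400.0]  # partial discharges, m^3/s
mean_c, flux = discharge_weighted(c, q)
print(f"weighted mean = {mean_c:.2f} mg/L, flux = {flux / 1000:.1f} kg/s")
```

Weighting by discharge rather than averaging the verticals directly prevents slow-moving near-bank water from biasing the section-wide flux.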
Völker, S; Schreiber, C; Müller, H; Zacharias, N; Kistemann, T
2017-05-01
After the amendment of the Drinking Water Ordinance in 2011, the requirements for the hygienic-microbiological monitoring of drinking water installations have increased significantly. In the BMBF-funded project "Biofilm Management" (2010-2014), we examined the extent to which established sampling strategies in practice can uncover drinking water plumbing systems systemically colonized with Legionella. Moreover, we investigated additional parameters that might be suitable for detecting systemic contamination. We subjected the drinking water plumbing systems of 8 buildings with known microbial contamination (Legionella) to intensive hygienic-microbiological sampling with high spatial and temporal resolution. A total of 626 hot drinking water samples were analyzed with classical culture-based methods. In addition, comprehensive hygienic observations were conducted in each building, and qualitative interviews with operators and users were carried out. Collected tap-specific parameters were quantitatively analyzed by means of sensitivity and accuracy calculations. The systemic presence of Legionella in drinking water plumbing systems has a high spatial and temporal variability. Established sampling strategies were only partially suitable for detecting long-term Legionella contamination in practice. In particular, sampling of hot water at the calorifier and at the circulation re-entrance showed little significance in terms of contamination events. To detect the systemic presence of Legionella, the parameters stagnation (qualitatively assessed) and temperature (compliance with the 5 K rule) showed better results. © Georg Thieme Verlag KG Stuttgart · New York.
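The tap-specific sensitivity and accuracy calculations reduce to standard confusion-matrix arithmetic against the culture-based gold standard; a sketch with invented counts:

```python
def sensitivity_accuracy(tp, fp, tn, fn):
    """Sensitivity (fraction of truly contaminated taps flagged by the
    indicator) and overall accuracy of a candidate indicator parameter."""
    sensitivity = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, accuracy

# Hypothetical indicator "stagnation observed" vs Legionella-positive taps:
sens, acc = sensitivity_accuracy(tp=40, fp=15, tn=55, fn=10)
print(f"sensitivity = {sens:.2f}, accuracy = {acc:.2f}")
```

Comparing these two numbers across candidate parameters (stagnation, temperature compliance, sampling location) is exactly the kind of ranking the study reports.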
Direct measurements of the Gibbs free energy of OH using a CW tunable laser
Killinger, D. K.; Wang, C. C.
1979-01-01
The paper describes an absorption measurement for determining the Gibbs free energy of OH generated in a mixture of water and oxygen vapor. These measurements afford a direct verification of the accuracy of thermochemical data for H2O at high temperatures and pressures. The results indicate that values for the heat capacity of H2O obtained through numerical computations are correct within an experimental uncertainty of 0.15 cal/(mol·K).
Standard Gibbs free energies for transfer of actinyl ions at the aqueous/organic solution interface
Kitatsuji, Yoshihiro; Okugaki, Tomohiko; Kasuno, Megumi; Kubota, Hiroki; Maeda, Kohji; Kimura, Takaumi; Yoshida, Zenko; Kihara, Sorin
2011-01-01
Research highlights: → Standard Gibbs free energies for ion transfer of tri- to hexavalent actinide ions. → Determination is based on the distribution method combined with ion-transfer voltammetry. → Organic solvents examined are nitrobenzene, DCE, benzonitrile, acetophenone and NPOE. → Gibbs free energies of U(VI), Np(VI) and Pu(VI) are similar to each other. → Gibbs free energy of Np(V) is very large compared with ordinary monovalent cations. - Abstract: Standard Gibbs free energies for transfer (ΔG_tr^0) of actinyl ions (AnO2^z+; z = 2 or 1; An = U, Np, or Pu) between an aqueous solution and an organic solution were determined by the distribution method combined with voltammetry for ion transfer at the interface of two immiscible electrolyte solutions. The organic solutions examined were nitrobenzene, 1,2-dichloroethane, benzonitrile, acetophenone, and 2-nitrophenyl octyl ether. Irrespective of the type of organic solution, the ΔG_tr^0 of UO2^2+, NpO2^2+, and PuO2^2+ were nearly equal to each other and slightly larger than that of Mg^2+. The ΔG_tr^0 of NpO2^+ was extraordinarily large compared with those of ordinary monovalent cations. The dependence of ΔG_tr^0 of AnO2^z+ on the type of organic solution was similar to that of H+ or Mg^2+. The ΔG_tr^0 of An^3+ and An^4+ are also discussed briefly.
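In ion-transfer voltammetry, the standard Gibbs energy of transfer is related to the standard ion-transfer potential by ΔG_tr^0 = −zFΔφ^0. A sketch with an invented potential; the sign convention below is one common choice and may differ between references.

```python
F = 96485.0  # Faraday constant, C/mol

def gibbs_of_transfer(z, delta_phi_0):
    """ΔG_tr^0 (aqueous -> organic, J/mol) from the standard ion-transfer
    potential Δφ^0 (V). With this sign convention, a positive ΔG_tr^0
    means the ion prefers the aqueous phase."""
    return -z * F * delta_phi_0

# Hypothetical divalent actinyl ion with Δφ^0 = -0.35 V:
dG = gibbs_of_transfer(z=2, delta_phi_0=-0.35)
print(f"ΔG_tr^0 = {dG / 1000:.1f} kJ/mol")
```

The same relation, read in reverse, converts a distribution-method ΔG_tr^0 into the half-wave potential expected at the liquid-liquid interface.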
Entropy Calculation of Reversible Mixing of Ideal Gases Shows Absence of Gibbs Paradox
Oleg Borodiouk
1999-05-01
Abstract: We consider the work of reversible mixing of ideal gases using a real process. No assumptions were made concerning infinite shifts, infinite number of cycles and infinite work to provide an accurate calculation of the entropy resulting from reversible mixing of ideal gases. We derived an equation showing the dependence of this entropy on the difference in potential of the mixed gases, which is evidence for the absence of Gibbs' paradox.
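For distinguishable ideal gases, the textbook mixing-entropy result that such reversible-process arguments recover is ΔS = −nR Σ xᵢ ln xᵢ; a quick numerical check (not the paper's own derivation):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def mixing_entropy(mole_fractions, n_total=1.0):
    """Entropy of mixing for distinguishable ideal gases at constant T, p.
    For identical gases the 'difference in potential' vanishes and the
    mixing entropy is taken as zero, so no Gibbs paradox arises."""
    return -n_total * R * sum(x * math.log(x) for x in mole_fractions if x > 0)

dS = mixing_entropy([0.5, 0.5])
print(f"ΔS = {dS:.3f} J/(mol·K)")  # equals R ln 2
```

The discontinuity between this value and zero for identical gases is the classical statement of the paradox that the paper argues disappears once the mixing work is computed through a real reversible process.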
Entropy Calculation of Reversible Mixing of Ideal Gases Shows Absence of Gibbs Paradox
Oleg Borodiouk; Vasili Tatarin
1999-01-01
Abstract: We consider the work of reversible mixing of ideal gases using a real process. No assumptions were made concerning infinite shifts, an infinite number of cycles or infinite work, so as to provide an accurate calculation of the entropy resulting from reversible mixing of ideal gases. We derived an equation showing the dependence of this entropy on the difference in potential of the mixed gases, which is evidence for the absence of Gibbs' paradox.
Rohde, Alexander; Hammerl, Jens Andre; Appel, Bernd; Dieckmann, Ralf; Al Dahouk, Sascha
2015-01-01
Efficient preparation of food samples, comprising sampling and homogenization, for microbiological testing is an essential, yet largely neglected, component of foodstuff control. Salmonella enterica spiked chicken breasts were used as a surface contamination model whereas salami and meat paste acted as models of inner-matrix contamination. A systematic comparison of different homogenization approaches, namely, stomaching, sonication, and milling by FastPrep-24 or SpeedMill, revealed that for surface contamination a broad range of sample pretreatment steps is applicable and loss of culturability due to the homogenization procedure is marginal. In contrast, for inner-matrix contamination long treatments up to 8 min are required and only FastPrep-24 as a large-volume milling device produced consistently good recovery rates. In addition, sampling of different regions of the spiked sausages showed that pathogens are not necessarily homogenously distributed throughout the entire matrix. Instead, in meat paste the core region contained considerably more pathogens compared to the rim, whereas in the salamis the distribution was more even with an increased concentration within the intermediate region of the sausages. Our results indicate that sampling and homogenization as integral parts of food microbiology and monitoring deserve more attention to further improve food safety.
Toernqvist, T.E.; Dijk, G.J. Van
1993-01-01
The authors address the question of how to determine the period of activity (sedimentation) of fossil (Holocene) fluvial systems in vertically aggrading environments. The available database consists of almost 100 ¹⁴C ages from the Rhine-Meuse delta. Radiocarbon samples from the tops of lithostratigraphically correlative organic beds underneath overbank deposits (sample type 1) yield consistent ages, indicating a synchronous onset of overbank deposition over distances of at least up to 20 km along channel belts. Similarly, ¹⁴C ages from the base of organic residual channel fills (sample type 3) generally indicate a clear termination of within-channel sedimentation. In contrast, ¹⁴C ages from the base of organic beds overlying overbank deposits (sample type 2), commonly assumed to represent the end of fluvial sedimentation, show a large scatter reaching up to 1000 ¹⁴C years. It is concluded that a combination of sample types 1 and 3 generally yields a satisfactory delimitation of the period of activity of a fossil fluvial system. 30 refs., 11 figs., 4 tabs
Phase relations and gibbs energies in the system Mn-Rh-O
Jacob, K. T.; Sriram, M. V.
1994-07-01
Phase relations in the system Mn-Rh-O are established at 1273 K by equilibrating different compositions either in evacuated quartz ampules or in pure oxygen at a pressure of 1.01 × 10⁵ Pa. The quenched samples are examined by optical microscopy, X-ray diffraction, and energy-dispersive X-ray analysis (EDAX). The alloys and intermetallics in the binary Mn-Rh system are found to be in equilibrium with MnO. There is only one ternary compound, MnRh2O4, with normal spinel structure in the system. The compound Mn3O4 has a tetragonal structure at 1273 K. A solid solution is formed between MnRh2O4 and Mn3O4. The solid solution has the cubic structure over a large range of composition and coexists with metallic rhodium. The partial pressure of oxygen corresponding to this two-phase equilibrium is measured as a function of the composition of the spinel solid solution and temperature. A new solid-state cell, with three separate electrode compartments, is designed to measure accurately the chemical potential of oxygen in the two-phase mixture, Rh + Mn3-2xRh2xO4, which has 1 degree of freedom at constant temperature. From the electromotive force (emf), thermodynamic mixing properties of the Mn3O4-MnRh2O4 solid solution and the Gibbs energy of formation of MnRh2O4 are deduced. The activities exhibit negative deviations from Raoult's law for most of the composition range, except near Mn3O4, where a two-phase region exists. In the cubic phase, the entropy of mixing of the two Rh³⁺ and Mn³⁺ ions on the octahedral site of the spinel is ideal, and the enthalpy of mixing is positive and symmetric with respect to composition. For the formation of the spinel (sp) from component oxides with rock salt (rs) and orthorhombic (orth) structures according to the reaction MnO (rs) + Rh2O3 (orth) → MnRh2O4 (sp), ΔG° = -49,680 + 1.56T (±500) J mol⁻¹. The oxygen potentials corresponding to MnO + Mn3O4 and Rh + Rh2O3 equilibria are also obtained from potentiometric measurements on galvanic
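The reported linear fit ΔG° = −49,680 + 1.56 T (J mol⁻¹) can be evaluated directly at the experimental temperature; a minimal sketch using only the constants quoted in the abstract:

```python
def gibbs_formation_MnRh2O4(T):
    """Delta_G (J/mol) for MnO(rs) + Rh2O3(orth) -> MnRh2O4(sp), per the abstract's fit."""
    return -49680.0 + 1.56 * T

# At the equilibration temperature of 1273 K:
dg = gibbs_formation_MnRh2O4(1273.0)
print(round(dg, 1))  # -47694.1 J/mol (quoted uncertainty is +/- 500 J/mol)
```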
Sample-efficient Strategies for Learning in the Presence of Noise
Cesa-Bianchi, N.; Dichterman, E.; Fischer, Paul
1999-01-01
In this paper, we prove various results about PAC learning in the presence of malicious noise. Our main interest is the sample size behavior of learning algorithms. We prove the first nontrivial sample complexity lower bound in this model by showing that order of ε/Δ² + d/Δ (up to logarithmic factors) examples are necessary for PAC learning any target class of {0,1}-valued functions of VC dimension d, where ε is the desired accuracy and η = ε/(1 + ε) − Δ the malicious noise rate (it is well known that any nontrivial target class cannot be PAC learned with accuracy ε and malicious noise rate η ≥ ε/(1 + ε), irrespective of sample complexity). We also show that this result cannot be significantly improved in general by presenting efficient learning algorithms for the class of all subsets of d elements and the class of unions of at most d…
Successful Sampling Strategy Advances Laboratory Studies of NMR Logging in Unconsolidated Aquifers
Behroozmand, Ahmad A.; Knight, Rosemary; Müller-Petke, Mike; Auken, Esben; Barfod, Adrian A. S.; Ferré, Ty P. A.; Vilhelmsen, Troels N.; Johnson, Carole D.; Christiansen, Anders V.
2017-11-01
The nuclear magnetic resonance (NMR) technique has become popular in groundwater studies because it responds directly to the presence and mobility of water in a porous medium. There is a need to conduct laboratory experiments to aid in the development of NMR hydraulic conductivity models, as is typically done in the petroleum industry. However, the challenge has been obtaining high-quality laboratory samples from unconsolidated aquifers. At a study site in Denmark, we employed sonic drilling, which minimizes the disturbance of the surrounding material, and extracted twelve 7.6 cm diameter samples for laboratory measurements. We present a detailed comparison of the acquired laboratory and logging NMR data. The agreement observed between the laboratory and logging data suggests that the methodologies proposed in this study provide good conditions for studying NMR measurements of unconsolidated near-surface aquifers. Finally, we show how laboratory sample size and condition impact the NMR measurements.
Lin, Shu-Kun
1996-01-01
The Gibbs paradox statement of the entropy of mixing has been regarded as a theoretical foundation of statistical mechanics, quantum theory and biophysics. However, all the relevant chemical experimental observations and logical analyses indicate that the Gibbs paradox statement is false. I prove that this statement is wrong: the Gibbs paradox statement implies that entropy decreases with an increase in symmetry (as represented by a symmetry number σ; see any statistical mechanics textbook). From group theory, any system has at least a symmetry number σ = 1, the identity operation for a strictly asymmetric system. It would follow that the entropy of a system is equal to, or less than, zero. However, from either the von Neumann-Shannon entropy formula (S = −Σ pᵢ ln pᵢ) or the Boltzmann entropy formula (S = ln w) and the original definition, entropy is non-negative. Therefore, this statement is false. It should not be a surprise that, for the first time, many outstanding problems, such as the validity of Pauling's resonance theory, the explanation of second-order phase transition phenomena, the biophysical problem of protein folding and the related hydrophobic effect, etc., can be solved. Empirical principles such as the Pauli principle (and Hund's rule) and the HSAB principle can also be given a theoretical explanation
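The non-negativity the author appeals to follows because each term −p ln p ≥ 0 for 0 ≤ p ≤ 1. A quick numerical illustration of the Shannon formula (the example distributions are hypothetical, not from the paper):

```python
from math import log

def shannon_entropy(probs):
    """S = -sum(p_i * ln p_i); zero-probability outcomes contribute nothing."""
    return -sum(p * log(p) for p in probs if p > 0)

# entropy is never negative, for any distribution
for dist in ([1.0], [0.5, 0.5], [0.9, 0.05, 0.05]):
    assert shannon_entropy(dist) >= 0.0

print(round(shannon_entropy([0.5, 0.5]), 6))  # ln 2 = 0.693147
```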
The Gibbs-Thomson equation for a spherical coherent precipitate with applications to nucleation
Rottman, C.; Voorhees, P.W.; Johnson, W.C.
1988-01-01
The conditions for interfacial thermodynamic equilibrium form the basis for the derivation of a number of basic equations in materials science, including the various forms of the Gibbs-Thomson equation. The equilibrium conditions pertaining to a curved interface in a two-phase fluid system are well-known. In contrast, the conditions for thermodynamic equilibrium at a curved interface in nonhydrostatically stressed solids have only recently been examined. These conditions can be much different from those at a fluid interface and, as a result, the Gibbs-Thomson equation appropriate to coherent solids is likely to be considerably different from that for fluids. In this paper, the authors first derive the conditions necessary for thermodynamic equilibrium at the precipitate-matrix interface of a coherent spherical precipitate. The authors' derivation of these equilibrium conditions includes a correction to the equilibrium conditions of Johnson and Alexander for a spherical precipitate in an isotropic matrix. They then use these conditions to derive the dependence of the interfacial precipitate and matrix concentrations on precipitate radius (Gibbs-Thomson equation) for such a precipitate. In addition, these relationships are then used to calculate the critical radius for the nucleation of a coherent misfitting precipitate
Nischkauer, Winfried; Vanhaecke, Frank; Bernacchi, Sébastien; Herwig, Christoph; Limbeck, Andreas
2014-11-01
Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for the centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL⁻¹ with a reproducibility of 10 % relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, if future analytical tasks would require it. Trueness of the proposed method was investigated by cross-validation with
Chen, Xiao; Lu, Bin; Yan, Chao-Gan
2018-01-01
Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
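The link between low power and low PPV can be made explicit with the standard Bayesian relation PPV = power·π / (power·π + α·(1 − π)), where π is the prior probability of a true effect. A sketch (the 30% prior below is a hypothetical input, not a value from the study):

```python
def positive_predictive_value(power, alpha, prior):
    """PPV = P(true effect | significant result), textbook Bayes relation."""
    true_pos = power * prior          # true effects correctly detected
    false_pos = alpha * (1.0 - prior) # null effects falsely flagged
    return true_pos / (true_pos + false_pos)

# With 2% sensitivity (as reported for small samples), alpha = 5%,
# and a hypothetical 30% prior, most significant findings are false:
print(round(positive_predictive_value(0.02, 0.05, 0.30), 3))  # 0.146
```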
Robotic traverse and sample return strategies for a lunar farside mission to the Schrodinger basin
Potts, N.J.; Gullikson, A.L.; Curran, N.M.; Dhaliwal, J.K.; Leader, M.K.; Rege, R.N.; Klaus, K.K.; Kring, D.A.
2015-01-01
Most of the highest priority objectives for lunar science and exploration (e.g., NRC, 2007) require sample return. Studies of the best places to conduct that work have identified Schrödinger basin as a geologically rich area, able to address a significant number of these scientific concepts. In this
van Rossum, Lyonne K.; Mathot, Ron A. A.; Cransberg, Karlien; Vulto, Arnold G.
2003-01-01
Glomerular filtration rate in patients can be determined by estimating the plasma clearance of inulin with the single-injection method. In this method, a single bolus injection of inulin is administered and several blood samples are collected. For practical and convenient application of this method
Lautenbach, Ebbing; Bilker, Warren B; Tolomeo, Pam; Maslow, Joel N
2008-09-01
Of 49 subjects, 21 were colonized with more than one strain of Escherichia coli and 12 subjects had at least one strain present in fewer than 20% of colonies. The ability to accurately characterize E. coli strain diversity is directly related to the number of colonies sampled and the underlying prevalence of the strain.
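The stated dependence on the number of colonies sampled can be made concrete: if a strain is carried by a fraction p of colonies, the chance that n independently picked colonies all miss it is (1 − p)ⁿ. A sketch with illustrative numbers (not the study's data):

```python
def detection_probability(p, n):
    """P(at least one of n sampled colonies carries a strain at prevalence p)."""
    return 1.0 - (1.0 - p) ** n

# A strain present in 20% of colonies is easily missed with few picks;
# detection probability climbs from ~0.2 (n=1) toward ~0.99 (n=20):
for n in (1, 5, 10, 20):
    print(n, round(detection_probability(0.20, n), 3))
```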
Scholtens, Ingrid; Laurensse, Emile; Molenaar, Bonnie; Zaaijer, Stephanie; Gaballo, Heidi; Boleij, Peter; Bak, Arno; Kok, Esther
2013-09-25
Nowadays most animal feed products imported into Europe have a GMO (genetically modified organism) label. This means that they contain European Union (EU)-authorized GMOs. For enforcement of these labeling requirements, it is necessary, with the rising number of EU-authorized GMOs, to perform an increasing number of analyses. In addition to this, it is necessary to test products for the potential presence of EU-unauthorized GMOs. Analysis for EU-authorized and -unauthorized GMOs in animal feed has thus become laborious and expensive. Initial screening steps may reduce the number of GMO identification methods that need to be applied, but with the increasing diversity also screening with GMO elements has become more complex. For the present study, the application of an informative detailed 24-element screening and subsequent identification strategy was applied in 50 animal feed samples. Almost all feed samples were labeled as containing GMO-derived materials. The main goal of the study was therefore to investigate if a detailed screening strategy would reduce the number of subsequent identification analyses. An additional goal was to test the samples in this way for the potential presence of EU-unauthorized GMOs. Finally, to test the robustness of the approach, eight of the samples were tested in a concise interlaboratory study. No significant differences were found between the results of the two laboratories.
Hanson, Robert M.; Riley, Patrick; Schwinefus, Jeff; Fischer, Paul J.
2008-01-01
The use of qualitative graphs of Gibbs energy versus temperature is described in the context of chemical demonstrations involving phase changes and colligative properties at the general chemistry level. (Contains 5 figures and 1 note.)
Solid oxide galvanic cell for determination of Gibbs energy of formation of Tb6UO12(s)
Sahu, Manjulata; Dash, Smruti
2013-01-01
The citrate-nitrate combustion method was used to synthesise Tb₆UO₁₂(s). The Gibbs energy of formation of Tb₆UO₁₂(s) was measured using a solid oxide galvanic cell in the temperature range 957-1175 K. (author)
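In such galvanic-cell measurements the Gibbs energy follows from the measured cell EMF via the standard relation ΔG = −nFE. A generic sketch (the electron count and EMF below are hypothetical illustrations, not values from this paper):

```python
F = 96485.0  # Faraday constant, C/mol

def gibbs_from_emf(n_electrons, emf_volts):
    """Delta_G = -n * F * E for a reversible galvanic cell."""
    return -n_electrons * F * emf_volts

# e.g. a hypothetical 4-electron oxide cell with a measured EMF of 0.25 V:
print(round(gibbs_from_emf(4, 0.25), 1))  # -96485.0 J/mol
```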
Bungay, Vicky; Oliffe, John; Atchison, Chris
2016-06-01
Men, transgender people, and those working in off-street locales have historically been underrepresented in sex work health research. Failure to include all sections of sex worker populations precludes comprehensive understandings about a range of population health issues, including potential variations in the manifestation of such issues within and between population subgroups, which in turn can impede the development of effective services and interventions. In this article, we describe our attempts to define, determine, and recruit a purposeful sample for a qualitative study examining the interrelationships between sex workers' health and the working conditions in the Vancouver off-street sex industry. Detailed is our application of ethnographic mapping approaches to generate information about population diversity and work settings within distinct geographical boundaries. Bearing in mind the challenges and the overwhelming discrimination sex workers experience, we scope recommendations for safe and effective purposeful sampling inclusive of sex workers' heterogeneity. © The Author(s) 2015.
Marine anthropogenic radiotracers in the Southern Hemisphere: New sampling and analytical strategies
Levy, I.; Povinec, P. P.; Aoyama, M.; Hirose, K.; Sanchez-Cabeza, J. A.; Comanducci, J.-F.; Gastaud, J.; Eriksson, M.; Hamajima, Y.; Kim, C. S.; Komura, K.; Osvath, I.; Roos, P.; Yim, S. A.
2011-04-01
In 2003-2004 the Japan Agency for Marine-Earth Science and Technology conducted the Blue Earth Global Expedition (BEAGLE2003) around the Southern Hemisphere oceans, a rare opportunity to collect many seawater samples for anthropogenic radionuclide studies. We describe here sampling and analytical methodologies based on radiochemical separations of Cs and Pu from seawater, as well as radiometric and mass spectrometry measurements. Several laboratories took part in the radionuclide analyses using different techniques. Intercomparison exercises and analyses of certified reference materials showed reasonable agreement between the participating laboratories. The data obtained on the distribution of ¹³⁷Cs and plutonium isotopes in seawater represent the most comprehensive results available for the Southern Hemisphere oceans.
Glavin, D. P.; Burton, A. S.; Callahan, M. P.; Elsila, J. E.; Stern, J. C.; Dworkin, J. P.
2012-01-01
A key goal in the search for evidence of extinct or extant life on Mars will be the identification of chemical biosignatures, including complex organic molecules common to all life on Earth. These include amino acids, the monomer building blocks of proteins and enzymes, and nucleobases, which serve as the structural basis of information storage in DNA and RNA. However, many of these organic compounds can also be formed abiotically, as demonstrated by their prevalence in carbonaceous meteorites [1]. Therefore, an important challenge in the search for evidence of life on Mars will be distinguishing the abiotic chemistry of either meteoritic or martian origin from any chemical biosignatures of an extinct or extant martian biota. Although current robotic missions to Mars, including the 2011 Mars Science Laboratory (MSL) and the planned 2018 ExoMars rovers, will have the analytical capability needed to identify these key classes of organic molecules if present [2,3], return of a diverse suite of martian samples to Earth would allow for much more intensive laboratory studies using a broad array of extraction protocols and state-of-the-art analytical techniques for bulk and spatially resolved characterization, molecular detection, and isotopic and enantiomeric compositions that may be required for unambiguous confirmation of martian life. Here we will describe current state-of-the-art laboratory analytical techniques that have been used to characterize the abundance and distribution of amino acids and nucleobases in meteorites, Apollo samples, and comet-exposed materials returned by the Stardust mission, with an emphasis on their molecular characteristics that can be used to distinguish abiotic chemistry from biochemistry as we know it. The study of organic compounds in carbonaceous meteorites is highly relevant to Mars sample return analysis, since exogenous organic matter should have accumulated in the martian regolith over the last several billion years and the
Sustained attention across the lifespan in a sample of 10,000: Dissociating ability and strategy
Fortenbaugh, Francesca C.; DeGutis, Joseph; Germine, Laura; Wilmer, Jeremy; Grosso, Mallory; Russo, Kathryn; Esterman, Michael
2015-01-01
Normal and abnormal differences in sustained visual attention have long been of interest to scientists, educators, and clinicians. Still lacking, however, is a clear understanding of how sustained visual attention varies across the broad sweep of the human lifespan. Here, we fill this gap in two ways. First, powered by an unprecedentedly large, 10,430-person sample, we model age-related differences with substantially greater precision than prior efforts. Second, using the recently developed g...
Reliability of sampling strategies for measuring dairy cattle welfare on commercial farms.
Van Os, Jennifer M C; Winckler, Christoph; Trieb, Julia; Matarazzo, Soraia V; Lehenbauer, Terry W; Champagne, John D; Tucker, Cassandra B
2018-02-01
Our objective was to evaluate how the proportion of high-producing lactating cows sampled on each farm and the selection method affect prevalence estimates for animal-based measures. We assessed the entire high-producing pen (days in milk […] size calculations from the Welfare Quality Protocol; and (4) selecting the first, middle, or final third of cows exiting the milking parlor. Estimates were compared with true values using regression analysis and were considered accurate if they met 3 criteria: the coefficient of determination was ≥0.9 and the slope and intercept did not differ significantly from 1 and 0, respectively. All estimates met the slope and intercept criteria, whereas the coefficient of determination increased when more cows were sampled. All estimates were accurate for neck alterations, ocular discharge (22.2 ± 27.4%), and carpal joint hair loss (14.1 ± 17.4%). Selecting a third of the milking order or using the Welfare Quality sample size calculations failed to accurately estimate all measures simultaneously. However, all estimates were accurate when selecting at least 2 of every 3 cows locked at the feed bunk. Using restraint position at the feed bunk did not differ systematically from computer-selecting the same proportion of cows randomly, and the former may be a simpler approach for welfare assessments. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
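The three accuracy criteria (coefficient of determination ≥ 0.9, slope near 1, intercept near 0) amount to a least-squares check of estimates against true values. A sketch with made-up prevalence data; the fixed slope/intercept tolerances below are illustrative stand-ins for the paper's significance tests:

```python
def ols(x, y):
    """Ordinary least-squares slope, intercept, and R^2 (stdlib only)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

def accurate(true_vals, estimates, tol_slope=0.1, tol_int=2.0):
    """All three criteria: R^2 >= 0.9, slope ~ 1, intercept ~ 0."""
    slope, intercept, r2 = ols(true_vals, estimates)
    return r2 >= 0.9 and abs(slope - 1.0) <= tol_slope and abs(intercept) <= tol_int

# hypothetical prevalences (%) on 5 farms: estimates track true values closely
true_prev = [5.0, 12.0, 20.0, 33.0, 41.0]
est_prev = [5.5, 11.0, 21.0, 32.0, 42.0]
print(accurate(true_prev, est_prev))  # True
```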
Gottlieb, Jacqueline
2018-05-01
In natural behavior we actively gather information using attention and active sensing behaviors (such as shifts of gaze) to sample relevant cues. However, while attention and decision making are naturally coordinated, in the laboratory they have been dissociated. Attention is studied independently of the actions it serves. Conversely, decision theories make the simplifying assumption that the relevant information is given, and do not attempt to describe how the decision maker may learn and implement active sampling policies. In this paper I review recent studies that address questions of attentional learning, cue validity and information seeking in humans and non-human primates. These studies suggest that learning a sampling policy involves large scale interactions between networks of attention and valuation, which implement these policies based on reward maximization, uncertainty reduction and the intrinsic utility of cognitive states. I discuss the importance of using such paradigms for formalizing the role of attention, as well as devising more realistic theories of decision making that capture a broader range of empirical observations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bladergroen, Marco R.; van der Burgt, Yuri E. M.
2015-01-01
For large-scale and standardized applications in mass spectrometry- (MS-) based proteomics automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS-acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply for both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE) including affinity enrichment strategies have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071
Loo, B.W. Jr. [Univ. of California, San Francisco, CA (United States); Univ. of California, Davis, CA (United States); Lawrence Berkeley National Lab., CA (United States)]; Rothman, S.S. [Univ. of California, San Francisco, CA (United States); Lawrence Berkeley National Lab., CA (United States)]
1997-02-01
High resolution x-ray microscopy has been made possible in recent years primarily by two new technologies: microfabricated diffractive lenses for soft x-rays with about 30-50 nm resolution, and high brightness synchrotron x-ray sources. X-ray microscopy occupies a special niche in the array of biological microscopic imaging methods. It extends the capabilities of existing techniques mainly in two areas: a previously unachievable combination of sub-visible resolution and multi-micrometer sample size, and new contrast mechanisms. Because of the soft x-ray wavelengths used in biological imaging (about 1-4 nm), XM is intermediate in resolution between visible light and electron microscopies. Similarly, the penetration depth of soft x-rays in biological materials is such that the ideal sample thickness for XM falls in the range of 0.25-10 μm, between that of VLM and EM. XM is therefore valuable for imaging of intermediate level ultrastructure, requiring sub-visible resolutions, in intact cells and subcellular organelles, without artifacts produced by thin sectioning. Many of the contrast producing and sample preparation techniques developed for VLM and EM also work well with XM. These include, for example, molecule specific staining by antibodies with heavy metal or fluorescent labels attached, and sectioning of both frozen and plastic embedded tissue. However, there is also a contrast mechanism unique to XM that exists naturally because a number of elemental absorption edges lie in the wavelength range used. In particular, between the oxygen and carbon absorption edges (2.3 and 4.4 nm wavelength), organic molecules absorb photons much more strongly than does water, permitting element-specific imaging of cellular structure in aqueous media, with no artificially introduced contrast agents. For three-dimensional imaging applications requiring the capabilities of XM, an obvious extension of the technique would therefore be computerized x-ray microtomography (XMT).
Leandro Barbosa
2008-07-01
A total of 38,865 records of Large White pigs were used to estimate covariance components and genetic parameters for days to reach 100 kg live weight (DAYS) and backfat thickness adjusted to 100 kg (BF) in bivariate analyses. Covariance components were obtained by Gibbs sampling using the MTGSAM program. The mixed model included the fixed effect of contemporary group and the following random effects: direct additive genetic, maternal additive genetic, common litter, and residual. Mean estimates of direct additive heritability were 0.33 and 0.44 for DAYS and BF, respectively, and mean estimates of the common litter effect were 0.09 and 0.02. The estimated additive genetic correlation between the traits was close to zero (-0.015). The heritabilities obtained for these performance traits indicate that satisfactory genetic gains can be achieved in Large White pig breeding, and the low direct additive genetic correlation means that both traits can be selected for simultaneously.
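The Gibbs sampler behind programs like MTGSAM draws each unknown in turn from its full conditional distribution given all the others. A minimal two-variable illustration on a bivariate normal with correlation rho (a toy stand-in, not the animal model used in the paper):

```python
import random

def gibbs_bivariate_normal(rho, n_iter=20000, burn_in=2000, seed=1):
    """Alternate x | y ~ N(rho*y, 1-rho^2) and y | x ~ N(rho*x, 1-rho^2)."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0
    draws = []
    for i in range(n_iter):
        x = rng.gauss(rho * y, sd)  # full conditional of x given y
        y = rng.gauss(rho * x, sd)  # full conditional of y given x
        if i >= burn_in:
            draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
corr = sum(x * y for x, y in draws) / len(draws)  # means ~0, variances ~1
print(round(corr, 2))  # close to the target correlation 0.8
```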
Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander
2016-11-21
Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
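The trigger decision described — query the environmental category at the incoming coordinate and fire a self-report only for targeted land-use types, with a minimum gap between reports — can be sketched as follows (function names, categories, and thresholds are illustrative assumptions, not the study's implementation):

```python
TARGET_LAND_USE = {"park", "forest", "water"}  # rarely visited categories of interest
MIN_GAP_MINUTES = 60                           # avoid re-triggering too often

def should_trigger(land_use_at, lat, lon, minutes_since_last_report):
    """Location-based sampling: trigger a self-report only at targeted land use."""
    if minutes_since_last_report < MIN_GAP_MINUTES:
        return False
    return land_use_at(lat, lon) in TARGET_LAND_USE

# toy lookup standing in for the spatial database query
def toy_land_use(lat, lon):
    return "park" if lat > 49.40 else "residential"

print(should_trigger(toy_land_use, 49.41, 8.69, 75))  # True: in a park, gap elapsed
print(should_trigger(toy_land_use, 49.39, 8.69, 75))  # False: residential area
```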
H.-C. Chen
2012-07-01
How to effectively describe ecological patterns in nature over broad spatial scales and build an ecological modeling framework has become an important issue in ecological research. We tested four modeling methods (MAXENT, DOMAIN, GLM, and ANN) for predicting the potential habitat of Schima superba (Chinese guger tree, CGT) at different spatial scales in the Huisun study area in Taiwan. We then created three sampling designs (from small to large scales) for model development and validation, using different combinations of CGT samples from three sites (Tong-Feng watershed, Yo-Shan Mountain, and Kuan-Dau watershed). These models combine points of known occurrence with topographic variables to infer the potential spatial distribution of CGT. Our assessment revealed that model performance, from highest to lowest, was MAXENT, DOMAIN, GLM, and ANN at the small spatial scale; MAXENT and DOMAIN were the most capable of predicting the tree's potential habitat. However, the outcome clearly indicated that models based solely on topographic variables performed poorly on large spatial extrapolation from Tong-Feng to Kuan-Dau, because the humidity and solar illumination of the two watersheds are shaped by their microterrains and differ considerably from each other. Thus, models developed from topographic variables can only be applied within a limited geographical extent without significant error. Future studies will attempt to use variables carrying spectral information associated with the species, extracted from remotely sensed data of high spatial and spectral resolution (especially hyperspectral imagery), to build models that can be applied at large spatial scales.
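As a hedged illustration of the GLM family of habitat models tested above, the sketch below fits a logistic presence/absence model to invented topographic covariates by plain gradient descent; it is not the study's MAXENT or DOMAIN implementation, and the data follow a toy rule:

```python
import math, random

# Minimal GLM-style habitat model: logistic regression of presence/absence
# on two topographic covariates, fitted by full-batch gradient descent.
# Covariates and the presence rule are invented; real species distribution
# models (MAXENT, DOMAIN) are considerably more involved.

random.seed(0)
X = [(random.random(), random.random()) for _ in range(200)]  # (elev_km, slope)
y = [1 if e > 0.5 else 0 for e, _ in X]  # toy rule: present above 0.5 km

w = [0.0, 0.0]
b = 0.0
lr = 0.5
n = len(X)
for _ in range(2000):
    gw0 = gw1 = gb = 0.0
    for (e, s), yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w[0] * e + w[1] * s + b)))
        gw0 += (p - yi) * e
        gw1 += (p - yi) * s
        gb += p - yi
    w[0] -= lr * gw0 / n
    w[1] -= lr * gw1 / n
    b -= lr * gb / n

def predict(e, s):
    """Predicted presence (True) or absence (False)."""
    return 1.0 / (1.0 + math.exp(-(w[0] * e + w[1] * s + b))) > 0.5

accuracy = sum(predict(e, s) == (yi == 1) for (e, s), yi in zip(X, y)) / n
print(round(accuracy, 2))
```

On this linearly separable toy data the training accuracy is high; extrapolation beyond the covariate range would of course fail, which is the abstract's point.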
Guilhem Mansion
BACKGROUND: Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. METHODOLOGY/PRINCIPAL FINDINGS: Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates were then compared to those obtained from smaller data sets resulting from "classification-guided" (D088) and "phylogeny-guided" (D101) sampling. Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. CONCLUSIONS/SIGNIFICANCE: A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. Knowing that these will be instrumental for developing more specific evolutionary hypotheses and guiding future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed
Voltarelli, Daniele Cristina; de Alcântara, Brígida Kussumoto; Lunardi, Michele; Alfieri, Alice Fernandes; de Arruda Leme, Raquel; Alfieri, Amauri Alcindo
2018-01-01
Bacteria classified in the Mycoplasma (M. bovis and M. bovigenitalium) and Ureaplasma (U. diversum) genera are associated with granular vulvovaginitis, which affects heifers and cows of reproductive age. The traditional means for detection and speciation of mollicutes from clinical samples have been culture and serology. However, challenges experienced with these laboratory methods have hampered assessment of their impact on pathogenesis and epidemiology in cattle worldwide. The aim of this study was to develop a PCR strategy to detect and primarily discriminate between the main species of mollicutes associated with reproductive disorders of cattle in uncultured clinical samples. To amplify the 16S-23S rRNA internal transcribed spacer (ITS) region of the genome, a consensual and species-specific nested-PCR assay was developed to identify and discriminate between the main species of mollicutes. In addition, 31 vaginal swab samples from affected dairy and beef cows were investigated. This nested-PCR strategy was successfully employed in the diagnosis of single and mixed mollicute infections of diseased cows from cattle herds in Brazil. The developed system enabled rapid and unambiguous identification of the main mollicute species known to be associated with this cattle reproductive disorder through differential amplification of partial fragments of the ITS region of mollicute genomes. The development of rapid and sensitive tools for mollicute detection and discrimination without the need for prior culture or sequencing of PCR products is a high priority for accurate diagnosis in animal health. Therefore, the PCR strategy described herein may be helpful for the diagnosis of this class of bacteria in genital swabs submitted to veterinary diagnostic laboratories, without demanding expertise in mycoplasma culture and identification. Copyright © 2017 Elsevier B.V. All rights reserved.
Hottes, Travis Salway; Bogaert, Laura; Rhodes, Anne E; Brennan, David J; Gesink, Dionne
2016-05-01
Previous reviews have demonstrated a higher risk of suicide attempts for lesbian, gay, and bisexual (LGB) persons (sexual minorities), compared with heterosexual groups, but these were restricted to general population studies, thereby excluding individuals sampled through LGB community venues. Each sampling strategy, however, has particular methodological strengths and limitations. For instance, general population probability studies have defined sampling frames but are prone to information bias associated with underreporting of LGB identities. By contrast, LGB community surveys may support disclosure of sexuality but overrepresent individuals with strong LGB community attachment. To reassess the burden of suicide-related behavior among LGB adults, directly comparing estimates derived from population- versus LGB community-based samples. In 2014, we searched MEDLINE, EMBASE, PsycInfo, CINAHL, and Scopus databases for articles addressing suicide-related behavior (ideation, attempts) among sexual minorities. We selected quantitative studies of sexual minority adults conducted in nonclinical settings in the United States, Canada, Europe, Australia, and New Zealand. Random effects meta-analysis and meta-regression assessed for a difference in prevalence of suicide-related behavior by sample type, adjusted for study or sample-level variables, including context (year, country), methods (medium, response rate), and subgroup characteristics (age, gender, sexual minority construct). We examined residual heterogeneity using τ². We pooled 30 cross-sectional studies, including 21,201 sexual minority adults, generating the following lifetime prevalence estimates of suicide attempts: 4% (95% confidence interval [CI] = 3%, 5%) for heterosexual respondents to population surveys, 11% (95% CI = 8%, 15%) for LGB respondents to population surveys, and 20% (95% CI = 18%, 22%) for LGB respondents to community surveys (Figure 1). The difference in LGB estimates by sample
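The random-effects pooling used in such reviews can be sketched with the DerSimonian-Laird estimator; the prevalence estimates and per-study sample size below are invented, not the review's data:

```python
# Hedged sketch of random-effects (DerSimonian-Laird) pooling of prevalence
# estimates, the kind of meta-analysis described above. Study values and the
# assumed n = 500 per study are invented for illustration.

def dersimonian_laird(estimates, variances):
    """Return (pooled_estimate, tau2) under a random-effects model."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    return pooled, tau2

# Invented lifetime-prevalence estimates (proportions) and binomial variances
est = [0.11, 0.20, 0.08, 0.15]
var = [p * (1 - p) / 500 for p in est]
pooled, tau2 = dersimonian_laird(est, var)
print(round(pooled, 3))
```

The between-study variance τ² inflates each study's weight denominator, pulling the pooled estimate toward an unweighted average when heterogeneity is large.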
Search strategy using LHC pileup interactions as a zero bias sample
Nachman, Benjamin; Rubbo, Francesco
2018-05-01
Due to a limited bandwidth and a large proton-proton interaction cross section relative to the rate of interesting physics processes, most events produced at the Large Hadron Collider (LHC) are discarded in real time. A sophisticated trigger system must quickly decide which events should be kept and is very efficient for a broad range of processes. However, there are many processes that cannot be accommodated by this trigger system. Furthermore, there may be models of physics beyond the standard model (BSM) constructed after data taking that could have been triggered, but no trigger was implemented at run time. Both of these cases can be covered by exploiting pileup interactions as an effective zero bias sample. At the end of high-luminosity LHC operations, this zero bias dataset will have accumulated about 1 fb^-1 of data, from which a bottom-line cross-section limit of O(1) fb can be set for BSM models already in the literature and those yet to come.
A strategy to sample nutrient dynamics across the terrestrial-aquatic interface at NEON sites
Hinckley, E. S.; Goodman, K. J.; Roehm, C. L.; Meier, C. L.; Luo, H.; Ayres, E.; Parnell, J.; Krause, K.; Fox, A. M.; SanClements, M.; Fitzgerald, M.; Barnett, D.; Loescher, H. W.; Schimel, D.
2012-12-01
The construction of the National Ecological Observatory Network (NEON) across the U.S. creates the opportunity for researchers to investigate biogeochemical transformations and transfers across ecosystems at local-to-continental scales. Here, we examine a subset of NEON sites where atmospheric, terrestrial, and aquatic observations will be collected for 30 years. These sites are located across a range of hydrological regimes, including flashy rain-driven, shallow sub-surface (perched, pipe-flow, etc), and deep groundwater, which likely affect the chemical forms and quantities of reactive elements that are retained and/or mobilized across landscapes. We present a novel spatial and temporal sampling design that enables researchers to evaluate long-term trends in carbon, nitrogen, and phosphorus biogeochemical cycles under these different hydrological regimes. This design focuses on inputs to the terrestrial system (atmospheric deposition, bulk precipitation), transfers (soil-water and groundwater sources/chemistry), and outputs (surface water, and evapotranspiration). We discuss both data that will be collected as part of the current NEON design, as well as how the research community can supplement the NEON design through collaborative efforts, such as providing additional datasets, including soil biogeochemical processes and trace gas emissions, and developing collaborative research networks. Current engagement with the research community working at the terrestrial-aquatic interface is critical to NEON's success as we begin construction, to ensure that high-quality, standardized and useful data are not only made available, but inspire further, cutting-edge research.
Baljuk J.A.
2014-12-01
This paper presents an algorithm for an adaptive strategy of optimal spatial sampling for studying the spatial organisation of soil animal communities under urbanization. The operating variables were the principal components obtained from field data on soil penetration resistance, soil electrical conductivity, and forest stand density, collected on a quasi-regular grid. The locations of the experimental polygons were selected with the ESAP program, and sampling was carried out on a regular grid within each polygon. The biogeocoenological assessment of the experimental polygons was based on A. L. Belgard's ecomorphic analysis. The spatial configuration of biogeocoenosis types was established from remote sensing data and analysis of a digital elevation model. The proposed algorithm reveals the spatial organisation of soil animal communities at the levels of the sampled point, the biogeocoenosis, and the landscape.
Wiley, Anne E.; Ostrom, Peggy H.; Stricker, Craig A.; James, Helen F.; Gandhi, Hasand
2010-01-01
We wish to use stable-isotope analysis of flight feathers to understand the feeding behavior of pelagic seabirds, such as the Hawaiian Petrel (Pterodroma sandwichensis) and Newell's Shearwater (Puffinus auricularis newelli). Analysis of remiges is particularly informative because the sequence and timing of remex molt are often known. The initial step, reported here, is to obtain accurate isotope values from whole remiges by means of a minimally invasive protocol appropriate for live birds or museum specimens. The high variability observed in δ13C and δ15N values within a feather precludes the use of a small section of vane. We found the average range within 42 Hawaiian Petrel remiges to be 1.3‰ for both δ13C and δ15N, and that within 10 Newell's Shearwater remiges to be 1.3‰ and 0.7‰ for δ13C and δ15N, respectively. The δ13C of all 52 feathers increased from tip to base, and the majority of Hawaiian Petrel feathers showed an analogous trend in δ15N. Although the average range of δD in 21 Hawaiian Petrel remiges was 11‰, we found no longitudinal trend. We discuss influences of trophic level, foraging location, metabolism, and pigmentation on isotope values and compare three methods of obtaining isotope averages of whole feathers. Our novel barb-sampling protocol requires only 1.0 mg of feather and minimal preparation time. Because it leaves the feather nearly intact, this protocol will likely facilitate obtaining isotope values from remiges of live birds and museum specimens. As a consequence, it will help expand the understanding of historical trends in foraging behavior
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
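The two initial-design choices compared in this study are easy to sketch; the helper below is an illustrative plain-Python LHS generator, not the authors' code:

```python
import random

# Illustrative Latin hypercube sampler showing the two initial designs
# compared above: "random LHS" draws a uniform point inside each stratum,
# "midpoint LHS" takes each stratum's center. Strata are shuffled
# independently per dimension. A sketch only, not the authors' code.

def lhs(n, dims, midpoint, rng):
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)  # pair strata randomly across dimensions
        if midpoint:
            cols.append([(k + 0.5) / n for k in strata])
        else:
            cols.append([(k + rng.random()) / n for k in strata])
    return list(zip(*cols))  # n points in [0, 1)^dims, one per stratum per dim

rng = random.Random(42)
mid = lhs(5, 2, midpoint=True, rng=rng)
rnd = lhs(5, 2, midpoint=False, rng=rng)
print(sorted(x for x, _ in mid))  # [0.1, 0.3, 0.5, 0.7, 0.9]
```

Either variant would then be handed to the space-filling optimization stage that the abstract discusses.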
Sustained Attention Across the Life Span in a Sample of 10,000: Dissociating Ability and Strategy.
Fortenbaugh, Francesca C; DeGutis, Joseph; Germine, Laura; Wilmer, Jeremy B; Grosso, Mallory; Russo, Kathryn; Esterman, Michael
2015-09-01
Normal and abnormal differences in sustained visual attention have long been of interest to scientists, educators, and clinicians. Still lacking, however, is a clear understanding of how sustained visual attention varies across the broad sweep of the human life span. In the present study, we filled this gap in two ways. First, using an unprecedentedly large 10,430-person sample, we modeled age-related differences with substantially greater precision than have prior efforts. Second, using the recently developed gradual-onset continuous performance test (gradCPT), we parsed sustained-attention performance over the life span into its ability and strategy components. We found that after the age of 15 years, the strategy and ability trajectories saliently diverge. Strategy becomes monotonically more conservative with age, whereas ability peaks in the early 40s and is followed by a gradual decline in older adults. These observed life-span trajectories for sustained attention are distinct from results of other life-span studies focusing on fluid and crystallized intelligence. © The Author(s) 2015.
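The ability/strategy split in CPT-style tasks is conventionally computed with signal detection theory; a sketch with invented hit and false-alarm rates (not gradCPT data):

```python
from statistics import NormalDist

# Sketch of the signal-detection decomposition commonly used to separate
# sustained-attention ability from strategy: d' indexes sensitivity
# (ability), criterion c indexes conservatism (strategy). The hit and
# false-alarm rates below are invented, not gradCPT data.

z = NormalDist().inv_cdf  # probit transform

def dprime_criterion(hit_rate, fa_rate):
    d_prime = z(hit_rate) - z(fa_rate)             # ability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # strategy (higher = conservative)
    return d_prime, criterion

d_young, c_young = dprime_criterion(hit_rate=0.90, fa_rate=0.30)
d_old, c_old = dprime_criterion(hit_rate=0.80, fa_rate=0.15)
print(c_old > c_young)  # True: the second observer responds more conservatively
```

The two observers have similar d' but different criteria, which is the kind of dissociation the study reports across the life span.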
Zoeal morphology of Pachygrapsus transversus (Gibbes) (Decapoda, Grapsidae) reared in the laboratory
Ana Luiza Brossi-Garcia
1997-12-01
Ovigerous females of Pachygrapsus transversus (Gibbes, 1850) were collected on the Praia Dura and Saco da Ribeira beaches, Ubatuba, São Paulo, Brazil. Larvae were reared individually in a climate room at 25°C, at salinities of 28, 32 and 35‰, under natural photoperiod conditions. The best rearing results were observed at 35‰ salinity. Seven zoeal instars were observed, drawn, and described in detail. The data are compared with those obtained for P. gracilis (Saussure, 1858).
ASTEM, Evaluation of Gibbs, Helmholtz and Saturation Line Function for Thermodynamics Calculation
Moore, K.V.; Burgess, M.P.; Fuller, G.L.; Kaiser, A.H.; Jaeger, D.L.
1974-01-01
1 - Description of problem or function: ASTEM is a modular set of FORTRAN IV subroutines to evaluate the Gibbs, Helmholtz, and saturation line functions as published by the American Society of Mechanical Engineers (1967). Any thermodynamic quantity including derivative properties can be obtained from these routines by a user-supplied main program. PROPS is an auxiliary routine available for the IBM360 version which makes it easier to apply the ASTEM routines to power station models. 2 - Restrictions on the complexity of the problem: Unless re-dimensioned by the user, the highest derivative allowed is order 9. All arrays within ASTEM are one-dimensional to save storage area
Xiong Shiyun; Qi Weihong; Huang Baiyun; Wang Mingpu; Li Yejun
2010-01-01
The Debye model of the Helmholtz free energy for bulk materials is generalized to a Gibbs free energy (GFE) model for nanomaterials, with a shape factor introduced to characterize the shape effect on the GFE. The structural transitions of Ti and Zr nanoparticles are predicted based on the GFE. It is further found that the GFE decreases with the shape factor and increases with decreasing particle size. For a fixed shape factor, the critical size of the structural transformation increases with temperature; at a given temperature, the critical size increases with the shape factor. The present predictions agree well with experimental values.
The Casa Gibbs and the Peruvian Nitrate Monopoly: 1876-1878
Manuel Ravest Mora
2008-06-01
The aim of this brief study is to show the willingness of Anthony Gibbs & Sons and its subsidiaries to support Peru's nitrate monopoly project with monetary resources, and the maneuvers of its directors within the only company that, given its production capacity, could make the project fail: the Antofagasta Nitrate and Railway Company, in which Gibbs was the second-largest shareholder. For the Chilean government, the primary cause of the 1879 war was Peru's attempt to monopolize nitrate production. Bolivia, Peru's secret ally since 1873, collaborated by leasing and selling Peru its nitrate deposits and by imposing on nitrate exports a tax that violated the condition, stipulated in a Boundary Treaty, under which Chile had ceded territory; Chile's recovery of that territory by military force began the conflict. From the second half of the twentieth century onwards, this economic-legalistic thesis has been questioned in Chile and abroad, shifting the causal emphasis to the reordering of the raw-materials markets, of which the belligerents were exporters, in the wake of the world crisis of the 1870s.
Phadke, Sushil; Shrivastava, Bhakt Darshan; Ujle, S K; Mishra, Ashutosh; Dagaonkar, N
2014-01-01
One of the potential driving forces behind a chemical reaction is a favourable change in a quantity known as the Gibbs free energy (G) of the system, which reflects the balance between these driving forces. Ultrasonic velocity and absorption measurements in liquids and liquid mixtures find extensive application in studying the nature of intermolecular forces, and ultrasonic velocity measurements have been successfully employed to detect weak and strong molecular interactions in binary and ternary liquid mixtures. After measuring the density and ultrasonic velocity of aqueous solutions of Borassus flabellifer (BF) and Adansonia digitata (AnD), we calculated the Gibbs energy and the intermolecular free length. The velocity of ultrasonic waves was measured with a high-accuracy multi-frequency ultrasonic interferometer (Model M-84, M/s Mittal Enterprises, New Delhi) operating at a fixed frequency of 2 MHz. Natural samples of B. flabellifer fruit pulp and A. digitata powder were collected from Dhar district, MP, India, for this study.
Geuna, S
2000-11-20
Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterize the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.
Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
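A naive sampling-based propagation of an evidence-theory input through a model can be sketched as follows; the model, focal elements, masses, and threshold are all invented for illustration:

```python
import random

# Hedged sketch of sampling-based propagation of a Dempster-Shafer (evidence
# theory) input description: each focal element is an interval with a mass;
# belief/plausibility of "y > threshold" come from the min/max of the model
# over each focal element, here approximated by sampling. Everything below
# (model, focal elements, threshold) is invented for illustration.

def model(x):
    return x * x  # stand-in for an expensive simulation

focal = [((0.0, 1.0), 0.5), ((0.5, 2.0), 0.3), ((1.5, 3.0), 0.2)]
threshold = 1.0
rng = random.Random(7)

belief = plausibility = 0.0
for (lo, hi), mass in focal:
    ys = [model(lo + (hi - lo) * rng.random()) for _ in range(1000)]
    y_min, y_max = min(ys), max(ys)
    if y_min > threshold:   # the whole focal element maps above the threshold
        belief += mass
    if y_max > threshold:   # some of the focal element maps above it
        plausibility += mass
print(belief, plausibility)  # 0.2 0.5
```

Belief and plausibility bracket the probability that a single probability distribution over the same intervals would assign, which is the less restrictive uncertainty statement the abstract refers to.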
Homem-de-Mello, Tito [University of Illinois at Chicago, Department of Mechanical and Industrial Engineering, Chicago, IL (United States); Matos, Vitor L. de; Finardi, Erlon C. [Universidade Federal de Santa Catarina, LabPlan - Laboratorio de Planejamento de Sistemas de Energia Eletrica, Florianopolis (Brazil)
2011-03-15
The long-term hydrothermal scheduling is one of the most important problems to be solved in the power systems area. This problem aims to obtain an optimal policy, under water (energy) resources uncertainty, for hydro and thermal plants over a multi-annual planning horizon. It is natural to model the problem as a multi-stage stochastic program, a class of models for which algorithms have been developed. The original stochastic process is represented by a finite scenario tree and, because of the large number of stages, a sampling-based method such as the Stochastic Dual Dynamic Programming (SDDP) algorithm is required. The purpose of this paper is two-fold. Firstly, we study the application of two alternative sampling strategies to the standard Monte Carlo - namely, Latin hypercube sampling and randomized quasi-Monte Carlo - for the generation of scenario trees, as well as for the sampling of scenarios that is part of the SDDP algorithm. Secondly, we discuss the formulation of stopping criteria for the optimization algorithm in terms of statistical hypothesis tests, which allows us to propose an alternative criterion that is more robust than that originally proposed for the SDDP. We test these ideas on a problem associated with the whole Brazilian power system, with a three-year planning horizon. (orig.)
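A stopping criterion phrased as a statistical hypothesis test, in the spirit discussed above, can be sketched as a one-sided comparison of the deterministic lower bound against a confidence interval on the sampled upper-bound estimate; the cost samples and significance level below are invented:

```python
from statistics import NormalDist, mean, stdev

# Sketch of an SDDP-style statistical stopping rule: keep iterating while
# the lower bound is significantly below the sampled upper-bound estimate.
# The cost samples and alpha are invented for illustration.

def keep_iterating(lower_bound, cost_samples, alpha=0.05):
    """True if the upper-bound estimate still lies significantly above the lower bound."""
    n = len(cost_samples)
    m = mean(cost_samples)
    half = NormalDist().inv_cdf(1 - alpha) * stdev(cost_samples) / n ** 0.5
    return lower_bound < m - half  # one-sided test

samples = [102.0, 98.5, 101.2, 99.8, 100.6, 97.9, 103.1, 100.2]
print(keep_iterating(95.0, samples))   # True: gap still large, keep going
print(keep_iterating(100.0, samples))  # False: within sampling noise, stop
```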
Forsgren, Eva; Locke, Barbara; Semberg, Emilia; Laugen, Ane T; Miranda, Joachim R de
2017-08-01
Viral infections in managed honey bees are numerous, and most of them are caused by viruses with an RNA genome. Since RNA degrades rapidly, appropriate sample management and RNA extraction methods are imperative to get high quality RNA for downstream assays. This study evaluated the effect of various sampling-transport scenarios (combinations of temperature, RNA stabilizers, and transport duration) on six RNA quality parameters: yield, purity, integrity, cDNA synthesis efficiency, target detection and quantification. The use of water and extraction buffer were also compared for a primary bee tissue homogenate prior to RNA extraction. The strategy least affected by time was preservation of samples at -80°C. All other regimens turned out to be poor alternatives unless the samples were frozen or processed within 24 h. Chemical stabilizers have the greatest impact on RNA quality, and adding an extra homogenization step (a QIAshredder™ homogenizer) to the extraction protocol significantly improves the RNA yield and chemical purity. This study confirms that RIN values (RNA Integrity Number) should be used cautiously with bee RNA. Using water for the primary homogenate has no negative effect on RNA quality as long as this step is no longer than 15 min. Copyright © 2017 Elsevier B.V. All rights reserved.
Mansion, Guilhem; Parolly, Gerald; Crowl, Andrew A.; Mavrodiev, Evgeny; Cellinese, Nico; Oganesian, Marine; Fraunhofer, Katharina; Kamari, Georgia; Phitos, Dimitrios; Haberle, Rosemarie; Akaydin, Galip; Ikinci, Nursel; Raus, Thomas; Borsch, Thomas
2012-01-01
Background Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. Methodology/Principal Findings Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates, were then compared to those obtained from smaller data sets resulting from “classification-guided” (D088) and “phylogeny-guided sampling” (D101). Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. Conclusions/Significance A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. Knowing these will be instrumental for developing more specific evolutionary hypotheses and guide future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed evolutionary
Blake, Christine E.; Wethington, Elaine; Farrell, Tracy J.; Bisogni, Carole A.; Devine, Carol M.
2012-01-01
Employed parents’ work and family conditions provide behavioral contexts for their food choices. Relationships between employed parents’ food-choice coping strategies, behavioral contexts, and dietary quality were evaluated. Data on work and family conditions, sociodemographic characteristics, eating behavior, and dietary intake from two 24-hour dietary recalls were collected in a random sample cross-sectional pilot telephone survey in the fall of 2006. Black, white, and Latino employed mothers (n=25) and fathers (n=25) were recruited from a low/moderate income urban area in upstate New York. Hierarchical cluster analysis (Ward’s method) identified three clusters of parents differing in use of food-choice coping strategies (ie, Individualized Eating, Missing Meals, and Home Cooking). Cluster sociodemographic, work, and family characteristics were compared using χ2 and Fisher’s exact tests. Cluster differences in dietary quality (Healthy Eating Index 2005) were analyzed using analysis of variance. Clusters differed significantly (P≤0.05) on food-choice coping strategies, dietary quality, and behavioral contexts (ie, work schedule, marital status, partner’s employment, and number of children). Individualized Eating and Missing Meals clusters were characterized by nonstandard work hours, having a working partner, single parenthood and with family meals away from home, grabbing quick food instead of a meal, using convenience entrées at home, and missing meals or individualized eating. The Home Cooking cluster included considerably more married fathers with nonemployed spouses and more home-cooked family meals. Food-choice coping strategies affecting dietary quality reflect parents’ work and family conditions. Nutritional guidance and family policy needs to consider these important behavioral contexts for family nutrition and health. PMID:21338739
Monte Carlo Molecular Simulation with Isobaric-Isothermal and Gibbs-NPT Ensembles
Du, Shouhong
2012-01-01
This thesis presents Monte Carlo methods for simulations of the phase behavior of Lennard-Jones fluids. The isobaric-isothermal (NPT) ensemble and the Gibbs-NPT ensemble are introduced in detail. The NPT ensemble is employed to determine the phase diagram of a pure component. The reduced simulation results are verified by comparison with the equation of state by Johnson et al., and results with L-J parameters of methane agree well with experimental measurements. We adopt the blocking method for variance estimation and error analysis of the simulation results. The relationship between variance and the number of Monte Carlo cycles, error propagation, and random number generator performance are also investigated. We review the Gibbs-NPT ensemble employed for phase equilibrium of a binary mixture. The phase equilibrium is achieved by performing three types of trial move: particle displacement, volume rearrangement, and particle transfer. The simulation models and the simulation details are introduced. The simulation results of phase coexistence for methane and ethane are reported with comparison to experimental data. Good agreement is found for a wide range of pressures. The contribution of this thesis lies in the study of error analysis with respect to the number of Monte Carlo cycles and the number of particles.
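The blocking method mentioned above can be sketched as follows. The data here is a synthetic correlated (AR(1)-like) series standing in for Monte Carlo samples; all constants are illustrative, not thesis values:

```python
# Illustrative sketch of the blocking (block-averaging) method for variance
# estimation of the mean of correlated Monte Carlo data.
import random

def block_variance(data):
    """Estimated variance of the mean at successive blocking levels."""
    levels = []
    x = list(data)
    while len(x) >= 2:
        n = len(x)
        mean = sum(x) / n
        var_mean = sum((v - mean) ** 2 for v in x) / (n - 1) / n
        levels.append(var_mean)
        # blocking transformation: average adjacent pairs
        x = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(n // 2)]
    return levels

random.seed(1)
x, series = 0.0, []
for _ in range(4096):                 # correlated stand-in for MC samples
    x = 0.9 * x + random.gauss(0.0, 1.0)
    series.append(x)
levels = block_variance(series)
# For correlated data the naive (level-0) estimate underestimates the true
# variance of the mean; blocking grows toward a plateau at the correct value.
print(levels[0], max(levels))
```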
Gibbs Measures Over Locally Tree-Like Graphs and Percolative Entropy Over Infinite Regular Trees
Austin, Tim; Podder, Moumanti
2018-03-01
Consider a statistical physical model on the d-regular infinite tree T_d described by a set of interactions Φ. Let G_n be a sequence of finite graphs with vertex sets V_n that locally converge to T_d. From Φ one can construct a sequence of corresponding models on the graphs G_n. Let μ_n be the resulting Gibbs measures. Here we assume that μ_n converges to some limiting Gibbs measure μ on T_d in the local weak* sense, and study the consequences of this convergence for the specific entropies |V_n|^{-1} H(μ_n). We show that the limit supremum of |V_n|^{-1} H(μ_n) is bounded above by the percolative entropy H_perc(μ), a function of μ itself, and that |V_n|^{-1} H(μ_n) actually converges to H_perc(μ) in case Φ exhibits strong spatial mixing on T_d. When it is known to exist, the limit of |V_n|^{-1} H(μ_n) is most commonly shown to be given by the Bethe ansatz. Percolative entropy gives a different formula, and we do not know how to connect it to the Bethe ansatz directly. We discuss a few examples of well-known models for which the latter result holds in the high temperature regime.
Excess Gibbs energy for six binary solid solutions of molecularly simple substances
Lobo, L J; Staveley, L A.K.
1985-01-01
In this paper we apply the method developed in a previous study of Ar + CH4 to the evaluation of the excess Gibbs energy G^{E,S} for solid solutions of two molecularly simple components. The method depends on combining information on the excess Gibbs energy G^{E,L} for the liquid mixture of the two components with a knowledge of the (T, x) solid-liquid phase diagram. Certain thermal properties of the pure substances are also needed. G^{E,S} has been calculated for binary mixtures of Ar + Kr, Kr + CH4, CO + N2, Kr + Xe, Ar + N2, and Ar + CO. In general, but not always, the solid mixtures are more non-ideal than the liquid mixtures of the same composition at the same temperature. Except for the Kr + CH4 system, the ratio r = G^{E,S}/G^{E,L} is larger the richer the solution in the component with the smaller molecules.
Latella, Ivan; Pérez-Madrid, Agustín
2013-10-01
The local thermodynamics of a system with long-range interactions in d dimensions is studied using the mean-field approximation. Long-range interactions are introduced through pair interaction potentials that decay as a power law in the interparticle distance. We compute the local entropy, Helmholtz free energy, and grand potential per particle in the microcanonical, canonical, and grand canonical ensembles, respectively. From the local entropy per particle we obtain the local equation of state of the system by using the condition of local thermodynamic equilibrium. This local equation of state has the form of the ideal gas equation of state, but with the density depending on the potential characterizing long-range interactions. By volume integration of the relation between the different thermodynamic potentials at the local level, we find the corresponding equation satisfied by the potentials at the global level. It is shown that the potential energy enters as a thermodynamic variable that modifies the global thermodynamic potentials. As a result, we find a generalized Gibbs-Duhem equation that relates the potential energy to the temperature, pressure, and chemical potential. For the marginal case where the power of the decaying interaction potential is equal to the dimension of the space, the usual Gibbs-Duhem equation is recovered. As examples of the application of this equation, we consider spatially uniform interaction potentials and the self-gravitating gas. We also point out a close relationship with the thermodynamics of small systems.
Sovilj P.
2014-10-01
Measurement methods based on the approach named Digital Stochastic Measurement have been introduced, and several prototype and small-series commercial instruments have been developed based on these methods. These methods have been investigated mostly for various types of stationary signals, but also for non-stationary signals. This paper presents, analyzes and discusses digital stochastic measurement of the electroencephalography (EEG) signal in the time domain, emphasizing the influence of the Wilbraham-Gibbs phenomenon. An increase of measurement error related to the Wilbraham-Gibbs phenomenon is found. If the EEG signal is measured with a 20 ms wide measurement interval, the average maximal error relative to the range of the input signal is 16.84%. If the measurement interval is extended to 2 s, the average maximal error relative to the range of the input signal is significantly lowered, down to 1.37%. Absolute errors are compared with the error limit recommended by the Organisation Internationale de Métrologie Légale (OIML) and with the quantization steps of advanced EEG instruments with 24-bit A/D conversion.
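The Wilbraham-Gibbs overshoot at issue above can be demonstrated generically. This is a textbook illustration with square-wave Fourier partial sums, not the paper's EEG measurement method: the overshoot near a jump stays near 9% of the jump amplitude no matter how many terms are kept.

```python
# Partial Fourier sums of a unit square wave overshoot the jump by ~9%
# (the Wilbraham-Gibbs phenomenon), independent of the number of terms.
import math

def square_wave_partial_sum(t, n_terms):
    """Fourier series of a ±1 square wave: odd harmonics only."""
    return (4 / math.pi) * sum(math.sin((2 * k + 1) * t) / (2 * k + 1)
                               for k in range(n_terms))

peaks = {}
for n in (16, 64, 256):
    # scan near the discontinuity at t = 0 for the first (largest) lobe
    peaks[n] = max(square_wave_partial_sum(i * 1e-4, n) for i in range(1, 5000))
    print(n, round(peaks[n], 3))  # stays near 1.179, i.e. ~9% overshoot
```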
GPU-accelerated Gibbs ensemble Monte Carlo simulations of Lennard-Jonesium
Mick, Jason; Hailat, Eyad; Russo, Vincent; Rushaidat, Kamel; Schwiebert, Loren; Potoff, Jeffrey
2013-12-01
This work describes an implementation of canonical and Gibbs ensemble Monte Carlo simulations on graphics processing units (GPUs). The pair-wise energy calculations, which consume the majority of the computational effort, are parallelized using the energetic decomposition algorithm. While energetic decomposition is relatively inefficient for traditional CPU-bound codes, the algorithm is ideally suited to the architecture of the GPU. The performance of the CPU and GPU codes are assessed for a variety of CPU and GPU combinations for systems containing between 512 and 131,072 particles. For a system of 131,072 particles, the GPU-enabled canonical and Gibbs ensemble codes were 10.3 and 29.1 times faster (GTX 480 GPU vs. i5-2500K CPU), respectively, than an optimized serial CPU-bound code. Due to overhead from memory transfers from system RAM to the GPU, the CPU code was slightly faster than the GPU code for simulations containing less than 600 particles. The critical temperature Tc∗=1.312(2) and density ρc∗=0.316(3) were determined for the tail corrected Lennard-Jones potential from simulations of 10,000 particle systems, and found to be in exact agreement with prior mixed field finite-size scaling calculations [J.J. Potoff, A.Z. Panagiotopoulos, J. Chem. Phys. 109 (1998) 10914].
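The pair-wise energy sum that dominates the cost in such simulations can be sketched on the CPU. This is an illustrative serial version in reduced units (truncated Lennard-Jones with minimum-image convention), not the paper's GPU code:

```python
# Sketch of the pair-wise Lennard-Jones energy evaluation (reduced units),
# the step that the paper parallelizes on the GPU. Illustrative CPU version.
import random

def lj_total_energy(coords, box, rcut=2.5):
    """Total truncated LJ energy: U = sum over pairs of 4(r^-12 - r^-6)."""
    u, rcut2 = 0.0, rcut * rcut
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = 0.0
            for a in range(3):
                d = coords[i][a] - coords[j][a]
                d -= box * round(d / box)      # minimum-image convention
                r2 += d * d
            if r2 < rcut2:
                inv6 = 1.0 / r2 ** 3
                u += 4.0 * (inv6 * inv6 - inv6)
    return u

# Sanity check: two particles at the potential minimum r = 2^(1/6) give U = -1.
print(round(lj_total_energy([[0.0, 0.0, 0.0], [2 ** (1 / 6), 0.0, 0.0]],
                            box=100.0), 9))  # → -1.0

# A random (possibly overlapping) configuration also evaluates without issue.
random.seed(0)
box = 10.0
coords = [[random.uniform(0, box) for _ in range(3)] for _ in range(64)]
energy = lj_total_energy(coords, box)
```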
Gibbs Free Energy of Formation for Selected Platinum Group Minerals (PGM)
Spiros Olivotos
2016-01-01
Thermodynamic data for platinum group (Os, Ir, Ru, Rh, Pd and Pt) minerals are very limited. The present study is focused on the calculation of the Gibbs free energy of formation (ΔfG°) for selected PGM occurring in layered intrusions and ophiolite complexes worldwide, applying available experimental data on their constituent elements at their standard state (ΔG = ΔG(species) − ΔG(elements)), using the HSC Chemistry 6.0 software. The accuracy of the calculation method was evaluated by calculating ΔfG° of rhodium sulfide phases. The calculated values were found to be in good agreement with those measured in the binary system (Rh + S) as a function of temperature by previous authors (Jacob and Gupta (2014)). The calculated Gibbs free energy (ΔfG°) followed the order RuS2 < (Ir,Os)S2 < (Pt,Pd)S < (Pd,Pt)Te2, increasing from compatible to incompatible noble metals and from sulfides to tellurides.
Međedović Janko
2017-01-01
This study looked for a General Factor of Personality (GFP) in a sample of male convicts (N=226; mean age 32 years). The GFP was extracted from seven broad personality traits: the FFM factors, Amoralism (the negative pole of the lexical Honesty-Humility factor) and Disintegration (an operationalization of Schizotypy). Three first-order factors were extracted, labeled Dysfunctionality, Antisociality and Openness, and the GFP was found through hierarchical factor analysis. The nature of the GFP was explored through analysis of its relations with markers of a fast Life-History strategy and covitality. The results demonstrated that the GFP is associated with unrestricted sexual behavior, medical problems, mental problems, early involvement in criminal activity and stability of criminal behavior. The evidence shows that the GFP is a meaningful construct at the highest level of personality structure. It may represent a personality indicator of fitness-related characteristics and could be useful in research on personality in an evolutionary context.
Supplee, Lauren H.; Skuban, Emily Moye; Trentacosta, Christopher J.; Shaw, Daniel S.; Stoltz, Emilee
2011-01-01
Little longitudinal research has been conducted on changes in children's emotional self-regulation strategy (SRS) use after infancy, particularly for children at risk. The current study examined changes in boys' emotional SRS from toddlerhood through preschool. Repeated observational assessments using delay of gratification tasks at ages 2, 3, and 4 were examined with both variable- and person-oriented analyses in a low-income sample of boys (N = 117) at-risk for early problem behavior. Results were consistent with theory on emotional SRS development in young children. Children initially used more emotion-focused SRS (e.g., comfort seeking) and transitioned to greater use of planful SRS (e.g., distraction) by age 4. Person-oriented analysis using trajectory analysis found similar patterns from 2–4, with small groups of boys showing delayed movement away from emotion-focused strategies or delay in the onset of regular use of distraction. The results provide a foundation for future research to examine the development of SRS in low-income young children. PMID:21675542
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values […] antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
Gentles, Stephen J; Charles, Cathy; Nicholas, David B; Ploeg, Jenny; McKibbon, K Ann
2016-10-11
Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research. The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis, are necessary features of this review type.
Shao, Xuexin; Huang, Biao; Zhao, Yongcun; Sun, Weixia; Gu, Zhiquan; Qian, Weifei
2014-06-01
The impacts of industrial and agricultural activities on soil Cd, Hg, Pb, and Cu in Zhangjiagang City, a rapidly developing region in China, were evaluated using two sampling strategies. The soil Cu, Cd, and Pb concentrations near industrial locations were greater than those measured away from industrial locations; the converse was true for Hg. The top enrichment factor (TEF) values, calculated as the ratio of metal concentrations between the topsoil and subsoil, were greater near industrial locations than away from them and were further related to the industry type. Thus, the TEF is an effective index for distinguishing sources of toxic elements, not only between anthropogenic and geogenic sources but also among different industry types. Targeted soil sampling near industrial locations yielded higher estimates of soil heavy metal levels. This study revealed that soil heavy metal contamination was primarily limited to local areas near industrial locations, despite rapid development over the last 20 years. The prevention and remediation of soil heavy metal pollution should focus on these high-risk areas in the future.
Suarez-Kurtz G.
2001-01-01
Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R² > 0.95, bias 0.85 of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R² > 0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R² > 0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
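The limited-sampling idea above can be sketched with simulated data. This toy regresses "true" AUC on the concentration at a single sampling time; the PK model, parameter values, and the 4 h sampling time are all invented for illustration and are not the paper's fitted three-point coefficients:

```python
# Toy limited-sampling strategy: fit AUC ≈ a + b * C(t*) on simulated subjects,
# where C(t) follows a hypothetical oral one-compartment model.
import math
import random

def conc(dose, cl, v, ka, t):
    """Oral one-compartment concentration (hypothetical PK model)."""
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

random.seed(42)
dose, t_star = 600.0, 4.0          # dose (mg) and single sampling time (h), assumed
pairs = []
for _ in range(40):                # simulated subjects
    cl = random.lognormvariate(math.log(8.0), 0.2)   # clearance (L/h), assumed
    v = random.lognormvariate(math.log(40.0), 0.2)   # volume (L), assumed
    pairs.append((conc(dose, cl, v, 1.5, t_star), dose / cl))  # (C(t*), true AUC)

# Ordinary least squares for AUC ≈ a + b * C(t*)
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
b = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x, _ in pairs)
a = my - b * mx
r2 = 1 - (sum((y - (a + b * x)) ** 2 for x, y in pairs)
          / sum((y - my) ** 2 for _, y in pairs))
print("slope b =", round(b, 3), "R^2 =", round(r2, 3))
```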
Sahu, Manjulata; Dash, Smruti
2011-01-01
The standard molar Gibbs energies of formation of Nd6UO12(s) have been measured using an oxygen concentration cell with yttria-stabilized zirconia as solid electrolyte. Δ_fG°_m(T) for Nd6UO12(s) has been calculated using the measured data and required thermodynamic data from the literature. The calculated Gibbs energy expression can be given as: Δ_fG°_m(Nd6UO12, s, T)/kJ·mol⁻¹ (±2.3) = −6660.1 + 1.0898 (T/K).
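The fitted expression in the abstract is a direct linear function of temperature and can be evaluated as follows (valid only over the measured temperature range, and with the stated ±2.3 kJ/mol uncertainty):

```python
# Direct use of the fitted expression from the abstract:
# ΔfG°m(Nd6UO12, s, T) / kJ·mol⁻¹ = −6660.1 + 1.0898 (T/K), within ±2.3 kJ/mol.
def delta_f_g(t_kelvin):
    """Standard molar Gibbs energy of formation of Nd6UO12(s), kJ/mol."""
    return -6660.1 + 1.0898 * t_kelvin

print(round(delta_f_g(1000.0), 1))  # → -5570.3
```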
Nunes, Matheus A.G.; Voss, Mônica; Corazza, Gabriela; Flores, Erico M.M.; Dressler, Valderi L.
2016-01-01
The use of reference solutions dispersed on filter paper discs is proposed for the first time as an external calibration strategy for matrix matching and determination of As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Sr, V and Zn in plants by laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS). The procedure is based on the use of filter paper discs as support for aqueous reference solutions, which are then evaporated, resulting in solid standards with concentrations up to 250 μg g⁻¹ of each element. Filter paper is proposed for calibration as a matrix-matched standard because of its similarity to botanical samples with respect to carbon concentration and its distribution through both matrices. These characteristics allowed the use of ¹³C as an internal standard (IS) during analysis by LA-ICP-MS. Parameters such as analyte signal normalization with ¹³C, carrier gas flow rate, laser energy, spot size, and calibration range were monitored. The calibration procedure using solution deposition on filter paper discs improved precision when ¹³C was used as IS. The method precision, calculated from the analysis of a certified reference material (CRM) of botanical matrix as the RSD obtained for 5 line scans, was lower than 20%. Accuracy of the LA-ICP-MS determinations was evaluated by analysis of four CRM pellets of botanical composition, as well as by comparison with results obtained by ICP-MS using solution nebulization after microwave-assisted digestion. Plant samples of unknown elemental composition were analyzed by the proposed LA method, and good agreement was obtained with the results of solution analysis. Limits of detection (LOD) for LA-ICP-MS were established by ablating 10 lines on a filter paper disc containing 40 μL of 5% HNO3 (v/v) as calibration blank. Values ranged from 0.05 to 0.81 μg g⁻¹. Overall, the use of filter paper as support for dried…
Sturrock, Hugh J W; Gething, Pete W; Ashton, Ruth A; Kolaczinski, Jan H; Kabatereine, Narcis B; Brooker, Simon
2011-09-01
In schistosomiasis control, there is a need to geographically target treatment to populations at high risk of morbidity. This paper evaluates alternative sampling strategies for surveys of Schistosoma mansoni to target mass drug administration in Kenya and Ethiopia. Two main designs are considered: lot quality assurance sampling (LQAS) of children from all schools; and a geostatistical design that samples a subset of schools and uses semi-variogram analysis and spatial interpolation to predict prevalence in the remaining unsurveyed schools. Computerized simulations are used to investigate the performance of sampling strategies in correctly classifying schools according to treatment needs and their cost-effectiveness in identifying high prevalence schools. LQAS performs better than geostatistical sampling in correctly classifying schools, but at a higher cost per high-prevalence school correctly classified. It is suggested that the optimal surveying strategy for S. mansoni needs to take into account the goals of the control programme and the financial and drug resources available.
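The simulation idea above can be sketched with an LQAS-style decision rule: sample n children per school and classify the school as high prevalence if at least d test positive. The values of n, d, and the prevalences below are arbitrary choices for illustration, not the paper's survey parameters:

```python
# Toy LQAS classification simulation: probability that a school with a given
# true prevalence is classified "high prevalence" under the rule positives >= d.
import random

def lqas_classify(true_prev, n=15, d=4, trials=2000, rng=random.Random(7)):
    """Fraction of simulated surveys classifying the school as high prevalence."""
    hits = 0
    for _ in range(trials):
        positives = sum(rng.random() < true_prev for _ in range(n))
        hits += positives >= d
    return hits / trials

for prev in (0.05, 0.25, 0.50):
    print(prev, lqas_classify(prev))  # low-prevalence schools are rarely flagged
```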
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and an optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with the PopED software. Individual area under the curve estimates were generated by Bayesian estimation using the full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p […]). The proposed sampling strategy promises to achieve the target area under the curve as part of precision dosing.
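The sparse-sampling idea can be illustrated numerically: for a biexponential (two-compartment-like) concentration profile, an AUC built only on the four sampling times reported in the abstract already comes close to the analytic value. The PK constants below are invented, and the trapezoidal-plus-tail estimate here is a simple stand-in for the paper's Bayesian estimation:

```python
# Sketch: compare exact AUC(0-inf) of a biexponential profile with a
# trapezoidal estimate using only the sparse times 0.08, 0.61, 2.0, 4.0 h.
import math

A, alpha, B, beta = 40.0, 3.0, 10.0, 0.35   # invented macro-constants

def conc(t):
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

exact_auc = A / alpha + B / beta            # analytic AUC(0-inf)
times = [0.0, 0.08, 0.61, 2.0, 4.0]
sparse_auc = sum((t2 - t1) * (conc(t1) + conc(t2)) / 2
                 for t1, t2 in zip(times, times[1:]))
sparse_auc += conc(4.0) / beta              # extrapolate the terminal tail
print(round(exact_auc, 2), round(sparse_auc, 2))
```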
W. L. Silva
2008-09-01
The reduction efficiency is an important variable in the black liquor burning process in the Kraft recovery boiler. Its value is obtained through slow experimental routines, and the measurement delay disturbs customary control in the pulp and paper industry. This paper describes an optimization approach for determining the reduction efficiency in the furnace bottom of the recovery boiler based on the minimization of the Gibbs free energy. The industrial data used in this study were obtained directly from CENIBRA's data acquisition system. The resulting approach is able to predict the steady-state behavior of the chemical composition of the recovery boiler furnace, especially the reduction efficiency, when different operational conditions are used. This result confirms the potential of this approach in the analysis of the daily operation of the recovery boiler.
Dynamics of macro-observables and space-time inhomogeneous Gibbs ensembles
Lanz, L.; Lupieri, G.
1978-01-01
The relationship between the classical description of a macrosystem and the quantum mechanics of its particles is considered within the framework recently developed by Ludwig. A procedure is given to define probability measures on the trajectory space of a macrosystem, which yields a statistical description of the dynamics of the macrosystem. The basic tool in this treatment is a new concept of space-time inhomogeneous Gibbs ensemble, defined in N-body quantum mechanics. In the Gaussian approximation of the probabilities, the results of Zubarev's theory based on the ''nonequilibrium statistical operator'' are recovered. The present ''embedding'' of the description of a macrosystem inside the N-body theory allows for a joint description of a macrosystem and a microsubsystem of it, and a ''macroscopical'' calculation of the statistical operator of the microsystem is indicated.
The osmotic second virial coefficient and the Gibbs-McMillan-Mayer framework
Mollerup, J.M.; Breil, Martin Peter
2009-01-01
The osmotic second virial coefficient is a key parameter in light scattering, protein crystallisation, self-interaction chromatography, and osmometry. The interpretation of the osmotic second virial coefficient depends on the set of independent variables. This commonly includes the independent variables associated with the Kirkwood-Buff, the McMillan-Mayer, and the Lewis-Randall solution theories. In this paper we analyse the osmotic second virial coefficient using a Gibbs-McMillan-Mayer framework, which is similar to the McMillan-Mayer framework with the exception that pressure rather than volume is an independent variable. A Taylor expansion is applied to the osmotic pressure of a solution where one of the solutes is a small molecule, a salt for instance, that equilibrates between the two phases. Other solutes are retained. Solvents are small molecules that equilibrate between the two phases…
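The kind of Taylor expansion referred to above is, in its standard textbook form (not necessarily the paper's exact working equation), the osmotic virial expansion in solute concentration, with the second virial coefficient as the leading non-ideal term:

```latex
% Standard osmotic virial expansion; B_{22} is the osmotic second virial
% coefficient, c_2 the solute mass concentration, M_2 its molar mass.
\frac{\Pi}{RT} = \frac{c_2}{M_2} + B_{22}\,c_2^{2} + \mathcal{O}\!\left(c_2^{3}\right)
```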
Generalized Gibbs distribution and energy localization in the semiclassical FPU problem
Hipolito, Rafael; Danshita, Ippei; Oganesyan, Vadim; Polkovnikov, Anatoli
2011-03-01
We investigate dynamics of the weakly interacting quantum mechanical Fermi-Pasta-Ulam (qFPU) model in the semiclassical limit below the stochasticity threshold. Within this limit we find that initial quantum fluctuations lead to the damping of FPU oscillations and relaxation of the system to a slowly evolving steady state with energy localized within a few momentum modes. We find that in large systems this state can be described by the generalized Gibbs ensemble (GGE), with the Lagrange multipliers being very weak functions of time. This ensemble gives an accurate description of the instantaneous correlation functions, both quadratic and quartic. Based on these results we conjecture that the GGE generically appears as a prethermalized state in weakly non-integrable systems.
A course on large deviations with an introduction to Gibbs measures
Rassoul-Agha, Firas
2015-01-01
This is an introductory course on the methods of computing asymptotics of probabilities of rare events: the theory of large deviations. The book combines large deviation theory with basic statistical mechanics, namely Gibbs measures with their variational characterization and the phase transition of the Ising model, in a text intended for a one semester or quarter course. The book begins with a straightforward approach to the key ideas and results of large deviation theory in the context of independent identically distributed random variables. This includes Cramér's theorem, relative entropy, Sanov's theorem, process level large deviations, convex duality, and change of measure arguments. Dependence is introduced through the interactions potentials of equilibrium statistical mechanics. The phase transition of the Ising model is proved in two different ways: first in the classical way with the Peierls argument, Dobrushin's uniqueness condition, and correlation inequalities and then a second time through the ...
Standard molar Gibbs free energy of formation of URh3(s)
Prasad, Rajendra; Sayi, Y.S.; Radhakrishna, J.; Yadav, C.S.; Shankaran, P.S.; Chhapru, G.C.
1992-01-01
Equilibrium partial pressures of CO(g) over the system (UO2(s) + C(s) + Rh(s) + URh3(s)) were measured in the temperature range 1327-1438 K. The standard molar Gibbs free energy of formation of URh3 (Δ_fG°_m) in this temperature range can be expressed as Δ_fG°_m(URh3, s, T)/(kJ/mol) = -348.165 + 0.03144 T(K), with an uncertainty of ±3.0 kJ/mol. The second- and third-law enthalpies of formation, Δ_fH°_m(URh3, s, 298.15 K), are (-318.4 ± 3.0) and (-298.3 ± 2.5) kJ/mol, respectively. (author). 7 refs., 3 tabs
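The fitted linear expression reported above can be evaluated directly; a minimal sketch (the coefficients and the 1327-1438 K validity range are taken from the abstract, and the ±3.0 kJ/mol is the reported uncertainty, not propagated here):

```python
# Sketch: evaluate the fitted standard molar Gibbs free energy of
# formation of URh3(s) from the expression quoted in the abstract.

def delta_f_G_URh3(T_K):
    """Delta_f G_m(URh3, s) in kJ/mol, valid for 1327 K <= T_K <= 1438 K."""
    if not (1327 <= T_K <= 1438):
        raise ValueError("fit is only reported for 1327-1438 K")
    return -348.165 + 0.03144 * T_K

print(round(delta_f_G_URh3(1400.0), 3))  # mid-range value, kJ/mol
```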
Ergodic time-reversible chaos for Gibbs' canonical oscillator
Hoover, William Graham; Sprott, Julien Clinton; Patra, Puneet Kumar
2015-01-01
Nosé's pioneering 1984 work inspired a variety of time-reversible deterministic thermostats. Though several groups have developed successful doubly-thermostated models, single-thermostat models have failed to generate Gibbs' canonical distribution for the one-dimensional harmonic oscillator. A 2001 doubly-thermostated model, claimed to be ergodic, has a singly-thermostated version. Though neither of these models is ergodic, this work has suggested a successful route toward singly-thermostated ergodicity. We illustrate both ergodicity and its lack for these models using phase-space cross sections and Lyapunov instability as diagnostic tools. - Highlights: • We develop cross-section and Lyapunov methods for diagnosing ergodicity. • We apply these methods to several thermostatted-oscillator problems. • We demonstrate the nonergodicity of previous work. • We find a novel family of ergodic thermostatted-oscillator problems.
Tremaine, P.R.
1979-01-01
Methods for calculating high-temperature Gibbs free energies of mononuclear cations and anions from room-temperature data are reviewed. Emphasis is given to species required for oxide solubility calculations relevant to mass transport situations in the nuclear industry. Free energies predicted by each method are compared to selected values calculated from recently reported solubility studies and other literature data. Values for monatomic ions estimated using the assumption C̄°p(T) = C̄°p(298) agree best with experiment up to 423 K. From 423 K to 523 K, free energies from an electrostatic model for ion hydration are more accurate. Extrapolations for hydrolyzed species are limited by a lack of room-temperature entropy data, and expressions for estimating these entropies are discussed. (orig.)
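The constant-heat-capacity assumption mentioned above corresponds to the standard textbook extrapolation; a minimal sketch (the relation is standard thermodynamics, but the numerical inputs below are illustrative placeholders, not data for any particular ion):

```python
import math

# Sketch (not from the paper): extrapolate a Gibbs free energy from
# 298.15 K data under the constant-heat-capacity assumption Cp(T) ~ Cp(298).

def gibbs_at_T(dH298, S298, Cp, T, T0=298.15):
    """G(T) in J/mol from room-temperature data, assuming constant Cp.

    dH298: enthalpy at T0 [J/mol]; S298: entropy at T0 [J/(mol K)];
    Cp: heat capacity, taken as T-independent [J/(mol K)].
    """
    H = dH298 + Cp * (T - T0)                 # H(T) = H(T0) + Cp*(T - T0)
    S = S298 + Cp * math.log(T / T0)          # S(T) = S(T0) + Cp*ln(T/T0)
    return H - T * S

# at T = T0 the correction terms vanish and G = H(T0) - T0*S(T0)
print(gibbs_at_T(-100000.0, 50.0, 30.0, 298.15))
```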
Winter, B.C. de; Gelder, T. van; Mathôt, R.A.A.; Glander, P.; Tedesco-Silva, H.; Hilbrands, L.B.; Budde, K.; Hest, R.M. van
2009-01-01
Previous studies predicted that limited sampling strategies (LSS) for estimation of mycophenolic acid (MPA) area-under-the-curve (AUC(0-12)) after ingestion of enteric-coated mycophenolate sodium (EC-MPS) using a clinically feasible sampling scheme may have poor predictive performance. Failure of
Boulos, Samy; Nyström, Laura
2017-11-01
The oxidation of cereal (1→3,1→4)-β-D-glucan can influence the health promoting and technological properties of this linear, soluble homopolysaccharide by introduction of new functional groups or chain scission. Apart from deliberate oxidative modifications, oxidation of β-glucan can already occur during processing and storage, which is mediated by hydroxyl radicals (HO•) formed by the Fenton reaction. We present four complementary sample preparation strategies to investigate oat and barley β-glucan oxidation products by hydrophilic interaction ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS), employing selective enzymatic digestion, graphitized carbon solid phase extraction (SPE), and functional group labeling techniques. The combination of these methods allows for detection of both lytic (C1, C3/4, C5) and non-lytic (C2, C4/3, C6) oxidation products resulting from HO•-attack at different glucose carbons. By treating oxidized β-glucan with lichenase and β-glucosidase, only oxidized parts of the polymer remained in oligomeric form, which could be separated by SPE from the vast majority of non-oxidized glucose units. This allowed for the detection of oligomers with mid-chain glucuronic acids (C6) and carbonyls, as well as carbonyls at the non-reducing end from lytic C3/C4 oxidation. Neutral reducing ends were detected by reductive amination with anthranilic acid/amide as labeled glucose and cross-ring cleaved units (arabinose, erythrose) after enzyme treatment and SPE. New acidic chain termini were observed by carbodiimide-mediated amidation of carboxylic acids as anilides of gluconic, arabinonic, and erythronic acids. Hence, a full characterization of all types of oxidation products was possible by combining complementary sample preparation strategies. Differences in fine structure depending on the source (oat vs. barley) translate into the ratio of observed oxidized oligomers, with in-depth analysis corroborating a random HO
Mendoza, W. G.; Corredor, J. E.; Ko, D.; Zika, R. G.; Mooers, C. N.
2008-05-01
The increasing effort to develop the coastal ocean observing system (COOS) in various institutions has gained momentum due to its high value to climate, environmental, economic, and health issues. The stress contributed by nutrients to the coral reef ecosystem is among many problems that are targeted to be resolved using this system. Traditional nutrient sampling has been inadequate to resolve issues on episodic nutrient fluxes in reef regions due to temporal and spatial variability. This paper illustrates a sampling strategy using COOS information to identify areas that need critical investigation. The area investigated is within the Puerto Rico subdomain (60-70°W, 15-20°N), and Caribbean Time Series (CaTS), World Ocean Circulation Experiment (WOCE), Intra-America Sea (IAS) ocean nowcast/forecast system (IASNFS), and other COOS-related online datasets are utilized. Nutrient profile results indicate nitrate is undetectable in the upper 50 m, apparently due to high biological consumption. Nutrients are delivered to Puerto Rico, particularly at the CaTS station, either via a meridional jet formed from opposing cyclonic and anticyclonic eddies or via wind-driven upwelling. The strong vertical fluctuation in the upper 50 m demonstrates a high anomaly in temperature and salinity and a strong cross correlation signal. High chlorophyll a concentration corresponding to seasonal high nutrient influx coincides with higher precipitation accumulation rates and apparent riverine input from the Amazon and Orinoco Rivers during summer (August) than during winter (February) seasons. Non-detectability of nutrients in the upper 50 m is a reflection of poor sampling frequency or the absence of a highly sensitive nutrient analysis method to capture episodic events. Thus, this paper was able to determine the range of depths and concentrations that need to be critically investigated to determine nutrient fluxes, nutrient sources, and climatological factors that can affect nutrient delivery.
Gary, Ronald K.
2004-01-01
The concentration dependence of the ΔS term in the Gibbs free energy function is described in relation to its application to reversible reactions in biochemistry. An intuitive and non-mathematical argument for the concentration dependence of the ΔS term in the Gibbs free energy equation is derived and the applicability of the equation to…
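The concentration dependence discussed above enters the Gibbs energy through the reaction quotient Q in the standard relation ΔG = ΔG° + RT ln Q; a minimal sketch (the relation is standard biochemistry, but the numbers below are illustrative, not values from the article):

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Sketch: concentration dependence of the reaction Gibbs energy via the
# reaction quotient Q.  Delta G0 below is an illustrative placeholder.

def delta_G(delta_G0, T, Q):
    """Reaction Gibbs energy [J/mol] at reaction quotient Q."""
    return delta_G0 + R * T * math.log(Q)

# at Q = 1 (standard concentrations) the correction vanishes:
print(delta_G(-30000.0, 298.15, 1.0))  # → -30000.0
```

Dilution of products (Q < 1) makes ΔG more negative, which is the entropy-of-mixing effect the article argues for intuitively.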
Ulstrup, Jens
1999-01-01
We discuss a simple model for the environmental reorganisation Gibbs free energy, E_r, in electron transfer between a metalloprotein and a small reaction partner. The protein is represented as a dielectric globule with low dielectric constant, the metal centres as conducting spheres, all embedded...
Moučka, F.; Nezbeda, Ivo
2013-01-01
Vol. 360, 25 December 2013, pp. 472-476. ISSN 0378-3812. Grant (other): GA MŠMT (CZ) LH12019. Institutional support: RVO:67985858. Keywords: multi-particle-move Monte Carlo; Gibbs ensemble; vapor-liquid equilibria. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 2.241, year: 2013
Ebenhöh, Oliver; Spelberg, Stephanie
2018-02-19
The photosynthetic carbon reduction cycle, or Calvin-Benson-Bassham (CBB) cycle, is now contained in every standard biochemistry textbook. Although the cycle was already proposed in 1954, it is still the subject of intense research, and even the structure of the cycle, i.e. the exact series of reactions, is still under debate. The controversy about the cycle's structure was fuelled by the findings of Gibbs and Kandler in 1956 and 1957, when they observed that radioactive 14CO2 was dynamically incorporated in hexoses in a very atypical and asymmetrical way, a phenomenon later termed the 'photosynthetic Gibbs effect'. Now, it is widely accepted that the photosynthetic Gibbs effect is not in contradiction to the reaction scheme proposed by CBB, but the arguments given have been largely qualitative and hand-waving. To fully appreciate the controversy and to understand the difficulties in interpreting the Gibbs effect, it is illustrative to illuminate the history of the discovery of the CBB cycle. We here give an account of central scientific advances and discoveries, which were essential prerequisites for the elucidation of the cycle. Placing the historic discoveries in the context of the modern textbook pathway scheme illustrates the complexity of the cycle and demonstrates why especially dynamic labelling experiments are far from easy to interpret. We conclude by arguing that it requires sound theoretical approaches to resolve conflicting interpretations and to provide consistent quantitative explanations. © 2018 The Author(s).
Some clarifications concerning the Gibbs free energy thermodynamic functions
Solaz Portolés, Joan Josep; Quílez Pardo, Juan
2001-01-01
The aim of this study is to elucidate some didactic misunderstandings related to the use and applicability of the delta functions ∆G, ∆rG and ∆rG°, which derive from the thermodynamic potential Gibbs free energy, G.
Ferreira, D. J. S.; Bezerra, B. N.; Collyer, M. N.; Garcia, A.; Ferreira, I. L.
2018-04-01
The simulation of casting processes demands accurate information on the thermophysical properties of the alloy; however, such information is scarce in the literature for multicomponent alloys. Generally, metallic alloys applied in industry have more than three solute components. In the present study, a general solution of Butler's formulation for surface tension is presented for multicomponent alloys and is applied to quaternary Al-Cu-Si-Fe alloys, thus permitting the Gibbs-Thomson coefficient to be determined. Such a coefficient is a determining factor in the reliability of predictions furnished by microstructure growth models and by numerical computations of solidification thermal parameters, which will depend on the thermophysical properties assumed in the calculations. The Gibbs-Thomson coefficient for ternary and quaternary alloys is seldom reported in the literature. A numerical model based on Powell's hybrid algorithm and a finite difference Jacobian approximation has been coupled to a Thermo-Calc TCAPI interface to assess the excess Gibbs energy of the liquid phase, permitting liquidus temperature, latent heat, alloy density, surface tension and Gibbs-Thomson coefficient for Al-Cu-Si-Fe hypoeutectic alloys to be calculated, as an example of the calculation capabilities of the proposed method for multicomponent alloys. The computed results are compared with thermophysical properties of binary Al-Cu and ternary Al-Cu-Si alloys found in the literature and presented as a function of the Cu solute composition.
Fleming D
2006-01-01
Background: Therapeutic drug monitoring for mycophenolic acid (MPA) is increasingly being advocated. The present therapeutic range relates to the 12-hour area under the serum concentration-time profile (AUC). However, this is a cumbersome, tedious, cost-restricting procedure. Is it possible to reduce this sampling period? Aim: To compare the AUC from a reduced sampling strategy with the full 12-hour profile for MPA. Settings and Design: Clinical Pharmacology Unit of a tertiary care hospital in South India. Retrospective, paired data. Materials and Methods: Thirty-four 12-hour profiles from post-renal transplant patients on Cellcept® were evaluated. Profiles were grouped according to steroid and immunosuppressant co-medication and the time after transplant. MPA was estimated by high performance liquid chromatography with UV detection. From the 12-hour profiles the AUC up to only six hours was calculated by the trapezoidal rule and a correction factor applied. These two AUCs were then compared. Statistical Analysis: Linear regression, intra-class correlations (ICC) and a two-tailed paired t-test were applied to the data. Results: Comparing the 12-hour AUC with the paired 6-hour extrapolated AUC, the ICC and linear regression (r2) were very good for all three groups. No statistical difference was found by a two-tailed paired t-test. No bias was seen with a Bland-Altman plot or by calculation. Conclusion: For patients on Cellcept® with prednisolone ± cyclosporine, the 6-hour corrected AUC is an accurate measure of the full 12-hour AUC.
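The trapezoidal-rule AUC calculation used above can be sketched in a few lines; the sample times and concentrations below are illustrative, not patient data from the study:

```python
# Sketch: area under the concentration-time curve by the trapezoidal rule,
# as used in limited-sampling AUC strategies.

def auc_trapezoid(times, concs):
    """AUC by the trapezoidal rule; times in h, concentrations in mg/L."""
    if len(times) != len(concs) or len(times) < 2:
        raise ValueError("need matched time/concentration series")
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for t0, t1, c0, c1 in zip(times, times[1:], concs, concs[1:]))

times = [0.0, 0.5, 1.0, 2.0, 4.0, 6.0]   # h (illustrative 6-h scheme)
concs = [0.0, 8.0, 12.0, 6.0, 3.0, 1.5]  # mg/L (illustrative)
print(auc_trapezoid(times, concs))       # → 29.5 (mg*h/L)
```

A limited-sampling estimate would then multiply such a partial AUC by a study-derived correction factor to approximate the full 12-hour AUC.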
Lehtola, Markku J; Miettinen, Ilkka T; Hirvonen, Arja; Vartiainen, Terttu; Martikainen, Pertti J
2007-12-01
The numbers of bacteria generally increase in distributed water. Often household pipelines or water fittings (e.g., taps) represent the most critical location for microbial growth in water distribution systems. According to the European Union drinking water directive, there should not be abnormal changes in the colony counts in water. We used a pilot distribution system to study the effects of water stagnation on drinking water microbial quality, concentration of copper and formation of biofilms with two commonly used pipeline materials in households; copper and plastic (polyethylene). Water stagnation for more than 4h significantly increased both the copper concentration and the number of bacteria in water. Heterotrophic plate counts were six times higher in PE pipes and ten times higher in copper pipes after 16 h of stagnation than after only 40 min stagnation. The increase in the heterotrophic plate counts was linear with time in both copper and plastic pipelines. In the distribution system, bacteria originated mainly from biofilms, because in laboratory tests with water, there was only minor growth of bacteria after 16 h stagnation. Our study indicates that water stagnation in the distribution system clearly affects microbial numbers and the concentration of copper in water, and should be considered when planning the sampling strategy for drinking water quality control in distribution systems.
Lablans, Martin; Kadioglu, Dennis; Mate, Sebastian; Leb, Ines; Prokosch, Hans-Ulrich; Ückert, Frank
2016-03-01
Medical research projects often require more biological material than can be supplied by a single biobank. For this reason, a multitude of strategies support locating potential research partners with matching material without requiring centralization of sample storage. Classification of different strategies for biobank networks, in particular for locating suitable samples. Description of an IT infrastructure combining these strategies. Existing strategies can be classified according to three criteria: (a) granularity of sample data: coarse bank-level data (catalogue) vs. fine-granular sample-level data, (b) location of sample data: central (central search service) vs. decentral storage (federated search services), and (c) level of automation: automatic (query-based, federated search service) vs. semi-automatic (inquiry-based, decentral search). All mentioned search services require data integration. Metadata help to overcome semantic heterogeneity. The "Common Service IT" in BBMRI-ERIC (Biobanking and BioMolecular Resources Research Infrastructure) unites a catalogue, the decentral search and metadata in an integrated platform. As a result, researchers receive versatile tools to search suitable biomaterial, while biobanks retain a high degree of data sovereignty. Despite their differences, the presented strategies for biobank networks do not rule each other out but can complement and even benefit from each other.
Rogal, Shari S; Yakovchenko, Vera; Waltz, Thomas J; Powell, Byron J; Kirchner, JoAnn E; Proctor, Enola K; Gonzalez, Rachel; Park, Angela; Ross, David; Morgan, Timothy R; Chartier, Maggie; Chinman, Matthew J
2017-05-11
Hepatitis C virus (HCV) is a common and highly morbid illness. New medications that have much higher cure rates have become the new evidence-based practice in the field. Understanding the implementation of these new medications nationally provides an opportunity to advance the understanding of the role of implementation strategies in clinical outcomes on a large scale. The Expert Recommendations for Implementing Change (ERIC) study defined discrete implementation strategies and clustered these strategies into groups. The present evaluation assessed the use of these strategies and clusters in the context of HCV treatment across the US Department of Veterans Affairs (VA), Veterans Health Administration, the largest provider of HCV care nationally. A 73-item survey was developed and sent to all VA sites treating HCV via electronic survey, to assess whether or not a site used each ERIC-defined implementation strategy related to employing the new HCV medication in 2014. VA national data regarding the number of Veterans starting on the new HCV medications at each site were collected. The associations between treatment starts and number and type of implementation strategies were assessed. A total of 80 (62%) sites responded. Respondents endorsed an average of 25 ± 14 strategies. The number of treatment starts was positively correlated with the total number of strategies endorsed (r = 0.43, p strategies endorsed (p strategies, compared to 15 strategies in the lowest quartile. There were significant differences in the types of strategies endorsed by sites in the highest and lowest quartiles of treatment starts. Four of the 10 top strategies for sites in the top quartile had significant correlations with treatment starts compared to only 1 of the 10 top strategies in the bottom quartile sites. Overall, only 3 of the top 15 most frequently used strategies were associated with treatment. These results suggest that sites that used a greater number of implementation
Feng, Dong-xia; Nguyen, Anh V
2016-03-01
Floating objects on air-water interfaces are central to a number of everyday activities, from walking on water by insects to flotation separation of valuable minerals using air bubbles. The available theories show that a fine sphere can float if the force of surface tension and buoyancy can support the sphere at the interface with an apical angle subtended by the circle of contact being larger than the contact angle. Here we show that the pinning of the contact line at the sharp edge, known as the Gibbs inequality condition, also plays a significant role in controlling the stability and detachment of floating spheres. Specifically, we truncated the spheres with different angles and used a force sensor device to measure the force of pushing the truncated spheres from the interface into water. We also developed a theoretical model to calculate the pushing force, which, in combination with the experimental results, shows different effects of the Gibbs inequality condition on the stability and detachment of the spheres from the water surface. For small angles of truncation, the Gibbs inequality condition does not affect the sphere detachment, and hence the classical theories on the floatability of spheres are valid. For large truncated angles, the Gibbs inequality condition determines the tenacity of the particle-meniscus contact and the stability and detachment of floating spheres. In this case, the classical theories on the floatability of spheres are no longer valid. A critical truncated angle for the transition from the classical to the Gibbs inequality regimes of detachment was also established. The outcomes of this research advance our understanding of the behavior of floating objects, in particular, the flotation separation of valuable minerals, which often contain various sharp edges of their crystal faces.
Lima da Silva, Aline; Heck, Nestor Cesar
2003-01-01
Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution of the nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging efforts. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment shows several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as suitable for computing equilibrium concentrations as they are for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr2O3(s) and FeO·Cr2O3(s). The atmosphere consists of O2, CO and CO2 constituents. The liquid iron
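The Solver-based minimization described above can be illustrated on a deliberately tiny system; this is a hedged toy analogue (an assumed ideal-gas isomerization A ⇌ B, not the Fe-Cr-O-C-Ni system of the paper), with the spreadsheet Solver step replaced by a simple golden-section search:

```python
import math

R, T = 8.314, 1000.0  # gas constant J/(mol K), temperature K
dG0 = 0.0             # standard Gibbs energy change A -> B, J/mol (illustrative)

# Toy sketch of direct Gibbs energy minimization: one mole of ideal-gas
# A <-> B; x is the mole fraction converted to B, and the mass balance is
# built in (n_A = 1 - x, n_B = x).

def G(x):
    """Mixture Gibbs energy (relative) for conversion x, 0 < x < 1."""
    return x * dG0 + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

def golden_min(f, a, b, tol=1e-10):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    phi = (math.sqrt(5) - 1) / 2
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

x_eq = golden_min(G, 1e-9, 1 - 1e-9)
print(round(x_eq, 6))  # with dG0 = 0 the minimum sits at x = 0.5
```

The analytic check is x/(1 - x) = exp(-ΔG°/RT); for a real multicomponent system one would minimize over all species amounts under elemental mass-balance constraints, which is exactly the Solver setup the paper describes.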
Aymerich, I; Acuña, V; Ort, C; Rodríguez-Roda, I; Corominas, Ll
2017-11-15
The growing awareness of the relevance of organic microcontaminants in the environment has led to a growing number of studies on attenuation of these compounds in wastewater treatment plants (WWTPs) and rivers. However, the effects of the sampling strategies (frequency and duration of composite samples) on the attenuation estimates are largely unknown. Our goal was to assess how frequency and duration of composite samples influence uncertainty of the attenuation estimates in WWTPs and rivers. Furthermore, we also assessed how compound consumption rate and degradability influence uncertainty. The assessment was conducted through simulating the integrated wastewater system of Puigcerdà (NE Iberian Peninsula) using a sewer pattern generator and a coupled model of WWTP and river. Results showed that the sampling strategy is especially critical at the influent of the WWTP, particularly when the number of toilet flushes containing the compound of interest is small (≤100 toilet flushes with compound per day), and less critical at the effluent of the WWTP and in the river due to the mixing effects of the WWTP. For example, at the WWTP, when evaluating a compound that is present in 50 pulses·d⁻¹ using a sampling frequency of 15 min to collect a 24-h composite sample, the attenuation uncertainty can range from 94% (0% degradability) to 9% (90% degradability). The estimation of attenuation in rivers is less critical than in WWTPs, as the attenuation uncertainty was lower than 10% for all evaluated scenarios. Interestingly, the errors in the estimates of attenuation are usually lower than those of loads for most sampling strategies and compound characteristics (e.g. consumption and degradability), although the opposite occurs for compounds with low consumption and inappropriate sampling strategies at the WWTP. Hence, when designing a sampling campaign, one should consider the influence of compounds' consumption and degradability as well as the desired level of accuracy in
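The effect of composite-sample frequency on load estimates can be illustrated with a minimal Monte Carlo sketch. This is not the Puigcerdà model above: the pulse model, the 2-minute passage window, and all numbers are our own illustrative assumptions.

```python
import math
import random

random.seed(1)

# Toy sketch: a compound enters the sewer in n_pulses short pulses per day;
# a 24-h composite sample is built from grab aliquots every dt_min minutes.
# A grab "sees" a pulse only if it falls within the pulse's assumed
# ~2-minute passage window, so few pulses -> lumpy, uncertain load estimates.

def load_error(n_pulses, dt_min, n_rep=200):
    """Mean relative error of the estimated daily load."""
    errors = []
    for _ in range(n_rep):
        pulses = [random.uniform(0.0, 1440.0) for _ in range(n_pulses)]
        hits = 0
        for p in pulses:
            t = math.ceil(p / dt_min) * dt_min   # next grab after the pulse
            if t < 1440 and t - p < 2.0:
                hits += 1
        est = hits * dt_min / 2.0                # scale aliquots to a daily load
        errors.append(abs(est - n_pulses) / n_pulses)
    return sum(errors) / n_rep

print(load_error(5, 15) > load_error(500, 15))  # fewer pulses, larger error
```

The qualitative conclusion matches the abstract: with few pulses per day, a 15-min grab frequency misses or over-weights individual pulses, so the relative error of the influent load (and hence of the attenuation estimate) grows.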
Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech
2015-01-01
Nowadays, studies related to the distribution of metallic elements in biological samples are one of the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, in such literature, there is a lack of articles dedicated to reviewing calibration strategies, and their problems, nomenclature, definitions, ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples including (1) nomenclature; (2) definitions, and (3) selected and sophisticated, examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize the division of calibration methods that are different than those hitherto used. This article is the first work in literature that refers to and emphasizes many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.
2006-10-05
the likely existence of a small foreshock. The most well-known examples of InSAR used as a geodetic tool involve...the event. We have used the seismic waveforms in the Sultan Dag event to identify a small foreshock preceding the main shock by about 3 seconds
Brooks, Benjamin A; Gomez, Francisco; Sandvol, Eric; Frazer, L. N
2006-01-01
...) in primarily Africa and the Middle East, although we also included some events from Asia. We find that InSAR is capable of routine detection of surface displacements associated with small (
The Gibbs free energy of homogeneous nucleation: From atomistic nuclei to the planar limit.
Cheng, Bingqing; Tribello, Gareth A; Ceriotti, Michele
2017-09-14
In this paper we discuss how the information contained in atomistic simulations of homogeneous nucleation should be used when fitting the parameters in macroscopic nucleation models. We show how the number of solid and liquid atoms in such simulations can be determined unambiguously by using a Gibbs dividing surface and how the free energy as a function of the number of solid atoms in the nucleus can thus be extracted. We then show that the parameters (the chemical potential, the interfacial free energy, and a Tolman correction) of a model based on classical nucleation theory can be fitted using the information contained in these free-energy profiles but that the parameters in such models are highly correlated. This correlation is unfortunate as it ensures that small errors in the computed free energy surface can give rise to large errors in the extrapolated properties of the fitted model. To resolve this problem we thus propose a method for fitting macroscopic nucleation models that uses simulations of planar interfaces and simulations of three-dimensional nuclei in tandem. We show that when the chemical potentials and the interface energy are pinned to their planar-interface values, more precise estimates for the Tolman length are obtained. Extrapolating the free energy profile obtained from small simulation boxes to larger nuclei is thus more reliable.
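The fitting step described above can be sketched on a toy profile. This is a hedged illustration, not the paper's procedure: the Tolman correction is omitted, the data are synthetic, and the CNT form G(n) = Δμ·n + γ_eff·n^(2/3) with coefficients fitted by linear least squares is our simplification.

```python
# Toy sketch of a CNT-style fit: G(n) = a*n + b*n**(2/3) fitted to a
# synthetic free-energy profile by linear least squares (normal equations).

def fit_cnt(ns, Gs):
    """Least-squares fit of G(n) = a*n + b*n**(2/3); returns (a, b)."""
    s11 = sum(n * n for n in ns)                     # <f1, f1>, f1 = n
    s12 = sum(n * n ** (2 / 3) for n in ns)          # <f1, f2>, f2 = n**(2/3)
    s22 = sum(n ** (4 / 3) for n in ns)              # <f2, f2>
    b1 = sum(G * n for n, G in zip(ns, Gs))
    b2 = sum(G * n ** (2 / 3) for n, G in zip(ns, Gs))
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

ns = list(range(10, 200, 10))
Gs = [-0.3 * n + 2.0 * n ** (2 / 3) for n in ns]     # synthetic "simulation" data
a, b = fit_cnt(ns, Gs)
print(round(a, 6), round(b, 6))  # recovers dmu = -0.3, gamma_eff = 2.0
```

Because n and n^(2/3) are nearly collinear over a narrow range of nucleus sizes, the two fitted parameters are strongly correlated, which is the problem the paper addresses by pinning Δμ and the interface energy to planar-interface values.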
Comment on "Inference with minimal Gibbs free energy in information field theory".
Iatsenko, D; Stefanovska, A; McClintock, P V E
2012-03-01
Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.
A study of the Boltzmann and Gibbs entropies in the context of a stochastic toy model
Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna
2018-05-01
In this article we reconsider a stochastic toy model of thermal contact, first introduced in Onorato et al (2017 Eur. J. Phys. 38 045102), showing its educational potential for clarifying some current issues in the foundations of thermodynamics. The toy model can be realized in practice using dice and coins, and can be seen as representing thermal coupling of two subsystems with energy bounded from above. The system is used as a playground for studying the different behaviours of the Boltzmann and Gibbs temperatures and entropies in the approach to steady state. The process that models thermal contact between the two subsystems can be proved to be an ergodic, reversible Markov chain; thus the dynamics produces an equilibrium distribution in which the weight of each state is proportional to its multiplicity in terms of microstates. Each one of the two subsystems, taken separately, is formally equivalent to an Ising spin system in the non-interacting limit. The model is intended for educational purposes, and the level of readership of the article is aimed at advanced undergraduates.
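A dice-and-coins model of this kind can be simulated directly. The specific move rule below (swap one randomly chosen "coin" between the two subsystems) is our assumption for illustration, not necessarily the rule used in the article; it conserves total energy, keeps each subsystem's energy bounded from above, and is a reversible Markov chain.

```python
import random

random.seed(0)

# Toy thermal-contact sketch: each subsystem is N coins with energy 0 or 1
# (energy bounded from above); a step swaps one random coin from A with one
# random coin from B.

N = 20
A = [1] * 15 + [0] * 5   # "hot" start: A holds most of the energy
B = [0] * N
E_tot = sum(A) + sum(B)

samples = []
for step in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    A[i], B[j] = B[j], A[i]          # energy-conserving exchange
    samples.append(sum(A))

assert sum(A) + sum(B) == E_tot      # conservation holds throughout
mean_EA = sum(samples[5000:]) / len(samples[5000:])
print(round(mean_EA, 2))  # relaxes toward equipartition, E_tot/2 = 7.5
```

At steady state each of the 2N coin slots is equally likely to hold each energy unit, so the equilibrium distribution of E_A is hypergeometric, with mean E_tot/2 for equal subsystem sizes.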
A Gibbs Energy Minimization Approach for Modeling of Chemical Reactions in a Basic Oxygen Furnace
Kruskopf, Ari; Visuri, Ville-Valtteri
2017-12-01
In modern steelmaking, hot metal is decarburized and converted into steel primarily in converter processes, such as the basic oxygen furnace. The objective of this work was to develop a new mathematical model for a top-blown steel converter, which accounts for the complex reaction equilibria in the impact zone, also known as the hot spot, as well as the associated mass and heat transport. An in-house computer code of the model has been developed in Matlab. The main assumption of the model is that all reactions take place in a specified reaction zone. The mass transfer between the reaction volume, bulk slag, and metal determines the reaction rates for the species. The thermodynamic equilibrium is calculated using the partitioning of Gibbs energy (PGE) method. The activity model for the liquid metal is the unified interaction parameter model, and for the liquid slag the modified quasichemical model (MQM). The MQM was validated by calculating iso-activity lines for the liquid slag components. The PGE method together with the MQM was validated by calculating liquidus lines for solid components. The results were compared with measurements from the literature. The full chemical reaction model was validated by comparing the metal and slag compositions to measurements from an industrial-scale converter. The predictions were found to be in good agreement with the measured values. Furthermore, the accuracy of the model was found to compare favorably with the models proposed in the literature. The real-time capability of the proposed model was confirmed in test calculations.
Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble
Bicci, Alberto
2016-12-01
In the domain of so-called econophysics, some attempts have already been made to apply the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is taken from a different perspective, modeling the limit order book and the price formation process of a given stock by the grand-canonical Gibbs ensemble for the bid and ask orders. Applying Bose-Einstein statistics to this ensemble then allows the distributions of the sell and buy orders to be derived as functions of price. As a consequence we can define, in a meaningful way, expressions for the temperatures of the ensembles of bid orders and of ask orders, which are functions of the minimum bid, maximum ask, and closure prices of the stock, as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid order temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator defined empirically in the technical analysis of stock markets. Furthermore, the derived distributions for aggregate bid and ask orders can be subjected to well-defined validations against real data, giving the model a falsifiable character.
Demonstration and resolution of the Gibbs paradox of the first kind
Peters, Hjalmar
2014-01-01
The Gibbs paradox of the first kind (GP1) refers to the false increase in entropy which, in statistical mechanics, is calculated from the process of combining two gas systems S1 and S2 consisting of distinguishable particles. Presented in a somewhat modified form, the GP1 manifests as a contradiction to the second law of thermodynamics. Contrary to popular belief, this contradiction affects not only classical but also quantum statistical mechanics. This paper resolves the GP1 by considering two effects. (i) The uncertainty about which particles are located in S1 and which in S2 contributes to the entropies of S1 and S2. (ii) S1 and S2 are correlated by the fact that if a certain particle is located in one system, it cannot be located in the other. As a consequence, the entropy of the total system consisting of S1 and S2 is not the sum of the entropies of S1 and S2.
A Gibbs potential expansion with a quantic system made up of a large number of particles
Bloch, Claude; Dominicis, Cyrano de
1959-01-01
Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero-density limit, it reduces to the Beth-Uhlenbeck expression for the second virial coefficient. For a system of fermions in the zero-temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground-state energy plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation to arbitrary temperature. Reprint of a paper published in Nuclear Physics 10 (1959) 181-196.
Ahmad, Mohd Ali Khameini; Liao, Lingmin; Saburov, Mansoor
2018-06-01
We study the set of p-adic Gibbs measures of the q-state Potts model on the Cayley tree of order three. We prove the vastness of the set of the periodic p-adic Gibbs measures for this model by showing the chaotic behavior of the corresponding Potts-Bethe mapping over Q_p for the prime numbers p ≡ 1 (mod 3). In fact, for 0 < |θ-1|_p < |q|_p^2 < 1, where θ = exp_p(J) and J is a coupling constant, there exists a subsystem that is isometrically conjugate to the full shift on three symbols. Meanwhile, for 0 < |q|_p^2 ≤ |θ-1|_p < |q|_p < 1, there exists a subsystem that is isometrically conjugate to a subshift of finite type on r symbols, where r ≥ 4. However, these subshifts on r symbols are all topologically conjugate to the full shift on three symbols. The p-adic Gibbs measures of the same model for the primes p = 2, 3 and the corresponding Potts-Bethe mapping are also discussed. On the other hand, for 0 < |θ-1|_p < |q|_p < 1, we remark that the Potts-Bethe mapping is not chaotic when p = 3 and when p ≡ 2 (mod 3), and we cannot conclude the vastness of the set of the periodic p-adic Gibbs measures. In a forthcoming paper with the same title, we will treat the case 0 < |q|_p ≤ |θ-1|_p < 1 for all prime numbers p.
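The conditions above are stated in terms of the p-adic absolute value |·|_p. As a minimal illustration of the standard definitions (not code from the paper), the p-adic valuation and norm of a nonzero integer can be computed as:

```python
def vp(n: int, p: int) -> int:
    """p-adic valuation: the exponent of the prime p in the factorization
    of the nonzero integer n."""
    n = abs(n)
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_norm(n: int, p: int) -> float:
    """p-adic absolute value |n|_p = p**(-vp(n, p)) for a nonzero integer n."""
    return float(p) ** (-vp(n, p))
```

For example, |9|_3 = 1/9 while |10|_3 = 1, so inequalities such as 0 < |θ-1|_p < |q|_p^2 < 1 constrain how divisible by p the relevant quantities are.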
Judora J. Spangenberg
2000-06-01
To examine the relationships between stress levels and, respectively, stressor appraisal, coping strategies, and biographical variables, 107 managers completed a biographical questionnaire, the Experience of Work and Life Circumstances Questionnaire, and the Coping Strategy Indicator. Significant negative correlations were found between stress levels and appraisal scores on all work-related stressors. An avoidant coping strategy explained significant variance in stress levels in a model also containing social-support-seeking and problem-solving coping strategies. It was concluded that an avoidant coping strategy probably contributed to increased stress levels. Female managers experienced significantly higher stress levels and utilized a social-support-seeking coping strategy significantly more than male managers did.
Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon
2016-05-01
The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and to demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with conventional retrospective respiratory gating, while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across k-space.
Naumov, V. V.; Isaeva, V. A.; Kuzina, E. N.; Sharnin, V. A.
2012-12-01
Gibbs energies for the transfer of glycylglycine and glycylglycinate ions from water to water-dimethylsulfoxide solvents are determined from the interphase distribution of the substances between immiscible phases in the composition range of 0.00 to 0.20 mole fraction of DMSO at 298.15 K. It is shown that with a rise in the concentration of the nonaqueous component in solution, a weakening of the solvation of the dipeptide and its anion is observed, due mainly to the destabilization of the carboxyl group.
Naslain, R.; Thebault, J.; Hagenmuller, P.; Bernard, C.
1979-01-01
A thermodynamic approach based on minimization of the total Gibbs free energy of the system is used to study the chemical vapour deposition (CVD) of boron from BCl3-H2 or BBr3-H2 mixtures on various types of substrates (at 1000 K < T < 1900 K and 1 atm). In this approach it is assumed that states close to equilibrium are reached in the boron CVD apparatus.
Fisher, Shawn C.; Reilly, Timothy J.; Jones, Daniel K.; Benzel, William M.; Griffin, Dale W.; Loftin, Keith A.; Iwanowicz, Luke R.; Cohl, Jonathan A.
2015-12-17
An understanding of the effects on human and ecological health brought by major coastal storms or flooding events is typically limited because of a lack of regionally consistent baseline and trends data in locations proximal to potential contaminant sources and mitigation activities, sensitive ecosystems, and recreational facilities where exposures are probable. In an attempt to close this gap, the U.S. Geological Survey (USGS) has implemented the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study to collect regional sediment-quality data prior to and in response to future coastal storms. The standard operating procedure (SOP) detailed in this document serves as the sample-collection protocol for the SCoRR strategy by providing step-by-step instructions for site preparation, sample collection and processing, and shipping of soil and surficial sediment (for example, bed sediment, marsh sediment, or beach material). The objectives of the SCoRR strategy pilot study are (1) to create a baseline of soil-, sand-, marsh sediment-, and bed-sediment-quality data from sites located in the coastal counties from Maine to Virginia based on their potential risk of being contaminated in the event of a major coastal storm or flooding (defined as Resiliency mode); and (2) respond to major coastal storms and flooding by reoccupying select baseline sites and sampling within days of the event (defined as Response mode). For both modes, samples are collected in a consistent manner to minimize bias and maximize quality control by ensuring that all sampling personnel across the region collect, document, and process soil and sediment samples following the procedures outlined in this SOP. Samples are analyzed using four USGS-developed screening methods—inorganic geochemistry, organic geochemistry, pathogens, and biological assays—which are also outlined in this SOP. Because the SCoRR strategy employs a multi-metric approach for sample analyses, this
The Charlie-Gibbs Fracture Zone: A Crossroads of the Atlantic Meridional Overturning Circulation
Bower, A. S.; Furey, H. H.; Xu, X.
2016-02-01
The Charlie-Gibbs Fracture Zone (CGFZ), a deep gap in the Mid-Atlantic Ridge at 52°N, is the primary conduit for westward-flowing Iceland-Scotland Overflow Water (ISOW), which merges with Denmark Strait Overflow Water to form the Deep Western Boundary Current. The CGFZ has also been shown to "funnel" the path of the northern branch of the eastward-flowing North Atlantic Current (NAC), thereby bringing these two branches of the AMOC into close proximity. A recent two-year time series of hydrographic properties and currents from eight tall moorings across the CGFZ offers the first opportunity to investigate the NAC as a source of variability for ISOW transport. The two-year mean and standard deviation of ISOW transport was -1.7 ± 1.5 Sv, compared to -2.4 ± 3.0 Sv reported by Saunders for a 13-month period in 1988-1989. Differences in the two estimates are partly explained by limitations of the Saunders array, but more importantly reflect the strong low-frequency variability in ISOW transport through the CGFZ (which includes complete reversals). Both the observations and output from a multi-decadal simulation of the North Atlantic using the Hybrid Coordinate Ocean Model (HYCOM) forced with interannually varying wind and buoyancy fields indicate a strong positive correlation between ISOW transport and the strength of the NAC through the CGFZ (stronger eastward NAC related to weaker westward ISOW transport). Vertical structure of the low-frequency current variability and water mass structure in the CGFZ will also be discussed. The results have implications regarding the interaction of the upper and lower limbs of the AMOC, and downstream propagation of ISOW transport variability in the Deep Western Boundary Current.
Modeling Electric Double-Layer Capacitors Using Charge Variation Methodology in Gibbs Ensemble
Ganeshprasad Pavaskar
2018-01-01
Supercapacitors deliver higher power than batteries and find applications in grid integration and electric vehicles. Recent work by Chmiola et al. (2006) has revealed an unexpected increase in the capacitance of porous carbon electrodes using ionic liquids as electrolytes. The work has generated curiosity among both experimentalists and theoreticians. Here, we have performed molecular simulations using a recently developed technique (Punnathanam, 2014) for simulating supercapacitor systems. In this technique, the two electrodes (containing electrolyte in a slit pore) are simulated in two different boxes using the Gibbs ensemble methodology. This reduces the number of particles required and the interfacial interactions, which helps in reducing the computational load. The method simulates an electric double-layer capacitor (EDLC) with macroscopic electrodes using much smaller system sizes. In addition, the charges on individual electrode atoms are allowed to vary in response to the movement of electrolyte ions (i.e., the electrode is polarizable) while ensuring these atoms are at the same electric potential. We also present the application of our technique to EDLCs with the electrodes modeled as slit pores and as complex three-dimensional pore networks for different electrolyte geometries. The smallest pore geometry showed an increase in capacitance toward the potential of zero charge. This is in agreement with the new understanding of the electrical double layer in regions of dense ionic packing, as noted by Kornyshev's theoretical model (Kornyshev, 2007), which showed a similar trend. This is not addressed by the classical Gouy-Chapman theory for the electric double layer. Furthermore, the electrode polarizability simulated in the model improved the accuracy of the calculated capacitance. However, its addition did not significantly alter the capacitance values in the voltage range considered.
Gibbs Free-Energy Gradient along the Path of Glucose Transport through Human Glucose Transporter 3.
Liang, Huiyun; Bourdon, Allen K; Chen, Liao Y; Phelix, Clyde F; Perry, George
2018-06-11
Fourteen glucose transporters (GLUTs) play essential roles in human physiology by facilitating glucose diffusion across the cell membrane. Due to its central role in the energy metabolism of the central nervous system, GLUT3 has been thoroughly investigated. However, the Gibbs free-energy gradient (what drives the facilitated diffusion of glucose) has not been mapped out along the transport path, and some fundamental questions remain. Here we present a molecular dynamics study of GLUT3 embedded in a lipid bilayer to quantify the free-energy profile along the entire transport path of attracting a β-D-glucose from the interstitium to the inside of GLUT3 and, from there, releasing it to the cytoplasm by Arrhenius thermal activation. From the free-energy profile, we elucidate the unique Michaelis-Menten characteristics of GLUT3, low K_M and high V_max, specifically suited to neurons' high and constant demand for energy in their low-glucose environments. We compute GLUT3's binding free energy for β-D-glucose to be -4.6 kcal/mol, in agreement with the experimental value of -4.4 kcal/mol (K_M = 1.4 mM). We also compute the hydration energy of β-D-glucose, -18.0 kcal/mol, vs the experimental value of -17.8 kcal/mol. In this, we establish a dynamics-based connection from GLUT3's crystal structure to its cellular thermodynamics with quantitative accuracy. We predict equal Arrhenius barriers for glucose uptake and efflux through GLUT3, to be tested in future experiments.
Modelling metal-humate interactions: an approach based on the Gibbs-Donnan concept
Ephraim, J.H.
1995-01-01
Humic and fulvic acids constitute an appreciable portion of organic substances in both aquatic and terrestrial environments. Their ability to sequester metal ions and other trace elements has engaged the interest of numerous environmental scientists, and even though considerable advances have been made, much remains unknown in the area. The existence of high-molecular-weight fractions and functional-group heterogeneity has endowed these substances with ion-exchange characteristics. For example, the cation exchange capacities of some humic substances have been compared to those of smectites. Recent developments in solution chemistry have also indicated that humic substances have the capability to interact with other anions because of their amphiphilic nature. In this paper, metal-humate interaction is described by relying heavily on information obtained from treatment of the solution chemistry of ion exchangers as typical polymers. In such a treatment, the perturbations to the metal-humate interaction are estimated by recourse to the Gibbs-Donnan concept, where the humic substance molecule is envisaged as having a potential counter-ion-concentrating region around its molecular domain into which diffusible components can enter or leave depending on their corresponding electrochemical potentials. Information from studies with ion exchangers has been adapted to describe ionic equilibria involving these substances, making it possible to characterise the configuration/conformation of these natural organic acids and to correct for electrostatic effects in the metal-humate interaction. The resultant unified physicochemical approach has facilitated the identification and estimation of the complications to the solution chemistry of humic substances.
SUBLIMATION-DRIVEN ACTIVITY IN MAIN-BELT COMET 313P/GIBBS
Hsieh, Henry H. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Hainaut, Olivier [European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748 Garching bei München (Germany); Novaković, Bojan [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Studentski trg 16, 11000 Belgrade (Serbia); Bolin, Bryce [Observatoire de la Côte d’Azur, Boulevard de l’Observatoire, B.P. 4229, F-06304 Nice Cedex 4 (France); Denneau, Larry; Haghighipour, Nader; Kleyna, Jan; Meech, Karen J.; Schunova, Eva; Wainscoat, Richard J. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Fitzsimmons, Alan [Astrophysics Research Centre, Queens University Belfast, Belfast BT7 1NN (United Kingdom); Kokotanekova, Rosita; Snodgrass, Colin [Planetary and Space Sciences, Department of Physical Sciences, The Open University, Milton Keynes MK7 6AA (United Kingdom); Lacerda, Pedro [Max Planck Institute for Solar System Research, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Micheli, Marco [ESA SSA NEO Coordination Centre, Frascati, RM (Italy); Moskovitz, Nick; Wasserman, Lawrence [Lowell Observatory, 1400 W. Mars Hill Road, Flagstaff, AZ 86001 (United States); Waszczak, Adam, E-mail: hhsieh@asiaa.sinica.edu.tw [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States)
2015-02-10
We present an observational and dynamical study of newly discovered main-belt comet 313P/Gibbs. We find that the object is clearly active both in observations obtained in 2014 and in precovery observations obtained in 2003 by the Sloan Digital Sky Survey, strongly suggesting that its activity is sublimation-driven. This conclusion is supported by a photometric analysis showing an increase in the total brightness of the comet over the 2014 observing period, and dust modeling results showing that the dust emission persists over at least three months during both active periods, where we find start dates for emission no later than 2003 July 24 ± 10 for the 2003 active period and 2014 July 28 ± 10 for the 2014 active period. From serendipitous observations by the Subaru Telescope in 2004, when the object was apparently inactive, we estimate that the nucleus has an absolute R-band magnitude of H_R = 17.1 ± 0.3, corresponding to an effective nucleus radius of r_e ∼ 1.00 ± 0.15 km. The object's faintness at that time means we cannot rule out the presence of activity, and so this computed radius should be considered an upper limit. We find that 313P's orbit is intrinsically chaotic, having a Lyapunov time of T_l = 12,000 yr and being located near two three-body mean-motion resonances with Jupiter and Saturn, 11J-1S-5A and 10J+12S-7A, yet appears stable over >50 Myr in an apparent example of stable chaos. We furthermore find that 313P is the second main-belt comet, after P/2012 T1 (PANSTARRS), to belong to the ∼155 Myr old Lixiaohua asteroid family.
Comparison of Boltzmann and Gibbs entropies for the analysis of single-chain phase transitions
Shakirov, T.; Zablotskiy, S.; Böker, A.; Ivanov, V.; Paul, W.
2017-03-01
In the last 10 years, flat-histogram Monte Carlo simulations have contributed strongly to our understanding of the phase behavior of simple generic models of polymers. These simulations result in an estimate of the density of states of a model system. To connect this result with thermodynamics, one has to relate the density of states to the microcanonical entropy. In a series of publications, Dunkel, Hilbert and Hänggi argued that using the Gibbs definition of entropy instead of the Boltzmann one leads to a more consistent thermodynamic description of small systems. The latter is the logarithm of the density of states at a certain energy; the former is the logarithm of the integral of the density of states over all energies smaller than or equal to this energy. We compare the predictions of these two definitions for two polymer models: a coarse-grained model of a flexible-semiflexible multiblock copolymer and a coarse-grained model of the protein poly-alanine. Additionally, it is important to note that while Monte Carlo techniques are normally concerned with the configurational energy only, the microcanonical ensemble is defined for the complete energy. We show how taking the kinetic energy into account alters the predictions of the analysis. Finally, the microcanonical ensemble is supposed to represent a closed mechanical N-particle system. But due to Galilei invariance, such a system in general has two additional conservation laws: momentum and angular momentum. We also show how taking these conservation laws into account alters the results.
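The two entropy definitions compared above can be made concrete for the simplest bounded-spectrum system, N non-interacting two-level units. This is a sketch with an illustrative toy density of states, not the polymer models of the paper: the Boltzmann entropy is the log of the density of states, the Gibbs entropy is the log of its cumulative sum, and only the latter is monotone in energy.

```python
from math import comb, log

def entropies(N):
    """Boltzmann vs Gibbs microcanonical entropies (k_B = 1) for N
    non-interacting two-level units with total energies E = 0..N.
    omega(E) = C(N, E) is the density of states;
    Omega(E) = sum_{e <= E} C(N, e) is the cumulative state count."""
    omega = [comb(N, E) for E in range(N + 1)]
    Omega = []
    running = 0
    for w in omega:
        running += w
        Omega.append(running)
    S_B = [log(w) for w in omega]   # Boltzmann: log of density of states
    S_G = [log(W) for W in Omega]   # Gibbs: log of cumulative count
    return S_B, S_G
```

For this bounded spectrum, S_B peaks at mid-band and decreases above it (a negative Boltzmann temperature), while S_G is non-decreasing in E, so the Gibbs temperature stays positive; this is the distinction exploited in the Dunkel-Hilbert-Hänggi argument the abstract refers to.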
Gibbs-Thomson Law for Singular Step Segments: Thermodynamics Versus Kinetics
Chernov, A. A.
2003-01-01
Classical Burton-Cabrera-Frank theory presumes that thermal fluctuations are so fast that at any time the density of kinks on a step is comparable with the reciprocal intermolecular distance, so that the step rate is roughly isotropic within the crystal plane. Such azimuthal isotropy is, however, often not the case: kink density may be much lower. In particular, it was recently found on the (010) face of orthorhombic lysozyme that the interkink distance may exceed 500-600 intermolecular distances. Under such conditions, the Gibbs-Thomson law (GTL) may not be applicable: on a straight step segment between two corners, communication between the corners occurs exclusively by kink exchange. Annihilation between kinks of opposite sign generated at the corners is what brings the step energy into the GTL. If the step segment length l ≫ D/v, where D and v are the kink diffusivity and propagation rate, respectively, the opposite kinks have practically no chance to annihilate and the GTL is not applicable. The opposite condition of GTL applicability, l ≪ D/v, is equivalent to the requirement that the relative supersaturation Δμ/kT ≪ α/l, where α is the molecular size. Thus, the GTL may be applied to a segment of 10³α ≈ 3 × 10⁻⁵ cm ≈ 0.3 μm only if the supersaturation is less than 0.1%, while practically used driving forces for crystallization are much larger. Relationships alternative to the GTL for different, but low, kink densities are discussed. They support experimental evidence that the Burton-Cabrera-Frank theory of spiral growth predicts growth rates twice as low as the observed figures; likewise, application of the GTL results in unrealistic step energies, while the suggested kinetic laws give reasonable figures.
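The supersaturation bound quoted above follows from simple arithmetic. Assuming a molecular size α of 3 × 10⁻⁸ cm (an illustrative value implied by the abstract's 10³α ≈ 3 × 10⁻⁵ cm), a quick check:

```python
alpha = 3e-8          # molecular size alpha in cm (assumed typical value)
l = 1e3 * alpha       # step segment length: 10^3 * alpha ≈ 3e-5 cm ≈ 0.3 micron
# GTL applicability (l << D/v) is equivalent to requiring the relative
# supersaturation to satisfy delta_mu / kT << alpha / l
bound = alpha / l     # = 10^-3, i.e. supersaturation below 0.1%
```

Since practically used crystallization driving forces are well above 0.1%, the GTL fails for such segments, as the abstract argues.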
Bochicchio, F.; Risica, S.; Piermattei, S.
1993-01-01
The paper outlines the criteria and organization adopted by the Italian national institutions in carrying out a representative national survey to evaluate the distribution of radon concentration and the exposure of the Italian population to natural radiation indoors. The main items of the survey - i.e., sampling design, choice of the sample size (5000 dwellings), organization, analysis of the actual sample structure, a questionnaire to collect data about families and their dwellings, experimental setup, and communication with the public - are discussed. Some results, concerning a first fraction of the total sample, are also presented.
Brouwer, D.H.; Lidén, G.; Asbach, C.; Berges, M.; Tongeren, M. van
2014-01-01
The production of nanomaterials and nano-enabled products is associated with the potential for workers' exposure to (manufactured) nano-objects' agglomerates and aggregates (NOAA). Workplace air monitoring studies have been conducted to assess the actual exposure; however, the methods and strategies
Golbasi, Zehra; Kelleci, Meral; Dogan, Selma
2008-12-01
This study aims to describe and compare job satisfaction, coping strategies, and personal and organizational characteristics among nurses working in a hospital in Turkey. In this cross-sectional survey study, 186 nurses from Cumhuriyet University Hospital completed the Personal Data Form, the Minnesota Satisfaction Questionnaire, and the Ways of Coping Inventory. The response rate was 74.4%. Job satisfaction of the nurses was found to be moderate (mean: 3.46 ± 0.56). The nurses mostly employed self-confident and optimistic approaches, which are considered positive coping strategies for stress; yielding and helpless approaches were employed less often. A statistically significant positive relation was found between job satisfaction and the "self-confident approach" and "optimistic approach" dimensions of the Ways of Coping Inventory, and a negative relation between job satisfaction and the "helpless approach" dimension. Organizational and individual nurse characteristics were not found to be associated with job satisfaction, but job satisfaction of nurses employed on contract was found to be higher than that of permanent staff nurses. It was concluded that job satisfaction of Turkish hospital nurses was moderate and was heightened among nurses who succeeded in coping with stress. Higher levels of job satisfaction were associated with positive coping strategies. This study contributes to a growing body of evidence demonstrating the importance of coping strategies to nurses' job satisfaction.
Luque Salas, Bárbara; Yáñez Rodríguez, Virginia; Tabernero Urbieta, Carmen; Cuadrado, Esther
2017-02-01
This research aims to understand the role of coping strategies and self-efficacy expectations as predictors of life satisfaction in a sample of parents of boys and girls diagnosed with autism spectrum disorder. A total of 129 parents (64 men and 65 women) answered a questionnaire comprising life satisfaction, coping strategy, and self-efficacy scales. Using a regression model, the results show that the age of the child is associated with a lower level of satisfaction in parents. The results show that self-efficacy is the variable that best explains the level of satisfaction in mothers, while the use of problem solving explains a higher level of satisfaction in fathers. Men and women show similar levels of life satisfaction; however, significant differences were found in coping strategies, with women making greater use of emotion-expression and social-support strategies than men. The development of functional coping strategies and of a high level of self-efficacy represents a key tool for adapting to caring for children with autism. Our results indicate the necessity of early intervention with parents to promote coping strategies, self-efficacy, and a high level of life satisfaction.
Pugel, Betsy
2017-01-01
This presentation is a review of the timeline for Apollo's approach to Planetary Protection, then known as Planetary Quarantine. Return of samples from Apollo 11, 12 and 14 represented NASA's first attempts into conducting what is now known as Restricted Earth Return, where return of samples is undertaken by the Agency with the utmost care for the impact that the samples may have on Earth's environment due to the potential presence of microbial or other life forms that originate from the parent body (in this case, Earth's Moon).
iMOST Team; Campbell, K. A.; Farmer, J. D.; Van Kranendonk, M. J.; Fernandez-Remolar, D. C.; Czaja, A. D.; Altieri, F.; Amelin, Y.; Ammannito, E.; Anand, M.; Beaty, D. W.; Benning, L. G.; Bishop, J. L.; Borg, L. E.; Boucher, D.; Brucato, J. R.; Busemann, H.; Carrier, B. L.; Debaille, V.; Des Marais, D. J.; Dixon, M.; Ehlmann, B. L.; Fogarty, J.; Glavin, D. P.; Goreva, Y. S.; Grady, M. M.; Hallis, L. J.; Harrington, A. D.; Hausrath, E. M.; Herd, C. D. K.; Horgan, B.; Humayun, M.; Kleine, T.; Kleinhenz, J.; Mangold, N.; Mackelprang, R.; Mayhew, L. E.; McCubbin, F. M.; McCoy, J. T.; McLennan, S. M.; McSween, H. Y.; Moser, D. E.; Moynier, F.; Mustard, J. F.; Niles, P. B.; Ori, G. G.; Raulin, F.; Rettberg, P.; Rucker, M. A.; Schmitz, N.; Sefton-Nash, E.; Sephton, M. A.; Shaheen, R.; Shuster, D. L.; Siljestrom, S.; Smith, C. L.; Spry, J. A.; Steele, A.; Swindle, T. D.; ten Kate, I. L.; Tosca, N. J.; Usui, T.; Wadhwa, M.; Weiss, B. P.; Werner, S. C.; Westall, F.; Wheeler, R. M.; Zipfel, J.; Zorzano, M. P.
2018-04-01
The iMOST hydrothermal deposits sub-team has identified key samples and investigations required to delineate the character and preservational state of potential biosignatures in ancient hydrothermal deposits.
Davis, Alissa; Roth, Alexis; Brand, Juanita Ebert; Zimet, Gregory D; Van Der Pol, Barbara
2016-03-01
This study focused on understanding the coping strategies and related behavioural changes of women who were recently diagnosed with herpes simplex virus type 2. In particular, we were interested in how coping strategies, condom use, and acyclovir uptake evolve over time. Twenty-eight women who screened positive for herpes simplex virus type 2 were recruited through a public health STD clinic and the Indianapolis Community Court. Participants completed three semi-structured interviews with a woman researcher over a six-month period. The interviews focused on coping strategies for dealing with a diagnosis, frequency of condom use, suppressive and episodic acyclovir use, and the utilisation of herpes simplex virus type 2 support groups. Interview data were analysed using content analysis to identify and interpret concepts and themes that emerged from the interviews. Women employed a variety of coping strategies following a herpes simplex virus type 2 diagnosis. Of the women, 32% reported an increase in religious activities, 20% reported an increase in substance use, and 56% reported engaging in other coping activities. A total of 80% of women reported abstaining from sex immediately following the diagnosis, but 76% reported engaging in sex again by the six-month interview. Condom and medication use did not increase, and herpes simplex virus type 2 support groups were not utilised by participants. All participants reported engaging in at least one coping mechanism after receiving their diagnosis. A positive diagnosis did not seem to result in increased use of condoms for the majority of participants, and the use of acyclovir was low overall.
Peter Potapov
2013-04-01
Insular Southeast Asia is a hotspot of humid tropical forest cover loss. A sample-based monitoring approach quantifying forest cover loss from Landsat imagery was implemented to estimate gross forest cover loss for two eras, 1990–2000 and 2000–2005. For each time interval, a probability sample of 18.5 km × 18.5 km blocks was selected, and pairs of Landsat images acquired per sample block were interpreted to quantify forest cover area and gross forest cover loss. Stratified random sampling was implemented for 2000–2005 with MODIS-derived forest cover loss used to define the strata. A probability proportional to x (πpx) design was implemented for 1990–2000 with AVHRR-derived forest cover loss used as the x variable to increase the likelihood of including forest loss area in the sample. The estimated annual gross forest cover loss for Malaysia was 0.43 Mha/yr (SE = 0.04) during 1990–2000 and 0.64 Mha/yr (SE = 0.055) during 2000–2005. Our use of the πpx sampling design represents a first practical trial of this design for sampling satellite imagery. Although the design performed adequately in this study, a thorough comparative investigation of the πpx design relative to other sampling strategies is needed before general design recommendations can be put forth.
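The πpx design described above can be sketched as systematic probability-proportional-to-size selection over cumulative totals of the auxiliary variable; a minimal illustration only, under a standard textbook scheme (the function name and inputs are hypothetical, not the authors' implementation):

```python
import random

def pps_sample(units, x, n, seed=0):
    """Select n units with inclusion probability proportional to a
    positive auxiliary variable x (systematic PPS with a random start).

    units : list of unit identifiers (e.g., image blocks)
    x     : auxiliary values, one per unit (e.g., AVHRR-derived
            forest loss per block)
    n     : sample size
    """
    total = sum(x)
    step = total / n
    rng = random.Random(seed)
    start = rng.uniform(0, step)
    points = [start + k * step for k in range(n)]
    chosen, cum, i = [], 0.0, 0
    for unit, xi in zip(units, x):
        cum += xi
        # select this unit for every systematic point falling in its interval
        while i < len(points) and points[i] <= cum:
            chosen.append(unit)
            i += 1
    return chosen
```

Note that a unit whose auxiliary value exceeds the sampling step can be selected more than once; in practice such "certainty" units are usually removed before selection.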
Bertacchini, Lucia; Durante, Caterina; Marchetti, Andrea; Sighinolfi, Simona; Silvestri, Michele; Cocchi, Marina
2012-08-30
The aim of this work is to assess the potential of X-ray powder diffraction as a fingerprinting technique, i.e. as a preliminary tool to assess soil sample variability in terms of geochemical features, in the context of food geographical traceability. A correct approach to the sampling procedure is always a critical issue in scientific investigation. In particular, in food geographical traceability studies, where cause-effect relations between the soil of origin and the final foodstuff are sought, representative sampling of the territory under investigation is imperative. This research concerns a pilot study to investigate field homogeneity with respect to both field extension and sampling depth, also taking seasonal variability into account. Four Lambrusco production sites of the Modena district were considered. The X-ray diffraction spectra, collected on the powder of each soil sample, were treated as fingerprint profiles to be deciphered by multivariate and multi-way data analysis, namely PCA and PARAFAC. The differentiation pattern observed in soil samples, as obtained by this fast and non-destructive analytical approach, matches well with the results obtained by characterization with other, costlier analytical techniques, such as ICP/MS, GFAAS, FAAS, etc. Thus, the proposed approach furnishes a rational basis for reducing the number of soil samples to be collected for further analytical characterization, i.e. metal content, isotopic ratios of radiogenic elements, etc., while maintaining an exhaustive description of the investigated production areas. Copyright © 2012 Elsevier B.V. All rights reserved.
Chang, A.; Davis, H.; Frazar, B.; Haines, B.
1997-01-01
This paper presents Phase 3's sampling strategies, techniques, methods and substrates for assisting the District to resolve the complaints involving yellowish-brown staining and spotting of homes, cars, etc. These spots could not be easily washed off and some were permanent. The sampling strategies for the three phases were based on Phase 1 -- the identification of the reactive particles, conducted in October 1989 by APCD and IITRI; Phase 2 -- a study of the size distribution and concentration of reactive particle deposition as a function of distance and direction, conducted by Radian and LG and E; and Phase 3 -- the determination of the frequency of soiling events over a full year's duration, conducted in 1995 by APCD and IITRI. The sampling methods included two primary substrates -- ACE sheets and painted steel, and four secondary substrates -- mailbox, aluminum siding, painted wood panels and roof tiles. The secondary substrates were the main objects of the Valley Village complaints. The sampling technique included five Valley Village (VV) soiling/staining assessment sites and one site southwest of the power plant as a background/upwind site. The five VV sites northeast of the power plant covered a 50-degree sector extending 3/4 mile from the stacks. Hourly meteorological data for wind speeds and wind directions were collected. Based on this sampling technique, fifteen staining episodes were detected. Nine of them occurred in summer 1995
Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja
2015-01-01
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various
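The analytical tail corrections discussed above are the standard textbook expressions for a truncated Lennard-Jones fluid, obtained by assuming g(r) = 1 beyond the cutoff. A sketch in reduced units (a generic illustration of the formula, not the authors' simulation code):

```python
import math

def lj_tail_corrections(rho, rcut, sigma=1.0, eps=1.0):
    """Standard long-range (tail) corrections for a Lennard-Jones fluid
    truncated at rcut, assuming a uniform fluid (g(r) = 1) beyond the
    cutoff.  Returns (u_tail, p_tail): the correction to the potential
    energy per particle and to the pressure."""
    sr3 = (sigma / rcut) ** 3
    sr9 = sr3 ** 3
    u_tail = (8.0 / 3.0) * math.pi * rho * eps * sigma**3 * (sr9 / 3.0 - sr3)
    p_tail = (16.0 / 3.0) * math.pi * rho**2 * eps * sigma**3 * (2.0 * sr9 / 3.0 - sr3)
    return u_tail, p_tail
```

Both corrections decay as rcut^-3, which is why the difference between rcut = 3.5σ and rcut = 8σ reported above is small once the corrections are applied.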
Paleocurrents in the Charlie-Gibbs Fracture Zone during the Late Quaternary
Bashirova, L. D.; Dorokhova, E.; Sivkov, V.; Andersen, N.; Kuleshova, L. A.; Matul, A.
2017-12-01
The sedimentary processes prevailing in the Charlie-Gibbs Fracture Zone (CGFZ) are gravity flows. They rework pelagic sediments and contourites, and thereby partly mask the paleoceanographic information. The aim of this work is to study sediments of core AMK-4515, taken in the eastern part of the CGFZ. The sediment core AMK-4515 (52°03.14" N, 29°00.12" W; 370 cm length, water depth 3590 m) is located in the southern valley of the CGFZ. This natural deep corridor is influenced by both the westward Iceland-Scotland Overflow Water and an underlying counterflow from the Newfoundland Basin. An alternation of calcareous silty clays and hemipelagic clayey muds in the studied section indicates similarity between our core and long cores taken from the CGFZ. A sharp facies shift was found at 80 cm depth in the investigated core. Only the upper section (0-80 cm) is valid for paleoreconstruction. The planktonic foraminiferal distribution and the sea-surface temperatures (SST) derived from it allow the PF and NAC latitudinal migrations to be traced over the investigated period. The so-called sortable silt mean size (SS) was used as a proxy for reconstructing bottom current intensity. The age model is based on δ18O and AMS 14C dating, as well as ice-rafted debris (IRD) counts and CaCO3 content. Stratigraphic subdivision of this section allows two marine isotope stages (MIS) to be identified, covering the last 27 ka. We refer the sediments below this level (80-370 cm) to the upper part of a turbidite, which was formed as a result of a massive slide in the southern channel of the CGFZ. Sandy particles were deposited first, underlying the silts and clays. This short-term event occurred so quickly that pelagic sedimentation played no role and is not reflected in the grain size distributions. There is evidence for a significant role of gravity flows in sedimentation in the southern channel of the CGFZ. According to our data, the massive sediment slide occurred in the CGFZ about 27 ka ago. The authors are grateful to RSF
Zhu, Ying; Zhao, Rui; Piehowski, Paul D.; Moore, Ronald J.; Lim, Sujung; Orphan, Victoria J.; Paša-Tolić, Ljiljana; Qian, Wei-Jun; Smith, Richard D.; Kelly, Ryan T.
2018-04-01
One of the greatest challenges for mass spectrometry (MS)-based proteomics is the limited ability to analyze small samples. Here we investigate the relative contributions of liquid chromatography (LC), MS instrumentation and data analysis methods with the aim of improving proteome coverage for sample sizes ranging from 0.5 ng to 50 ng. We show that LC separations utilizing 30-µm-i.d. columns increase signal intensity by >3-fold relative to those using 75-µm-i.d. columns, leading to a 32% increase in peptide identifications. The Orbitrap Fusion Lumos mass spectrometer significantly boosted both sensitivity and sequencing speed relative to earlier generation Orbitraps (e.g., LTQ-Orbitrap), leading to a ~3× increase in peptide identifications and a 1.7× increase in identified protein groups for 2 ng tryptic digests of bacterial lysate. The Match Between Runs algorithm of the open-source MaxQuant software further increased proteome coverage by ~95% for 0.5 ng samples and by ~42% for 2 ng samples. The present platform is capable of identifying >3000 protein groups from tryptic digests equivalent to 50 HeLa cells or 100 THP-1 cells (~10 ng total protein), and >950 proteins from subnanogram bacterial and archaeal cell lysates. The present ultrasensitive LC-MS platform is expected to enable deep proteome coverage for subnanogram samples, including single mammalian cells.
Demarest, Stefaan; Molenberghs, Geert; Van der Heyden, Johan; Gisle, Lydia; Van Oyen, Herman; de Waleffe, Sandrine; Van Hal, Guido
2017-11-01
Substitution of non-participating households is used in the Belgian Health Interview Survey (BHIS) as a method to obtain the predefined net sample size. Yet, the possible effects of applying substitution on response rates and health estimates remain uncertain. In this article, the process of substitution and its impact on response rates and health estimates are assessed. The response rates (RR) -- at both household and individual level -- according to the sampling criteria were calculated for each stage of the substitution process, together with the individual accrual rate (AR). Unweighted and weighted health estimates were calculated before and after applying substitution. Of the 10,468 members of 4878 initial households, 5904 members (RRind: 56.4%) of 2707 households (RRhh: 55.5%) participated. For the three successive (matched) substitutes, the RR dropped to 45%. The composition of the net sample resembles that of the initial sample. Applying substitution did not produce any important distorting effects on the estimates. Applying substitution leads to an increase in non-participation, but does not affect the estimates.
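The response-rate bookkeeping described above can be sketched as follows. The function and the substitute-wave figures in the test are hypothetical bookkeeping for illustration; only the initial-sample numbers come from the abstract:

```python
def response_rates(waves):
    """Response rates across a household substitution process.

    waves : list of (contacted, participated) tuples; waves[0] is the
            initial sample, later entries are successive matched
            substitutes (hypothetical bookkeeping, not the BHIS code).

    Returns (per_wave, accrual):
      per_wave : response rate within each wave
      accrual  : participants from any wave / initially sampled units
    """
    per_wave = [participated / contacted for contacted, participated in waves]
    total_participants = sum(p for _, p in waves)
    accrual = total_participants / waves[0][0]
    return per_wave, accrual
```

The accrual rate exceeds the initial response rate precisely because substitution adds participants without enlarging the initial sample denominator.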
Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.
2011-01-01
The actual spatial distribution of microorganisms within a batch of food influences the results of sampling for microbiological testing when this distribution is non-homogeneous. In the case of pathogens being non-homogeneously distributed, it markedly influences public health risk. This study
Gibbs free-energy difference between the glass and crystalline phases of a Ni-Zr alloy
Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.
1993-01-01
The heats of eutectic melting and devitrification, and the specific heats of the crystalline, glass, and liquid phases, have been measured for a Ni24Zr76 alloy. The data are used to calculate the Gibbs free-energy difference, ΔG(AC), between the real glass and the crystal on the assumption that the liquid-glass transition is second order. The result shows that ΔG(AC) continuously increases as the temperature decreases, in contrast to the ideal glass case where ΔG(AC) is assumed to be independent of temperature.
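A free-energy difference of this kind follows from standard calorimetric relations. A sketch assuming, for simplicity, a constant liquid-crystal specific-heat difference dCp (the study itself used measured, temperature-dependent specific heats, so this is an illustration of the thermodynamics, not the paper's calculation):

```python
import math

def delta_g(T, Tf, dHf, dCp):
    """Gibbs free-energy difference between the undercooled liquid
    (or glass, if the liquid-glass transition is treated as second
    order) and the crystal at temperature T, from the heat of fusion
    dHf at the melting point Tf and a constant specific-heat
    difference dCp:

      dH(T) = dHf - dCp*(Tf - T)
      dS(T) = dHf/Tf - dCp*ln(Tf/T)
      dG(T) = dH(T) - T*dS(T)
    """
    dH = dHf - dCp * (Tf - T)
    dS = dHf / Tf - dCp * math.log(Tf / T)
    return dH - T * dS
```

By construction dG vanishes at Tf and grows on undercooling; a nonzero dCp reduces the growth relative to the linear (dCp = 0) estimate.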
Size Fluctuations of Near Critical Nuclei and Gibbs Free Energy for Nucleation of BDA on Cu(001)
Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J. W.; Poelsema, Bene
2012-07-01
We present a low-energy electron microscopy study of nucleation and growth of BDA on Cu(001) at low supersaturation. At sufficiently high coverage, a dilute BDA phase coexists with c(8×8) crystallites. The real-time microscopic information allows a direct visualization of near-critical nuclei, determination of the supersaturation and the line tension of the crystallites, and, thus, derivation of the Gibbs free energy for nucleation. The resulting critical nucleus size nicely agrees with the measured value. Nuclei up to 4-6 times larger still decay with finite probability, urging reconsideration of the classic perception of a critical nucleus.
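The link between supersaturation, line tension, and the Gibbs free energy for nucleation can be illustrated with textbook two-dimensional classical nucleation theory; a sketch assuming circular islands (an illustration of the concept, not the paper's analysis):

```python
import math

def critical_nucleus_2d(dmu, gamma, a=1.0):
    """Textbook 2D classical nucleation sketch:
      dG(n) = -n*dmu + gamma * c * sqrt(n),
    with dmu the supersaturation (free-energy gain per molecule),
    gamma the line tension, and c = 2*sqrt(pi*a) the perimeter
    prefactor for a circular island of n molecules of area a each.

    Returns (n_star, dG_star): the critical nucleus size, where dG is
    maximal, and the nucleation barrier."""
    c = 2.0 * math.sqrt(math.pi * a)
    n_star = (gamma * c / (2.0 * dmu)) ** 2
    dg_star = gamma * c * math.sqrt(n_star) - n_star * dmu
    return n_star, dg_star
```

Setting the derivative of dG(n) to zero gives n* = (gamma*c / (2*dmu))^2, and the barrier simplifies to dG* = n* * dmu, so larger supersaturation shrinks the critical nucleus quadratically.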
Duque, Michel; Andraca, Adriana; Goldstein, Patricia; del Castillo, Luis Felipe
2018-04-01
The Adam-Gibbs equation has been used for more than five decades, yet a question remains unanswered about the temperature dependence of the chemical potential it includes. It is now well established that the behavior of fragile glass formers depends on the temperature region in which they are studied. Transport coefficients change due to the appearance of heterogeneity in the liquid as it is supercooled. Using the different forms of the logarithmic shift factor and the form of the configurational entropy, we evaluate this temperature dependence and present a discussion of our results.
Starr, Francis W.; Douglas, Jack F.; Sastry, Srikanth
2013-01-01
We carefully examine common measures of dynamical heterogeneity for a model polymer melt and test how these scales compare with those hypothesized by the Adam and Gibbs (AG) and random first-order transition (RFOT) theories of relaxation in glass-forming liquids. To this end, we first analyze clusters of highly mobile particles, the string-like collective motion of these mobile particles, and clusters of relatively low mobility. We show that the time scale of the high-mobility clusters and stri...
Borak, T B
1986-04-01
Periodic grab sampling in combination with time-of-occupancy surveys has been the accepted procedure for estimating the annual exposure of underground uranium miners to radon daughters. Temporal variations in the concentration of potential alpha energy in the mine generate uncertainties in this process. A system to randomize the selection of locations for measurement is described which can reduce uncertainties and eliminate systematic biases in the data. In general, a sampling frequency of 50 measurements per year is sufficient to satisfy the criterion that the annual exposure be determined, in working level months, to within ±50% of the true value with a 95% level of confidence. Suggestions for implementing this randomization scheme are presented.
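A criterion of this form (estimate within ±50% of the true value with 95% confidence, for a given sampling frequency) can be checked by simulation. A sketch using a lognormal stand-in for the temporal variability of concentrations; the distribution and its parameters are illustrative assumptions, not values from the study:

```python
import math
import random
import statistics

def coverage(n_samples, n_trials=2000, seed=1):
    """Monte Carlo check of a grab-sampling scheme: estimate an annual
    mean concentration from n random grab samples of a fluctuating
    process (here lognormal, an assumed stand-in for mine-air
    variability) and count how often the estimate falls within
    +/-50% of the true mean."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0                      # assumed lognormal parameters
    true_mean = math.exp(mu + sigma**2 / 2)
    hits = 0
    for _ in range(n_trials):
        xs = [rng.lognormvariate(mu, sigma) for _ in range(n_samples)]
        est = statistics.fmean(xs)
        if abs(est - true_mean) <= 0.5 * true_mean:
            hits += 1
    return hits / n_trials
```

Raising the sampling frequency tightens the distribution of the annual estimate, so the empirical coverage climbs toward (and past) the 95% target as n grows.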
Elisavet Tsakelidou
2017-03-01
The effect of endogenous interferences of serum in multi-targeted metabolite profiling HILIC-MS/MS analysis was investigated by studying different sample preparation procedures. A modified QuEChERS dispersive SPE protocol, a HybridSPE protocol, and a combination of liquid extraction with protein precipitation were compared to a simple protein precipitation. Evaluation of extraction efficiency and sample clean-up was performed for all methods. The SPE sorbent materials tested were found to retain hydrophilic analytes together with endogenous interferences, so additional elution steps were needed. Liquid extraction was not shown to minimise matrix effects. In general, it was observed that a balance must be reached between recovery, efficient clean-up, and sample treatment time when a wide range of metabolites is analysed. A quick step for removing phospholipids prior to the determination of hydrophilic endogenous metabolites is required; however, based on the results from the applied methods, further studies are needed to achieve high recoveries for all metabolites.
Wullschleger, S. D.; Charsley-Groffman, L.; Baltzer, J. L.; Berg, A. A.; Griffith, P. C.; Jafarov, E. E.; Marsh, P.; Miller, C. E.; Schaefer, K. M.; Siqueira, P.; Wilson, C. J.; Kasischke, E. S.
2017-12-01
There is considerable interest in using L- and P-band Synthetic Aperture Radar (SAR) data to monitor variations in aboveground woody biomass, soil moisture, and permafrost conditions in high-latitude ecosystems. Such information is useful for quantifying spatial heterogeneity in surface and subsurface properties, and for model development and evaluation. To conduct these studies, it is desirable that field studies share a common sampling strategy so that the data from multiple sites can be combined and used to analyze variations in conditions across different landscape geomorphologies and vegetation types. In 2015, NASA launched the decade-long Arctic-Boreal Vulnerability Experiment (ABoVE) to study the sensitivity and resilience of these ecosystems to disturbance and environmental change. NASA is able to leverage its remote sensing strengths to collect airborne and satellite observations to capture important ecosystem properties and dynamics across large spatial scales. A critical component of this effort includes collection of ground-based data that can be used to analyze, calibrate and validate remote sensing products. ABoVE researchers at a large number of sites located in important Arctic and boreal ecosystems in Alaska and western Canada are following common design protocols and strategies for measuring soil moisture, thaw depth, biomass, and wetland inundation. Here we elaborate on those sampling strategies as used in the 2017 summer SAR campaign and address the sampling design and measurement protocols for supporting the ABoVE aerial activities. Plot size, transect length, and distribution of replicates across the landscape systematically allowed investigators to optimally sample a site for soil moisture, thaw depth, and organic layer thickness. Specific examples and data sets are described for the Department of Energy's Next-Generation Ecosystem Experiments (NGEE Arctic) project field sites near Nome and Barrow, Alaska. Future airborne and satellite
Studer, Joseph; Baggio, Stéphanie; Mohler-Kuo, Meichun; Simon, Olivier; Daeppen, Jean-Bernard; Gmel, Gerhard
2016-06-01
The study aimed to identify different patterns of gambling activities (PGAs) and to investigate how PGAs differed in gambling problems, substance use outcomes, personality traits and coping strategies. A representative sample of 4989 young Swiss males completed a questionnaire assessing seven distinct gambling activities, gambling problems, substance use outcomes, personality traits and coping strategies. PGAs were identified using latent class analysis (LCA). Differences between PGAs in gambling and substance use outcomes, personality traits and coping strategies were tested. LCA identified six different PGAs. With regard to gambling and substance use outcomes, the three most problematic PGAs were extensive gamblers, followed by private gamblers, and electronic lottery and casino gamblers, respectively. By contrast, the three least detrimental PGAs were rare or non-gamblers, lottery only gamblers and casino gamblers. With regard to personality traits, compared with rare or non-gamblers, private and casino gamblers reported higher levels of sensation seeking. Electronic lottery and casino gamblers, private gamblers and extensive gamblers had higher levels of aggression-hostility. Extensive and casino gamblers reported higher levels of sociability, whereas casino gamblers reported lower levels of anxiety-neuroticism. Extensive gamblers used more maladaptive and less adaptive coping strategies than other groups. Results suggest that gambling is not a homogeneous activity since different types of gamblers exist according to the PGA they are engaged in. Extensive gamblers, electronic and casino gamblers and private gamblers may have the most problematic PGAs. Personality traits and coping skills may predispose individuals to PGAs associated with more or less negative outcomes.
Variations in the Holocene North Atlantic Bottom Current Strength in the Charlie Gibbs Fracture Zone
Kissel, C.; Van Toer, A.; Cortijo, E.; Turon, J.
2011-12-01
The changes in the strength of the North Atlantic bottom current during the Holocene period are presented via the study of cores located at the western termination of the northern deep channel of the Charlie-Gibbs fracture zone. This natural, roughly E-W corridor is bathed by the Iceland-Scotland overflow water (ISOW) as it passes westward out of the Iceland Basin into the western North Atlantic basin. At present, it is also described as the place where southern-sourced, silicate-rich Lower Deep Water (LDW) derived from the Antarctic Bottom Water (AABW) passes westward, mixing with the ISOW. We conducted a deep-water multiproxy analysis on two nearby cores, coupling magnetic properties, anisotropy, sortable silt and benthic foraminifera isotopes. The first core had been taken by the R. V. Charcot in 1977 and the second one is a CASQ core taken during the IMAGES-AMOCINT MD168 cruise in the framework of the 06-EuroMARC-FP-008 Project on board the R.V. Marion Dufresne (French Polar Institute, IPEV) in 2008. The radiocarbon ages indicate an average sedimentation rate of about 50 cm/kyr through the middle and late Holocene, allowing a data resolution ranging from 40 to 100 years depending on the proxy. In each core, we observe long-term and short-term changes in the strength of the bottom currents. Over the long term, a decrease in the amount of magnetic particles (normalized by the carbonate content) occurs first from 10 kyr to 8.6 kyr and then between 6 and 2 kyr, before reaching a steady state. Following Kissel et al. (2009), this indicates a decrease in the ISOW strength. The mean sortable silt shows exactly the same pattern, indicating that not only the intensity of the ISOW but the whole deep water mass bathing the sites has decreased. Over the short term, a first very prominent event centered at about 8.4 kyr (cal. ages) is marked by a pronounced minimum in magnetic content and the smallest mean sortable silt sizes. This is typical of an abrupt reduction in deep flow
Jacek Hunicz
2015-01-01
In this study we summarize and analyze experimental observations of cyclic variability in homogeneous charge compression ignition (HCCI) combustion in a single-cylinder gasoline engine. The engine was configured with negative valve overlap (NVO) to trap residual gases from prior cycles and thus enable auto-ignition in successive cycles. Correlations were developed between different fuel injection strategies and cycle-average combustion and work output profiles. Hypothesized physical mechanisms based on these correlations were then compared with trends in cycle-by-cycle predictability as revealed by sample entropy. The results of these comparisons help to clarify how fuel injection strategy can interact with prior-cycle effects to affect combustion stability, and so contribute to the design of control methods for HCCI engines.
Voitsekhovych, Oleg V.; Lavrova, Tatiana V.; Kostezh, Alexander B.
2012-01-01
There are many sites around the world where the environment is still affected by contamination from past uranium production. The authors' experience shows that a lack of site characterization data and incomplete or unreliable environmental monitoring studies can significantly limit the quality of safety assessment procedures and the priority-action analyses needed for remediation planning. During recent decades, the analytical laboratories of many enterprises currently responsible for establishing site-specific environmental monitoring programs have significantly improved their sampling and analytical capacities. However, a lack of experience in optimal site-specific sampling strategy planning, together with insufficient experience in applying the required analytical techniques, such as modern alpha-beta radiometers, gamma and alpha spectrometry, and liquid-scintillation methods for the determination of U-Th series radionuclides in the environment, prevents these laboratories from developing and conducting monitoring programs efficiently as a basis for further safety assessment in decision-making procedures. This paper draws some conclusions from the experience of establishing monitoring programs in Ukraine and proposes practical steps to optimize sampling strategy planning and the analytical procedures to be applied to areas requiring safety assessment and justification for their potential remediation and safe management. (authors)
Mansour, Chourouk; Ouarezki, Yasmine; Jones, Jeremy; Fitch, Moira; Smith, Sarah; Mason, Avril; Donaldson, Malcolm
2017-10-01
To determine ages at first capillary sampling and notification and age at notification after second sampling in Scottish newborns referred with elevated thyroid-stimulating hormone (TSH). Referrals between 1980 and 2014 inclusive were grouped into seven 5-year blocks and analysed according to agreed standards. Of 2 116 132 newborn infants screened, 919 were referred with capillary TSH elevation ≥8 mU/L of whom 624 had definite (606) or probable (18) congenital hypothyroidism. Median age at first sampling fell from 7 to 5 days between 1980 and 2014 (standard 4-7 days), with 22, 8 and 3 infants sampled >7 days during 2000-2004, 2005-2009 and 2010-2014. Median age at notification was consistently ≤14 days, range falling during 2000-2004, 2005-2009 and 2010-2014 from 6 to 78, 7-52 and 7-32 days with 12 (14.6%), 6 (5.6%) and 5 (4.3%) infants notified >14 days. However 18/123 (14.6%) of infants undergoing second sampling from 2000 onwards breached the ≤26-day standard for notification. By 2010-2014, the 91 infants with confirmed congenital hypothyroidism had shown favourable median age at first sample (5 days) with start of treatment (10.5 days) approaching age at notification. Most standards for newborn thyroid screening are being met by the Scottish programme, but there is a need to reduce age range at notification, particularly following second sampling. Strategies to improve screening performance include carrying out initial capillary sampling as close to 96 hours as possible; introducing 6-day laboratory reporting and use of electronic transmission for communicating repeat requests. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
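The Gibbs sampler framework and the idea of Rao-Blackwellization can be illustrated on a toy system in the spirit of the harmonic example mentioned above; this sketch with two harmonic "alchemical" states is a simplification for illustration, not the GSLD/CHARMM implementation:

```python
import math
import random

def gsld_toy(n_steps=200_000, k0=1.0, k1=4.0, seed=42):
    """Toy Gibbs-sampler lambda-dynamics with a Rao-Blackwell free
    energy estimator for two harmonic states U_l(x) = k_l * x**2 / 2
    (beta = 1).  Alternates exact conditional draws:
        x | l  ~  Normal(0, 1/sqrt(k_l))
        l | x  ~  p(l|x) proportional to exp(-k_l * x**2 / 2)
    and Rao-Blackwellizes by accumulating the conditional
    probabilities p(l|x) instead of counting visits to each state.
    Exact answer for this system: dF = 0.5 * ln(k1/k0)."""
    rng = random.Random(seed)
    ks = (k0, k1)
    lam = 0
    s0 = s1 = 0.0
    for _ in range(n_steps):
        x = rng.gauss(0.0, 1.0 / math.sqrt(ks[lam]))     # draw x | lambda
        w0 = math.exp(-k0 * x * x / 2)
        w1 = math.exp(-k1 * x * x / 2)
        p1 = w1 / (w0 + w1)
        lam = 1 if rng.random() < p1 else 0              # draw lambda | x
        s0 += 1.0 - p1                                   # Rao-Blackwell sums
        s1 += p1
    return -math.log(s1 / s0)
```

Averaging p(l|x) rather than 0/1 visit indicators is what makes the estimator Rao-Blackwellized: it has the same expectation as visit counting but lower variance, mirroring the advantage of the RBE claimed in the abstract.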
Gelb, Lev D; Chakraborty, Somendra Nath
2011-12-14
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics
Black, Clanton C
2008-01-01
The very personal touch of Professor Martin Gibbs as a worldwide advocate for photosynthesis and plant physiology was lost with his death in July 2006. Widely known for his engaging, humorous personality and his humanitarian lifestyle, Martin Gibbs excelled as a strong international science diplomat, like a family patriarch encouraging science and plant scientists around the world. Immediately after World War II he was a pioneer at the Brookhaven National Laboratory in the use of (14)C to elucidate carbon flow in metabolism, and particularly carbon pathways in photosynthesis. His leadership on carbon metabolism and photosynthesis extended over four decades of working in collaboration with a host of students and colleagues. In 1962, he was selected as the Editor-in-Chief of Plant Physiology. That appointment initiated three decades of strong influence by Gibbs on plant research and photosynthesis. Plant Physiology became and remains a premier source of new knowledge about the vital and primary roles of plants in earth's environmental history and the energetics of our green-blue planet. His leadership and charismatic humanitarian character became the quintessence of excellence worldwide. Martin Gibbs was in every sense the personification of a model mentor, not only for scientists but also in his devotion to family. Here we pay tribute and honor to an exemplary humanistic mentor, Martin Gibbs.
Jacome, Paulo A.D.; Landim, Mariana C.; Garcia, Amauri; Furtado, Alexandre F.; Ferreira, Ivaldo L.
2011-01-01
Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented, permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of the Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which are strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on the Powell hybrid algorithm and a finite-difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software package in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of alloy composition.
Raleigh, M. S.; Smyth, E.; Small, E. E.
2017-12-01
The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
Dos Santos Augusto, Amanda; Barsanelli, Paulo Lopes; Pereira, Fabiola Manhas Verbi; Pereira-Filho, Edenir Rodrigues
2017-04-01
This study describes the application of laser-induced breakdown spectroscopy (LIBS) for the direct determination of Ca, K and Mg in powdered milk and solid dietary supplements. The following two calibration strategies were applied: (i) use of the samples to calculate calibration models (milk) and (ii) use of sample mixtures (supplements) to obtain a calibration curve. In both cases, reference values obtained from inductively coupled plasma optical emission spectroscopy (ICP OES) after acid digestion were used. The emission line selection from LIBS spectra was accomplished by analysing the regression coefficients of partial least squares (PLS) regression models, and wavelengths of 534.947, 766.490 and 285.213 nm were chosen for Ca, K and Mg, respectively. In the case of the determination of Ca in supplements, it was necessary to perform a dilution (10-fold) of the standards and samples to minimize matrix interference. The average accuracy for powdered milk ranged from 60% to 168% for Ca, 77% to 152% for K and 76% to 131% for Mg. In the case of dietary supplements, the standard error of prediction (SEP) varied from 295 (Mg) to 3782 mg kg(-1) (Ca). The proposed method presented an analytical frequency of around 60 samples per hour, and the sample manipulation step was drastically reduced, with no generation of toxic chemical residues.
Zheng, Lianqing; Chen, Mengen; Yang, Wei
2009-06-21
To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly reduce the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the motion of the collective variable, is likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.
Yingst, R. A.; Bartley, J. K.; Chidsey, T. C.; Cohen, B. A.; Gilleaudeau, G. J.; Hynek, B. M.; Kah, L. C.; Minitti, M. E.; Williams, R. M. E.; Black, S.; Gemperline, J.; Schaufler, R.; Thomas, R. J.
2018-05-01
The GHOST field tests are designed to isolate and test science-driven rover operations protocols, to determine best practices. During a recent field test at a potential Mars 2020 landing site analog, we tested two Mars Science Laboratory data-acquisition and decision-making methods to assess resulting science return and sample quality: a linear method, where sites of interest are studied in the order encountered, and a "walkabout-first" method, where sites of interest are examined remotely before down-selecting to a subset of sites that are interrogated with more resource-intensive instruments. The walkabout method cost less time and fewer resources, while increasing confidence in interpretations. Contextual data critical to evaluating site geology was acquired earlier than for the linear method, and given a higher priority, which resulted in development of more mature hypotheses earlier in the analysis process. Combined, this saved time and energy in the collection of data with more limited spatial coverage. Based on these results, we suggest that the walkabout method be used where doing so would provide early context and time for the science team to develop hypotheses and critical tests; and that in gathering context, coverage may be more important than higher resolution.
Cihan Ulas
2013-11-01
Simultaneous localization and mapping (SLAM) plays an important role in fully autonomous systems when a GNSS (global navigation satellite system) is not available. Studies in both 2D indoor and 3D outdoor SLAM are based on the appearance of environments and utilize scan-matching methods to find the rigid body transformation parameters between two consecutive scans. In this study, a fast and robust scan-matching method based on feature extraction is introduced. Since the method matches certain geometric structures, such as plane segments, outliers and noise in the point cloud are considerably reduced, making the proposed scan-matching algorithm more robust than conventional methods. In addition, the registration time and the number of iterations are significantly reduced, since the number of matching points is efficiently decreased. As the scan-matching framework, an improved version of the normal distribution transform (NDT) is used: the probability density functions (PDFs) of the reference scan are generated as in the traditional NDT, while the feature extraction, based on stochastic plane detection, is applied only to the input scan. Using an experimental dataset from an outdoor environment (a university campus), we obtained satisfactory performance results. Moreover, the feature extraction part of the algorithm can be viewed as a special sampling strategy for scan-matching and is compared to other sampling strategies, such as random sampling and grid-based sampling, the latter of which was first used in the NDT. This study thus also shows the effect of subsampling on the performance of the NDT.
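As a minimal sketch of the grid-based sampling baseline mentioned above, a voxel-grid filter keeps one representative point per occupied cell; the cell size and the synthetic point cloud below are hypothetical, not the study's campus dataset:

```python
import numpy as np

def grid_subsample(points: np.ndarray, cell: float) -> np.ndarray:
    """Keep one representative point (the centroid) per occupied grid cell."""
    keys = np.floor(points / cell).astype(np.int64)
    # Group points by cell index and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_cells = int(inverse.max()) + 1
    sums = np.zeros((n_cells, points.shape[1]))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, (5000, 3))   # synthetic scan points
reduced = grid_subsample(cloud, cell=1.0)
print(len(cloud), "->", len(reduced))
```

Plane-segment extraction replaces this purely geometric thinning with a structure-aware one, which is the source of the robustness gain the abstract reports.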
Platteau, Tom; Fransen, Katrien; Apers, Ludwig; Kenyon, Chris; Albers, Laura; Vermoesen, Tine; Loos, Jasna; Florence, Eric
2015-09-01
As HIV remains a public health concern, increased testing among those at risk for HIV acquisition is important. Men who have sex with men (MSM) are the most important group for targeted HIV testing in Europe. Several new strategies have been developed and implemented to increase HIV-testing uptake in this group, among them the Swab2know project. In this project, we aim to assess the acceptability and feasibility of outreach and online HIV testing using oral fluid samples as well as Web-based delivery of test results. Sample collection happened between December 2012 and April 2014 via outreach and online sampling among MSM. Test results were communicated through a secured website. HIV tests were executed in the laboratory. Each reactive sample needed to be confirmed using state-of-the-art confirmation procedures on a blood sample. Close follow-up of participants who did not pick up their results, and those with reactive results, was included in the protocol. Participants were asked to provide feedback on the methodology using a short survey. During 17 months, 1071 tests were conducted on samples collected from 898 men. Over half of the samples (553/1071, 51.63%) were collected during 23 outreach sessions. During an 8-month period, 430 samples out of 1071 (40.15%) were collected from online sampling. Additionally, 88 samples out of 1071 (8.22%) were collected by two partner organizations during face-to-face consultations with MSM and male sex workers. Results of 983 out of 1071 tests (91.78%) had been collected from the website. The pickup rate was higher among participants who ordered their kit online (421/430, 97.9%) compared to those participating during outreach activities (559/641, 87.2%; P<.001). Online participants were more likely to have never been tested before (17.3% vs 10.0%; P=.001) and reported more sexual partners in the 6 months prior to participation in the project (mean 7.18 vs 3.23; P<.001). [...] online counseling tool), and in studying the cost effectiveness of the
Zhihua Wang
2017-05-01
Crude oil is generally produced together with water, and the water cut of producing oil wells commonly increases over their lifetime, so emulsions are inevitably created during oil production. The formation of emulsions presents a costly problem, particularly in surface processing, both in terms of transportation energy consumption and separation efficiency. To deal with the production and operational problems related to crude oil emulsions, and especially to ensure the separation and transportation of crude oil-water systems, it is necessary to better understand the emulsification mechanism of crude oil under different conditions from the aspects of bulk and interfacial properties. The concept of shearing energy was introduced in this study to reveal the driving force for emulsification. The relationship between shearing stress in the flow field and interfacial tension (IFT) was established, and the correlation between shearing energy and interfacial Gibbs free energy was developed. The potential of the developed correlation model was validated using experimental and field data on emulsification behavior. It was also shown how droplet deformation could be predicted from a random deformation degree and orientation angle. The results indicated that shearing energy, as the energy produced by shearing stress acting in the flow field, is the driving force activating the emulsification behavior. The deformation degree and orientation angle of a dispersed-phase droplet are associated with the interfacial properties, rheological properties and the experienced degree of turbulence. The correlation between shearing stress and IFT can be quantified if droplet deformation degree vs. droplet orientation angle data are available. When the water cut is close to the inversion point of a waxy crude oil emulsion, the interfacial Gibbs free energy change decreased and the shearing energy increased. This feature is also presented in the special regions where
Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J
2017-11-01
Methodological applications of the high sensitivity genus-specific Schistosoma CAA strip test, allowing detection of single worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool thus indicating overall worm burden and prevalence. When requiring test results at the individual level, smaller pools need to be analysed with the pool-size based on expected prevalence or when unknown, on the average CAA level of a larger group; CAA negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced i.e. in low endemic settings as the pool size can be increased as opposed to prevalence decrease. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure indicating worm burden. Pooling strategies allowing this type of large scale testing are feasible with the various CAA strip test formats and do not affect
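The test-count arithmetic behind such pooling can be illustrated with the classical Dorfman two-stage scheme. This is an illustration only: the quantitative CAA strip test supports richer strategies than simple positive/negative pooling, and the prevalence and pool sizes below are hypothetical:

```python
def expected_tests_per_person(prevalence: float, pool_size: int) -> float:
    """Dorfman two-stage pooling: one test per pool, plus individual
    retests of every member of a pool that tests positive."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

# In a low-endemic setting, larger pools save more tests (hypothetical numbers):
for k in (5, 10, 20):
    print(k, round(expected_tests_per_person(0.01, k), 3))
```

At 1% prevalence a pool of 10 needs roughly a fifth of a test per person, which mirrors the abstract's point that low-endemic settings permit larger pools and fewer tests.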
Grazhdan, K. V.; Gamov, G. A.; Dushina, S. V.; Sharnin, V. A.
2012-11-01
Coefficients of the interphase distribution of nicotinic acid are determined in aqueous solution systems of ethanol-hexane and DMSO-hexane at 25.0 ± 0.1°C. They are used to calculate the Gibbs energy of the transfer of nicotinic acid from water into aqueous solutions of ethanol and dimethylsulfoxide. The Gibbs energy values for the transfer of the molecular and zwitterionic forms of nicotinic acid are obtained by means of UV spectroscopy. The diametrically opposite effect of the composition of binary solvents on the transfer of the molecular and zwitterionic forms of nicotinic acid is noted.
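The computation of transfer Gibbs energies from distribution coefficients follows the standard thermodynamic relation below; the symbols are generic, not the paper's exact notation:

```latex
\Delta G_{\mathrm{tr}}(\mathrm{w}\to\mathrm{w}+\mathrm{s})
  = -RT \ln\frac{K_d^{\,\mathrm{w}+\mathrm{s}}}{K_d^{\,\mathrm{w}}}
```

where the K_d are the interphase distribution coefficients of the solute measured against hexane in the purely aqueous system and in the mixed (water + cosolvent) system, respectively.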
Gibbs free energy difference between the undercooled liquid and the beta phase of a Ti-Cr alloy
Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.
1992-01-01
The heat of fusion and the specific heats of the solid and liquid have been experimentally determined for a Ti60Cr40 alloy. The data are used to evaluate the Gibbs free energy difference, delta-G, between the liquid and the beta phase as a function of temperature to verify a reported spontaneous vitrification (SV) of the beta phase in Ti-Cr alloys. The results show that SV of an undistorted beta phase in the Ti60Cr40 alloy at 873 K is not feasible because delta-G is positive at the temperature. However, delta-G may become negative with additional excess free energy to the beta phase in the form of defects.
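The evaluation of delta-G from the heat of fusion and the specific heats can be sketched as follows, assuming for simplicity a temperature-independent specific-heat difference; the numeric inputs are hypothetical placeholders, not the measured Ti60Cr40 values:

```python
import math

def delta_g(T, Tm, dHf, dcp):
    """Gibbs free energy difference G_liquid - G_solid at temperature T,
    assuming a temperature-independent specific-heat difference
    dcp = cp_liquid - cp_solid. dHf is the heat of fusion at the
    melting point Tm. Derived from dG = dH - T*dS with
    dH(T) = dHf - dcp*(Tm - T) and dS(T) = dHf/Tm - dcp*ln(Tm/T)."""
    return (dHf * (1 - T / Tm)
            - dcp * (Tm - T)
            + T * dcp * math.log(Tm / T))

# Hypothetical inputs for illustration only (NOT the measured Ti60Cr40 data):
Tm, dHf, dcp = 1700.0, 14000.0, 6.0   # K, J/mol, J/(mol K)
print(delta_g(873.0, Tm, dHf, dcp) > 0)   # positive => vitrification unfavorable
```

A positive delta-G at the test temperature is exactly the criterion the abstract invokes: spontaneous vitrification of the undistorted beta phase requires delta-G to be negative.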
Naumov, Sergej; von Sonntag, Clemens
2011-11-01
Free radicals are common intermediates in the chemistry of ozone in aqueous solution. Their reactions with ozone have been probed by calculating the standard Gibbs free energies of such reactions using density functional theory (Jaguar 7.6 program). O(2) reacts fast and irreversibly only with simple carbon-centered radicals. In contrast, ozone also reacts irreversibly with conjugated carbon-centered radicals such as bisallylic (hydroxycyclohexadienyl) radicals, with conjugated carbon/oxygen-centered radicals such as phenoxyl radicals, and even with nitrogen-, oxygen-, sulfur-, and halogen-centered radicals. In these reactions, further ozone-reactive radicals are generated. Chain reactions may destroy ozone without giving rise to products other than O(2). This may be of importance when ozonation is used in pollution control, and reactions of free radicals with ozone have to be taken into account in modeling such processes.
Rog, G.; Kucza, W.; Kozlowska-Rog, A.
2004-01-01
The standard Gibbs free energy of formation of LiMnO2 and LiMn2O4 at the temperatures of (680, 740 and 800) K has been determined with the help of solid-state galvanic cells involving a lithium-beta-alumina electrolyte. The equilibrium electrical potentials of a cathode containing LixMn2O4 spinel, in the composition ranges 0 <= x <= 1 and 1 <= x <= 2, vs. metallic lithium in the reversible intercalation galvanic cell have been calculated. The existence of two voltage plateaus, which appear during the charging and discharging processes in reversible intercalation of lithium into LixMn2O4 spinel, has been discussed.
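The galvanic-cell route to formation energies rests on the standard relation between the equilibrium cell voltage and the Gibbs energy of the cell reaction (generic form, not specific to this paper's cells):

```latex
\Delta G^\circ = -zFE^\circ
```

where z is the number of electrons transferred, F is the Faraday constant, and E° is the measured open-circuit (equilibrium) EMF; combining such cell reactions in a thermochemical cycle yields the standard Gibbs free energies of formation.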
Xu, Fuchao; Giovanoulis, Georgios; van Waes, Sofie; Padilla-Sanchez, Juan Antonio; Papadopoulou, Eleni; Magnér, Jorgen; Haug, Line Småstuen; Neels, Hugo; Covaci, Adrian
2016-07-19
We compared the human exposure to organophosphate flame retardants (PFRs) via inhalation, dust ingestion, and dermal absorption using different sampling and assessment strategies. Air (indoor stationary air and personal ambient air), dust (floor dust and surface dust), and hand wipes were sampled from 61 participants and their houses. We found that stationary air contains higher levels of ΣPFRs (median = 163 ng/m(3), IQR = 161 ng/m(3)) than personal air (median = 44 ng/m(3), IQR = 55 ng/m(3)), suggesting that the stationary air sample could generate a larger bias for inhalation exposure assessment. Tris(chloropropyl) phosphate isomers (ΣTCPP) accounted for over 80% of ΣPFRs in both stationary and personal air. PFRs were frequently detected in both surface dust (ΣPFRs median = 33 100 ng/g, IQR = 62 300 ng/g) and floor dust (ΣPFRs median = 20 500 ng/g, IQR = 30 300 ng/g). Tris(2-butoxylethyl) phosphate (TBOEP) accounted for 40% and 60% of ΣPFRs in surface and floor dust, respectively, followed by ΣTCPP (30% and 20%, respectively). TBOEP (median = 46 ng, IQR = 69 ng) and ΣTCPP (median = 37 ng, IQR = 49 ng) were also frequently detected in hand wipe samples. For the first time, a comprehensive assessment of human exposure to PFRs via inhalation, dust ingestion, and dermal absorption was conducted with individual personal data rather than reference factors of the general population. Inhalation seems to be the major exposure pathway for ΣTCPP and tris(2-chloroethyl) phosphate (TCEP), while participants had higher exposure to TBOEP and triphenyl phosphate (TPHP) via dust ingestion. Estimated exposure to ΣPFRs was the highest with stationary air inhalation (median =34 ng·kg bw(-1)·day(-1), IQR = 38 ng·kg bw(-1)·day(-1)), followed by surface dust ingestion (median = 13 ng·kg bw(-1)·day(-1), IQR = 28 ng·kg bw(-1)·day(-1)), floor dust ingestion and personal air inhalation. The median dermal exposure on hand wipes was 0.32 ng·kg bw(-1)·day(-1) (IQR
He, Ping
2012-01-01
The long-standing puzzle surrounding the statistical mechanics of self-gravitating systems has not yet been solved successfully. We formulate a systematic theoretical framework of entropy-based statistical mechanics for spherically symmetric collisionless self-gravitating systems. We use an approach that is very different from that of the conventional statistical mechanics of short-range interaction systems. We demonstrate that the equilibrium states of self-gravitating systems consist of both mechanical and statistical equilibria, with the former characterized by a series of velocity-moment equations and the latter by statistical equilibrium equations, which should be derived from the entropy principle. The velocity-moment equations of all orders are derived from the steady-state collisionless Boltzmann equation. We point out that the ergodicity is invalid for the whole self-gravitating system, but it can be re-established locally. Based on the local ergodicity, using Fermi-Dirac-like statistics, with the non-degenerate condition and the spatial independence of the local microstates, we rederive the Boltzmann-Gibbs entropy. This is consistent with the validity of the collisionless Boltzmann equation, and should be the correct entropy form for collisionless self-gravitating systems. Apart from the usual constraints of mass and energy conservation, we demonstrate that the series of moment or virialization equations must be included as additional constraints on the entropy functional when performing the variational calculus; this is an extension to the original prescription by White & Narayan. Any possible velocity distribution can be produced by the statistical-mechanical approach that we have developed with the extended Boltzmann-Gibbs/White-Narayan statistics. Finally, we discuss the questions of negative specific heat and ensemble inequivalence for self-gravitating systems.
Jebri, Sonia; Khattech, Ismail; Jemal, Mohamed
2017-01-01
Highlights: • "A"-type carbonate hydroxyapatites with 0 <= x <= 1 were prepared and characterized by XRD, IR spectroscopy and CHN analysis. • The heat of solution was measured in 9 wt% HNO3 using an isoperibol calorimeter. • The standard enthalpy of formation was determined via a thermochemical cycle. • The Gibbs free energy was deduced by estimating the standard entropy of formation. • Carbonatation increases the stability up to x = 0.6. - Abstract: "A"-type carbonate phosphocalcium hydroxyapatites, having the general formula Ca10(PO4)6(OH)(2-2x)(CO3)x with 0 <= x <= 1, were prepared by solid-gas reaction in the temperature range of 700-1000 °C. The obtained materials were characterized by X-ray diffraction and infrared spectroscopy. The carbonate content was determined by C-H-N analysis. The heat of solution of these products was measured at T = 298 K in 9 wt% nitric acid solution using an isoperibol calorimeter. A thermochemical cycle was proposed, and complementary experiments were performed in order to obtain the standard enthalpies of formation of these phosphates. The results were compared to those previously obtained for apatites containing strontium and barium, and show a decrease with the carbonate amount introduced into the lattice; this quantity becomes more negative as the degree of substitution increases. Estimation of the entropy of formation allowed the determination of the standard Gibbs free energy of formation of these compounds. The study showed that the substitution of hydroxyl by carbonate ions contributes to the stabilisation of the apatite structure.
Fortunati, G.U.; Banfi, C.; Pasturenzi, M.
1994-01-01
This study surveys the problems associated with techniques and strategies of soil sampling. Keeping in mind the well-defined objectives of a sampling campaign, the aim was to highlight the most important aspect, the representativeness of samples, as a function of the available resources. Particular emphasis was given to the techniques and, in particular, to a description of the many types of samplers in use. The procedures and techniques employed during the investigations following the Seveso accident are described.
Alcaráz, Mirta R; Bortolato, Santiago A; Goicoechea, Héctor C; Olivieri, Alejandro C
2015-03-01
Matrix augmentation is regularly employed in extended multivariate curve resolution-alternating least-squares (MCR-ALS), as applied to analytical calibration based on second- and third-order data. However, this highly useful concept has almost no correspondence in parallel factor analysis (PARAFAC) of third-order data. In the present work, we propose a strategy to process third-order chromatographic data with matrix fluorescence detection, based on an Augmented PARAFAC model. The latter involves decomposition of a three-way data array augmented along the elution time mode with data for the calibration samples and for each of the test samples. A set of excitation-emission fluorescence matrices, measured at different chromatographic elution times for drinking water samples, containing three fluoroquinolones and uncalibrated interferences, were evaluated using this approach. Augmented PARAFAC exploits the second-order advantage, even in the presence of significant changes in chromatographic profiles from run to run. The obtained relative errors of prediction were ca. 10 % for ofloxacin, ciprofloxacin, and danofloxacin, with a significant enhancement in analytical figures of merit in comparison with previous reports. The results are compared with those furnished by MCR-ALS.
Yingxin Gu
2016-11-01
Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, values from 0-200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
La Iglesia, A.
1989-12-01
The effect of grinding on the crystallinity, particle size and solubility of two samples of kaolinite was studied. The standard Gibbs free energies of formation of the different ground samples were calculated from solubility measurements, and show a direct relationship between Gibbs free energy and the variation in particle size and crystallinity. Values of -3752.2 and -3776.4 kJ/mol were determined for the standard Gibbs free energy of formation of amorphous and crystalline kaolinite, respectively. A new thermodynamic equation that relates the standard Gibbs free energy of formation to particle size is proposed; this equation can probably be extended to other clay minerals.
Morales Ríos Herbert
2010-04-01
We discuss our experience of using sample tests as a teaching strategy to improve student grades in courses that belong to the College Physics Program. The main purpose of the experience was to identify, ahead of time, the common mistakes, in both mathematics and physics, made by the students and to correct them before administering the actual test, thereby establishing a formative evaluation in the course. In particular, the evaluated subject was linear oscillations in the Classical Mechanics course. We describe what the strategy consists of, our motivation for implementing it, and the roles of both professor and students. We analyze the results of its implementation and conclude with the strengths, limitations and future applications of sample tests as a useful tool for improving college teaching.
Brooks, Emily K; Tett, Susan E; Isbel, Nicole M; McWhinney, Brett; Staatz, Christine E
2018-04-01
Although multiple linear regression-based limited sampling strategies (LSSs) have been published for enteric-coated mycophenolate sodium, none have been evaluated for the prediction of subsequent mycophenolic acid (MPA) exposure. This study aimed to examine the predictive performance of the published LSSs for the estimation of future MPA area under the concentration-time curve from 0 to 12 hours (AUC0-12) in renal transplant recipients. Total MPA plasma concentrations were measured in 20 adult renal transplant patients on 2 occasions a week apart. All subjects received concomitant tacrolimus and were approximately 1 month after transplant. Samples were taken at 0, 0.33, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, and 8 hours and 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, and 12 hours after dose on the first and second sampling occasion, respectively. Predicted MPA AUC0-12 was calculated using 19 published LSSs and data from the first or second sampling occasion for each patient and compared with the second-occasion full MPA AUC0-12 calculated using the linear trapezoidal rule. Bias (median percentage prediction error) and imprecision (median absolute prediction error) were determined. Prediction errors for the full MPA AUC0-12 varied across the published equations; accurate prediction with a multiple linear regression-based LSS was not possible without concentrations up to at least 8 hours after the dose.
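The reference AUC0-12 and the bias/imprecision metrics named above can be computed as follows; the concentration profile shown is a hypothetical illustration, not patient data:

```python
import numpy as np

def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    t, c = np.asarray(times, float), np.asarray(concs, float)
    return float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))

def bias_imprecision(predicted, observed):
    """Bias = median percentage prediction error;
    imprecision = median absolute percentage prediction error."""
    pe = 100 * (np.asarray(predicted, float) - np.asarray(observed, float)) \
        / np.asarray(observed, float)
    return float(np.median(pe)), float(np.median(np.abs(pe)))

# Hypothetical MPA profile (mg/L) over 0-12 h, for illustration only:
t = [0, 0.5, 1, 2, 4, 6, 9, 12]
c = [1.0, 8.0, 6.0, 3.5, 2.0, 1.5, 1.0, 0.8]
full_auc = auc_trapezoid(t, c)
print(round(full_auc, 2))
```

An LSS would predict this full AUC from a few early concentrations via a regression equation; bias and imprecision are then summarized over all patients' predicted/observed AUC pairs.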
Blandamer, Michael J.; Cullis, Paul M.; Soldi, L. Giorgio; Engberts, Jan B.F.N.; Kacperska, Anna; Os, Nico M. van
1995-01-01
Micellar colloids are distinguished from other colloids by their association-dissociation equilibrium in solution between monomers, counter-ions and micelles. According to classical thermodynamics, the standard Gibbs energy of formation of micelles at fixed temperature and pressure can be related to
Evard, Margarita E.; Volkov, Aleksandr E.; Belyaev, Fedor S.; Ignatova, Anna D.
2018-05-01
The choice of Gibbs' potential for microstructural modeling of FCC ↔ HCP martensitic transformation in FeMn-based shape memory alloys is discussed. Threefold symmetry of the HCP phase is taken into account on specifying internal variables characterizing volume fractions of martensite variants. Constraints imposed on model constants by thermodynamic equilibrium conditions are formulated.
Langmaier, Jan; Záliš, Stanislav; Samec, Zdeněk; Bovtun, Viktor; Kempa, Martin
2013-01-01
Roč. 87, JAN 2013 (2013), s. 591-598 ISSN 0013-4686 R&D Projects: GA ČR GAP206/11/0707 Institutional support: RVO:61388955 ; RVO:68378271 Keywords: ionic liquids * cyclic voltammetry * standard Gibbs energy of ion transfer Subject RIV: CG - Electrochemistry Impact factor: 4.086, year: 2013
Sousa, M A; Gonçalves, C; Cunha, E; Hajšlová, J; Alpendurada, M F
2011-01-01
This work describes the development and validation of an offline solid-phase extraction with simultaneous cleanup capability, followed by liquid chromatography-(electrospray ionisation)-ion trap mass spectrometry, enabling the concurrent determination of 23 pharmaceuticals of diverse chemical nature, among the most consumed in Portugal, in wastewater samples. Several cleanup strategies, exploiting the physical and chemical properties of the analytes vs. interferences, alongside the use of internal standards, were assayed in order to minimise the influence of matrix components on the ionisation efficiency of the target analytes. After testing all combinations of adsorbents (normal-phase, ion exchange and mixed composition) and elution solvents, the best results were achieved with the mixed-anion exchange Oasis MAX cartridges. They provided recovery rates generally higher than 60%. The precision of the method ranged from 2% to 18% and 4% to 19% (except for diclofenac (22%) and simvastatin (26%)) for intra- and inter-day analysis, respectively. Method detection limits varied between 1 and 20 ng L(-1), with method quantification limits correspondingly higher. Diclofenac and bezafibrate were detected in concentrations ranging from 1 to 20 μg L(-1), while gemfibrozil, simvastatin, ketoprofen, azithromycin, bisoprolol, lorazepam and paroxetine were quantified at levels below 1 μg L(-1). The wastewater treatment plants (WWTPs) sampled were given particular attention since they discharge their effluents into the Douro river, where water is extracted for the production of drinking water. Some sampling spots in this river were also analysed.
Bloch, Claude; Dominicis, Cyrano de [Commissariat a l' energie atomique et aux energies alternatives - CEA, Centre d' Etudes Nucleaires de Saclay, Gif-sur-Yvette (France)
1959-07-01
Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low density systems at low temperature. In the zero density limit, it reduces to the Beth Uhlenbeck expression of the second virial coefficient. For a system of fermions in the zero temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe Goldstone equation to an arbitrary temperature. Reprint of a paper published in Nuclear Physics 10 (1959), p. 181-196.
Entropy landscape and non-Gibbs solutions in constraint satisfaction problems
Dall'Asta, L.; Ramezanpour, A.; Zecchina, R.
2008-05-01
We study the entropy landscape of solutions for the bicoloring problem in random graphs, a representative difficult constraint satisfaction problem. Our goal is to classify which type of clusters of solutions are addressed by different algorithms. In the first part of the study we use the cavity method to obtain the number of clusters with a given internal entropy and determine the phase diagram of the problem, e.g. dynamical, rigidity and SAT-UNSAT transitions. In the second part of the paper we analyze different algorithms and locate their behavior in the entropy landscape of the problem. For instance we show that a smoothed version of a decimation strategy based on Belief Propagation is able to find solutions belonging to sub-dominant clusters even beyond the so called rigidity transition where the thermodynamically relevant clusters become frozen. These non-equilibrium solutions belong to the most probable unfrozen clusters. (author)
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC0-t of saroglitazar. Only models with regression coefficients (R2) >0.90 were screened for further evaluation. The best R2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R2 > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R2 = 0.9323) and 0.75, 2, and 8 hours (R2 = 0.9375) were validated; their mean prediction error, mean absolute prediction error, and root mean square error supported the prediction of saroglitazar exposure. The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed comparable mean prediction error, mean absolute prediction error, and root mean square error. The validated limited sampling model thus predicts the exposure of saroglitazar and its sulfoxide metabolite.
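A limited sampling model of the kind described, AUC regressed on a few concentration-time points and screened by R2, then judged by mean prediction error, mean absolute prediction error and root mean square error, can be sketched as follows. The data are simulated, and the coefficients and time points are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data: AUC0-t roughly a linear combination of the
# concentrations at three (hypothetical) sampling times plus noise.
n = 25
C = rng.uniform(1.0, 10.0, size=(n, 3))        # concentrations at three time points
auc = 4.0 * C[:, 0] + 2.5 * C[:, 1] + 8.0 * C[:, 2] + rng.normal(0, 1.0, n)

# Fit AUC = b0 + b1*C1 + b2*C2 + b3*C3 by least squares
X = np.column_stack([np.ones(n), C])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)
pred = X @ beta

# Screening and validation metrics used for LSS evaluation
ss_res = np.sum((auc - pred) ** 2)
ss_tot = np.sum((auc - auc.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                      # keep models with R^2 > 0.90
mpe = np.mean((pred - auc) / auc) * 100.0       # mean prediction error, %
mape = np.mean(np.abs(pred - auc) / auc) * 100.0
rmse = np.sqrt(np.mean((pred - auc) ** 2))
```

In practice the fitted model would then be applied to the held-out internal and external datasets rather than scored in-sample as here.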
Lima da Silva, Aline; De Fraga Malfatti, Celia; Heck, Nestor Cesar
2003-01-01
The use of fuel cells is a promising technology for the conversion of chemical to electrical energy. Due to environmental concerns related to the reduction of atmospheric pollution and of greenhouse gas emissions such as CO2, NOx and hydrocarbons, much research has addressed fuel cells using hydrogen as fuel. Hydrogen gas can be produced by several routes; a promising one is the steam reforming of ethanol. This route may become an important industrial process, especially for sugarcane-producing countries. Ethanol is a renewable energy source and presents several advantages over other sources related to natural availability, storage and handling safety. In order to contribute to the understanding of the steam reforming of ethanol inside the reformer, this work presents a detailed thermodynamic analysis of the ethanol/water system in the temperature range of 500-1200 K, considering different H2O/ethanol reforming ratios. The equilibrium determinations were done with the help of the Gibbs energy minimization method using the Generalized Reduced Gradient (GRG) algorithm. Based on literature data, the species considered in the calculations were: H2, H2O, CO, CO2, CH4, C2H4, CH3CHO, C2H5OH (gas phase) and C(gr) (graphite phase). The thermodynamic conditions for carbon deposition (probably soot) on the catalyst during gas reforming were analyzed, in order to establish temperature ranges and H2O/ethanol ratios where carbon precipitation is not thermodynamically feasible. Experimental results from the literature show that carbon deposition causes catalyst deactivation during reforming; this deactivation is due to encapsulating carbon that covers active phases on the catalyst substrate, e.g. Ni over Al2O3. In the present study, a mathematical relationship between the Lagrange multipliers and the carbon activity (with reference to the graphite phase) was deduced, unveiling the carbon activity in the reformer atmosphere. From this, it is possible to foresee whether soot will form.
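The Gibbs energy minimization step can be illustrated with a small sketch. The paper uses the Generalized Reduced Gradient algorithm over its full species list; here, as an assumption-laden stand-in, SciPy's SLSQP minimizes the ideal-gas mixture Gibbs function for a four-species water-gas shift system at 1 bar under element-balance constraints. The dimensionless μ°/RT values are illustrative, not tabulated data:

```python
import numpy as np
from scipy.optimize import minimize

# Species: CO, H2O, CO2, H2 (ideal gas mixture at 1 bar).
# mu0 holds illustrative dimensionless standard chemical potentials mu°/RT.
species = ["CO", "H2O", "CO2", "H2"]
mu0 = np.array([-50.0, -120.0, -197.0, 0.0])

# Element-balance matrix (rows: C, O, H) and feed of 1 mol CO + 1 mol H2O
A = np.array([[1, 0, 1, 0],     # C
              [1, 1, 2, 0],     # O
              [0, 2, 0, 2]])    # H
b = np.array([1.0, 2.0, 2.0])

def gibbs(n):
    """Total dimensionless Gibbs energy of the ideal mixture."""
    n = np.maximum(n, 1e-12)    # guard the logarithm near the bounds
    return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

res = minimize(gibbs, x0=[0.5, 0.5, 0.5, 0.5],
               method="SLSQP",
               bounds=[(1e-10, None)] * 4,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
n_eq = res.x    # equilibrium mole numbers
```

The Lagrange multipliers of such a constrained minimization are the element potentials, which is what lets quantities like the carbon activity be read off, as the abstract describes.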
Okano, Yasushi
1999-08-01
In order to analyze the reaction heat and compounds due to sodium combustion, a multiphase chemical equilibrium calculation program for chemical reactions among sodium, oxygen and hydrogen is developed in this study. The developed numerical program is named BISHOP, which denotes 'Bi-Phase, Sodium - Oxygen - Hydrogen, Chemical Equilibrium Calculation Program'. The Gibbs free energy minimization method is used because of its special merits: chemical species can easily be added and changed, and many thermochemical reaction systems can be treated beyond the constant temperature and pressure case. Three new methods are developed for solving the multi-phase sodium reaction system in this study: the first constructs the equation system by simplifying the phases, the second extends the Gibbs free energy minimization method to multi-phase systems, and the last establishes an effective search method for the minimum value. Chemical compounds formed by the combustion of sodium in air are calculated using BISHOP. The calculated temperature and moisture conditions under which sodium oxide and hydroxide are formed qualitatively agree with experiments. Decomposition of sodium hydride is calculated by the program. The estimated relationship between the decomposition temperature and pressure agrees closely with the well-known experimental equation of Roy and Rodgers. It is concluded that BISHOP can be used to evaluate the combustion and decomposition behaviors of sodium and its compounds. The hydrogen formation condition of the dump-tank room during a sodium leak event of an FBR is quantitatively evaluated by BISHOP. Keeping the temperature of the dump-tank room low is an effective way to suppress the formation of hydrogen. When the lower flammability limit of 4.1 mol% is chosen as the hydrogen concentration criterion, the formation reaction of sodium hydride from sodium and hydrogen is facilitated below a room temperature of 800 K, which limits the concentration of hydrogen.
Santos, Juracir Silva
2011-07-01
The work was developed under the project 993/2007 - 'Development of analytical strategies for uranium determination in environmental and industrial samples - Environmental monitoring in the Caetite city, Bahia, Brazil' and made possible through a partnership established between Universidade Federal da Bahia and the Comissao Nacional de Energia Nuclear. Strategies were developed for uranium determination in natural water and in effluents of a uranium mine. The first was a critical evaluation of the determination of uranium by inductively coupled plasma optical emission spectrometry (ICP OES), performed using factorial and Doehlert designs involving the factors: acid concentration, radio frequency power and nebuliser gas flow rate. Five emission lines were simultaneously studied (namely: 367.007, 385.464, 385.957, 386.592 and 409.013 nm), in the presence of HNO3, CH3COOH or HCl. The determinations in HNO3 medium were the most sensitive. Among the factors studied, the gas flow rate was the most significant for the five emission lines. Calcium caused interference in the emission intensity for some lines, and iron did not interfere (at least up to 10 mg L-1) in the five lines studied. The presence of 13 other elements did not affect the emission intensity of uranium for the lines chosen. The optimized method, using the line at 385.957 nm, allows the determination of uranium with a limit of quantification of 30 μg L-1 and precision, expressed as RSD, lower than 2.2% for uranium concentrations of either 500 or 1000 μg L-1. In the second strategy, a highly sensitive flow-based procedure for uranium determination in natural waters is described. A 100-cm optical path flow cell based on a liquid-core waveguide (LCW) was exploited to increase the sensitivity of the arsenazo III method, aiming to achieve the limits established by environmental regulations. The flow system was designed with solenoid micro-pumps in order to improve mixing.
Posuvailo, V. M.; Klapkiv, M. D.; Student, M. M.; Sirak, Y. Y.; Pokhmurska, H. V.
2017-03-01
An oxide ceramic coating with copper inclusions was synthesized by plasma electrolytic oxidation (PEO). The Gibbs energies of the reactions between the plasma channel elements and the inclusions of copper and copper oxide were calculated. Two methods of forming oxide-ceramic coatings with copper inclusions on an aluminum base in electrolytic plasma were established: the first introduces copper into the aluminum matrix, the second copper oxide. In the first case, the plasma channel does not react with the copper during synthesis, and the copper is incorporated into the oxide-ceramic coating; in the second case, the copper oxide is reduced through interaction with elements of the plasma channel. The composition of the oxide-ceramic layer was investigated by X-ray diffraction and X-ray microelement analysis. Inclusions of copper, CuAl2 and Cu9Al4 were found in the oxide-ceramic coatings. It was established that in the spark plasma channels, alongside the oxidation reaction, aluminothermic reduction of the metal also occurs, which makes it possible to dope the oxide-ceramic coating with a metal whose isobaric-isothermal potential of oxidation is less negative than that of aluminum oxide.
HYOUNGJU YOON
2013-02-01
The pH of the sump solution should be above 7.0 to retain iodine in the liquid phase and should remain within the material compatibility constraints under LOCA conditions in a PWR. The pH of the sump solution can be determined from conventional chemical equilibrium constants or by minimization of the Gibbs free energy. The latter method, implemented in a computer code called SOLGASMIX-PV, is more convenient than the former, since various chemical components can easily be treated under LOCA conditions. In this study, the SOLGASMIX-PV code was modified to accommodate the acidic and basic materials produced by radiolysis reactions and to calculate the pH of the sump solution. When the computed pH was compared with that measured in the ORNL experiment to verify the reliability of the modified code, the difference between the two values was within 0.3 pH units. Finally, two cases were calculated, for SKN 3&4 and UCN 1&2. The pH of the sump solution was between 7.02 and 7.45 for SKN 3&4, and between 8.07 and 9.41 for UCN 1&2. Furthermore, it was found that the radiolysis reactions have an insignificant effect on pH because the relative concentrations of HCl, HNO3, and Cs are very low.
Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo
2013-03-01
Near and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in industry due to their multiple applications and versatility. However, successful application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It is the combination of a Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets is constructed. Thereafter, on each of the training subsets, an MI spectrum is calculated and only the variables with MI values higher than the mean value are retained, on which a candidate PLS model is built. Finally, a fixed number of PLS models is selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for experiments. The results confirm its superior performance to two reference algorithms, i.e., conventional PLS and genetic algorithm-PLS (GA-PLS). It builds more accurate and stable calibration models without increasing complexity, and can be generalized to other NIR/MIR applications.
Isham, M. A.
1992-01-01
Silicon carbide and silicon nitride are considered for application as structural materials and coatings in advanced propulsion systems, including nuclear thermal propulsion. Three-dimensional Gibbs free energy surfaces were constructed for reactions involving these materials in H2 and H2/H2O. The free energy plots are functions of temperature and pressure. Calculations used the definition of Gibbs free energy, whereby the spontaneity of reactions is evaluated as a function of temperature and pressure. Silicon carbide decomposes to Si and CH4 in pure H2 and forms an SiO2 scale in a wet atmosphere. Silicon nitride remains stable under all conditions. There was no apparent difference in reaction thermodynamics between the ideal-gas and Van der Waals treatments of the gaseous species.
Silverio, Sara C.; Rodriguez, Oscar; Teixeira, Jose A.; Macedo, Eugenia A.
2010-01-01
The Gibbs free energy of transfer of a suitable hydrophobic probe can be regarded as a measure of the relative hydrophobicity of the different phases. The methylene group (CH2) can be considered hydrophobic, and is thus a suitable probe for hydrophobicity. In this work, the partition coefficients of a series of five dinitrophenylated amino acids were experimentally determined, at 23 °C, in three different tie-lines of the biphasic systems: (UCON + K2HPO4), (UCON + potassium phosphate buffer, pH 7), (UCON + KH2PO4), (UCON + Na2HPO4), (UCON + sodium phosphate buffer, pH 7), and (UCON + NaH2PO4). The Gibbs free energies of transfer of CH2 units were calculated from the partition coefficients and used to compare the relative hydrophobicity of the equilibrium phases. The largest relative hydrophobicity was found for the ATPS formed by dihydrogen phosphate salts.
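The CH2-based hydrophobicity measure follows from the linearity of ln K in the number of methylene groups: the slope of that line gives the Gibbs energy of transfer per CH2 unit. A sketch with purely illustrative partition coefficients:

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
T = 296.15         # 23 °C in kelvin

# Hypothetical partition coefficients for a solute series whose members
# differ by one CH2 group (values are illustrative only)
n_ch2 = np.array([1, 2, 3, 4, 5])
K = np.array([1.3, 1.7, 2.2, 2.9, 3.8])

# ln K is linear in the number of CH2 groups; the slope gives the
# Gibbs energy of transfer of one CH2 unit between the phases
slope, intercept = np.polyfit(n_ch2, np.log(K), 1)
dG_ch2 = -R * T * slope    # J mol^-1
```

A negative dG_ch2 means the CH2 group prefers the phase in whose favor K is defined, i.e. that phase is the more hydrophobic one.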
Gibbs energies of formation of zircon (ZrSiO4), thorite (ThSiO4), and phenacite (Be2SiO4)
Schuiling, R.D.; Vergouwen, L.; Rijst, H. van der
1976-01-01
Zircon, thorite, and phenacite are very refractory compounds which do not yield to solution calorimetry. In order to obtain approximate Gibbs energies of formation for these minerals, their reactions with a number of silica-undersaturated compounds (silicates or oxides) were studied. Conversely, baddeleyite (ZrO2), thorianite (ThO2), and bromellite (BeO) were reacted with the appropriate silicates. As the Gibbs energies of reaction of the undersaturated compounds with SiO2 are known, the experiments yield the following data: ΔG°(298 K, 1 bar) = -459.02 ± 1.04 kcal for zircon, -489.67 ± 1.04 for thorite, and -480.20 ± 1.01 for phenacite
Kireev, A.A.; Pak, T.G.; Bezuglyj, V.D.
1996-01-01
The solubilities of KClO4, RbClO4, CsClO4, (CH3)4NClO4 and (C2H5)4NClO4 in water and water-acetone mixtures were determined by the method of isothermal saturation at 298.15 K. Dissociation constants of the alkali metal perchlorates were found by the conductometric method. Solubility products and standard Gibbs energies of transfer of the corresponding electrolytes from water into water-acetone solvents were calculated. The dependence of the transfer Gibbs energy on solvent composition is explained by preferred solvation of the cations by acetone molecules and of the anions by water molecules. The features of the tetraalkylammonium ions are explained by large changes in the energy of cavity formation for these ions
Smith, J. A.; Froyd, K. D.; Toon, O. B.
2012-12-01
We construct tables of reaction enthalpies and entropies for the association reactions involving sulfuric acid vapor, water vapor, and the bisulfate ion. These tables are created from experimental measurements and quantum chemical calculations for molecular clusters and a classical thermodynamic model for larger clusters. These initial tables are not thermodynamically consistent. For example, the Gibbs free energy of associating a cluster consisting of one acid molecule and two water molecules depends on the order in which the cluster was assembled: add two waters and then the acid or add an acid and a water and then the second water. We adjust the values within the tables using the method of Lagrange multipliers to minimize the adjustments and produce self-consistent Gibbs free energy surfaces for the neutral clusters and the charged clusters. With the self-consistent Gibbs free energy surfaces, we calculate size distributions of neutral and charged clusters for a variety of atmospheric conditions. Depending on the conditions, nucleation can be dominated by growth along the neutral channel or growth along the ion channel followed by ion-ion recombination.
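The adjustment described, using Lagrange multipliers to minimally modify the tables so that cluster Gibbs energies become path-independent, has a closed form for linear consistency constraints. A toy sketch with two assembly routes to the same cluster and illustrative step energies:

```python
import numpy as np

# Step Gibbs energies (kJ/mol) from independent sources (illustrative
# numbers): two assembly routes to the same A·W2 cluster
#   route 1: A + W -> AW, then AW + W -> A·W2
#   route 2: W + W -> W2, then W2 + A -> A·W2
g0 = np.array([-8.2, -5.1, -3.0, -9.6])

# Thermodynamic consistency: both routes must give the same total,
# i.e. g[0] + g[1] - g[2] - g[3] = 0
A = np.array([[1.0, 1.0, -1.0, -1.0]])
b = np.zeros(1)

# Minimal least-squares adjustment enforcing A g = b, obtained from the
# Lagrange conditions:  g = g0 - A^T (A A^T)^-1 (A g0 - b)
g = g0 - A.T @ np.linalg.solve(A @ A.T, A @ g0 - b)
```

With many clusters there is one such constraint per independent cycle on the free energy surface, and the same formula spreads the discrepancies evenly over the affected steps.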
Phase relations and Gibbs energies of spinel phases and solid solutions in the system Mg-Rh-O
Jacob, K.T., E-mail: katob@materials.iisc.ernet.in [Department of Materials Engineering, Indian Institute of Science, Bangalore 560 012 (India); Prusty, Debadutta [Department of Materials Engineering, Indian Institute of Science, Bangalore 560 012 (India); Kale, G.M. [Institute for Materials Research, University of Leeds, Leeds, LS2 9JT (United Kingdom)
2012-02-05
Highlights: • Refinement of the phase diagram for the system Mg-Rh-O and thermodynamic data for the spinel compounds MgRh2O4 and Mg2RhO4 are presented. • A solid-state electrochemical cell is used for thermodynamic measurement. • An advanced design of the solid-state electrochemical cell incorporating buffer electrodes is deployed to minimize polarization of the working electrode. • A regular solution model for the spinel solid solution MgRh2O4-Mg2RhO4 based on ideal mixing of cations on the octahedral site is proposed. • Factors responsible for stabilization of tetravalent rhodium in spinel compounds are identified. - Abstract: Pure stoichiometric MgRh2O4 could not be prepared by solid-state reaction from an equimolar mixture of MgO and Rh2O3 in air. The spinel phase formed always contained an excess of Mg and traces of Rh or Rh2O3. The spinel phase can be considered as a solid solution of Mg2RhO4 in MgRh2O4. The compositions of the spinel solid solution in equilibrium with different phases in the ternary system Mg-Rh-O were determined by electron probe microanalysis. The oxygen potential established by the equilibrium between Rh + MgO + Mg1+xRh2-xO4 was measured as a function of temperature using a solid-state cell incorporating yttria-stabilized zirconia as an electrolyte and pure oxygen at 0.1 MPa as the reference electrode. To avoid polarization of the working electrode during the measurements, an improved design of the cell with a buffer electrode was used. The standard Gibbs energies of formation of MgRh2O4 and Mg2RhO4 were deduced from the measured electromotive force (e.m.f.) by invoking a model for the spinel solid solution. The parameters of the model were optimized using the measured e.m.f. data.
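The deduction of Gibbs energies from a cell e.m.f. rests on the relation ΔG = -nFE; for an oxygen cell with a zirconia electrolyte, n = 4 electrons per mole of O2 transferred. A minimal sketch with a hypothetical e.m.f. value:

```python
# Gibbs energy change from a solid-state cell e.m.f. via dG = -n*F*E.
# The e.m.f. value below is hypothetical, purely for illustration.
F = 96485.33       # C mol^-1, Faraday constant
n = 4              # electrons per mole of O2 across the electrolyte
E = 0.215          # measured e.m.f. in volts (hypothetical)

delta_G = -n * F * E   # J per mole of O2, relative to the O2 reference electrode
```

Repeating this at several temperatures gives ΔG(T), whose slope and intercept yield the reaction entropy and enthalpy.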
Vecchio, Stefano
2010-01-01
The vapor pressures above solid hexachlorobenzene (HCB) and above both solid and liquid 1,2,3,4,5,6-hexachlorocyclohexane (lindane) were determined in the ranges 332-450 K and 347-429 K, respectively, by measuring the mass loss rates recorded by thermogravimetry under both isothermal and nonisothermal conditions. The results obtained were compared with those taken from the literature. From the temperature dependence of the vapor pressure derived from the experimental thermogravimetry data, the molar enthalpies of sublimation Δcr,g Hm° were obtained for HCB and lindane, as well as the molar enthalpy of vaporization Δl,g Hm° for lindane only, at the middle of the respective temperature intervals. The melting temperature and the molar enthalpy of fusion Δcr,l Hm°(Tfus) of lindane were measured by differential scanning calorimetry. Finally, the standard molar enthalpies of sublimation Δcr,g Hm°(298.15 K) were obtained for both chlorinated compounds at the reference temperature of 298.15 K using the Δcr,g Hm°, Δl,g Hm° and Δcr,l Hm°(Tfus) values, as well as the gas-liquid and gas-solid heat capacity differences Δl,g Cp,m° and Δcr,g Cp,m°, both estimated by applying a group additivity procedure. From these, the averages of the standard (p° = 0.1 MPa) molar enthalpies, entropies and Gibbs energies of sublimation at 298.15 K were derived.
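The step from the temperature dependence of vapor pressure to a molar enthalpy of sublimation is a Clausius-Clapeyron fit: ln p is linear in 1/T with slope -ΔH/R. A sketch using synthetic pressures generated from an assumed enthalpy, so the fit simply recovers it:

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Synthetic sublimation pressures (Pa), generated here from an assumed
# enthalpy of 90 kJ/mol purely to illustrate the fitting step
T = np.array([340.0, 360.0, 380.0, 400.0, 420.0, 440.0])
dH_true = 90e3
p = 1e9 * np.exp(-dH_true / (R * T))

# Clausius-Clapeyron: ln p = const - (dH/R) * (1/T), so the slope of
# ln p against 1/T gives the molar enthalpy of sublimation
slope, intercept = np.polyfit(1.0 / T, np.log(p), 1)
dH_sub = -R * slope    # J mol^-1
```

With real thermogravimetric data the pressures carry scatter, and the fitted slope gives ΔH at the mid-temperature of the interval, as the abstract notes.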
Masuda, Yosuke; Yamaotsu, Noriyuki; Hirono, Shuichi
2017-01-01
In order to predict the potencies of mechanism-based reversible covalent inhibitors, the relationships between calculated Gibbs free energy of hydrolytic water molecule in acyl-trypsin intermediates and experimentally measured catalytic rate constants (k cat ) were investigated. After obtaining representative solution structures by molecular dynamics (MD) simulations, hydration thermodynamics analyses using WaterMap™ were conducted. Consequently, we found for the first time that when Gibbs free energy of the hydrolytic water molecule was lower, logarithms of k cat were also lower. The hydrolytic water molecule with favorable Gibbs free energy may hydrolyze acylated serine slowly. Gibbs free energy of hydrolytic water molecule might be a useful descriptor for computer-aided discovery of mechanism-based reversible covalent inhibitors of hydrolytic enzymes.
Hansen, A.
2005-07-01
In co-operation with the Danish EPA, the National Environmental Research Institute (NERI) has carried out a series of measurements of aromatic hydrocarbons in produced water from an offshore oil and gas production platform in the Danish sector of the North Sea as part of the project 'Testing of sampling strategy for aromatic hydrocarbons in produced water from the offshore oil and gas industry'. The measurements included both volatile (BTEX: benzene, toluene, ethylbenzene and xylenes) and semi-volatile aromatic hydrocarbons: NPD (naphthalenes, phenanthrenes and dibenzothiophenes) and selected PAHs (polycyclic aromatic hydrocarbons). In total, 12 samples of produced water were collected at the Dan FF production platform in the North Sea by the operator, Maersk Oil and Gas, as four sets of three parallel samples from November 24 to December 02, 2004. After collection of the last set, the samples were shipped to NERI for analysis. The water samples were collected in 1 L glass bottles that were filled completely (without overfilling) and tightly closed. After sampling, the samples were preserved with hydrochloric acid and cooled below ambient temperature until being shipped to NERI, where all samples were analysed in duplicate. The results show that BTEX levels were reduced compared to similar measurements carried out by NERI in 2002 and by others: in this work, BTEX levels were approximately 5 mg/L, while similar studies showed levels in the range 0.5-35 mg/L. NPD levels were similar, 0.5-1.4 mg/L, while PAH levels seemed elevated: 0.1-0.4 mg/L in this work compared to 0.001-0.3 mg/L in similar studies. The applied sampling strategy was tested by performing analysis of variance on the analytical data. The test showed that the mean values of the three parallel samples collected in series constituted a good estimate of the levels at the time of sampling; thus, the variance between the parallel samples was not significant.
Awasthi, Neha; Ritschel, Thomas; Lipowsky, Reinhard; Knecht, Volker
2013-01-01
Highlights: • ΔG and Keq for NO2 dimerization and NH3 synthesis calculated via ab-initio methods. • Vis-à-vis experiments, W1 and CCSD(T) are accurate and G3B3 also does quite well. • CBS-APNO is most accurate for the NH3 reaction but shows limitations in modeling NO2. • Temperature dependence of ΔG and Keq is calculated for the NH3 reaction. • Good agreement of calculated Keq with experiments and the van't Hoff approximation. -- Abstract: Standard quantum chemical methods are used for accurate calculation of thermochemical properties such as enthalpies of formation, entropies and Gibbs energies of formation. Equilibrium reactions are widely investigated, and experimental measurements often lead to a range of reaction Gibbs energies and equilibrium constants. It is useful to calculate these equilibrium properties from quantum chemical methods in order to address the experimental differences. Furthermore, most standard calculation methods differ in accuracy and in the feasible system size. Hence, a systematic comparison of equilibrium properties calculated with different numerical algorithms provides a useful reference. We select two well-known gas-phase equilibrium reactions with small molecules: covalent dimer formation of NO2 (2NO2 ⇌ N2O4) and the synthesis of NH3 (N2 + 3H2 ⇌ 2NH3). We test four quantum chemical methods, denoted G3B3, CBS-APNO, W1 and CCSD(T) with aug-cc-pVXZ basis sets (X = 2, 3, and 4), to obtain thermochemical data for NO2, N2O4, and NH3. The calculated standard formation Gibbs energies ΔfG° are used to calculate standard reaction Gibbs energies ΔrG° and standard equilibrium constants Keq for the two reactions. Standard formation enthalpies ΔfH° are calculated in a more reliable way using high-level methods such as W1 and CCSD(T). Standard entropies S° for the molecules are calculated well within the range of experiments for all methods.
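The conversion from formation Gibbs energies to a standard equilibrium constant, ΔrG° = Σν ΔfG° followed by K = exp(-ΔrG°/RT), can be checked with the textbook value ΔfG°(NH3, 298.15 K) ≈ -16.4 kJ/mol:

```python
import math

R = 8.314           # J mol^-1 K^-1
T = 298.15          # K

# N2 + 3 H2 <=> 2 NH3; textbook standard formation Gibbs energy of NH3
# (N2 and H2 contribute zero as elements in their standard states)
dfG_NH3 = -16.4e3   # J mol^-1 at 298.15 K
drG = 2 * dfG_NH3   # standard reaction Gibbs energy

# Standard equilibrium constant from drG = -RT ln K
K_eq = math.exp(-drG / (R * T))
```

The resulting K of a few times 10^5 is the room-temperature order of magnitude for ammonia synthesis; repeating the calculation with temperature-dependent ΔfG° gives the van't Hoff-type curves discussed in the highlights.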
Casey, Erin A; Leek, Cliff; Tolman, Richard M; Allen, Christopher T; Carlson, Juliana M
2017-09-01
As engaging men in gender-based violence prevention efforts becomes an increasingly institutionalised component of gender equity work globally, clarity is needed about the strategies that best initiate male-identified individuals' involvement in these efforts. The purpose of this study was to examine the perceived relevance and effectiveness of men's engagement strategies from the perspective of men around the world who have organised or attended gender-based violence prevention events. Participants responded to an online survey (available in English, French and Spanish) and rated the effectiveness of 15 discrete engagement strategies derived from earlier qualitative work. Participants also provided suggestions regarding strategies in open-ended comments. Listed strategies cut across the social ecological spectrum and represented both venues in which to reach men, and the content of violence prevention messaging. Results suggest that all strategies, on average, were perceived as effective across regions of the world, with strategies that tailor messaging to topics of particular concern to men (such as fatherhood and healthy relationships) rated most highly. Open-ended comments also surfaced tensions, particularly related to the role of a gender analysis in initial men's engagement efforts. Findings suggest the promise of cross-regional adaptation and information sharing regarding successful approaches to initiating men's anti-violence involvement.
Onda, Yuichi; Kato, Hiroaki; Hoshi, Masaharu; Takahashi, Yoshio; Nguyen, Minh-Long
2015-01-01
The Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident resulted in extensive radioactive contamination of the environment via deposited radionuclides such as radiocesium and 131I. Evaluating the extent and level of environmental contamination is critical to protecting citizens in affected areas and to planning decontamination efforts. However, a standardized soil sampling protocol is needed in such emergencies to facilitate the collection of large, tractable samples for measuring gamma-emitting radionuclides. In this study, we developed an emergency soil sampling protocol based on preliminary sampling from the FDNPP accident-affected area. We also present the results of a preliminary experiment aimed at evaluating the influence of various procedures (e.g., mixing, number of samples) on measured radioactivity. Results show that sample mixing strongly affects measured radioactivity in soil samples. Furthermore, for homogenization, shaking the plastic sample container at least 150 times or disaggregating the soil by hand-rolling in a disposable plastic bag is required. Finally, we determined that five soil samples within a 3 m × 3 m area are the minimum number required for reducing measurement uncertainty in the emergency soil sampling protocol proposed here. - Highlights: • An emergency soil sampling protocol was proposed for nuclear hazards. • Various sampling procedures were tested and evaluated in the Fukushima area. • The soil sample mixing procedure was of key importance for measured radioactivity. • The minimum number of samples was determined for reducing measurement uncertainty
Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions
Fog, Agner
2008-01-01
Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
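The simplest of the listed generators, simulation of the urn experiment for the univariate Wallenius distribution, can be sketched as follows (function name and interface are my own, not from the paper's library): n balls are drawn sequentially without replacement, and at each draw a remaining type-1 ball is picked with probability proportional to its odds weight omega.

```python
import random

def sample_wallenius(m1, m2, n, omega, rng=random):
    """Draw one variate from Wallenius' noncentral hypergeometric
    distribution by simulating the urn experiment: n sequential draws
    without replacement from an urn with m1 type-1 balls (weight omega)
    and m2 type-2 balls (weight 1)."""
    assert 0 <= n <= m1 + m2
    x = 0            # number of type-1 balls drawn so far
    r1, r2 = m1, m2  # balls remaining of each type
    for _ in range(n):
        p1 = r1 * omega / (r1 * omega + r2)
        if rng.random() < p1:
            x += 1
            r1 -= 1
        else:
            r2 -= 1
    return x
```

With omega = 1 this reduces to the ordinary (central) hypergeometric distribution; urn simulation is exact but slow for large n, which is why the paper develops the faster inversion and rejection methods.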
Rawana, Jennine S
2013-07-01
This study aimed to evaluate the relative importance of body change strategies and weight perception in adolescent depression after accounting for established risk factors for depression, namely low social support across key adolescent contexts. The moderating effect of self-esteem was also examined. Participants (N=4587, 49% female) were selected from the National Longitudinal Study of Adolescent Health. Regression analyses were conducted on the association between well-known depression risk factors (lack of perceived support from parents, peers, and schools), body change strategies, weight perception, and adolescent depressive symptoms one year later. Each well-known risk factor significantly predicted depressive symptoms. Body change strategies related to losing weight and overweight perceptions predicted depressive symptoms above and beyond established risk factors. Self-esteem moderated the relationship between trying to lose weight and depressive symptoms. Maladaptive weight loss strategies and overweight perceptions should be addressed in early identification depression programs. Copyright © 2013 Elsevier Inc. All rights reserved.
Rabouille, C.; Olu, K.; Baudin, F.; Khripounoff, A.; Dennielou, B.; Arnaud-Haond, S.; Babonneau, N.; Bayle, C.; Beckler, J.; Bessette, S.; Bombled, B.; Bourgeois, S.; Brandily, C.; Caprais, J. C.; Cathalot, C.; Charlier, K.; Corvaisier, R.; Croguennec, C.; Cruaud, P.; Decker, C.; Droz, L.; Gayet, N.; Godfroy, A.; Hourdez, S.; Le Bruchec, J.; Saout, J.; Le Saout, M.; Lesongeur, F.; Martinez, P.; Mejanelle, L.; Michalopoulos, P.; Mouchel, O.; Noel, P.; Pastor, L.; Picot, M.; Pignet, P.; Pozzato, L.; Pruski, A. M.; Rabiller, M.; Raimonet, M.; Ragueneau, O.; Reyss, J. L.; Rodier, P.; Ruesch, B.; Ruffine, L.; Savignac, F.; Senyarich, C.; Schnyder, J.; Sen, A.; Stetten, E.; Sun, Ming Yi; Taillefert, M.; Teixeira, S.; Tisnerat-Laborde, N.; Toffin, L.; Tourolle, J.; Toussaint, F.; Vétion, G.; Jouanneau, J. M.; Bez, M.; Congolobe Group:
2017-08-01
The presently active region of the Congo deep-sea fan (total fan area around 330,000 km2), called the terminal lobes or lobe complex, covers an area of 2500 km2 at 4700-5100 m water depth and 750-800 km offshore. It is a unique sedimentary area in the world ocean, fed by a submarine canyon and a channel-levee system which presently deliver large amounts of organic carbon originating from the Congo River by turbidity currents. This particularity is due to the deep incision of the shelf by the Congo canyon, up to 30 km into the estuary, which funnels the Congo River sediments into the deep sea. The connection between the river and the canyon is unique for major world rivers. In 2011, two cruises (WACS leg 2 and Congolobe) were conducted to simultaneously investigate the geology, organic and inorganic geochemistry, and micro- and macro-biology of the terminal lobes of the Congo deep-sea fan. Using this multidisciplinary approach, the morpho-sedimentary features of the lobes were characterized along with the origin and reactivity of organic matter, the recycling and burial of biogenic compounds, the diversity and function of bacterial and archaeal communities within the sediment, and the biodiversity and functioning of the faunal assemblages on the seafloor. Six different sites were selected for this study: four distributed along the active channel from the lobe complex entrance to the outer rim of the sediment deposition zone, and two positioned cross-axis and at increasing distance from the active channel, thus providing a gradient in turbidite particle delivery and sediment age. This paper aims to provide the general context of this multidisciplinary study. It describes the general features of the site and the overall sampling strategy, and provides the initial habitat observations to guide the other in-depth investigations presented in this special issue. Detailed bathymetry of each sampling site using 0.1-1 m resolution multibeam obtained with a remotely operated vehicle (ROV
Halbheer, Daniel; Stahl, Florian; Koenigsberg, Oded; Lehmann, Donald R
2013-01-01
This paper studies content strategies for online publishers of digital information goods. It examines sampling strategies and compares their performance to paid content and free content strategies. A sampling strategy, where some of the content is offered for free and consumers are charged for access to the rest, is known as a "metered model" in the newspaper industry. We analyze optimal decisions concerning the size of the sample and the price of the paid content when sampling serves the dua...
Starr, Francis W; Douglas, Jack F; Sastry, Srikanth
2013-03-28
We carefully examine common measures of dynamical heterogeneity for a model polymer melt and test how these scales compare with those hypothesized by the Adam and Gibbs (AG) and random first-order transition (RFOT) theories of relaxation in glass-forming liquids. To this end, we first analyze clusters of highly mobile particles, the string-like collective motion of these mobile particles, and clusters of relatively low mobility. We show that the time scale of the high-mobility clusters and strings is associated with a diffusive time scale, while the low-mobility particles' time scale relates to a structural relaxation time. The difference of the characteristic times for the high- and low-mobility particles naturally explains the well-known decoupling of diffusion and structural relaxation time scales. Despite the inherent difference of dynamics between high- and low-mobility particles, we find a high degree of similarity in the geometrical structure of these particle clusters. In particular, we show that the fractal dimensions of these clusters are consistent with those of swollen branched polymers or branched polymers with screened excluded-volume interactions, corresponding to lattice animals and percolation clusters, respectively. In contrast, the fractal dimension of the strings crosses over from that of self-avoiding walks for small strings, to simple random walks for longer, more strongly interacting, strings, corresponding to flexible polymers with screened excluded-volume interactions. We examine the appropriateness of identifying the size scales of either mobile particle clusters or strings with the size of cooperatively rearranging regions (CRR) in the AG and RFOT theories. We find that the string size appears to be the most consistent measure of CRR for both the AG and RFOT models. Identifying strings or clusters with the "mosaic" length of the RFOT model relaxes the conventional assumption that the "entropic droplets" are compact. We also confirm the
Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F
2011-03-03
The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
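For contrast with the quantum version described above, the classical Metropolis algorithm the passage calls ubiquitous can be sketched on a toy Gibbs distribution, here a 1D Ising ring (an illustrative reconstruction of the standard algorithm, not code from this work):

```python
import math
import random

def metropolis_ising_1d(n_spins, beta, n_steps, rng=random):
    """Classical Metropolis sampling from the Gibbs distribution of a
    1D Ising ring with H = -sum_i s_i * s_{i+1}: propose single-spin
    flips and accept with probability min(1, exp(-beta * dE))."""
    s = [rng.choice((-1, 1)) for _ in range(n_spins)]
    for _ in range(n_steps):
        i = rng.randrange(n_spins)
        left, right = s[i - 1], s[(i + 1) % n_spins]
        d_e = 2.0 * s[i] * (left + right)  # energy change of flipping s[i]
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            s[i] = -s[i]
    return s
```

At low temperature (large beta) the chain settles into strongly aligned, low-energy configurations; the quantum algorithm in the abstract generalizes exactly this accept/reject structure to eigenstates of a Hamiltonian.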
Montero, Lidia; Ibáñez, Elena; Russo, Mariateresa; Rastrelli, Luca; Cifuentes, Alejandro; Herrero, Miguel
2017-09-08
Comprehensive two-dimensional liquid chromatography (LC × LC) is ever gaining interest in food analysis, as often, food-related samples are too complex to be analyzed through one-dimensional approaches. The use of hydrophilic interaction chromatography (HILIC) combined with reversed phase (RP) separations has already been demonstrated as a very orthogonal combination, which allows attaining increased resolving power. However, this coupling encompasses different analytical challenges, mainly related to the important solvent strength mismatch between the two dimensions, besides those common to every LC × LC method. In the present contribution, different strategies are proposed and compared to further increase HILIC × RP method performance for the analysis of complex food samples, using licorice as a model sample. The influence of different parameters in non-focusing modulation methods based on sampling loops, as well as under focusing modulation, through the use of trapping columns in the interface and through active modulation procedures are studied in order to produce resolving power and sensitivity gains. Although the use of a dilution strategy using sampling loops as well as the highest possible first dimension sampling rate allowed significant improvements on resolution, focusing modulation produced significant gains also in peak capacity and sensitivity. Overall, the obtained results demonstrate the great applicability and potential that active modulation may have for the analysis of complex food samples, such as licorice, by HILIC × RP. Copyright © 2017 Elsevier B.V. All rights reserved.
Pethica, Brian A
2007-12-21
As indicated by Gibbs and made explicit by Guggenheim, the electrical potential difference between two regions of different chemical composition cannot be measured. The Gibbs-Guggenheim Principle restricts the use of classical electrostatics in electrochemical theories as thermodynamically unsound, with a few approximate exceptions, notably for dilute electrolyte solutions and concomitant low potentials where the linear limit for the exponential of the relevant Boltzmann distribution applies. The Principle invalidates the widespread use of forms of the Poisson-Boltzmann equation which do not include the non-electrostatic components of the chemical potentials of the ions. From a thermodynamic analysis of the parallel plate electrical condenser, employing only measurable electrical quantities and taking into account the chemical potentials of the components of the dielectric and their adsorption at the surfaces of the condenser plates, an experimental procedure to provide exceptions to the Principle has been proposed. This procedure is now reconsidered and rejected. No other related experimental procedures circumvent the Principle. Widely-used theoretical descriptions of electrolyte solutions, charged surfaces and colloid dispersions which neglect the Principle are briefly discussed. MD methods avoid the limitations of the Poisson-Boltzmann equation. Theoretical models which include the non-electrostatic components of the inter-ion and ion-surface interactions in solutions and colloid systems assume the additivity of dispersion and electrostatic forces. An experimental procedure to test this assumption is identified from the thermodynamics of condensers at microscopic plate separations. The available experimental data from Kelvin probe studies are preliminary, but tend against additivity. A corollary to the Gibbs-Guggenheim Principle is enunciated, and the Principle is restated that for any charged species, neither the difference in electrostatic potential nor the
Xu, H.; Wang, Y.
1999-01-01
In this letter, a linear free energy relationship is used to predict the Gibbs free energies of formation of crystalline phases of the pyrochlore and zirconolite families with stoichiometry MCaTi2O7 (or CaMTi2O7) from the known thermodynamic properties of aqueous tetravalent cations (M4+). The linear free energy relationship for tetravalent cations is expressed as ΔG°_f,MvX = a_MvX ΔG°_n,M4+ + b_MvX + β_MvX r_M4+, where the coefficients a_MvX, b_MvX, and β_MvX characterize a particular structural family MvX, r_M4+ is the ionic radius of the M4+ cation, ΔG°_f,MvX is the standard Gibbs free energy of formation of MvX, and ΔG°_n,M4+ is the standard non-solvation energy of the cation M4+. The coefficients for the zirconolite structural family with stoichiometry M4+CaTi2O7 are estimated to be a_MvX = 0.5717, b_MvX = -4284.67 kJ/mol, and β_MvX = 27.2 kJ/(mol nm). The coefficients for the pyrochlore structural family with stoichiometry M4+CaTi2O7 are estimated to be a_MvX = 0.5717, b_MvX = -4174.25 kJ/mol, and β_MvX = 13.4 kJ/(mol nm). Using the linear free energy relationship, the Gibbs free energies of formation of various zirconolite and pyrochlore phases are calculated. (orig.)
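The linear relationship above is a one-line prediction once the family coefficients are fixed. A sketch in Python using the coefficients quoted in the abstract; the cation inputs (non-solvation energy and ionic radius) are hypothetical placeholders for illustration only:

```python
def gibbs_formation(dG_nonsolv, radius_nm, a, b, beta):
    """Linear free energy estimate (kJ/mol) of the standard Gibbs free
    energy of formation of an M(4+)CaTi2O7 phase:
    dGf = a * dGn + b + beta * r."""
    return a * dG_nonsolv + b + beta * radius_nm

# Family coefficients from the abstract (kJ/mol; beta in kJ/(mol*nm)).
ZIRCONOLITE = (0.5717, -4284.67, 27.2)
PYROCHLORE = (0.5717, -4174.25, 13.4)

# Hypothetical cation inputs: non-solvation Gibbs energy -500 kJ/mol,
# ionic radius 0.08 nm (placeholders, not data from the letter).
dG_zir = gibbs_formation(-500.0, 0.08, *ZIRCONOLITE)
dG_pyr = gibbs_formation(-500.0, 0.08, *PYROCHLORE)
```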
Prasad, T.E. Vittal [Properties Group, Chemical Engineering Laboratory, Indian Institute of Chemical Technology, Hyderabad 500 007 (India); Venkanna, N. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Kumar, Y. Naveen [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Ashok, K. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Sirisha, N.M. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Prasad, D.H.L. [Properties Group, Chemical Engineering Laboratory, Indian Institute of Chemical Technology, Hyderabad 500 007 (India)]. E-mail: dasika@iict.res.in
2007-07-15
Bubble point temperatures at 95.23 kPa, over the entire composition range, are measured for the binary mixtures formed by p-cresol with 1,2-dichloroethane, 1,1,2,2-tetrachloroethane, trichloroethylene, tetrachloroethylene, and o-, m-, and p-xylenes, making use of a Swietoslawski-type ebulliometer. Liquid phase mole fraction (x1) versus bubble point temperature (T) measurements are found to be well represented by the Wilson model. The optimum Wilson parameters are used to calculate the vapor phase composition, activity coefficients, and excess Gibbs free energy. The results are discussed.
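The step from fitted Wilson parameters to activity coefficients and excess Gibbs energy can be sketched with the standard two-parameter Wilson equations (the parameter values used in the test are illustrative, not the fitted ebulliometry parameters from this work):

```python
import math

def wilson_gamma(x1, lam12, lam21):
    """Activity coefficients (gamma1, gamma2) of a binary liquid
    mixture from the two-parameter Wilson model."""
    x2 = 1.0 - x1
    a = x1 + lam12 * x2
    b = x2 + lam21 * x1
    d = lam12 / a - lam21 / b
    return math.exp(-math.log(a) + x2 * d), math.exp(-math.log(b) - x1 * d)

def excess_gibbs_rt(x1, lam12, lam21):
    """Dimensionless excess Gibbs energy G_E/(R*T) for the Wilson model."""
    x2 = 1.0 - x1
    return -x1 * math.log(x1 + lam12 * x2) - x2 * math.log(x2 + lam21 * x1)
```

With lam12 = lam21 = 1 the model reduces to an ideal solution (both gammas equal 1, G_E = 0), and for any parameters G_E/RT equals x1·ln(gamma1) + x2·ln(gamma2), a useful consistency check.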
Chaudhuri, Arijit
2014-01-01
Exposure to Sampling: Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem. More Intricacies: Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...
Nakamura-Messenger, K.; Connolly, H. C., Jr.; Lauretta, D. S.
2014-01-01
OSIRIS-REx is NASA's New Frontiers 3 sample return mission that will return at least 60 g of pristine surface material from near-Earth asteroid 101955 Bennu in September 2023. The scientific value of the sample increases enormously with the amount of knowledge captured about the geological context from which the sample is collected. The OSIRIS-REx spacecraft is highly maneuverable and capable of investigating the surface of Bennu at scales down to the sub-cm. The OSIRIS-REx instruments will characterize the overall surface geology including spectral properties, microtexture, and geochemistry of the regolith at the sampling site in exquisite detail for up to 505 days after encountering Bennu in August 2018. The mission requires at the very minimum one acceptable location on the asteroid where a touch-and-go (TAG) sample collection maneuver can be successfully performed. Sample site selection requires that the following maps be produced: Safety, Deliverability, Sampleability, and finally Science Value. If areas on the surface are designated as safe, navigation can fly to them, and they have ingestible regolith, then the scientific value of one site over another will guide site selection.
Fröhlich, Jürg; Knowles, Antti; Schlein, Benjamin; Sohinger, Vedran
2017-12-01
We prove that Gibbs measures of nonlinear Schrödinger equations arise as high-temperature limits of thermal states in many-body quantum mechanics. Our results hold for defocusing interactions in dimensions d = 1, 2, 3. The many-body quantum thermal states that we consider are the grand canonical ensemble for d = 1 and an appropriate modification of the grand canonical ensemble for d = 2, 3. In dimensions d = 2, 3, the Gibbs measures are supported on singular distributions, and a renormalization of the chemical potential is necessary. On the many-body quantum side, the need for renormalization is manifested by a rapid growth of the number of particles. We relate the original many-body quantum problem to a renormalized version obtained by solving a counterterm problem. Our proof is based on ideas from field theory, using a perturbative expansion in the interaction, organized by using a diagrammatic representation, and on Borel resummation of the resulting series.
Tang, Ying; Du, Yong; Zhang, Lijun; Yuan, Xiaoming; Kaptay, George
2012-01-01
Highlights: ► An exponential formulation to describe ternary excess Gibbs energy is proposed. ► Theoretical analysis is performed to verify phase stability using the new formulation. ► The Al–Mg–Si system and its boundary binaries have been assessed with the new formulation. ► Present calculations for the Al–Mg–Si system are more reasonable than previous ones. - Abstract: An exponential formulation was proposed to replace the linear interaction parameter in the Redlich–Kister (R–K) polynomial for the excess Gibbs energy of a ternary solution phase. The theoretical analysis indicates that the proposed exponential formulation can not only avoid the artificial miscibility gap at high temperatures but also describe the ternary system well. A thermodynamic description of the Al–Mg–Si system and its boundary binaries was then performed using both the R–K linear and the exponential formulations. The inverted miscibility gaps occurring in the Mg–Si and Al–Mg–Si systems at high temperatures due to the use of R–K linear polynomials are avoided by using the new formulation. Besides, the thermodynamic properties predicted with the new formulation confirm the general thermodynamic belief that a solution phase approaches the ideal solution at infinite temperature, which cannot be described with the traditional R–K linear polynomials.
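The contrast between the two parameterizations can be sketched for a binary Redlich–Kister expansion. The exponential form below (L = L0·exp(-T/tau)) is one common choice that decays to zero at high temperature; the coefficient values are made-up placeholders, and the paper's exact exponential formulation may differ in detail:

```python
import math

def excess_gibbs_rk(x1, params):
    """Binary Redlich-Kister excess Gibbs energy (J/mol):
    G_E = x1 * x2 * sum_k L_k * (x1 - x2)**k."""
    x2 = 1.0 - x1
    return x1 * x2 * sum(L * (x1 - x2) ** k for k, L in enumerate(params))

def l_linear(a, b, temperature):
    """Conventional linear interaction parameter, L = a + b*T,
    which grows without bound as T increases."""
    return a + b * temperature

def l_exponential(l0, tau, temperature):
    """Exponential interaction parameter, L = L0 * exp(-T/tau), which
    decays to zero so the solution tends toward ideal mixing at
    infinite temperature."""
    return l0 * math.exp(-temperature / tau)
```

The excess Gibbs energy vanishes at both pure-component limits for either parameterization; the difference shows up at high T, where only the exponential parameter drives the phase toward ideal behavior.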
Cirillo, Emilio N.M.; Louis, Pierre-Yves; Ruszel, Wioletta M.; Spitoni, Cristian
2014-01-01
Cellular Automata are discrete-time dynamical systems on a spatially extended discrete space which provide paradigmatic examples of nonlinear phenomena. Their stochastic generalizations, i.e., Probabilistic Cellular Automata (PCA), are discrete-time Markov chains on a lattice with finite single-cell states whose distinguishing feature is the parallel character of the updating rule. We study the ground states of the Hamiltonian and the low-temperature phase diagram of the related Gibbs measure naturally associated with a class of reversible PCA, called the cross PCA. In this model the updating rule of a cell depends only on the status of the five cells forming a cross centered at the cell itself. In particular, it depends on the value of the center spin (self-interaction). The goal of the paper is to investigate the role played by the self-interaction parameter in connection with the ground states of the Hamiltonian and the low-temperature phase diagram of the Gibbs measure associated with this particular PCA
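A toy parallel update in this spirit can be sketched as a heat-bath rule on a torus with the cross neighborhood (four nearest neighbors plus a self-interaction term). This is my illustration of the parallel-update idea, not the authors' exact transition kernel:

```python
import math
import random

def pca_step(grid, beta, k_self, rng=random):
    """One parallel update of a spin PCA on an n x n torus: every cell
    simultaneously resamples its spin from a heat-bath probability
    driven by its four nearest neighbors plus a self-interaction term
    k_self * (own spin) -- the 'cross' neighborhood of the cell."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            h = (grid[(i - 1) % n][j] + grid[(i + 1) % n][j]
                 + grid[i][(j - 1) % n] + grid[i][(j + 1) % n]
                 + k_self * grid[i][j])
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
            new[i][j] = 1 if rng.random() < p_up else -1
    return new
```

Note that, unlike sequential Gibbs sampling, all cells here are updated simultaneously from the previous configuration, which is the defining feature of a PCA.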
Skytte, Lilian; Rasmussen, Kaare Lund
2013-07-30
Medieval human bones have the potential to reveal diet, mobility and treatment of diseases in the past. During the last two decades trace element chemistry has been used extensively in archaeometric investigations revealing such data. Many studies have reported the trace element inventory in only one sample from each skeleton - usually from the femur or a tooth. It cannot a priori be assumed that all bones or teeth in a skeleton will have the same trace element concentrations. Six different bone and teeth samples from each individual were carefully decontaminated by mechanical means. Following dissolution of ca. 20 mg sample in nitric acid and hydrogen peroxide the assays were performed using inductively coupled plasma mass spectrometry (ICPMS) with quadrupole detection. We describe the precise sampling technique as well as the analytical methods and parameters used for the ICPMS analysis. The different sampling locations in the human skeleton exhibited varying trace element concentrations. Although the samples are contaminated by Fe, Mn and Al from the surrounding soil where the bones have been residing for more than 500 years, other trace elements are intact within the bones. It is shown that the elemental ratios Sr/Ca and Ba/Ca can be used as indicators of provenance. The differences in trace element concentrations can be interpreted as indications of varying diet and provenance as a function of time in the life of the individual - a concept which can be termed chemical life history. A few examples of the results of such analyses are shown, which contain information about provenance and diagenesis. Copyright © 2013 John Wiley & Sons, Ltd.
Julien, Etienne; Senécal, Caroline; Guay, Frédéric
2009-04-01
The purpose of this study was to test the causal ordering among perceived autonomy support from health care practitioners, motivation, coping strategies and compliance to dietary self-care activities. Using a cross-lagged panel model, we investigate how these variables relate to one another over a one-year period. A total of 365 adults with Type 2 diabetes participated in the study. Results suggest that autonomous motivation and active planning are reciprocally related over time, and that prior autonomous motivation is related to the extent participants subsequently comply with their diet. Results are discussed in light of Self-determination Theory and the coping perspective.
Srinivas, N R
2016-02-01
Statins are widely prescribed medicines and are also available in fixed dose combinations with other drugs to treat several chronic ailments. Given the safety issues associated with statins it may be important to assess feasibility of a single time concentration strategy for prediction of exposure (area under the curve; AUC). The peak concentration (Cmax) was used to establish relationship with AUC separately for pravastatin and simvastatin using published pharmacokinetic data. The regression equations generated for statins were used to predict the AUC values from various literature references. The fold difference of the observed divided by predicted values along with correlation coefficient (r) were used to judge the feasibility of the single time point approach. Both pravastatin and simvastatin showed excellent correlation of Cmax vs. AUC values with r value ≥ 0.9638 (pAUC predictions and >81% of the predicted values were in a narrower range of >0.75-fold but AUC values showed excellent correlation for pravastatin (r=0.9708, n=115; pAUC predictions. On the basis of the present work, it is feasible to develop a single concentration time point strategy that coincides with Cmax occurrence for both pravastatin and simvastatin from a therapeutic drug monitoring perspective. © Georg Thieme Verlag KG Stuttgart · New York.
Eulaers, Igor; Covaci, Adrian; Hofman, Jelle; Nygård, Torgeir; Halley, Duncan J; Pinxten, Rianne; Eens, Marcel; Jaspers, Veerle L B
2011-12-01
To circumvent difficulties associated with monitoring adult predatory birds, we investigated the feasibility of different non-destructive strategies for nestling white-tailed eagles (Haliaeetus albicilla). We were able to quantify polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs) and organochlorinated pesticides (OCPs) in body feathers (16.92, 3.37 and 7.81ngg(-1) dw, respectively), blood plasma (8.37, 0.32 and 5.22ngmL(-1) ww, respectively), and preen oil (1157.95, 30.92 and 440.74ngg(-1) ww, respectively) of all nestlings (N=14). Strong significant correlations between blood plasma and preen oil concentrations (0.565≤r≤0.801; Pfeather and blood plasma concentrations, which were almost exclusively between PCB concentrations (0.554≤r≤0.737; Pnest, were possibly undergoing certain physiological changes that may have confounded the use of body feathers as biomonitor matrix. Finally, we provide an integrated discussion on the use of body feathers and preen oil as non-destructive biomonitor strategies for nestling predatory birds. Copyright © 2011 Elsevier B.V. All rights reserved.
Jia, Yu; Ehlert, Ludwig; Wahlskog, Cecilia; Lundberg, Angela; Maurice, Christian
2017-12-05
Monitoring pollutants in stormwater discharge in cold climates is challenging. An environmental survey was performed by sampling the stormwater from Luleå Airport, Northern Sweden, during the period 2010-2013, when urea was used as a main component of aircraft deicing/anti-icing fluids (ADAFs). The stormwater collected from the runway was led through an oil trap to an infiltration pond to store excess water during precipitation periods and enhance infiltration and water treatment. Due to insufficient capacity, an emergency spillway was established and equipped with a flow meter and an automatic sampler. This study proposes a program for effective monitoring of pollutant discharge with a minimum number of sampling occasions when use of automatic samplers is not possible. The results showed that 90% of nitrogen discharge occurs during late autumn before the water pipes freeze and during snow melting, regardless of the precipitation during the remaining months when the pollutant discharge was negligible. The concentrations of other constituents in the discharge were generally low compared to guideline values. The best data quality was obtained using flow controlled sampling. Intensive time-controlled sampling during late autumn (few weeks) and snow melting (2 weeks) would be sufficient for necessary information. The flow meters installed at the rectangular notch appeared to be difficult to calibrate and gave contradictory results. Overall, the spillway was dry, as water infiltrated into the pond, and stagnant water close to the edge might be registered as flow. Water level monitoring revealed that the infiltration capacity gradually decreased with time.
Frank, Jennifer L.; Bose, Bidyut; Schrobenhauser-Clonan, Alex
2014-01-01
This study aimed to assess the effectiveness of a universal yoga-based social-emotional wellness promotion program, Transformative Life Skills, on indicators of adolescent emotional distress, prosocial behavior, and attitudes toward violence in a high-risk sample. Participants included 49 students attending an alternative education school in an…
Andreas, Nicholas J; Hyde, Matthew J; Herbert, Bronwen R; Jeffries, Suzan; Santhakumaran, Shalini; Mandalia, Sundhiya; Holmes, Elaine; Modi, Neena
2016-07-07
We tested the hypothesis that there is a positive association between maternal body mass index (BMI) and the concentration of appetite-regulating hormones leptin, insulin, ghrelin and resistin in breast milk. We also aimed to describe the change in breast milk hormone concentration within each feed, and over time. Mothers were recruited from the postpartum ward at a university hospital in London. Breast milk samples were collected at the participants' homes. We recruited 120 healthy, primiparous, breastfeeding mothers, aged over 18 years. Mothers who smoked, had multiple births or had diabetes were excluded. Foremilk and hindmilk samples were collected from 105 women at 1 week postpartum and 92 women at 3 months postpartum. We recorded maternal and infant anthropometric measurements at each sample collection and measured hormone concentrations using a multiplex assay. The concentration of leptin in foremilk correlated with maternal BMI at the time of sample collection, at 7 days (r=0.31, p=0.02) and 3 months postpartum (r=0.30, p=milk ghrelin and resistin were not correlated with maternal BMI. Ghrelin concentrations at 3 months postpartum were increased in foremilk compared with hindmilk (p=0.01). Concentrations of ghrelin were increased in hindmilk collected at 1 week postpartum compared with samples collected at 3 months postpartum (p=0.03). A trend towards decreased insulin concentrations in hindmilk was noted. Concentrations of leptin and resistin were not seen to alter over a feed. A positive correlation between maternal BMI and foremilk leptin concentration at both time points studied, and foremilk insulin at 3 months postpartum was observed. This may have implications for infant appetite regulation and obesity risk. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Advanced Markov chain Monte Carlo methods learning from past samples
Liang, Faming; Carroll, Raymond J
2010-01-01
This book provides comprehensive coverage of the simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms that address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables, and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. The book covers the multicanonical algorithm, dynamic weighting, dynamically weight...
Interactive Control System, Intended Strategy, Implemented Strategy and Emergent Strategy
Tubagus Ismail; Darjat Sudrajat
2012-01-01
The purpose of this study was to examine the relationship between management control systems (MCS) and strategy formation processes, namely intended strategy, emergent strategy and implemented strategy. The focus of MCS in this study was the interactive control system. The study used Structural Equation Modeling (SEM) as its multivariate analysis instrument. The samples were upper-middle managers of manufacturing companies in Banten Province, DKI Jakarta Province and West Java Province. AM...
Bauermeister, José A; Zimmerman, Marc A; Johns, Michelle M; Glowacki, Pietreck; Stoddard, Sarah; Volz, Erik
2012-09-01
We used a web version of Respondent-Driven Sampling (webRDS) to recruit a sample of young adults (ages 18-24) and examined whether this strategy would result in alcohol and other drug (AOD) prevalence estimates comparable to national estimates (National Survey on Drug Use and Health [NSDUH]). We recruited 22 initial participants (seeds) via Facebook to complete a web survey examining AOD risk correlates. Sequential, incentivized recruitment continued until our desired sample size was achieved. After correcting for webRDS clustering effects, we contrasted our AOD prevalence estimates (past 30 days) to NSDUH estimates by comparing the 95% confidence intervals of prevalence estimates. We found comparable AOD prevalence estimates between our sample and NSDUH for the past 30 days for alcohol, marijuana, cocaine, Ecstasy (3,4-methylenedioxymethamphetamine, or MDMA), and hallucinogens. Cigarette use was lower than NSDUH estimates. WebRDS may be a suitable strategy to recruit young adults online. We discuss the unique strengths and challenges that may be encountered by public health researchers using webRDS methods.
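The comparison described (contrasting 95% confidence intervals of prevalence estimates) can be sketched with simple Wald intervals; the prevalences and sample sizes below are hypothetical, and webRDS analyses in practice use design-adjusted variance estimates rather than this naive form:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    # 95% Wald confidence interval for a sample proportion
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

def intervals_overlap(a, b):
    # Two closed intervals overlap iff each starts before the other ends
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical prevalences: webRDS sample vs. national survey estimate
ci_web = wald_ci(0.61, 3448)
ci_nat = wald_ci(0.59, 22000)
comparable = intervals_overlap(ci_web, ci_nat)
```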
Aguirre, M A; Selva, E J; Hidalgo, M; Canals, A
2015-01-01
A rapid and efficient dispersive liquid-liquid microextraction (DLLME) followed by laser-induced breakdown spectroscopy (LIBS) detection was evaluated for the simultaneous determination of Cr, Cu, Mn, Ni and Zn in water samples. Metals in the samples were extracted into tetrachloromethane as ammonium pyrrolidinedithiocarbamate (APDC) complexes, using vortex agitation to disperse the extractant solvent. Several DLLME experimental factors affecting extraction efficiency were optimized with a multivariate approach. Under optimum DLLME conditions, the DLLME-LIBS method was found to be about 4.0-5.5 times more sensitive than LIBS alone, achieving limits of detection about 3.7-5.6 times lower. To assess the accuracy of the proposed DLLME-LIBS procedure, a certified reference material of estuarine water was analyzed. Copyright © 2014 Elsevier B.V. All rights reserved.
Smolich, Beverly D
2003-02-01
Full Text Available Abstract Background Microarray-based gene expression profiling is a powerful approach for the identification of molecular biomarkers of disease, particularly in human cancers. Utility of this approach to measure responses to therapy is less well established, in part due to challenges in obtaining serial biopsies. Identification of suitable surrogate tissues will help minimize limitations imposed by those challenges. This study describes an approach used to identify gene expression changes that might serve as surrogate biomarkers of drug activity. Methods Expression profiling using microarrays was applied to peripheral blood mononuclear cell (PBMC) samples obtained from patients with advanced colorectal cancer participating in a Phase III clinical trial. The PBMC samples were harvested pre-treatment and at the end of the first 6-week cycle from patients receiving standard of care chemotherapy or standard of care plus SU5416, a vascular endothelial growth factor (VEGF) receptor tyrosine kinase (RTK) inhibitor. Results from matched pairs of PBMC samples from 23 patients were queried for expression changes that consistently correlated with SU5416 administration. Results Thirteen transcripts met this selection criterion; six were further tested by quantitative RT-PCR analysis of 62 additional samples from this trial and a second SU5416 Phase III trial of similar design. This method confirmed four of these transcripts (CD24, lactoferrin, lipocalin 2, and MMP-9) as potential biomarkers of drug treatment. Discriminant analysis showed that expression profiles of these 4 transcripts could be used to classify patients by treatment arm in a predictive fashion. Conclusions These results establish a foundation for the further exploration of peripheral blood cells as a surrogate system for biomarker analyses in clinical oncology studies.
DePrimo, Samuel E; Wong, Lily M; Khatry, Deepak B; Nicholas, Susan L; Manning, William C; Smolich, Beverly D; O'Farrell, Anne-Marie; Cherrington, Julie M
2003-01-01
Microarray-based gene expression profiling is a powerful approach for the identification of molecular biomarkers of disease, particularly in human cancers. Utility of this approach to measure responses to therapy is less well established, in part due to challenges in obtaining serial biopsies. Identification of suitable surrogate tissues will help minimize limitations imposed by those challenges. This study describes an approach used to identify gene expression changes that might serve as surrogate biomarkers of drug activity. Expression profiling using microarrays was applied to peripheral blood mononuclear cell (PBMC) samples obtained from patients with advanced colorectal cancer participating in a Phase III clinical trial. The PBMC samples were harvested pre-treatment and at the end of the first 6-week cycle from patients receiving standard of care chemotherapy or standard of care plus SU5416, a vascular endothelial growth factor (VEGF) receptor tyrosine kinase (RTK) inhibitor. Results from matched pairs of PBMC samples from 23 patients were queried for expression changes that consistently correlated with SU5416 administration. Thirteen transcripts met this selection criterion; six were further tested by quantitative RT-PCR analysis of 62 additional samples from this trial and a second SU5416 Phase III trial of similar design. This method confirmed four of these transcripts (CD24, lactoferrin, lipocalin 2, and MMP-9) as potential biomarkers of drug treatment. Discriminant analysis showed that expression profiles of these 4 transcripts could be used to classify patients by treatment arm in a predictive fashion. These results establish a foundation for the further exploration of peripheral blood cells as a surrogate system for biomarker analyses in clinical oncology studies
Li, X.; Ding, Y.; He, X.; Han, T.
2017-12-01
Meltwater chemistry was examined at the Dongkemadi Glacier (DG) basin on the Tibetan Plateau (TP) over the full 2013 melt season. Results showed that concentrations of most solutes (e.g. Li, B, Sc, Fe, Rb, Sr, Mo, Ba and U) (Group 1) exhibit pronounced seasonal variations, while others (e.g. NH4+, F-, Al, Cr, Mn, Co, Cu, Zn, Y, Cd, Sn, Pb, Bi and Th) (Group 2) vary randomly. The concentration-discharge relation for Group 1 was dominated by a well-defined power law, with exponent magnitudes of 0.24-0.79 and R2 values (p…), concentrations being lower on rising than on falling limbs. This has important implications for efforts to estimate daily Group 1 concentrations from glaciers where only glacial discharge is available. Although the concentrations of some solutes are not related to discharge, the fluxes of almost all solutes, except for NH4+, Cu, Zn, Cd and Sn, are positively correlated with discharge, exhibiting a good power law relation (0.27…) … water quality. This implies that glacier streams face a risk of water quality deterioration in a future warming climate. The annual solute flux is almost 2 times higher than estimated by a recent study, highlighting the importance of continuous sampling in the field. The annual flux for most elements should be estimated using samples collected once daily at high flow rather than twice, at high and low flows respectively, which will reduce the deviation of the annual export estimate. Some elements (e.g. Cr, Zn, Al), however, may need high-frequency sampling. This study helps to re-evaluate chemical denudation rates and solute export from glacial catchments.
Fayad, Laura M; Mugera, Charles; Soldatos, Theodoros; Flammang, Aaron; del Grande, Filippo
2013-07-01
We demonstrate the clinical use of an MR angiography sequence performed with sparse k-space sampling (MRA) as a method for dynamic contrast-enhanced (DCE) MRI, and apply it to the assessment of sarcomas for treatment response. Three subjects with sarcomas (2 with osteosarcoma, 1 with a high-grade soft tissue sarcoma) underwent MRI after neoadjuvant therapy/prior to surgery, with conventional MRI (T1-weighted, fluid-sensitive, static post-contrast T1-weighted sequences) and DCE-MRI (MRA, time resolution = 7-10 s, TR/TE 2.4/0.9 ms, FOV 40 cm²). Images were reviewed by two observers in consensus, who recorded image quality (1 = diagnostic, no significant artifacts; 2 = diagnostic, …) and treatment response (… 75 % with good response, >75 % with poor response). DCE-MRI findings were concordant with histological response (arterial enhancement with poor response, no arterial enhancement with good response). Unlike conventional DCE-MRI sequences, an MRA sequence with sparse k-space sampling is easily integrated into a routine musculoskeletal tumor MRI protocol, with high diagnostic quality. In this preliminary work, tumor enhancement characteristics by DCE-MRI were used to assess treatment response.
Fayad, Laura M.; Mugera, Charles; Grande, Filippo del; Soldatos, Theodoros; Flammang, Aaron
2013-01-01
We demonstrate the clinical use of an MR angiography sequence performed with sparse k-space sampling (MRA) as a method for dynamic contrast-enhanced (DCE) MRI, and apply it to the assessment of sarcomas for treatment response. Three subjects with sarcomas (2 with osteosarcoma, 1 with a high-grade soft tissue sarcoma) underwent MRI after neoadjuvant therapy/prior to surgery, with conventional MRI (T1-weighted, fluid-sensitive, static post-contrast T1-weighted sequences) and DCE-MRI (MRA, time resolution = 7-10 s, TR/TE 2.4/0.9 ms, FOV 40 cm²). Images were reviewed by two observers in consensus, who recorded image quality (1 = diagnostic, no significant artifacts; 2 = diagnostic, …) and treatment response (… 75 % with good response, >75 % with poor response). DCE-MRI findings were concordant with histological response (arterial enhancement with poor response, no arterial enhancement with good response). Unlike conventional DCE-MRI sequences, an MRA sequence with sparse k-space sampling is easily integrated into a routine musculoskeletal tumor MRI protocol, with high diagnostic quality. In this preliminary work, tumor enhancement characteristics by DCE-MRI were used to assess treatment response. (orig.)
Marazuela, M.D., E-mail: marazuela@quim.ucm.es [Department of Analytical Chemistry, Faculty of Chemistry, Universidad Complutense de Madrid, E-28040 Madrid (Spain)]; Bogialli, S. [Department of Chemistry, University of Rome 'La Sapienza', Piazza Aldo Moro, 5 00185 Rome (Italy)]
2009-07-10
The determination of trace residues and contaminants in food has been of growing concern over the past few years. Residual antibacterials in food constitute a risk to human health, especially because they can contribute to the transmission of antibiotic-resistant pathogenic bacteria through the food chain. Therefore, to ensure food safety, EU and USA regulatory agencies have established lists of forbidden or banned substances and tolerance levels for authorized veterinary drugs (e.g. antibacterials). In addition, EU Commission Decision 2002/657/EC has set requirements for the performance of analytical methods for the determination of veterinary drug residues in food and feedstuffs. In recent years, the use of powerful mass spectrometric detectors in combination with innovative chromatographic technologies has solved many problems related to the sensitivity and selectivity of this type of analysis. However, sample preparation still remains the bottleneck step, mainly in terms of analysis time and sources of error. This review, covering research published between 2004 and 2008, provides an overview of recent trends in sample preparation for the determination of antibacterial residues in foods, with special emphasis on on-line, high-throughput, multi-class methods, and includes several applications in detail.
Tiago Campos Pereira
2007-01-01
Full Text Available The RNA interference (RNAi) technique is a recent technology that uses double-stranded RNA molecules to promote potent and specific gene silencing. The application of this technique to molecular biology has increased considerably, from gene function identification to disease treatment. However, not all small interfering RNAs (siRNAs) are equally efficient, making target selection an essential procedure. Here we present Strand Analysis (SA), a free online software tool able to identify and classify the best RNAi targets based on Gibbs free energy (ΔG). Furthermore, particular features of the software, such as the free energy landscape and the ΔG gradient, may be used to shed light on RNA-induced silencing complex (RISC) activity and RNAi mechanisms, which makes the SA software a distinct and innovative tool.
Sobolev, S. L., E-mail: sobolev@icp.ac.ru [Russian Academy of Sciences, Institute of Problems of Chemical Physics (Russian Federation)
2017-03-15
An analytical model has been developed to describe the influence of solute trapping during rapid alloy solidification on the components of the Gibbs free energy change at the phase interface, with emphasis on the solute drag energy. For relatively low interface velocity V < V_D, where V_D is the characteristic diffusion velocity, all the components, namely the mixing part, the local nonequilibrium part, and the solute drag, depend significantly on solute diffusion and partitioning. When V ≥ V_D, the local nonequilibrium effects lead to a sharp transition to diffusionless solidification. The transition is accompanied by complete solute trapping and vanishing solute drag energy, i.e. partitionless and “dragless” solidification.
Lobo, L.Q.; Ferreira, A.G.M.; Fonseca, I.M.A.; Senra, A.M.P.
2006-01-01
The vapour pressure of binary mixtures of hydrogen sulphide with ethane, propane, and n-butane was measured at T = 182.33 K, covering most of the composition range. The excess Gibbs free energy of these mixtures has been derived from the measurements. For the equimolar mixtures, G_m^E(x1 = 0.5) = (835.5 ± 5.8) J·mol⁻¹ for (H2S + C2H6), (820.1 ± 2.4) J·mol⁻¹ for (H2S + C3H8), and (818.6 ± 0.9) J·mol⁻¹ for (H2S + n-C4H10). The binary mixtures of H2S with ethane and with propane exhibit azeotropes, but that with n-butane does not.
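A symmetric one-parameter Margules model is a common minimal description of such excess Gibbs energies; the sketch below (an assumption for illustration, not the authors' correlation) anchors the parameter to the reported equimolar value for H2S + ethane:

```python
def margules_ge(x1, A):
    # One-parameter (two-suffix) Margules model: G^E = A * x1 * x2, in J/mol
    return A * x1 * (1 - x1)

# Anchor A to the reported equimolar value for H2S + C2H6:
# G^E(x1 = 0.5) = A/4  =>  A = 4 * 835.5 J/mol
A = 4 * 835.5
ge_half = margules_ge(0.5, A)      # recovers the equimolar value, 835.5 J/mol
ge_quarter = margules_ge(0.25, A)  # smaller away from x1 = 0.5, as expected
```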
Radtke, Valentin; Ermantraut, Andreas; Himmel, Daniel; Koslowski, Thorsten; Leito, Ivo; Krossing, Ingo
2018-02-23
Described is a procedure for the thermodynamically rigorous, experimental determination of the Gibbs energy of transfer of single ions between solvents. The method is based on potential difference measurements between two electrochemical half cells with different solvents connected by an ideal ionic liquid salt bridge (ILSB). Discussed are the specific requirements for the IL with regard to the procedure, thus ensuring that the liquid junction potentials (LJP) at both ends of the ILSB are mostly canceled. The remaining parts of the LJPs can be determined by separate electromotive force measurements. No extra-thermodynamic assumptions are necessary for this procedure. The accuracy of the measurements depends, amongst others, on the ideality of the IL used, as shown in our companion paper Part II. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
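The potential-difference measurement maps to a transfer Gibbs energy through the standard electrochemical relation ΔG = −zFΔE; a minimal sketch (the 52 mV reading is a hypothetical value, not from the paper):

```python
F = 96485.33212  # Faraday constant, C/mol

def gibbs_transfer(z, delta_e_volts):
    # Gibbs energy of transfer from the measured cell potential difference:
    # Delta_t G = -z * F * Delta E, in J/mol
    return -z * F * delta_e_volts

# Hypothetical: singly charged cation, 52 mV measured potential difference
dg = gibbs_transfer(1, 0.052)  # about -5.0 kJ/mol
```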
Cotes, S.; Fernandez Guillermet, A.; Sade, M.
1999-01-01
Very recent, accurate dilatometric measurements of the fcc ↔ hcp martensitic transformation (MT) temperatures are used to develop a new thermodynamic description of the fcc and hcp phases in the Fe-Mn-Si system, based on phenomenological models for the Gibbs energy function. The composition dependence of the driving forces for the fcc→hcp and hcp→fcc MTs is established. Detailed calculations of the MT temperatures are reported and used to investigate the systematic effects of Si additions on the MT temperatures of Fe-Mn alloys. A critical comparison with one of the most recent thermodynamic analyses of the Fe-Mn-Si system, due to Forsberg and Agren, is also presented. (orig.)
Lund, Marianne; Madsen, Mogens
2006-01-01
To illustrate important issues in the optimization of a PCR assay with an internal control, four different primer combinations for conventional PCR and two non-competitive and two competitive set-ups for real time PCR were used for detection of Campylobacter spp. in chicken faecal samples....... In the conventional PCR assays the internal control was genomic DNA from Yersinia ruckeri, which is not found in chicken faeces. This internal control was also used in one of the set-ups in real time PCR. In the three other set-ups, different DNA fragments of 109 bp length, prepared from two oligos of 66 bp each...... by a simple extension reaction, were used. All assays were optimized to avoid loss of target sensitivity due to the presence of the internal control, by adjusting the amount of internal control primers in the duplex assays and the amount of internal control in all assays. Furthermore, the assays were tested...
Pallebage-Gamarallage, Menuka; Foxley, Sean; Menke, Ricarda A L; Huszar, Istvan N; Jenkinson, Mark; Tendler, Benjamin C; Wang, Chaoyue; Jbabdi, Saad; Turner, Martin R; Miller, Karla L; Ansorge, Olaf
2018-03-13
Amyotrophic lateral sclerosis (ALS) is a clinically and histopathologically heterogeneous neurodegenerative disorder, in which therapy is hindered by the rapid progression of disease and lack of biomarkers. Magnetic resonance imaging (MRI) has demonstrated its potential for detecting the pathological signature and tracking disease progression in ALS. However, the microstructural and molecular pathological substrate is poorly understood and generally defined histologically. One route to understanding and validating the pathophysiological correlates of MRI signal changes in ALS is to directly compare MRI to histology in post mortem human brains. The article delineates a universal whole brain sampling strategy of pathologically relevant grey matter (cortical and subcortical) and white matter tracts of interest suitable for histological evaluation and direct correlation with MRI. A standardised systematic sampling strategy that was compatible with co-registration of images across modalities was established for regions representing phosphorylated 43-kDa TAR DNA-binding protein (pTDP-43) patterns that were topographically recognisable with defined neuroanatomical landmarks. Moreover, tractography-guided sampling facilitated accurate delineation of white matter tracts of interest. A digital photography pipeline at various stages of sampling and histological processing was established to account for structural deformations that might impact alignment and registration of histological images to MRI volumes. Combined with quantitative digital histology image analysis, the proposed sampling strategy is suitable for routine implementation in a high-throughput manner for acquisition of large-scale histology datasets. Proof of concept was determined in the spinal cord of an ALS patient where multiple MRI modalities (T1, T2, FA and MD) demonstrated sensitivity to axonal degeneration and associated heightened inflammatory changes in the lateral corticospinal tract. Furthermore
Fatemeh Khaleghi
2016-05-01
Full Text Available Vitamin B9 or folic acid is an important food supplement with wide clinical applications. Due to its importance and its side effects in pregnant women, fast determination of this vitamin is very important. In this study we present a new fast and sensitive voltammetric sensor for the analysis of trace levels of vitamin B9 using a carbon paste electrode (CPE) modified with 1,3-dipropylimidazolium bromide (1,3-DIBr) as a binder and ZnO/CNTs nanocomposite as a mediator. The electro-oxidation signal of vitamin B9 at the surface of the 1,3-DIBr/ZnO/CNTs/CPE electrode appeared at 800 mV, which was about 95 mV less positive compared to the corresponding unmodified CPE. The oxidation current of vitamin B9 by square wave voltammetry (SWV) increased linearly with its concentration in the range of 0.08–650 μM. The detection limit for vitamin B9 was 0.05 μM. Finally, the utility of the new 1,3-DIBr/ZnO/CNTs/CPE electrode was tested in the determination of vitamin B9 in food and pharmaceutical samples.
Interactive Control System, Intended Strategy, Implemented Strategy and Emergent Strategy
Tubagus Ismail
2012-09-01
Full Text Available The purpose of this study was to examine the relationship between management control systems (MCS) and strategy formation processes, namely intended strategy, emergent strategy and implemented strategy. The focus of MCS in this study was the interactive control system. The study used Structural Equation Modeling (SEM) as its multivariate analysis instrument. The samples were upper-middle managers of manufacturing companies in Banten Province, DKI Jakarta Province and West Java Province. AMOS Software 16 is used as an additional instrument to resolve the problem in SEM modeling. The study found that the interactive control system had a positive and significant influence on intended strategy, on implemented strategy, and on emergent strategy. The limitation of this study is that our empirical model only used a one-way relationship between the process of strategy formation and the interactive control system.
Li, H.Q.; Yang, Y.S.; Tong, W.H.; Wang, Z.Y.
2007-01-01
With the effects of electronic structure and atomic size introduced, the mixing enthalpy and the Gibbs energy of the ternary Zr-Al-Cu, Ni-Al-Cu, Zr-Ni-Al and quaternary Zr-Al-Ni-Cu systems are calculated based on a quasiregular solution model. The computed results agree well with the experimental data. The sequence of Gibbs energies of the different systems is G(Zr-Al-Ni-Cu) < G(Zr-Al-Ni) < G(Zr-Al-Cu) < G(Cu-Al-Ni). For Zr-Al-Cu, Ni-Al-Cu and Zr-Ni-Al, the lowest Gibbs energy lies in the composition ranges X_Zr = 0.39-0.61, X_Al = 0.38-0.61; X_Ni = 0.39-0.61, X_Al = 0.38-0.60; and X_Zr = 0.32-0.67, X_Al = 0.32-0.66, respectively. For the Zr-Ni-Al-Cu system with 66.67% Zr, the lowest Gibbs energy is obtained in the region X_Al = 0.63-0.80, X_Ni = 0.14-0.24.
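As a rough illustration of the ingredients of such a calculation (not the authors' quasiregular model, whose enthalpy term encodes the electronic-structure and atomic-size effects), an ideal-mixing sketch of the Gibbs energy of a multicomponent solution:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ideal_mix_entropy(xs):
    # Ideal configurational entropy of mixing: -R * sum(x_i * ln x_i)
    return -R * sum(x * math.log(x) for x in xs if x > 0)

def gibbs_mixing(h_mix, xs, T):
    # G_mix = H_mix - T * S_ideal; h_mix (J/mol) stands in for the
    # model's enthalpy term -- the value used below is purely illustrative
    return h_mix - T * ideal_mix_entropy(xs)

# Hypothetical ternary composition at 1000 K with an assumed H_mix
g = gibbs_mixing(-30000.0, [0.5, 0.3, 0.2], 1000.0)
```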
Despax, Aurélien; Perret, Christian; Garçon, Rémy; Hauet, Alexandre; Belleville, Arnaud; Le Coz, Jérôme; Favre, Anne-Catherine
2016-02-01
Streamflow time series provide baseline data for many hydrological investigations. Errors in the data mainly arise from uncertainty in gaugings (measurement uncertainty) and from uncertainty in the determination of the stage-discharge relationship based on gaugings (rating curve uncertainty). As the velocity-area method is the measurement technique typically used for gaugings, it is fundamental to estimate its level of uncertainty. Different methods are available in the literature (ISO 748, Q+, IVE), all with their own limitations and drawbacks. Among the terms forming the combined relative uncertainty in measured discharge, the component relating to the limited number of verticals often accounts for a large part of the relative uncertainty. It should therefore be estimated carefully. In the ISO 748 standard, the proposed values of this uncertainty component depend only on the number of verticals, without considering their distribution with respect to the depth and velocity cross-sectional profiles. The Q+ method is sensitive to a user-defined parameter, while it is questionable whether the IVE method is applicable to stream-gaugings performed with a limited number of verticals. To address the limitations of existing methods, this paper presents a new methodology, called FLow Analog UnceRtainty Estimation (FLAURE), to estimate the uncertainty component relating to the limited number of verticals. High-resolution reference gaugings (with 31 or more verticals) are used to assess the uncertainty component through a statistical analysis. Instead of subsampling the verticals of these reference stream-gaugings purely at random, a subsampling method is developed that mimics the behavior of a hydrometric technician. A sampling quality index (SQI) is suggested and appears to be a more explanatory variable than the number of verticals. This index takes into account the spacing between verticals and the variation of unit flow between two verticals. To compute the...
Carrillo-Carrión, C; Cárdenas, S; Valcárcel, M
2007-02-02
A vanguard/rearguard analytical strategy for monitoring the degradation of yoghurt samples is proposed. The method is based on the headspace-gas chromatography-mass spectrometry (HS-GC-MS) instrumental coupling. In this combination, the chromatographic column is first used as an interface between the HS and the MS (vanguard mode), avoiding separation of the volatile components by maintaining the chromatographic oven at a high, constant temperature. By changing the thermal conditions of the oven, the aldehydes can be properly separated for individual identification/quantification (rearguard mode). In the vanguard method, the volatile aldehydes were quantified by partial least squares regression and reported as a total index. The rearguard method permits the detection of the aldehydes at concentrations between 12 and 35 ng/g. Both methods were applied to the study of the environmental factors favouring the presence of the volatile aldehydes (C5-C9) in the yoghurt samples. Principal component analysis of the total concentration of aldehydes over time (from 0 to 30 days) demonstrates the capability of the HS-MS coupling for estimating the quality losses of the samples. The results were corroborated by HS-GC-MS, which also indicated that pentanal was present in the yoghurt from the beginning of the study and that the combination of light and oxygen had the most negative influence on sample conservation.
Metrology Sampling Strategies for Process Monitoring Applications
Vincent, Tyrone L.; Stirton, James Broc; Poolla, Kameshwar
2011-01-01
... economic pressures prompt a reduction in metrology, for both capital and cycle-time reasons. This paper explores the use of modeling and minimum-variance prediction as a method to select the sites for measurement on each wafer. The models are developed...
Metrology Sampling Strategies for Process Monitoring Applications
Vincent, Tyrone L.
2011-11-01
Shrinking process windows in very large scale integration semiconductor manufacturing have already necessitated the development of control systems capable of addressing sub-lot-level variation. Within-wafer control is the next milestone in the evolution of advanced process control from lot-based and wafer-based control. In order to adequately comprehend and control within-wafer spatial variation, inline measurements must be performed at multiple locations across the wafer. At the same time, economic pressures prompt a reduction in metrology, for both capital and cycle-time reasons. This paper explores the use of modeling and minimum-variance prediction as a method to select the sites for measurement on each wafer. The models are developed using the standard statistical tools of principal component analysis and canonical correlation analysis. The proposed selection method is validated using real manufacturing data, and results indicate that it is possible to significantly reduce the number of measurements with little loss in the information obtained for the process control systems. © 2011 IEEE.
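A crude stand-in for the underlying idea (choose measurement sites that best predict the unmeasured ones) is a greedy pick by squared correlation; this is an illustrative proxy, not the paper's PCA/canonical-correlation models:

```python
def greedy_site_selection(cov, k):
    # Greedy proxy for minimum-variance metrology site selection:
    # repeatedly pick the site whose summed squared correlation with
    # the remaining (unmeasured) sites is largest.
    n = len(cov)
    chosen, remaining = [], set(range(n))
    for _ in range(k):
        def score(j):
            return sum(cov[j][i] ** 2 / (cov[j][j] * cov[i][i])
                       for i in remaining if i != j)
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.discard(best)
    return chosen

# Toy 4-site wafer covariance (site 0 correlates strongly with all others)
cov = [
    [1.0, 0.9, 0.8, 0.7],
    [0.9, 1.0, 0.5, 0.4],
    [0.8, 0.5, 1.0, 0.3],
    [0.7, 0.4, 0.3, 1.0],
]
sites = greedy_site_selection(cov, 2)  # picks the most informative 2 sites
```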
Clinical Strategies for Sampling Word Recognition Performance.
Schlauch, Robert S; Carney, Edward
2018-04-17
Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the 1 list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition scores, were done to inform clinical protocols. A secondary consideration was a derivation of 95% confidence intervals for significant changes in score from phonemic scoring of a 50-word list. The PB max simulations were conducted on a "client" with flat performance intensity functions. The client's performance was assumed to be 60% initially and 40% for a second assessment. Thousands of estimates were obtained to examine the precision of (a) single lists and (b) multiple lists using a PB max procedure. This method permitted summarizing the precision for assessing a 20% drop in performance. A single 25-word list could identify only 58.4% of the cases in which performance fell from 60% to 40%. A single 125-word list identified 99.8% of the declines correctly. Presenting 3 or 5 lists to find PB max produced an undesirable finding: an increase in the word recognition score. A 25-word list produces unacceptably low precision for making clinical decisions. This finding holds in both single and multiple 25-word lists, as in a search for PB max. A table is provided, giving estimates of 95% critical ranges for successive presentations of a 50-word list analyzed by the number of phonemes correctly identified.
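The simulation logic can be sketched as follows; the detection criterion here (an observed drop greater than 10 points on a single list per session) is an assumption for illustration, not the authors' exact protocol:

```python
import random

def list_score(true_p, n_words):
    # Score on one n-word recognition list for a client whose true
    # ability is true_p (each word an independent Bernoulli trial)
    return sum(random.random() < true_p for _ in range(n_words)) / n_words

def detect_drop(n_words, n_trials=2000, crit=0.10):
    # Fraction of simulated retests where a true drop from 60% to 40%
    # shows up as an observed drop larger than `crit`
    hits = 0
    for _ in range(n_trials):
        if list_score(0.60, n_words) - list_score(0.40, n_words) > crit:
            hits += 1
    return hits / n_trials

random.seed(1)
rate_25 = detect_drop(25)    # short lists: imprecise scores, many misses
rate_125 = detect_drop(125)  # long lists: much more reliable detection
```

The qualitative result matches the abstract: longer lists sharply improve the odds of catching a genuine 20-point decline.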
Citanovic, M.; Bezlaj, H.
1994-01-01
This presentation describes essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures
Zhang, L.-C.; Patone, M.
2017-01-01
We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
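The Horvitz–Thompson approach referred to weights each observed unit by its inverse inclusion probability; a minimal sketch for estimating a graph total (toy values, single-stage sampling rather than T-stage snowball sampling):

```python
def ht_total(sample, pi):
    # Horvitz-Thompson estimator of a population total: each observed
    # value is weighted by the inverse of its inclusion probability
    return sum(y / pi[i] for i, y in sample)

# Toy graph: estimate the total degree from a node sample with known
# inclusion probabilities (all values here are illustrative)
degrees = [3, 1, 2, 4, 2]                   # true total = 12
pi = [0.5] * 5                              # equal inclusion probability
sample = [(0, degrees[0]), (3, degrees[3])]  # nodes 0 and 3 observed
estimate = ht_total(sample, pi)             # (3 + 4) / 0.5 = 14.0
```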
Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.
2016-12-01
The melting temperatures of materials in the Earth's interior have significant implications in many areas of geophysics. Direct calculation of the melting point by atomic simulation faces a substantial hysteresis problem. To overcome this hysteresis, several independently founded melting-point determination methods are available, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations between these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Then, combining an experimental data point and/or a previous melting-point determination method, we apply this model to derive the high-pressure melting curves for several lower mantle minerals with less computational effort than previous methods alone require. In this way, some polyatomic minerals at extreme pressures that were previously almost intractable can now be calculated fully from first principles.
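High-pressure melting curves of the kind derived here are often summarized with the empirical Simon–Glatzel form; the sketch below illustrates that end product, not the authors' Gibbs-surface construction, and the parameters are placeholders:

```python
def simon_glatzel(p_gpa, t0, a, c):
    # Simon-Glatzel melting curve: T_m(P) = T0 * (1 + P/a)^(1/c),
    # with P in GPa, reference melting point T0 in K
    return t0 * (1.0 + p_gpa / a) ** (1.0 / c)

# Placeholder parameters, not fitted to any specific mineral
t0, a, c = 2500.0, 30.0, 2.5
tm_surface = simon_glatzel(0.0, t0, a, c)   # equals t0 at zero pressure
tm_deep = simon_glatzel(60.0, t0, a, c)     # melting point rises with depth
```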
BLOCH, Claude; DE DOMINICIS, Cyrano
1959-01-01
Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero-density limit, it reduces to the Beth-Uhlenbeck expression for the second virial coefficient. For a system of fermions in the zero-temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation to arbitrary temperature. Reprint of a paper published in Nuclear Physics 10 (1959) 509-526.
Jarungthammachote, S.; Dutta, A.
2008-01-01
Spouted beds have found many applications, one of which is gasification. In this paper, the gasification processes of conventional and modified spouted bed gasifiers were considered. The conventional spouted bed is a central jet spouted bed, while the modified spouted beds are the circular split spouted bed and the spout-fluid bed. The Gibbs free energy minimization method was used to predict the composition of the producer gas. The six major components, CO, CO₂, CH₄, H₂O, H₂ and N₂, were determined in the producer gas mixture. The results showed that the carbon conversion in the gasification process plays an important role in the model. A modified model was developed by incorporating the carbon conversion into the constraint equations and the energy balance calculation; its results showed clear improvements. The higher heating values (HHV) were also calculated and compared with experimental values. Good agreement between calculated and experimental HHV was observed, especially for the circular split spouted bed and the spout-fluid bed.
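The Gibbs minimization principle used in the paper can be sketched on a hypothetical two-species system A ⇌ B (ideal gas, fixed T and p): the equilibrium composition is the one minimizing the total Gibbs energy subject to mole balance. All numbers below (temperature, ΔG°) are illustrative assumptions, not the paper's producer-gas model, which applies the same idea to six species with element-balance constraints.

```python
import math

R, T = 8.314, 1073.0   # J/(mol K); a typical gasification temperature (assumption)
dG0 = -5000.0          # J/mol, hypothetical standard Gibbs energy of A -> B

def total_gibbs(nB, ntot=1.0):
    """Dimensionless total Gibbs energy G/RT (up to a constant) for A <-> B."""
    nA = ntot - nB
    # Ideal-mixing terms plus the standard-state term carried by B.
    return (nA * math.log(nA / ntot) + nB * math.log(nB / ntot)
            + nB * dG0 / (R * T))

# Crude minimization by grid search (a real solver, e.g. scipy.optimize,
# would be used in practice, especially with element-balance constraints).
nB_grid = [i / 100000 for i in range(1, 100000)]
xB = min(nB_grid, key=total_gibbs)          # equilibrium mole fraction of B

K = math.exp(-dG0 / (R * T))                # analytical equilibrium constant
# The minimizer should approach the analytical result xB = K / (1 + K).
```

The minimum reproduces the law of mass action, which is the consistency check that makes Gibbs minimization attractive: no reaction stoichiometry needs to be specified in advance.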
Vecchio, Stefano; Brunetti, Bruno
2009-01-01
The vapor pressures of solid and liquid 2,4- and 3,4-dinitrobenzoic acids were determined by torsion-effusion and by thermogravimetry under isothermal and non-isothermal conditions, respectively. From the temperature dependence of the vapor pressure derived from the experimental torsion-effusion and thermogravimetry data, the molar enthalpies of sublimation Δ_cr^g H_m^0(T) and vaporization Δ_l^g H_m^0(T) were determined at the middle of the respective temperature intervals. The melting temperatures and molar enthalpies of fusion of these compounds were measured by d.s.c. Finally, the results obtained by all the proposed methods were corrected to the reference temperature of 298.15 K using the estimated heat capacity differences between gas and liquid for the vaporization experiments and between gas and solid for the sublimation experiments. From these, the averages of the standard (p° = 0.1 MPa) molar enthalpies, entropies and Gibbs free energies of sublimation at 298.15 K were derived.
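The reduction of the sublimation quantities to the reference temperature follows the standard Kirchhoff-type correction; the sketch below shows the generic form of the relations (with Δ_cr^g C_{p,m}^0 the estimated gas-solid heat capacity difference), not the paper's exact working equations:

```latex
\Delta_{\mathrm{cr}}^{\mathrm{g}} H_m^0(298.15\ \mathrm{K})
   = \Delta_{\mathrm{cr}}^{\mathrm{g}} H_m^0(T)
   + \Delta_{\mathrm{cr}}^{\mathrm{g}} C_{p,m}^0\,(298.15\ \mathrm{K}-T),
\qquad
\Delta_{\mathrm{cr}}^{\mathrm{g}} G_m^0
   = \Delta_{\mathrm{cr}}^{\mathrm{g}} H_m^0
   - T\,\Delta_{\mathrm{cr}}^{\mathrm{g}} S_m^0 .
```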
Umirzakov, I. H.
2018-01-01
The author comments on an article by Woodcock (Int J Thermophys 35:1770-1784, 2014), who investigates the idea of a critical line instead of a single critical point, using argon as an example. In the introduction, Woodcock states that "The Van der Waals critical point does not comply with the Gibbs phase rule. Its existence is based upon a hypothesis rather than a thermodynamic definition". The present comment is a response to this statement. It demonstrates mathematically that the critical point is not merely a hypothesis used to fix the two parameters of the Van der Waals equation of state; rather, it is a direct consequence of the thermodynamic phase equilibrium conditions, which yield a single critical point. It is shown that these conditions imply that the first and second partial derivatives of pressure with respect to volume at constant temperature vanish at the critical point, which are the usual conditions for the existence of a critical point.
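For reference, the conditions mentioned are the standard textbook relations (not reproduced from the comment itself):

```latex
\left(\frac{\partial p}{\partial V}\right)_T = 0,
\qquad
\left(\frac{\partial^2 p}{\partial V^2}\right)_T = 0 .
```

Applied to the Van der Waals equation p = RT/(V - b) - a/V², these two conditions fix a unique critical point: V_c = 3b, T_c = 8a/(27Rb), p_c = a/(27b²).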
A SHORT-DURATION EVENT AS THE CAUSE OF DUST EJECTION FROM MAIN-BELT COMET P/2012 F5 (GIBBS)
Moreno, F. [Instituto de Astrofisica de Andalucia, CSIC, Glorieta de la Astronomia s/n, E-18008 Granada (Spain); Licandro, J.; Cabrera-Lavers, A., E-mail: fernando@iaa.es [Instituto de Astrofisica de Canarias, c/Via Lactea s/n, E-38200 La Laguna, Tenerife (Spain)
2012-12-10
We present observations and an interpretative model of the dust environment of the Main-Belt Comet P/2012 F5 (Gibbs). The narrow dust trails observed can be interpreted unequivocally as an impulsive event that took place around 2011 July 1, with an uncertainty of ±10 days, and a duration of less than a day, possibly of the order of a few hours. The best Monte Carlo dust model fits to the observed trail brightness imply ejection velocities in the range 8-10 cm s⁻¹ for particle sizes between 30 cm and 130 µm. This weak dependence of velocity on size contrasts with that expected from ice sublimation and agrees with that found recently for (596) Scheila, a likely impacted asteroid. The particles seen in the trail follow a power-law size distribution of index ≈ -3.7. Assuming that the slowest particles were ejected at the escape velocity of the nucleus, its size is constrained to about 200-300 m in diameter. The total ejected dust mass is ≳5 × 10⁸ kg, which represents approximately 4%-20% of the nucleus mass.
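The size constraint rests on equating the slowest ejection speed to the escape velocity. A quick order-of-magnitude check under an assumed bulk density (1000 kg/m³ is the author's assumption here, not a value from the paper):

```python
import math

# Escape velocity of a small spherical nucleus: v_esc = sqrt(2 G M / r),
# with M = (4/3) pi r^3 rho. Density and radius below are assumptions.
G = 6.674e-11            # m^3 kg^-1 s^-2, gravitational constant
rho = 1000.0             # kg/m^3, assumed bulk density
radius = 125.0           # m, i.e. a 250 m diameter nucleus

mass = (4.0 / 3.0) * math.pi * radius**3 * rho
v_esc = math.sqrt(2 * G * mass / radius)   # m/s
```

For these values v_esc comes out near 9 cm s⁻¹, consistent with the 8-10 cm s⁻¹ range of the slowest modeled particles, which is why the diameter constraint lands in the 200-300 m range.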
Mauro de Melo Júnior
2012-03-01
The influence of tidal and diel changes on the exchange of Petrolisthes armatus planktonic larvae was studied at the Catuama inlet, which represents an intermediate system between marine and estuarine environments in Northeast Brazil. To characterize larval abundance and vertical distribution, samplings were carried out in August 2001 at neap tide at 3 stations, at 3-hour intervals over 24 hours. Samples were taken at two or three depths at each station, with a plankton pump coupled to a 300 µm mesh net. Petrolisthes armatus zoeae I and II showed means of 26.3 ± 83.6 and 12 ± 38.8 ind m⁻³, respectively. During flood tides, the larvae were more concentrated in the midwater and at the surface, which prevented transport to inner regions. In contrast, during ebb tides, when the larvae were distributed in all three layers, the highest concentrations were found near the bottom, which prevented major export. The diel dynamics of the larval fluxes were characterized by vertical migration behavior associated with the tidal regime, suggesting that the development of this decapod apparently occurs on the inner shelf (rather than the outer shelf) off this peculiar ecosystem.
Brus, D.J.
2015-01-01
In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling
Lu, Xiuyuan; Van Roy, Benjamin
2017-01-01
Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
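Basic Thompson sampling is easy to state for a Bernoulli bandit, where the posterior is an exact Beta distribution; the sketch below shows that baseline case (hypothetical arm success rates). The paper's ensemble sampling replaces the exact posterior sample with a draw from a maintained ensemble of models, which is what makes the idea tractable for neural networks.

```python
import random

random.seed(0)
true_rates = [0.3, 0.7]   # hypothetical success probabilities of the two arms
alpha = [1, 1]            # Beta(alpha, beta) posterior parameters per arm
beta = [1, 1]
pulls = [0, 0]

for _ in range(2000):
    # Sample a plausible success rate from each arm's posterior, play the argmax.
    samples = [random.betavariate(alpha[a], beta[a]) for a in range(2)]
    arm = samples.index(max(samples))
    reward = 1 if random.random() < true_rates[arm] else 0
    # Conjugate Beta-Bernoulli update of the chosen arm's posterior.
    alpha[arm] += reward
    beta[arm] += 1 - reward
    pulls[arm] += 1
# After enough rounds, the better arm (index 1) dominates the pulls.
```

Sampling from the posterior, rather than acting on its mean, is what drives exploration: an uncertain arm occasionally produces a high sample and gets tried.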
Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees
1993-01-01
In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples...... to determine how many languages from each phylum should be selected, given any required sample size.