Microeconomics: Equilibrium and Efficiency
Ten Raa, T.
2013-01-01
Microeconomics: Equilibrium and Efficiency teaches how to apply microeconomic theory in an innovative, intuitive and concise way. Using real-world, empirical examples, this book not only covers the building blocks of the subject but also helps readers gain a broad understanding of microeconomic theory and
Equilibrium sampling by reweighting nonequilibrium simulation trajectories.
Yang, Cheng; Wan, Biao; Xu, Shun; Wang, Yanting; Zhou, Xin
2016-03-01
Based on equilibrium molecular simulations, it is usually difficult to efficiently visit the whole conformational space of complex systems, which are separated into metastable regions by high free energy barriers. Nonequilibrium simulations can enhance transitions among these metastable regions and then be applied to sample equilibrium distributions in complex systems, since the associated nonequilibrium effects can be removed by employing the Jarzynski equality (JE). Here we present such a systematic method, named reweighted nonequilibrium ensemble dynamics (RNED), to efficiently sample equilibrium conformations. The RNED is a combination of the JE and our previous reweighted ensemble dynamics (RED) method. The original JE reproduces equilibrium from many nonequilibrium trajectories but requires that the initial distribution of these trajectories be the equilibrium one. The RED reweights many equilibrium trajectories from an arbitrary initial distribution to obtain the equilibrium distribution, whereas the RNED combines the advantages of the two methods, reproducing equilibrium from many nonequilibrium simulation trajectories with an arbitrary initial conformational distribution. We illustrate the application of the RNED in a toy model and in a Lennard-Jones fluid to detect its liquid-solid phase coexistence. The results indicate that the RNED substantially extends the applicability of both the original JE and the RED to equilibrium sampling of complex systems.
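The JE-based reweighting at the heart of such methods can be sketched in a few lines: equilibrium averages are recovered from nonequilibrium trajectories by exponentially down-weighting each trajectory according to its accumulated work. This is a minimal illustration with synthetic work values, not the RNED algorithm itself:

```python
import math
import random

def jarzynski_reweight(observables, works, beta=1.0):
    """Equilibrium average from nonequilibrium trajectories:
    <A>_eq = <A exp(-beta W)> / <exp(-beta W)>."""
    weights = [math.exp(-beta * w) for w in works]
    norm = sum(weights)
    return sum(a * w for a, w in zip(observables, weights)) / norm

# Synthetic demo: uniform work values in [0, 2] kT; trajectories that
# dissipated more work contribute less to the equilibrium estimate.
random.seed(1)
works = [random.uniform(0.0, 2.0) for _ in range(10000)]
avg_work_eq = jarzynski_reweight(works, works)  # reweighted mean work
print(round(avg_work_eq, 2))
```

The reweighted mean falls below the plain arithmetic mean (1.0 here) because high-work, strongly dissipative trajectories are suppressed.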
Equilibrium Molecular Thermodynamics from Kirkwood Sampling
Somani, Sandeep; Okamoto, Yuko; Ballard, Andrew J.; Wales, David J.
2015-01-01
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, wher...
Sharp Efficiency for Vector Equilibrium Problems on Banach Spaces
Directory of Open Access Journals (Sweden)
Si-Huan Li
2013-01-01
The concept of sharp efficient solution for vector equilibrium problems on Banach spaces is proposed. Moreover, the Fermat rules for local efficient solutions of vector equilibrium problems are extended to the sharp efficient solutions by means of the Clarke generalized differentiation and the normal cone. As applications, some necessary optimality conditions and sufficient optimality conditions for local sharp efficient solutions of a vector optimization problem with an abstract constraint and a vector variational inequality are obtained, respectively.
Financial markets theory equilibrium, efficiency and information
Barucci, Emilio
2017-01-01
This work, now in a thoroughly revised second edition, presents the economic foundations of financial markets theory from a mathematically rigorous standpoint and offers a self-contained critical discussion based on empirical results. It is the only textbook on the subject to include more than two hundred exercises, with detailed solutions to selected exercises. Financial Markets Theory covers classical asset pricing theory in great detail, including utility theory, equilibrium theory, portfolio selection, mean-variance portfolio theory, CAPM, CCAPM, APT, and the Modigliani-Miller theorem. Starting from an analysis of the empirical evidence on the theory, the authors provide a discussion of the relevant literature, pointing out the main advances in classical asset pricing theory and the new approaches designed to address asset pricing puzzles and open problems (e.g., behavioral finance). Later chapters in the book contain more advanced material, including on the role of information in financial markets, non-c...
Non-equilibrium microwave plasma for efficient high temperature chemistry
van den Bekerom, D.C.M.; den Harder, N.; Minea, T.; Palomares Linares, J.M.; Bongers, W.; van de Sanden, M.C.M.; van Rooij, G.J.
2017-01-01
This article describes a flowing microwave reactor that is used to drive efficient non-equilibrium chemistry for the application of conversion/activation of stable molecules such as CO2, N2 and CH4. The goal of the procedure described here is to measure the in situ gas temperature and gas
Non-equilibrium Microwave Plasma for Efficient High Temperature Chemistry.
van den Bekerom, Dirk; den Harder, Niek; Minea, Teofil; Gatti, Nicola; Linares, Jose Palomares; Bongers, Waldo; van de Sanden, Richard; van Rooij, Gerard
2017-08-01
A flowing microwave plasma based methodology for converting electric energy into internal and/or translational modes of stable molecules with the purpose of efficiently driving non-equilibrium chemistry is discussed. The advantage of a flowing plasma reactor is that continuous chemical processes can be driven with the flexibility of startup times in the seconds timescale. The plasma approach is generically suitable for conversion/activation of stable molecules such as CO2, N2 and CH4. Here the reduction of CO2 to CO is used as a model system: the complementary diagnostics illustrate how a baseline thermodynamic equilibrium conversion can be exceeded by the intrinsic non-equilibrium from high vibrational excitation. Laser (Rayleigh) scattering is used to measure the reactor temperature and Fourier Transform Infrared Spectroscopy (FTIR) to characterize in situ internal (vibrational) excitation as well as the effluent composition to monitor conversion and selectivity.
Equilibrium sampling for a thermodynamic assessment of contaminated sediments
DEFF Research Database (Denmark)
Mayer, Philipp; Nørgaard Schmidt, Stine; Mäenpää, Kimmo
Hydrophobic organic contaminants (HOCs) reaching the aquatic environment are largely stored in sediments. The risk of contaminated sediments is challenging to assess since traditional exhaustive extraction methods yield total HOC concentrations, whereas freely dissolved concentrations (Cfree) govern diffusive uptake and partitioning. Equilibrium sampling of sediment was introduced 15 years ago to measure Cfree, and it has since developed into a straightforward, precise and sensitive approach for determining Cfree and other exposure parameters that allow for thermodynamic assessment of polluted sediments. Glass jars with µm-thin silicone coatings on the inner walls can be used for ex situ equilibration, while a device housing several silicone-coated fibers can be used for in situ equilibration. In both cases, parallel sampling with varying silicone thicknesses can be applied to confirm … We will focus on the latest developments in equilibrium sampling concepts and methods. Further, we will explain how these approaches can provide a new basis for a thermodynamic assessment of polluted sediments.
Chen, Yunjie; Roux, Benoît
2014-09-21
Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such a hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct
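The essential structure of such a hybrid neMD-MC step, including the momentum reversals that preserve detailed balance, can be sketched on a toy one-dimensional double well. This is a schematic with a time-symmetric driving protocol of my own choosing, not the authors' exact construction:

```python
import math
import random

def U(x):
    """Double-well potential with a ~1 kT barrier at x = 0."""
    return (x * x - 1.0) ** 2

def dU(x):
    return 4.0 * x * (x * x - 1.0)

def nemd_mc_step(x, beta=1.0, n_md=100, dt=0.01, lam_max=2.0):
    """One hybrid neMD-MC step: draw a fresh Maxwellian momentum, run a
    driven trajectory with a time-symmetric ramp lam(t)*x, reverse the
    momentum at both ends, and accept with a Metropolis test on the total
    (kinetic + potential) energy change of the undriven Hamiltonian."""
    p = -random.gauss(0.0, 1.0 / math.sqrt(beta))   # reversal at the start
    xt, pt = x, p
    e0 = U(xt) + 0.5 * pt * pt
    for k in range(n_md):
        lam = lam_max * math.sin(math.pi * (k + 0.5) / n_md)  # up, then down
        pt -= 0.5 * dt * (dU(xt) + lam)   # leapfrog on U(x) + lam*x
        xt += dt * pt
        pt -= 0.5 * dt * (dU(xt) + lam)
    pt = -pt                                         # reversal at the end
    e1 = U(xt) + 0.5 * pt * pt
    if random.random() < math.exp(min(0.0, -beta * (e1 - e0))):
        return xt    # accept the driven proposal
    return x         # reject: keep the old configuration

random.seed(2)
x, samples = 1.0, []
for _ in range(2000):
    x = nemd_mc_step(x)
    samples.append(x)
frac_left = sum(1 for s in samples if s < 0.0) / len(samples)
print(0.0 < frac_left < 1.0)   # the drive helps the walker visit both wells
```

Because the driving protocol is time-symmetric, the leapfrog integrator is volume-preserving, and momenta are reversed at both ends, the composite move is its own inverse and the Metropolis test keeps the Boltzmann distribution invariant.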
Equilibrium sampling of hydrophobic organic chemicals in sediments: challenges and new approaches
DEFF Research Database (Denmark)
Schaefer, S.; Mayer, Philipp; Becker, B.
2015-01-01
Freely dissolved concentrations (Cfree) are considered to be the effective concentrations for diffusive uptake and partitioning, and they can be measured by equilibrium sampling. We have thus applied glass jars with multiple coating thicknesses for equilibrium sampling of HOCs in sediment samples from various sites in different German rivers...
Non-equilibrium umbrella sampling applied to force spectroscopy of soft matter.
Gao, Y X; Wang, G M; Williams, D R M; Williams, Stephen R; Evans, Denis J; Sevick, E M
2012-02-07
Physical systems often respond on a timescale which is longer than that of the measurement. This is particularly true in soft matter, where direct experimental measurement, for example in force spectroscopy, drives the soft system out of equilibrium and provides a non-equilibrium measure. Here we demonstrate experimentally for the first time that equilibrium physical quantities (such as the mean square displacement) can be obtained from non-equilibrium measurements via umbrella sampling. Our model experimental system is a bead fluctuating in a time-varying optical trap. We also show this for simulated force spectroscopy on a complex soft molecule: a piston-rotaxane.
Towards Cost-efficient Sampling Methods
Peng, Luo; Yongli, Li; Chong, Wu
2014-01-01
The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small part of the vertices, those with high node degree, can possess most of the structural information of a network. The two proposed sampling methods are efficient in sampling nodes with high degree. The first new sampling method improves on the stratified random sampling method and...
Study on direct determination of uranium and efficient equilibrium factor by gamma-ray spectrometer
International Nuclear Information System (INIS)
Liu Chunkui
1990-01-01
The test principle, test setup and surveying methods for conducting gamma-ray spectrometry on a conveyer are presented. The conversion coefficient of the spectrometer has been found by using dual linear regression analysis of uranium and radon and their higher and lower bands of gamma-ray spectra. The efficient equilibrium factor can be quickly determined, and the direct determination of uranium under non-equilibrium conditions of uranium and radium can be made.
Willem H. Buiter
1987-01-01
Excess volatility tests for financial market efficiency maintain the hypothesis of risk-neutrality. This permits the specification of the benchmark efficient market price as the present discounted value of expected future dividends. By departing from the risk-neutrality assumption in a stripped-down version of Lucas's general equilibrium asset pricing model, I show that asset prices determined in a competitive asset market and efficient by construction can nevertheless violate the variance bo...
Toward cost-efficient sampling methods
Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie
2015-09-01
The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small fraction of high-degree vertices can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new methods, we compare them with existing sampling methods on three commonly used simulated networks (scale-free, random, and small-world) and on two real networks. The experimental results show that the two proposed methods recover the true network structure characteristics, as reflected by the clustering coefficient, Bonacich centrality and average path length, much better than the existing methods, especially when the sampling rate is low.
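The core idea, drawing a disproportionate share of the sample from the high-degree stratum, can be sketched as follows. This is a simplified illustration of degree-stratified sampling; the stratum cut-off and fractions are my own choices, not the paper's:

```python
import random
from collections import defaultdict

def degree_stratified_sample(adjacency, n, frac_hub=0.4, seed=0):
    """Stratify nodes by degree and draw a fixed share of the sample from
    the top stratum, since hubs carry most of the structural information."""
    rng = random.Random(seed)
    nodes = sorted(adjacency, key=lambda v: len(adjacency[v]), reverse=True)
    cut = max(1, len(nodes) // 50)          # top ~2% by degree = hub stratum
    hubs, rest = nodes[:cut], nodes[cut:]
    k_hub = min(len(hubs), max(1, int(n * frac_hub)))
    return rng.sample(hubs, k_hub) + rng.sample(rest, n - k_hub)

# Toy star graph: node 0 is a hub connected to all 49 leaves.
adj = defaultdict(set)
for v in range(1, 50):
    adj[0].add(v)
    adj[v].add(0)
picked = degree_stratified_sample(adj, n=5)
print(0 in picked)   # the hub is captured even at a 10% sampling rate
```

Plain uniform sampling of 5 of 50 nodes would miss the hub 90% of the time; stratifying by degree makes its inclusion certain here.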
DEFF Research Database (Denmark)
Mäenpää, Kimmo; Leppänen, Matti T.; Reichenberg, Fredrik
2011-01-01
with respect to equilibrium partitioning concentrations in lipids (Clipid,partitioning): (i) Solid phase microextraction in the headspace above the sample (HS-SPME) required optimization for its application to PCBs, and it was calibrated above external partitioning standards in olive oil. (ii) Equilibrium...
Forsythe, Robert; Suchanek, Gerry L
1984-01-01
The recent literature on economies with an incomplete set of markets has been devoted to the study of the efficiency properties of collective stockholder decision mechanisms for guiding the behavior of firms when the restrictive Ekern-Wilson spanning condition is not satisfied. The results have been essentially negative; a majority voting rule and controlling interest rules will not yield efficient equilibrium allocations in general. However, in a recent paper, Helpman and Razin (1978) sugges...
Jahnke, Annika; MacLeod, Matthew; Wickström, Håkan; Mayer, Philipp
2014-10-07
Equilibrium partitioning (EqP) theory is currently the most widely used approach for linking sediment pollution by persistent hydrophobic organic chemicals to bioaccumulation. Most applications of the EqP approach assume (I) a generic relationship between organic carbon-normalized chemical concentrations in sediments and lipid-normalized concentrations in biota and (II) that bioaccumulation does not induce levels exceeding those expected from equilibrium partitioning. Here, we demonstrate that assumption I can be obviated by equilibrating a silicone sampler with chemicals in sediment, measuring chemical concentrations in the silicone, and applying lipid/silicone partition ratios to yield concentrations in lipid at thermodynamic equilibrium with the sediment (CLip⇌Sed). Furthermore, we evaluated the validity of assumption II by comparing CLip⇌Sed of selected persistent, bioaccumulative and toxic pollutants (polychlorinated biphenyls (PCBs) and hexachlorobenzene (HCB)) to lipid-normalized concentrations for a range of biota from a Swedish background lake. PCBs in duck mussels, roach, eel, pikeperch, perch and pike were mostly below the equilibrium partitioning level relative to the sediment, i.e., lipid-normalized concentrations were ≤CLip⇌Sed, whereas HCB was near equilibrium between biota and sediment. Equilibrium sampling allows straightforward, sensitive and precise measurement of CLip⇌Sed. We propose CLip⇌Sed as a metric of the thermodynamic potential for bioaccumulation of persistent organic chemicals from sediment useful to prioritize management actions to remediate contaminated sites.
Göppel, Tobias; Palyulin, Vladimir V; Gerland, Ulrich
2016-07-27
An out-of-equilibrium physical environment can drive chemical reactions into thermodynamically unfavorable regimes. Under prebiotic conditions such a coupling between physical and chemical non-equilibria may have enabled the spontaneous emergence of primitive evolutionary processes. Here, we study the coupling efficiency within a theoretical model that is inspired by recent laboratory experiments, but focuses on generic effects arising whenever reactant and product molecules have different transport coefficients in a flow-through system. In our model, the physical non-equilibrium is represented by a drift-diffusion process, which is a valid coarse-grained description for the interplay between thermophoresis and convection, as well as for many other molecular transport processes. As a simple chemical reaction, we consider a reversible dimerization process, which is coupled to the transport process by different drift velocities for monomers and dimers. Within this minimal model, the coupling efficiency between the non-equilibrium transport process and the chemical reaction can be analyzed in all parameter regimes. The analysis shows that the efficiency depends strongly on the Damköhler number, a parameter that measures the relative timescales associated with the transport and reaction kinetics. Our model and results will be useful for a better understanding of the conditions for which non-equilibrium environments can provide a significant driving force for chemical reactions in a prebiotic setting.
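The controlling parameter can be made concrete: the Damköhler number compares the transport residence time with the reaction timescale. This is a generic textbook form with illustrative symbols; the paper's precise definition may differ:

```python
def damkohler(k_reaction, drift_velocity, length):
    """Da = tau_transport / tau_reaction = (L / v) * k.
    Da >> 1: the reaction equilibrates locally within the flow;
    Da << 1: transport sweeps reactants through before they react."""
    return (length / drift_velocity) * k_reaction

print(damkohler(2.0, 0.5, 1.0))   # reaction fast relative to transport
```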
Stability of equilibrium states in finite samples of smectic C* liquid crystals
International Nuclear Information System (INIS)
Stewart, I W
2005-01-01
Equilibrium solutions for a sample of ferroelectric smectic C (SmC*) liquid crystal in the 'bookshelf' geometry under the influence of a tilted electric field will be presented. A linear stability criterion is identified and used to confirm stability for typical materials possessing either positive or negative dielectric anisotropy. The theoretical response times for perturbations to the equilibrium solutions are calculated numerically and found to be consistent with estimates for response times in ferroelectric smectic C liquid crystals reported elsewhere in the literature for non-tilted fields
Stability of equilibrium states in finite samples of smectic C* liquid crystals
Energy Technology Data Exchange (ETDEWEB)
Stewart, I W [Department of Mathematics, University of Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XH (United Kingdom)
2005-03-04
Equilibrium solutions for a sample of ferroelectric smectic C (SmC*) liquid crystal in the 'bookshelf' geometry under the influence of a tilted electric field will be presented. A linear stability criterion is identified and used to confirm stability for typical materials possessing either positive or negative dielectric anisotropy. The theoretical response times for perturbations to the equilibrium solutions are calculated numerically and found to be consistent with estimates for response times in ferroelectric smectic C liquid crystals reported elsewhere in the literature for non-tilted fields.
Suh, Donghyuk; Radak, Brian K.; Chipot, Christophe; Roux, Benoît
2018-01-01
Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elements: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step (i). A symmetric two-ends momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance sampling by relying on the accelerated MD and solute tempering schemes. It is also combined with the adaptive biasing force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield
Sampling efficiency of the Moore egg collector
Worthington, Thomas A.; Brewer, Shannon K.; Grabowski, Timothy B.; Mueller, Julia
2013-01-01
Quantitative studies focusing on the collection of semibuoyant fish eggs, which are associated with a pelagic broadcast-spawning reproductive strategy, are often conducted to evaluate reproductive success. Many of the fishes in this reproductive guild have suffered significant reductions in range and abundance. However, the efficiency of the sampling gear used to evaluate reproduction is often unknown and renders interpretation of the data from these studies difficult. Our objective was to assess the efficiency of a modified Moore egg collector (MEC) using field and laboratory trials. Gear efficiency was assessed by releasing a known quantity of gellan beads with a specific gravity similar to that of eggs from representatives of this reproductive guild (e.g., the Arkansas River Shiner Notropis girardi) into an outdoor flume and recording recaptures. We also used field trials to determine how discharge and release location influenced gear efficiency given current methodological approaches. The flume trials indicated that gear efficiency ranged between 0.0% and 9.5% (n = 57) in a simple 1.83-m-wide channel and was positively related to discharge. Efficiency in the field trials was lower, ranging between 0.0% and 3.6%, and was negatively related to bead release distance from the MEC and discharge. The flume trials indicated that the gellan beads were not distributed uniformly across the channel, although aggregation was reduced at higher discharges. This clustering of passively drifting particles should be considered when selecting placement sites for an MEC; further, the use of multiple devices may be warranted in channels with multiple areas of concentrated flow.
Kim, Pil-Gon; Roh, Ji-Yeon; Hong, Yongseok; Kwon, Jung-Hwan
2017-10-01
Passive sampling can be applied for measuring the freely dissolved concentration of hydrophobic organic chemicals (HOCs) in soil pore water. When using passive samplers under field conditions, however, there are factors that might affect passive sampling equilibrium and kinetics, such as soil water saturation. To determine the effects of soil water saturation on passive sampling, the equilibrium and kinetics of passive sampling were evaluated by observing changes in the distribution coefficient between sampler and soil (Ksampler/soil) and the uptake rate constant (ku) at various soil water saturations. Polydimethylsiloxane (PDMS) passive samplers were deployed into artificial soils spiked with seven selected polycyclic aromatic hydrocarbons (PAHs). In dry soil (0% water saturation), both Ksampler/soil and ku values were much lower than those in wet soils, likely due to the contribution of adsorption of PAHs onto soil mineral surfaces and the conformational changes in soil organic matter. For high molecular weight PAHs (chrysene, benzo[a]pyrene, and dibenzo[a,h]anthracene), both Ksampler/soil and ku values increased with increasing soil water saturation, whereas they decreased with increasing soil water saturation for low molecular weight PAHs (phenanthrene, anthracene, fluoranthene, and pyrene). Changes in the sorption capacity of soil organic matter with soil water content would be the main cause of the changes in passive sampling equilibrium. Henry's law constant could explain the different behaviors in uptake kinetics of the selected PAHs. The results of this study would be helpful when passive samplers are deployed under various soil water saturations.
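The two quantities tracked in the study connect through standard one-compartment uptake kinetics, in which the sampler approaches its equilibrium concentration exponentially at rate ku. This is a generic model sketch with illustrative parameters, not the study's fitted values:

```python
import math

def sampler_conc(t, c_eq, k_u):
    """First-order uptake toward sampler-soil equilibrium:
    C(t) = C_eq * (1 - exp(-k_u * t))."""
    return c_eq * (1.0 - math.exp(-k_u * t))

def t_95(k_u):
    """Time needed to reach 95% of the equilibrium concentration."""
    return math.log(20.0) / k_u

# A smaller uptake rate constant (as observed in dry soil) stretches the
# deployment time needed before Ksampler/soil can be read off at equilibrium.
print(round(t_95(0.10), 1))   # in days if k_u is per day
print(round(t_95(0.02), 1))
```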
Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation
Energy Technology Data Exchange (ETDEWEB)
Nilmeier, J. P.; Crooks, G. E.; Minh, D. D. L.; Chodera, J. D.
2011-10-24
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
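The modified acceptance rule can be stated compactly: the Metropolis test is applied to the protocol work W of the driven proposal instead of an instantaneous energy difference. This is a minimal sketch of the criterion only, not of the driven dynamics that generate the candidates:

```python
import math
import random

def ncmc_accept(work, beta=1.0, rng=random):
    """Accept a nonequilibrium candidate with probability min(1, exp(-beta*W)),
    which preserves the equilibrium distribution for properly constructed
    driven proposals."""
    return rng.random() < math.exp(min(0.0, -beta * work))

random.seed(0)
n = 20000
frac = sum(ncmc_accept(0.5) for _ in range(n)) / n
print(round(frac, 2))   # close to exp(-0.5), about 0.61
```

Proposals that dissipate little work are accepted almost always, which is why slow, gently driven switching protocols can reach far higher acceptance rates than instantaneous moves of the same size.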
DEFF Research Database (Denmark)
Jahnke, Annika; Mayer, Philipp; Adolfsson-Erici, Margaretha
2011-01-01
… of the equilibrium sampling technique, while at the same time confirming that the fugacity capacity of these lipid-rich tissues for PCBs was dominated by the lipid fraction. Equilibrium sampling was also applied to homogenates of the same fish tissues. The PCB concentrations in the PDMS were 1.2 to 2.0 times higher in the homogenates (statistically significant in 18 of 21 cases, p …); homogenization increased the chemical activity of the PCBs and decreased the fugacity capacity of the tissue. This observation has implications for equilibrium sampling and for partition coefficients determined using tissue homogenates.
Institute of Scientific and Technical Information of China (English)
SU Qiong; ZHENG Rui; CHEN Yong; CHENG Jian-Ping
2004-01-01
This paper reports the observed changes in the equilibrium factor between 226Ra and 222Rn as a function of the sealing time of the samples. The samples include soil, raw coal, mineral water, cement, rock, etc. In particular, the concepts of "pre-equilibrium time" and "pre-equilibrium factor" are put forward, and methods of measuring and processing the data are given that can be used to rapidly report the 226Ra activity of samples with an unknown equilibrium factor. It is concluded that, using the methods given in the paper, a test report can be completed within 3–7 days, instead of one month, after receiving a sample whose activity is not lower than the LLD of the spectrometer.
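The pre-equilibrium factor follows directly from 222Rn ingrowth after sealing: with the 222Rn half-life of about 3.82 days, the fraction of secular equilibrium reached after t days is 1 - exp(-λt). This is a sketch of the standard ingrowth relation, not the paper's exact data-processing method:

```python
import math

HALF_LIFE_RN222 = 3.8235                    # days
LAM = math.log(2.0) / HALF_LIFE_RN222       # 222Rn decay constant, 1/day

def pre_equilibrium_factor(t_days):
    """Fraction of 226Ra-222Rn secular equilibrium reached t days after
    sealing the sample: 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-LAM * t_days)

# Dividing a measured 222Rn-daughter activity by this factor yields the
# 226Ra activity well before the ~1 month wait for full equilibrium.
for t in (3, 7, 30):
    print(t, round(pre_equilibrium_factor(t), 3))
```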
Muijs, B.; Jonker, M.T.O.
2012-01-01
Over the past couple of years, several analytical methods have been developed for assessing the bioavailability of environmental contaminants in sediments and soils. Comparison studies suggest that equilibrium passive sampling methods generally provide the better estimates of internal concentrations
DEFF Research Database (Denmark)
Schäfer, Sabine; Antoni, Catherine; Möhlenkamp, Christel
2015-01-01
Equilibrium sampling can be applied to measure freely dissolved concentrations (cfree) of hydrophobic organic chemicals (HOCs) that are considered effective concentrations for diffusive uptake and partitioning. It can also yield concentrations in lipids at thermodynamic equilibrium with the sediment (Clip⇔sed) by multiplying concentrations in the equilibrium sampling polymer with lipid to polymer partition coefficients. We have applied silicone coated glass jars for equilibrium sampling of seven 'indicator' polychlorinated biphenyls (PCBs) in sediment samples from ten locations along … bioaccumulation and the thermodynamic potential of sediment-associated HOCs for partitioning into lipids. This novel approach gives clearer and more consistent results compared to conventional approaches that are based on total concentrations in sediment and biota-sediment accumulation factors. We propose …
International Nuclear Information System (INIS)
Lu, Yingying; Liu, Yu; Zhou, Meifang
2017-01-01
This paper explores the rebound effect of different energy types in China based on a static computable general equilibrium model. A one-off 5% energy efficiency improvement is imposed on five different types of energy, respectively, in all 135 production sectors in China. The rebound effect is measured both at the production level and at the economy-wide level for each type of energy. The results show that improving the efficiency of electricity use has the largest positive impact on GDP among the five energy types. Inter-fuel substitutability does not affect the macroeconomic results significantly, but the long-run impact is usually greater than the short-run impact. Among the export-oriented sectors, those that are capital-intensive experience a large negative shock in the short run, while those that are labour-intensive are hurt in the long run. There is no "backfire" effect; however, improving the efficiency of electricity use can cause negative rebound, which implies that improving the energy efficiency of electricity use might be a good policy choice under China's current energy structure. In general, the macro-level rebound is larger than the production-level rebound, and primary energy goods show a larger rebound effect than secondary energy goods. In addition, the paper points out that policy makers in China should consider the rebound effect in the long term rather than the short term. Energy efficiency policy would be a good and effective choice for energy conservation in China while inter-fuel substitutability remains low. - Highlights: • Primary energy goods show larger rebound effect than secondary energy goods. • Improving efficiency of using electricity can cause negative rebound. • The energy efficiency policy would be an effective policy choice for China. • Policy-makers should consider the rebound effect in the longer term.
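The rebound metric used in studies like this one is commonly defined as the share of the expected (engineering) energy savings eroded by the economy's response. The sketch below is a generic illustration of that definition with made-up numbers, not code or data from the paper.

```python
def rebound_effect(expected_savings, actual_savings):
    """Rebound = (expected - actual) / expected energy savings.

    0 means the full engineering saving is realized; values in (0, 1)
    mean partial erosion; > 1 is "backfire" (energy use rises);
    < 0 is "negative rebound" (savings exceed the engineering estimate).
    """
    return (expected_savings - actual_savings) / expected_savings

# Illustrative numbers only: an efficiency gain expected to save
# 100 PJ that ends up saving 65 PJ implies a 35% rebound.
print(rebound_effect(100.0, 65.0))  # 0.35
```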
Jahnke, Annika; Mayer, Philipp; Adolfsson-Erici, Margaretha; McLachlan, Michael S
2011-07-01
Equilibrium sampling of organic pollutants into the silicone polydimethylsiloxane (PDMS) has recently been applied in biological tissues including fish. Pollutant concentrations in PDMS can then be multiplied by lipid/PDMS distribution coefficients (DLipid,PDMS) to obtain concentrations in fish lipids. In the present study, PDMS thin films were used for equilibrium sampling of polychlorinated biphenyls (PCBs) in intact tissue of two eels and one salmon. A classical exhaustive extraction technique to determine lipid-normalized PCB concentrations, which assigns the body burden of the chemical to the lipid fraction of the fish, was additionally applied. Lipid-based PCB concentrations obtained by equilibrium sampling were 85 to 106% (Norwegian Atlantic salmon), 108 to 128% (Baltic Sea eel), and 51 to 83% (Finnish lake eel) of those determined using total extraction. This supports the validity of the equilibrium sampling technique, while at the same time confirming that the fugacity capacity of these lipid-rich tissues for PCBs was dominated by the lipid fraction. Equilibrium sampling was also applied to homogenates of the same fish tissues. The PCB concentrations in the PDMS were 1.2 to 2.0 times higher in the homogenates (statistically significant in 18 of 21 cases, p equilibrium sampling and partition coefficients determined using tissue homogenates. Copyright © 2011 SETAC.
Su, Ji; Yang, Lisha; Lu, Mi; Lin, Hongfei
2015-03-01
A highly efficient, reversible hydrogen storage-evolution process has been developed based on the ammonium bicarbonate/formate redox equilibrium over the same carbon-supported palladium nanocatalyst. This heterogeneously catalyzed hydrogen storage system is comparable to the counterpart homogeneous systems and has shown fast reaction kinetics of both the hydrogenation of ammonium bicarbonate and the dehydrogenation of ammonium formate under mild operating conditions. By adjusting temperature and pressure, the extent of hydrogen storage and evolution can be well controlled in the same catalytic system. Moreover, the hydrogen storage system based on aqueous-phase ammonium formate is advantageous owing to its high volumetric energy density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Baltussen, H.A.; David, F.; Sandra, P.J.F.; Janssen, J.G.M.; Cramers, C.A.M.G.
1999-01-01
A novel approach for sample enrichment, namely, equilibrium sorptive enrichment (ESE), is presented. A packed bed of sorption (or partitioning) material is used to enrich volatiles from gaseous samples. Normally, air sampling is stopped before breakthrough occurs, but this approach is not very
Lin, Wei; Jiang, Ruifen; Shen, Yong; Xiong, Yaxin; Hu, Sizi; Xu, Jianqiao; Ouyang, Gangfeng
2018-04-13
Pre-equilibrium passive sampling is a simple and promising technique for studying sampling kinetics, which is crucial for determining the distribution, transfer, and fate of hydrophobic organic compounds (HOCs) in environmental water and organisms. Environmental water samples contain complex matrices that complicate the traditional calibration process for obtaining accurate rate constants. This study proposed a quantitative structure-activity relationship (QSAR) model to predict the sampling rate constants of HOCs (polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), and pesticides) in aqueous systems containing complex matrices. A homemade flow-through system was established to simulate an actual aqueous environment containing dissolved organic matter (DOM), i.e., humic acid (HA) and (2-hydroxypropyl)-β-cyclodextrin (β-HPCD), and to obtain the experimental rate constants. A QSAR model using Genetic Algorithm-Multiple Linear Regression (GA-MLR) was then developed to correlate the experimental rate constants with physicochemical parameters of the HOCs and DOM, which were calculated and selected as descriptors using Density Functional Theory (DFT) and Chem3D. The experimental results showed that the rate constants increased significantly as the concentration of DOM increased; enhancement factors of 70-fold and 34-fold were observed for the HOCs in HA and β-HPCD, respectively. The established QSAR model was validated as credible (R²adj = 0.862) and predictive (Q² = 0.835) in estimating the rate constants of HOCs for complex aqueous sampling, and a probable mechanism was developed by comparison with the reported theoretical study. The present study established a QSAR model of passive sampling rate constants and calibrated the effect of DOM on the sampling kinetics. Copyright © 2018 Elsevier B.V. All rights reserved.
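For a single descriptor, the MLR step of a QSAR fit reduces to ordinary least squares. The sketch below illustrates only that regression step with made-up descriptor and rate-constant values; the paper's GA-based descriptor selection is not shown.

```python
def ols_fit(x, y):
    """Ordinary least squares for one descriptor: y ≈ a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b  # intercept, slope

# Hypothetical descriptor values vs. log(rate constant):
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
a, b = ols_fit(x, y)
print(a, b)  # 1.0 2.0
```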
International Nuclear Information System (INIS)
Cannon, Cody; Dobson, Patrick; Conrad, Mark
2014-01-01
The Eastern Snake River Plain (ESRP) is an area of high regional heat flux due to the movement of the North American Plate over the Yellowstone Hotspot beginning ca. 16 Ma. Temperature gradients of 45-60 °C/km (up to double the global average) have been calculated from deep wells that penetrate the upper aquifer system (Blackwell 1989). Despite the high geothermal potential, thermal signatures from hot springs and wells are effectively masked by the rapid flow of cold groundwater through the highly permeable basalts of the Eastern Snake River Plain aquifer (ESRPA) (up to 500+ m thick). This preliminary study is part of an effort to more accurately predict temperatures of the ESRP deep thermal reservoir while accounting for the effects of the prolific cold-water aquifer system above. This study combines traditional geothermometry, mixing models, and a multicomponent equilibrium geothermometry (MEG) tool to investigate the geothermal potential of the ESRP. In March 2014, a collaborative team including members of the University of Idaho, the Idaho National Laboratory, and the Lawrence Berkeley National Laboratory collected 14 thermal water samples from and adjacent to the Eastern Snake River Plain. The preliminary results of chemical analyses and geothermometry applied to these samples are presented herein.
Energy Technology Data Exchange (ETDEWEB)
Cannon, Cody [Univ. of Idaho, Idaho Falls, ID (United States). Center for Advanced Studies; Wood, Thomas [Univ. of Idaho, Idaho Falls, ID (United States). Center for Advanced Studies; Neupane, Ghanashyam [Idaho National Lab. (INL), Idaho Falls, ID (United States). Center for Advanced Studies; McLing, Travis [Idaho National Lab. (INL), Idaho Falls, ID (United States). Center for Advanced Studies; Mattson, Earl [Idaho National Lab. (INL), Idaho Falls, ID (United States); Dobson, Patrick [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Conrad, Mark [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2014-10-01
The Eastern Snake River Plain (ESRP) is an area of high regional heat flux due to the movement of the North American Plate over the Yellowstone Hotspot beginning ca. 16 Ma. Temperature gradients of 45-60 °C/km (up to double the global average) have been calculated from deep wells that penetrate the upper aquifer system (Blackwell 1989). Despite the high geothermal potential, thermal signatures from hot springs and wells are effectively masked by the rapid flow of cold groundwater through the highly permeable basalts of the Eastern Snake River Plain aquifer (ESRPA) (up to 500+ m thick). This preliminary study is part of an effort to more accurately predict temperatures of the ESRP deep thermal reservoir while accounting for the effects of the prolific cold-water aquifer system above. This study combines traditional geothermometry, mixing models, and a multicomponent equilibrium geothermometry (MEG) tool to investigate the geothermal potential of the ESRP. In March 2014, a collaborative team including members of the University of Idaho, the Idaho National Laboratory, and the Lawrence Berkeley National Laboratory collected 14 thermal water samples from and adjacent to the Eastern Snake River Plain. The preliminary results of chemical analyses and geothermometry applied to these samples are presented herein.
DEFF Research Database (Denmark)
Belkadi, Abdelkrim; Yan, Wei; Moggia, Elsa
2013-01-01
Compositional reservoir simulations are widely used to simulate reservoir processes with strong compositional effects, such as gas injection. The equations of state (EoS) based phase equilibrium calculation is a time consuming part in this type of simulations. The phase equilibrium problem can....... Application of the shadow region method to skip stability analysis can further cut the phase equilibrium calculation time. Copyright 2013, Society of Petroleum Engineers....
DEFF Research Database (Denmark)
Lang, Susann-Cathrin; Hursthouse, Andrew; Mayer, Philipp
2015-01-01
Solid Phase Microextraction (SPME) was applied to provide the first large scale dataset of freely dissolved concentrations for 9 polycyclic aromatic hydrocarbons (PAHs) in Baltic Sea sediment cores. Polydimethylsiloxane (PDMS) coated glass fibers were used for ex-situ equilibrium sampling followed...
Schäfer, Sabine; Antoni, Catherine; Möhlenkamp, Christel; Claus, Evelyn; Reifferscheid, Georg; Heininger, Peter; Mayer, Philipp
2015-11-01
Equilibrium sampling can be applied to measure freely dissolved concentrations (cfree) of hydrophobic organic chemicals (HOCs) that are considered effective concentrations for diffusive uptake and partitioning. It can also yield concentrations in lipids at thermodynamic equilibrium with the sediment (clip⇌sed) by multiplying concentrations in the equilibrium sampling polymer by lipid-to-polymer partition coefficients. We applied silicone-coated glass jars for equilibrium sampling of seven 'indicator' polychlorinated biphenyls (PCBs) in sediment samples from ten locations along the River Elbe to measure cfree of PCBs and their clip⇌sed. For three sites, we then related clip⇌sed to lipid-normalized PCB concentrations (cbio,lip) that were determined independently by the German Environmental Specimen Bank in common bream, a fish species living in close contact with the sediment: (1) in all cases, cbio,lip were below clip⇌sed; (2) there was proportionality between the two parameters with high R² values (0.92-1.00); and (3) the slopes of the linear regressions were very similar between the three stations (0.297; 0.327; 0.390). These results confirm the close link between PCB bioaccumulation and the thermodynamic potential of sediment-associated HOCs for partitioning into lipids. This novel approach gives clearer and more consistent results compared to conventional approaches that are based on total concentrations in sediment and biota-sediment accumulation factors. We propose to apply equilibrium sampling for determining bioavailability and bioaccumulation potential of HOCs, since this technique can provide a thermodynamic basis for the risk assessment and management of contaminated sediments. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
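The conversion described above (polymer concentration times lipid-to-polymer partition coefficient) is a one-line calculation. The sketch below illustrates it; all congener names, concentrations, and partition coefficients are invented for illustration, not values from the study.

```python
def lipid_equilibrium_concentration(c_polymer, k_lip_polymer):
    """c_lip<=>sed = c_polymer * K_lip,polymer.

    c_polymer: concentration measured in the equilibrium sampling
    polymer (e.g. ng/g silicone); k_lip_polymer: lipid/polymer
    partition coefficient (dimensionless on a mass basis).
    """
    return c_polymer * k_lip_polymer

# Hypothetical PCB congener measurements (ng/g silicone) and
# assumed lipid/silicone partition coefficients:
c_pdms = {"PCB-28": 0.8, "PCB-153": 12.5}
k_lip = {"PCB-28": 25.0, "PCB-153": 30.0}

c_lip_sed = {pcb: lipid_equilibrium_concentration(c_pdms[pcb], k_lip[pcb])
             for pcb in c_pdms}
print(c_lip_sed)  # {'PCB-28': 20.0, 'PCB-153': 375.0}
```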
Sampling efficiency for species composition assessments using the ...
African Journals Online (AJOL)
A pilot survey to determine the sampling efficiency of the wheel-point method, using the nearest-plant method, for assessing species composition (in terms of replicate similarity relative to sampling intensity, and total sampling time) was conducted on three plot sizes (20 x 20 m, 30 x 30 m, 40 x 40 m) at two sites in a semi-arid savanna.
DEFF Research Database (Denmark)
Jahnke, Annika; MacLeod, Matthew; Wickström, Håkan
2014-01-01
Equilibrium partitioning (EqP) theory is currently the most widely used approach for linking sediment pollution by persistent hydrophobic organic chemicals to bioaccumulation. Most applications of the EqP approach assume (I) a generic relationship between organic carbon-normalized chemical...... chemical concentrations in the silicone, and applying lipid/silicone partition ratios to yield concentrations in lipid at thermodynamic equilibrium with the sediment (CLip⇌Sed). Furthermore, we evaluated the validity of assumption II by comparing CLip⇌Sed of selected persistent, bioaccumulative and toxic...... organic chemicals from sediment useful to prioritize management actions to remediate contaminated sites....
Efficient sampling algorithms for Monte Carlo based treatment planning
International Nuclear Information System (INIS)
DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.
1998-01-01
Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary- and collision-checking algorithm. The delta scattering algorithm is faster by a factor of six than the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed.
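As a rough illustration of two of the techniques named above, the sketch below implements the cutpoint method for a discrete distribution (a lookup table that shortcuts the sequential CDF search) and inverse-CDF sampling from the exponential distribution. This is a generic sketch, not the code evaluated in the study.

```python
import math
import random

def build_cutpoints(probs, m=None):
    """Precompute the CDF and a cutpoint table: cut[k] is the first
    index whose CDF value exceeds k/m, so each draw starts its
    sequential search near the answer instead of at index 0."""
    m = m or len(probs)
    cdf, total = [], 0.0
    for p in probs:
        total += p
        cdf.append(total)
    cut, i = [], 0
    for k in range(m):
        while cdf[i] <= k / m:
            i += 1
        cut.append(i)
    return cdf, cut, m

def sample_discrete(cdf, cut, m, rng=random):
    """Draw an index with probability proportional to probs."""
    u = rng.random()
    i = cut[min(int(u * m), m - 1)]
    while cdf[i] < u:
        i += 1
    return i

def sample_exponential(mean, rng=random):
    """Inverse-CDF sampling of an exponential path length: -mean*ln(u)."""
    return -mean * math.log(rng.random())
```

A usage note: with `m` on the order of the number of categories, the expected number of sequential steps per draw is O(1), which is where the order-of-magnitude gain over naive sequential CDF search comes from for large distributions.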
Efficient estimation for ergodic diffusions sampled at high frequency
DEFF Research Database (Denmark)
Sørensen, Michael
A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High-frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...
An efficient method for sampling the essential subspace of proteins
Amadei, A; Linssen, A.B M; de Groot, B.L.; van Aalten, D.M.F.; Berendsen, H.J.C.
A method is presented for more efficient sampling of the configurational space of proteins as compared to conventional sampling techniques such as molecular dynamics. The method is based on the large conformational changes in proteins revealed by "essential dynamics" analysis. A form of
International Nuclear Information System (INIS)
Allan, Grant; Hanley, Nick; McGregor, Peter; Swales, Kim; Turner, Karen
2007-01-01
The conventional wisdom is that improving energy efficiency will lower energy use. However, there is an extensive debate in the energy economics/policy literature concerning 'rebound' effects. These occur because an improvement in energy efficiency produces a fall in the effective price of energy services; the response of the economic system to this price fall at least partially offsets the expected beneficial impact of the energy efficiency gain. In this paper we use an economy-energy-environment computable general equilibrium (CGE) model for the UK to measure the impact of a 5% across-the-board improvement in the efficiency of energy use in all production sectors. We identify rebound effects of the order of 30-50%, but no backfire (no increase in energy use). However, these results are sensitive to the assumed structure of the labour market, key production elasticities, the time period under consideration, and the mechanism through which increased government revenues are recycled back to the economy.
DEFF Research Database (Denmark)
Mäenpää, Kimmo; Leppänen, Matti T.; Figueiredo, Kaisa
2015-01-01
Equilibrium sampling devices can be applied to study and monitor the exposure and fate of hydrophobic organic chemicals on a thermodynamic basis. They can be used to determine freely dissolved concentrations and chemical activity ratios and to predict equilibrium partitioning concentrations...... of hydrophobic organic chemicals in biota lipids. The authors' aim was to assess the equilibrium status of polychlorinated biphenyls (PCBs) in a contaminated lake ecosystem and along its discharge course using equilibrium sampling devices for measurements in sediment and water and by also analyzing biota....... The authors used equilibrium sampling devices (silicone rubber and polyethylene [PE]) to determine freely dissolved concentrations and chemical activities of PCBs in the water column and sediment porewater and calculated for both phases the corresponding equilibrium concentrations and chemical activities...
Phan, Stephanie; Salentinig, Stefan; Hawley, Adrian; Boyd, Ben J
2015-10-01
Lipid-based formulations are gaining interest for use as drug delivery systems for poorly water-soluble drug compounds. During digestion, the lipolysis products self-assemble with endogenous surfactants in the gastrointestinal tract to form colloidal structures, enabling enhanced drug solubilisation. Although earlier studies in the literature focus on assembled equilibrium systems, little is known about structure formation under dynamic lipolysis conditions. The purpose of this study was to investigate the likely colloidal structure formation in the small intestine after the ingestion of lipids, under equilibrium and dynamic conditions. The structural aspects were studied using small angle X-ray scattering and dynamic light scattering, and were found to depend on lipid composition, lipid chain length, prandial state and emulsification. Incorporation of phospholipids and lipolysis products into bile salt micelles resulted in swelling of the structure. At insufficient bile salt concentrations, a co-existing lamellar phase was observed, due to a reduction in the solubilisation capacity for lipolysis products. Emulsification accelerated the rate of lipolysis and structure formation. Copyright © 2015 Elsevier B.V. All rights reserved.
Approximate determination of efficiency for activity measurements of cylindrical samples
Energy Technology Data Exchange (ETDEWEB)
Helbig, W [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany); Bothe, M [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany)
1997-03-01
Some calibration samples are necessary with the same geometrical parameters but of different materials, each containing a known, homogeneously distributed activity A. Their densities are measured; their mass absorption coefficients may be unknown. These calibration samples are positioned in the counting geometry, for instance directly on the detector. The efficiency function ε(E) for each sample is obtained by measuring the gamma spectra and evaluating all usable gamma energy peaks. From these ε(E), the commonly valid ε_geom(E) will be deduced. For this purpose the functions ε_μ(E) for these samples have to be established. (orig.)
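A common way to turn evaluated peak efficiencies into a usable ε(E) is log-log interpolation between calibration points. The sketch below illustrates that step with invented energies and efficiencies; it is a generic illustration, not the procedure from the report.

```python
import math

def efficiency_curve(calib):
    """Return a function ε(E) built by log-log linear interpolation
    between (energy, efficiency) calibration points."""
    pts = sorted((math.log(e), math.log(eff)) for e, eff in calib)

    def eps(energy):
        x = math.log(energy)
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                return math.exp(y0 + (y1 - y0) * (x - x0) / (x1 - x0))
        raise ValueError("energy outside calibration range")

    return eps

# Hypothetical calibration points (keV, efficiency):
eps = efficiency_curve([(100.0, 0.10), (1000.0, 0.01)])
print(round(eps(100.0), 3))  # 0.1
```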
Efficient maximal Poisson-disk sampling and remeshing on surfaces
Guo, Jianwei; Yan, Dongming; Jia, Xiaohong; Zhang, Xiaopeng
2015-01-01
Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality and the memory consumption.
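For reference, the 2D maximal-sampling framework that the paper generalizes to surfaces can be sketched with Bridson-style dart throwing, using a background grid for conflict checking (cell diagonal ≤ r guarantees at most one sample per cell). This is a generic 2D sketch, not the surface algorithm of the paper.

```python
import math
import random

def poisson_disk_2d(width, height, r, k=30, seed=1):
    """Maximal Poisson-disk sampling in a rectangle: no two samples
    closer than r, and no room left for another sample (with high
    probability, given k candidate attempts per active point)."""
    rng = random.Random(seed)
    cell = r / math.sqrt(2.0)  # at most one sample fits per cell
    cols = int(math.ceil(width / cell))
    rows = int(math.ceil(height / cell))
    grid = [[None] * cols for _ in range(rows)]

    def fits(p):
        cx, cy = int(p[0] / cell), int(p[1] / cell)
        for gy in range(max(cy - 2, 0), min(cy + 3, rows)):
            for gx in range(max(cx - 2, 0), min(cx + 3, cols)):
                q = grid[gy][gx]
                if q is not None and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < r * r:
                    return False
        return True

    first = (rng.uniform(0.0, width), rng.uniform(0.0, height))
    samples, active = [first], [first]
    grid[int(first[1] / cell)][int(first[0] / cell)] = first
    while active:
        idx = rng.randrange(len(active))
        base = active[idx]
        for _ in range(k):
            ang = rng.uniform(0.0, 2.0 * math.pi)
            d = rng.uniform(r, 2.0 * r)  # candidate in annulus [r, 2r]
            p = (base[0] + d * math.cos(ang), base[1] + d * math.sin(ang))
            if 0.0 <= p[0] < width and 0.0 <= p[1] < height and fits(p):
                samples.append(p)
                active.append(p)
                grid[int(p[1] / cell)][int(p[0] / cell)] = p
                break
        else:
            active.pop(idx)  # no room around this point: retire it
    return samples
```

The paper's key move, per the abstract, is replacing this uniform grid with a subdivided mesh as the sampling domain for conflict checking and void detection on surfaces.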
International Nuclear Information System (INIS)
Pfingsten, W.
1996-01-01
Safety assessments for radioactive waste repositories require detailed knowledge of physical, chemical, hydrological, and geological processes over long time spans. In the past, individual models for hydraulic, transport, or geochemical processes were developed more or less separately, to great sophistication for the individual processes. Such processes are especially important in the near field of a waste repository. Attempts have been made to couple at least two individual processes to obtain a more adequate description of geochemical systems. These models are called coupled codes; they predominantly couple a multicomponent transport model with a chemical reaction model. Here reactive transport is modeled by the sequentially coupled code MCOTAC, which couples one-dimensional advective, dispersive, and diffusive transport with chemical equilibrium complexation and precipitation/dissolution reactions in a porous medium. Transport, described by a random walk of multispecies particles, and chemical equilibrium calculations are solved separately, coupled only by an exchange term. The modular-structured code was applied to incongruent dissolution of hydrated silicate gels, to movement of multiple solid-front systems, and to an artificial, numerically difficult heterogeneous redox problem. These applications demonstrate the code's applicability to relevant problems and its potential for extension.
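The sequential coupling described above (transport and equilibrium chemistry solved separately, linked by an exchange step) can be caricatured in a few lines. The sketch below uses a single species with linear equilibrium sorption; it is a toy illustration of operator splitting, not MCOTAC's scheme.

```python
import math
import random

def transport_step(positions, v, D, dt, rng):
    """Advective-dispersive random walk: x += v*dt + N(0, sqrt(2*D*dt))."""
    sigma = math.sqrt(2.0 * D * dt)
    return [x + v * dt + rng.gauss(0.0, sigma) for x in positions]

def equilibrium_step(mobile_mass, sorbed_mass, kd):
    """Linear equilibrium partitioning: sorbed/mobile = kd at equilibrium."""
    total = mobile_mass + sorbed_mass
    mobile = total / (1.0 + kd)
    return mobile, total - mobile

rng = random.Random(42)
particles = [0.0] * 1000        # all particles start at x = 0
mobile, sorbed = 1.0, 0.0       # unit total mass, initially all mobile
for _ in range(100):            # alternate transport and chemistry
    particles = transport_step(particles, v=1.0, D=0.1, dt=0.01, rng=rng)
    mobile, sorbed = equilibrium_step(mobile, sorbed, kd=1.0)

mean_x = sum(particles) / len(particles)  # ≈ v * t = 1.0
```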
Directory of Open Access Journals (Sweden)
Xiao-Jun Yu
2014-02-01
The efficiency loss of mixed equilibrium associated with two categories of users is investigated in this paper. The first category comprises altruistic users (AU), who share the same altruism coefficient and try to minimize their own perceived cost, assumed to be a linear combination of a selfish component and an altruistic component. The second category comprises Logit-based stochastic users (LSU), who choose their route according to the Logit-based stochastic user equilibrium (SUE) principle. A variational inequality (VI) model is used to formulate the mixed route-choice behaviours of AU and LSU. The efficiency loss caused by the two categories of users is analytically derived, and its relation to some network parameters is discussed. Numerical tests validate the analytical results, which include results in the existing literature as special cases.
Sampling the equilibrium kinetic network of Trp-cage in explicit solvent
Du, W.; Bolhuis, P.G.
2014-01-01
We employed the single-replica multiple state transition interface sampling (MSTIS) approach to sample the kinetic (un)folding network of the Trp-cage mini-protein in explicit water. Cluster analysis yielded 14 important metastable states in the network. The MSTIS simulation thus resulted in a full 14
Recovery efficiencies for Burkholderia thailandensis from various aerosol sampling media
Directory of Open Access Journals (Sweden)
Paul eDabisch
2012-06-01
Burkholderia thailandensis is used in the laboratory as a surrogate of the more virulent B. pseudomallei. Since inhalation is believed to be a natural route of infection for B. pseudomallei, many animal studies with B. pseudomallei and B. thailandensis use the inhalation route of exposure. The aim of the present study was to quantify the recovery efficiency of culturable B. thailandensis from several common aerosol sampling devices, to ensure that collected microorganisms can be reliably recovered post-collection. The sampling devices tested included 25-mm gelatin filters, 25-mm stainless steel disks used in Mercer cascade impactors, and two types of glass impingers. The results demonstrate that, while several of the processing methods tested resulted in significantly lower physical recovery efficiencies than others, it was possible to obtain culturable recovery efficiencies for B. thailandensis, and physical recovery efficiencies for 1 μm fluorescent spheres, of at least 0.95 from all of the sampling media tested, given an appropriate sample processing procedure. The present study also demonstrated that the bubbling action of liquid media in all-glass impingers (AGIs) can result in physical loss of material from the collection medium, although additional studies are needed to verify the exact mechanisms involved. Overall, the results demonstrate that the collection mechanism as well as the post-collection processing method can significantly affect the recovery from and retention of culturable microorganisms in sampling media, potentially affecting the calculated airborne concentration and any subsequent estimates of risk or dose derived from such data.
Rusina, Tatsiana P; Carlsson, Pernilla; Vrana, Branislav; Smedes, Foppe
2017-10-03
Passive sampling is widely used to measure levels of contaminants in various environmental matrices, including fish tissue. Equilibrium passive sampling (EPS) of persistent organic pollutants (POPs) in fish tissue has hitherto been limited to lipid-rich tissue. We tested several exposure methods to extend EPS applicability to lean tissue. Thin-film polydimethylsiloxane (PDMS) passive samplers were exposed statically to intact fillet and fish homogenate, and dynamically by rolling with cut fillet cubes. The release of performance reference compounds (PRCs) dosed to the passive samplers prior to exposure was used to monitor the exchange process. The sampler-tissue exchange was isotropic, and PRCs were shown to be good indicators of sampler-tissue equilibration status. The dynamic exposures demonstrated equilibrium attainment in less than 2 days for all three tested fish species, including lean fish containing 1% lipid. Lipid-based concentrations derived from EPS were in good agreement with lipid-normalized concentrations obtained using conventional solvent extraction. The developed in-tissue EPS method is robust and has potential for application in chemical monitoring of biota and in bioaccumulation studies.
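The PRC logic rests on isotropic first-order exchange: the fraction of PRC remaining after exposure mirrors how far the analyte is from equilibrium. A minimal sketch of that relationship, with made-up numbers rather than data from the study:

```python
import math

def prc_fraction_remaining(ke, t):
    """First-order PRC release from the sampler: f = exp(-ke * t)."""
    return math.exp(-ke * t)

def analyte_equilibration(ke, t):
    """Isotropic exchange: the analyte's approach to equilibrium
    is the mirror image of PRC loss, 1 - exp(-ke * t)."""
    return 1.0 - prc_fraction_remaining(ke, t)

def rate_constant_from_prc(fraction_remaining, t):
    """Invert a PRC measurement to recover ke = -ln(f) / t."""
    return -math.log(fraction_remaining) / t

# Hypothetical 2-day exposure with 10% of the PRC remaining:
ke = rate_constant_from_prc(0.10, 2.0)
print(round(analyte_equilibration(ke, 2.0), 2))  # 0.9
```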
Characteristics and Sampling Efficiencies of Two Personal Aerosol Samplers
2007-07-01
…stainless steel and plastic; therefore, it can be decontaminated easily by immersion in decontamination solution. [Figure 3: picture of the PAS-1 and PAS-2 samplers] …portable, and easy to decontaminate. The sampling efficiency tests were conducted with monodisperse 0.5-, 1-, and 2.1-μm fluorescent …Scientific, Corp., Palo Alto, CA). The PSL aerosols were generated using a 24-jet Collison nebulizer and then passed through a radioactive isotope (Kr-85
Nezarat, Amin; Dastghaibifard, G H
2015-01-01
One of the most complex issues in cloud computing is resource allocation: on the one hand, the cloud provider seeks maximum profitability; on the other, users expect the best resources available within their budget and time constraints. Most previous work has used heuristic and evolutionary approaches to solve this problem. Nevertheless, since the nature of this environment is economic, economic methods can reduce response time and the complexity of the problem. In this paper, an auction-based method is proposed that determines the auction winner by applying a game-theoretic mechanism, holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. The end point of the game is the Nash equilibrium, where players are no longer inclined to alter their bids for the resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a solution in a shorter time, yields the fewest service-level-agreement violations, and provides the most utility to the provider.
Directory of Open Access Journals (Sweden)
T. A. Kuchmenko
2013-01-01
This article discusses the possibility of assessing blood samples with the diagnostic characteristics "endometriosis", "fibroids", and "uterine body cancer" from the signals of a multisensor system. It has been found that blood samples can be reliably ranked into groups according to their diagnostic characteristics using the geometry and area of "visual prints" and the sorption efficiency parameters Amax,ij.
Network Sampling with Memory: A proposal for more efficient sampling from social networks
Mouw, Ted; Verdery, Ashton M.
2013-01-01
Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246
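As a point of comparison for the chain-referral baseline discussed above, an RDS-style random walk over an adjacency list can be sketched in a few lines. The graph and walk below are purely illustrative; this is not the NSM algorithm itself, which additionally maintains the revealed network list and a search mode.

```python
import random

def random_walk_sample(adj, start, n, rng):
    """RDS-like chain referral: each sampled node refers one random
    neighbour. Nodes may repeat, which is one source of the high
    design effects reported for random-walk sampling."""
    node, sample = start, [start]
    while len(sample) < n:
        node = rng.choice(adj[node])
        sample.append(node)
    return sample

# A small illustrative undirected graph (6-cycle plus one chord):
adj = {0: [1, 5, 3], 1: [0, 2], 2: [1, 3],
       3: [2, 4, 0], 4: [3, 5], 5: [4, 0]}
sample = random_walk_sample(adj, start=0, n=10, rng=random.Random(7))
print(len(sample))  # 10
```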
Sampling Efficiency and Performance of Selected Thoracic Aerosol Samplers.
Görner, Peter; Simon, Xavier; Boivin, Alexis; Bau, Sébastien
2017-08-01
Measurement of worker exposure to a thoracic health-related aerosol fraction is necessary in a number of occupational situations. This is the case of workplaces with atmospheres polluted by fibrous particles, such as cotton dust or asbestos, and by particles inducing irritation or bronchoconstriction such as acid mists or flour dust. Three personal and two static thoracic aerosol samplers were tested under laboratory conditions. Sampling efficiency with respect to particle aerodynamic diameter was measured in a horizontal low wind tunnel and in a vertical calm air chamber. Sampling performance was evaluated against conventional thoracic penetration. Three of the tested samplers performed well, when sampling the thoracic aerosol at nominal flow rate and two others performed well at optimized flow rate. The limit of flow rate optimization was found when using cyclone samplers. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
A high-efficiency neutron coincidence counter for small samples
International Nuclear Information System (INIS)
Miller, M.C.; Menlove, H.O.; Russo, P.A.
1991-01-01
The inventory sample coincidence counter (INVS) has been modified to enhance its performance. The new design is suitable for use with a glove box sample-well (in-line application) as well as for use in the standard at-line mode. The counter has been redesigned to count more efficiently and be less sensitive to variations in sample position. These factors lead to a higher degree of precision and accuracy in a given counting period and allow for the practical use of the INVS counter with gamma-ray isotopics to obtain a plutonium assay independent of operator declarations and time-consuming chemical analysis. A calculation study was performed using the Los Alamos transport code MCNP to optimize the design parameters. 5 refs., 7 figs., 8 tabs
Efficient sampling of complex network with modified random walk strategies
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, respectively. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within limited steps. Thirdly, all the degree distributions of the subnets are slightly biased to the high degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, some obvious characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
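The no-retracing idea fits in a few lines. This is a toy reconstruction of an NR-style walk, not the authors' code: the walker never steps straight back to the node it just left unless that is its only option.

```python
import random

def nr_random_walk(adj, seed, steps, rng):
    """No-retracing random walk: never step straight back to the node
    just visited unless it is the only neighbour (a dead end)."""
    walk, prev, cur = [seed], None, seed
    for _ in range(steps):
        choices = [v for v in adj[cur] if v != prev]
        if not choices:                  # dead end: backtracking allowed
            choices = list(adj[cur])
        prev, cur = cur, rng.choice(choices)
        walk.append(cur)
    return walk

# small test graph: a 6-node ring with two chords
adj = {0: [1, 5, 3], 1: [0, 2], 2: [1, 3],
       3: [2, 4, 0], 4: [3, 5], 5: [4, 0]}
walk = nr_random_walk(adj, 0, 200, random.Random(42))
```

Because every node here has degree at least 2, the walk never immediately retraces an edge, which is exactly the path-overlap reduction the NR strategy targets.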
Efficient computation of smoothing splines via adaptive basis sampling
Ma, Ping; Huang, Jianhua Z.; Zhang, Nan
2015-06-24
© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^{3}). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.
Anthony, T Renée; Sleeth, Darrah; Volckens, John
2016-01-01
In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of its ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulence model) and particle transport (laminar trajectories, 1-116 μm) were conducted using sampling flow rates of 10 L min(-1) in slow moving air (0.2 m s(-1)) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles through 100 μm, but the largest opening was found to increase the efficiency for the 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. The sampler efficiency was increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers.
Energy Technology Data Exchange (ETDEWEB)
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
2017-08-21
Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtoseconds time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional space of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.
DEFF Research Database (Denmark)
Andreetta, Christian
The present work describes the design and the implementation of a protocol for arbitrary precision computation of Small Angle X-ray Scattering (SAXS) profiles, and its inclusion in a probabilistic framework for protein structure determination. This protocol identifies a set of maximum-likelihood estimators for the form factors employed in the Debye formula, a theoretical forward model for SAXS profiles. The resulting computation compares favorably with the state of the art tool in the field, the program CRYSOL in the suite ATSAS. A faster, parallel implementation on Graphical Processor Units (GPUs) ... of protein structures all fitting the experimental data. For the first time, we describe in full atomic detail a set of different conformations attainable by flexible polypeptides in solution. This method is not limited by assumptions in shape or size of the samples. It allows therefore to investigate...
Efficient triangulation of Poisson-disk sampled point sets
Guo, Jianwei
2014-05-06
In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.
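The triangulation itself is beyond a short sketch, but the kind of input the algorithm consumes is easy to generate. Below is a standard Bridson-style Poisson-disk sampler (our illustration; the paper does not prescribe this generator), producing a 2D point set in which no two points are closer than r.

```python
import math, random

def poisson_disk(width, height, r, k=30, rng=random):
    """Bridson-style Poisson-disk sampling: no two points closer than r.
    A background grid with cell size r/sqrt(2) holds at most one point
    per cell, making each distance check O(1)."""
    cell = r / math.sqrt(2)
    cols, rows = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * cols for _ in range(rows)]

    def cell_of(p):
        return int(p[1] / cell), int(p[0] / cell)

    def far_enough(p):
        gr, gc = cell_of(p)
        for i in range(max(gr - 2, 0), min(gr + 3, rows)):
            for j in range(max(gc - 2, 0), min(gc + 3, cols)):
                q = grid[i][j]
                if q is not None and (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 < r * r:
                    return False
        return True

    def store(p):
        gr, gc = cell_of(p)
        grid[gr][gc] = p

    first = (rng.uniform(0, width), rng.uniform(0, height))
    pts, active = [first], [first]
    store(first)
    while active:
        idx = rng.randrange(len(active))
        cx, cy = active[idx]
        for _ in range(k):                    # k candidates per active point
            ang = rng.uniform(0, 2 * math.pi)
            rad = rng.uniform(r, 2 * r)
            p = (cx + rad * math.cos(ang), cy + rad * math.sin(ang))
            if 0 <= p[0] < width and 0 <= p[1] < height and far_enough(p):
                pts.append(p)
                active.append(p)
                store(p)
                break
        else:                                 # no candidate fit: retire point
            active.pop(idx)
    return pts

pts = poisson_disk(10.0, 10.0, 1.0, rng=random.Random(7))
```

The same grid that accelerates the rejection test here is also the kind of regular-grid acceleration structure the paper exploits for triangulation.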
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
International Nuclear Information System (INIS)
Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.
2015-01-01
Sample size and computational uncertainty were varied in order to investigate sampling efficiency and convergence of the sampling based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 n-sample replicates was adopted as the convergence criterion of the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
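The propagation loop behind such studies can be sketched without the transport code. Here a hypothetical linear surrogate stands in for MCNPX (the model, its coefficients, and the replicate counts are our assumptions), showing the sample-then-summarize loop and a replicate-based convergence check on the propagated standard deviation.

```python
import random, statistics

def propagate(model, mean, sd, n, rng):
    """Sampling-based uncertainty propagation: draw the uncertain input
    n times, run the model on each draw, summarise the output spread."""
    out = [model(rng.gauss(mean, sd)) for _ in range(n)]
    return statistics.mean(out), statistics.stdev(out)

# hypothetical stand-in for the transport code: k_eff responds
# linearly to the burnable poison radius (coefficients invented)
k_eff = lambda radius_cm: 1.0 + 0.05 * (radius_cm - 0.25)

rng = random.Random(3)
# replicate the whole propagation to check convergence: the spread of
# the propagated standard deviation itself should be small
sds = [propagate(k_eff, 0.25, 0.002, 93, rng)[1] for _ in range(10)]
spread = statistics.stdev(sds)
```

The replicate spread plays the role of the paper's 5 pcm convergence criterion: one keeps increasing the sample size until the propagated uncertainty is reproducible across replicates.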
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
Energy Technology Data Exchange (ETDEWEB)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
2014-11-01
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin
2014-01-01
This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
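The abstract does not give the sigmoid's functional form, so the sketch below assumes a common three-parameter logistic-in-diameter shape and fits it with a crude grid search (no optimization library); on synthetic data generated from known parameters, the search recovers them exactly.

```python
def sigmoid(d, a, d50, s):
    """Assumed three-parameter sigmoid for a sampling-efficiency curve:
    plateau a, 50%-cut diameter d50, steepness s."""
    return a / (1.0 + (d / d50) ** s)

def fit_sigmoid(diam, eff):
    """Least-squares fit by an exhaustive (coarse) grid search."""
    best = None
    for a in [x / 100 for x in range(80, 105, 5)]:
        for d50 in [x / 10 for x in range(20, 61, 2)]:
            for s in [x / 10 for x in range(10, 61, 5)]:
                sse = sum((sigmoid(d, a, d50, s) - e) ** 2
                          for d, e in zip(diam, eff))
                if best is None or sse < best[0]:
                    best = (sse, a, d50, s)
    return best[1:]

diam = [1, 2, 3, 4, 5, 6, 7, 8, 9]               # MMAD, micrometres
eff = [sigmoid(d, 1.0, 4.0, 3.0) for d in diam]  # synthetic "measurements"
a, d50, s = fit_sigmoid(diam, eff)
```

A bias map in the paper's sense would then compare the fitted curve against the reference sampler's curve across particle sizes and flag where the ratio leaves the ±10% band.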
Mäenpää, Kimmo; Leppänen, Matti T; Figueiredo, Kaisa; Mayer, Philipp; Gilbert, Dorothea; Jahnke, Annika; Gil-Allué, Carmen; Akkanen, Jarkko; Nybom, Inna; Herve, Sirpa
2015-11-01
Equilibrium sampling devices can be applied to study and monitor the exposure and fate of hydrophobic organic chemicals on a thermodynamic basis. They can be used to determine freely dissolved concentrations and chemical activity ratios and to predict equilibrium partitioning concentrations of hydrophobic organic chemicals in biota lipids. The authors' aim was to assess the equilibrium status of polychlorinated biphenyls (PCBs) in a contaminated lake ecosystem and along its discharge course using equilibrium sampling devices for measurements in sediment and water and by also analyzing biota. The authors used equilibrium sampling devices (silicone rubber and polyethylene [PE]) to determine freely dissolved concentrations and chemical activities of PCBs in the water column and sediment porewater and calculated for both phases the corresponding equilibrium concentrations and chemical activities in model lipids. Overall, the studied ecosystem appeared to be in disequilibrium for the studied phases: sediment, water, and biota. Chemical activities of PCBs were higher in sediment than in water, which implies that the sediment functioned as a partitioning source of PCBs and that net diffusion occurred from the sediment to the water column. Measured lipid-normalized PCB concentrations in biota were generally below equilibrium lipid concentrations relative to the sediment (CLip⇌Sed) or water (CLip⇌W), indicating that PCB levels in the organisms were below the maximum partitioning levels. The present study shows the application versatility of equilibrium sampling devices in the field and facilitates a thermodynamic understanding of exposure and fate of PCBs in a contaminated lake and its discharge course. © 2015 SETAC.
New Hybrid Monte Carlo methods for efficient sampling. From physics to biology and statistics
International Nuclear Information System (INIS)
Akhmatskaya, Elena; Reich, Sebastian
2011-01-01
We introduce a class of novel hybrid methods for detailed simulations of large complex systems in physics, biology, materials science and statistics. These generalized shadow Hybrid Monte Carlo (GSHMC) methods combine the advantages of stochastic and deterministic simulation techniques. They utilize a partial momentum update to retain some of the dynamical information, employ modified Hamiltonians to overcome exponential performance degradation with the system’s size, and make use of the multi-scale nature of complex systems. Variants of GSHMC were developed for atomistic simulation, particle simulation and statistics: GSHMC (thermodynamically consistent implementation of constant-temperature molecular dynamics), MTS-GSHMC (multiple-time-stepping GSHMC), meso-GSHMC (Metropolis-corrected dissipative particle dynamics (DPD) method), and a generalized shadow Hamiltonian Monte Carlo, GSHmMC (a GSHMC for statistical simulations). All of these are compatible with other enhanced sampling techniques and suitable for massively parallel computing, allowing for a range of multi-level parallel strategies. A brief description of the GSHMC approach, examples of its application on high performance computers and comparison with other existing techniques are given. Our approach is shown to resolve such problems as resonance instabilities of the MTS methods and non-preservation of thermodynamic equilibrium properties in DPD, and to outperform known methods in sampling efficiency by an order of magnitude. (author)
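A minimal Hybrid Monte Carlo kernel, without the shadow-Hamiltonian and partial-momentum refinements of GSHMC, looks like this for a standard normal target; the step size, trajectory length, and target are toy choices of ours, not the paper's.

```python
import math, random

def hmc_normal(eps, n_leap, iters, rng):
    """Minimal Hybrid Monte Carlo for a standard normal target:
    leapfrog integration of (x, p), then Metropolis accept/reject
    on the change in the Hamiltonian H = U(x) + p^2/2, U(x) = x^2/2."""
    grad_u = lambda x: x
    u = lambda x: 0.5 * x * x
    x, chain = 0.0, []
    for _ in range(iters):
        p0 = rng.gauss(0.0, 1.0)                 # fresh momentum
        xn, p = x, p0 - 0.5 * eps * grad_u(x)    # initial half kick
        for _ in range(n_leap - 1):
            xn += eps * p
            p -= eps * grad_u(xn)
        xn += eps * p
        p -= 0.5 * eps * grad_u(xn)              # final half kick
        dh = (u(xn) + 0.5 * p * p) - (u(x) + 0.5 * p0 * p0)
        if rng.random() < math.exp(-max(dh, 0.0)):
            x = xn                               # accept the proposal
        chain.append(x)
    return chain

chain = hmc_normal(eps=0.25, n_leap=8, iters=4000, rng=random.Random(11))
```

GSHMC departs from this baseline by accepting against a modified (shadow) Hamiltonian and by only partially resampling the momentum, which is what preserves dynamical information between trajectories.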
Starr, Ross M.
2002-01-01
This study derives the monetary structure of transactions, the use of commodity or fiat money, endogenously from transaction costs in a segmented market general equilibrium model. Market segmentation means there are separate budget constraints for each transaction: budgets balance in each transaction separately. Transaction costs imply differing bid and ask (selling and buying) prices. The most liquid instruments are those with the lowest proportionate bid/ask spread in equilibrium. Exist...
On efficiency of some ratio estimators in double sampling design ...
African Journals Online (AJOL)
In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the most simple and accurate reliability method. Besides, it is the most transparent method. The only problem is the accuracy in relation to the efficiency: Monte Carlo becomes less efficient or less accurate when very low probabilities are to be computed.
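A small example of why importance sampling matters at low probabilities: estimating P(X > 4) for a standard normal. With 20,000 plain Monte Carlo draws one expects fewer than a single hit, while the shifted-proposal estimator below resolves the probability to a few percent. The proposal N(t, 1) is our choice for illustration, not the report's.

```python
import math, random

def tail_prob_is(t, n, rng):
    """P(X > t) for X ~ N(0,1) by importance sampling with proposal
    N(t, 1); each hit is weighted by the density ratio phi(y)/phi(y-t)."""
    total = 0.0
    for _ in range(n):
        y = rng.gauss(t, 1.0)
        if y > t:
            total += math.exp(0.5 * (y - t) ** 2 - 0.5 * y * y)
        # misses contribute weight zero
    return total / n

t = 4.0
est = tail_prob_is(t, 20000, random.Random(5))
true = 0.5 * math.erfc(t / math.sqrt(2))   # exact N(0,1) tail, about 3.17e-5
```

Roughly half the proposal draws land in the failure region, so nearly every sample carries information; crude Monte Carlo would need millions of draws for comparable accuracy.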
Characteristics and Sampling Efficiency of Eight-Unit Linear Slot Impactor (EULSI)
National Research Council Canada - National Science Library
Kesavan, Jana S
2006-01-01
...%, respectively. The 1-μm PSL and Bg particles yielded sampling efficiencies of 21% ± 3 and 17% ± 12, respectively. The standard deviation for the measured sampling efficiency of the Bg particles was higher...
The efficiency of systematic sampling in stereology-reconsidered
DEFF Research Database (Denmark)
Gundersen, Hans Jørgen Gottlieb; Jensen, Eva B. Vedel; Kieu, K
1999-01-01
In the present paper, we summarize and further develop recent research in the estimation of the variance of stereological estimators based on systematic sampling. In particular, it is emphasized that the relevant estimation procedure depends on the sampling density. The validity of the variance estimation is examined in a collection of data sets, obtained by systematic sampling. Practical recommendations are also provided in a separate section.
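Why systematic sampling can beat simple random sampling when the measurements carry a smooth trend: the k possible systematic samples all straddle the trend, so their means vary far less than SRS means. The toy data below is ours, purely to illustrate the variance gap the paper's estimators quantify.

```python
import random, statistics

def systematic_means(values, n):
    """All n-point systematic samples of a list with period k = N // n;
    there are exactly k possible start positions."""
    k = len(values) // n
    return [statistics.mean(values[start::k][:n]) for start in range(k)]

values = list(range(100))          # a smooth, sorted measurement sequence
sys_means = systematic_means(values, 10)
var_sys = statistics.pvariance(sys_means)

rng = random.Random(9)
srs_means = [statistics.mean(rng.sample(values, 10)) for _ in range(2000)]
var_srs = statistics.pvariance(srs_means)
```

On this sorted sequence the systematic means range only from 45 to 54, while SRS means scatter much more widely; with unstructured (shuffled) data the two variances would be comparable, which is why the appropriate variance estimator depends on the sampling density and the covariance structure of the data.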
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses
Adaptive cluster sampling: An efficient method for assessing inconspicuous species
Andrea M. Silletti; Joan Walker
2003-01-01
Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...
Sampling strategies for efficient estimation of tree foliage biomass
Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson
2011-01-01
Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...
Ding, Weili; Lu, Ming
2007-01-01
Lacking guidance of general equilibrium (GE) theories in public economics and the corresponding proper mechanisms, China has not surprisingly witnessed an inequality in educational expenditures across regions as well as insufficiency of funds for education in poor areas. It is wrongly thought that what happens is due to the decentralized financing…
Zawadowicz, M. A.; Del Negro, L. A.
2010-12-01
Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.
Efficient Sample Tracking With OpenLabFramework
DEFF Research Database (Denmark)
List, Markus; Schmidt, Steffen; Trojnar, Jakub
2014-01-01
...of samples created and need to be replaced with state-of-the-art laboratory information management systems. Such systems have been developed in large numbers, but they are often limited to specific research domains and types of data. One domain so far neglected is the management of libraries of vector clones and genetically engineered cell lines. OpenLabFramework is a newly developed web-application for sample tracking, particularly laid out to fill this gap, but with an open architecture allowing it to be extended for other biological materials and functional data. Its sample tracking mechanism is fully customizable...
Efficient Unbiased Rendering using Enlightened Local Path Sampling
DEFF Research Database (Denmark)
Kristensen, Anders Wang
...The downside to using these algorithms is that they can be slow to converge. Due to the nature of Monte Carlo methods, the results are random variables subject to variance. This manifests itself as noise in the images, which can only be reduced by generating more samples. The reason these methods are slow is because of a lack of effective methods of importance sampling. Most global illumination algorithms are based on local path sampling, which is essentially a recipe for constructing random walks. Using this procedure, paths are built based on information given explicitly as part of the scene description... measurements, which are the solution to the adjoint light transport problem. The second is a representation of the distribution of radiance and importance in the scene. We also derive a new method of particle sampling, which is advantageous compared to existing methods. Together we call the resulting algorithm...
Elsheikh, Ahmed H.; Hoteit, Ibrahim; Wheeler, Mary Fanett
2014-01-01
An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior
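The abstract pairs nested sampling with polynomial-chaos surrogates; the sketch below shows only the nested-sampling core, on a 1D toy problem with a Uniform(0,1) prior and a Gaussian likelihood (our choices), using the standard exp(-1/n_live) volume shrinkage and prior rejection sampling for replacements.

```python
import math, random

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log1p(math.exp(min(a, b) - m))

def nested_sampling(loglike, n_live, iters, rng):
    """Textbook nested sampling with a Uniform(0,1) prior: repeatedly
    discard the worst live point, shrink the remaining prior volume by
    about exp(-1/n_live) per iteration, and accumulate the evidence
    Z = sum over discarded points of L_i * (volume of their shell)."""
    live = [rng.random() for _ in range(n_live)]
    logl = [loglike(t) for t in live]
    logz, logx = -math.inf, 0.0
    for _ in range(iters):
        i = min(range(n_live), key=logl.__getitem__)
        logx_next = logx - 1.0 / n_live
        shell = math.log(math.exp(logx) - math.exp(logx_next))
        logz = logaddexp(logz, logl[i] + shell)
        while True:                    # replace by rejection from the prior
            t = rng.random()
            if loglike(t) > logl[i]:
                live[i], logl[i] = t, loglike(t)
                break
        logx = logx_next
    return logz

# Gaussian likelihood centred in the unit interval
sigma = 0.05
loglike = lambda t: -0.5 * ((t - 0.5) / sigma) ** 2
logz = nested_sampling(loglike, 100, 600, random.Random(2))
true_logz = math.log(sigma * math.sqrt(2 * math.pi))   # analytic evidence
```

In the paper's setting the expensive likelihood would be replaced by the polynomial chaos surrogate, which is what makes the many likelihood evaluations of nested sampling affordable.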
International Nuclear Information System (INIS)
Abdi, M. R.; Mostajaboddavati, M.; Hassanzadeh, S.; Faghihian, H.; Rezaee, Kh.; Kamali, M.
2006-01-01
A nonlinear function in combination with a mixed-activity calibration method is applied to fit the experimental peak efficiency of HPGe spectrometers in the 59-2614 keV energy range. The preparation of Marinelli beaker standards of mixed gamma sources and an RG-set at secular equilibrium with its daughter radionuclides was studied. Standards were prepared by mixing known amounts of 133Ba, 241Am, 152Eu, 207Bi, 24Na, Al2O3 powder and soil. The validity of these standards was checked by comparison with the certified standard reference materials RG-set and IAEA-Soil-6. Self-absorption was measured for the activity calculation of the gamma-ray lines of the 238U daughter series, the 232Th series, 137Cs and 40K in soil samples. Self-absorption in the sample will depend on a number of factors including sample composition, density, sample size and gamma-ray energy. Seven Marinelli beaker standards were prepared at different degrees of compaction with bulk density (ρ) of 1.000 to 1.600 g cm⁻³. The detection efficiency versus density was obtained and the equation of self-absorption correction factors calculated for soil samples.
Efficiency of snake sampling methods in the Brazilian semiarid region.
Mesquita, Paula C M D; Passos, Daniel C; Cechin, Sonia Z
2013-09-01
The choice of sampling methods is a crucial step in every field survey in herpetology. In countries where time and financial support are limited, the choice of methods is critical. The methods used to sample snakes often lack objective criteria, and the traditional methods have apparently carried more weight in the choice. Consequently, studies using non-standardized methods are frequently found in the literature. We compared four commonly used methods for sampling snake assemblages in a semiarid area in Brazil. We compared the efficacy of each method based on the cost-benefit regarding the number of individuals and species captured, time, and financial investment. We found that pitfall traps were the least effective method in all aspects evaluated and were not complementary to the other methods in terms of abundance of species and assemblage structure. We conclude that methods can only be considered complementary if they are standardized to the objectives of the study. The use of pitfall traps in short-term surveys of the snake fauna in areas with shrubby vegetation and stony soil is not recommended.
Onychomycosis: Sampling and diagnosing as an efficient part of hospital pharmacology
Directory of Open Access Journals (Sweden)
Ignjatović Vesna A.
2014-01-01
Full Text Available Introduction: Onychomycosis is a fungal infection of one or more nails. Causes of onychomycosis are dermatophytes, yeasts and non-dermatophyte molds, but the most common cause is Trichophyton rubrum (T. rubrum), from the group of dermatophyte fungi. Aims: To determine by sampling the most common clinical type of onychomycosis, its localization and the involvement of the nail plate, and to monitor the efficacy of methods/tests in the diagnosis of nail onychomycosis. Material and methods: This paper is part of an academic phase IV study. The study included 30 patients with onychomycosis. Each sample was seeded on Sabouraud Dextrose Agar (SDA) and Diluted SDA (D-SDA) at 28°C and 37°C, as well as on Dermatophyte Test Medium (DTM) at 28°C. Identification of isolated fungi to the level of genus/species was based on macroscopic and microscopic characteristics using KOH and Blancophor fluorescent dye. PCR was performed to detect the T. rubrum-specific and pan-dermatophyte multiplex PCR products. Informed consent was obtained from all patients. Results: The most common clinical form was distal lateral subungual onychomycosis (DLSO) of the pollex nails of the hands and feet, and the involvement of the nail plate was 1/2 - 1/3 in the majority of patients. Cultivation gave a positive result in 50% of cases, and the most commonly isolated microorganism was T. rubrum. For the negative cultures (50%), PCR was carried out, which demonstrated high sensitivity; T. rubrum remained the most frequently detected. Conclusions: Using the methods of cultivation and PCR, onychomycosis was confirmed in 28 (93.3%) patients. Cultivation gave a negative result in 50% of cases, while PCR was positive in 86.6%. Our research shows the highest incidence of T. rubrum (60%). The choice and effectiveness of therapy will be analyzed in the continuation of this study.
Casavant, Benjamin P; Guckenberger, David J; Beebe, David J; Berry, Scott M
2014-07-01
Sample preparation is a major bottleneck in many biological processes. Paramagnetic particles (PMPs) are a ubiquitous means of isolating analytes of interest from biological samples, used for their ability to thoroughly sample a solution and be easily collected with a magnet. There are three main methods by which PMPs are used for sample preparation: (1) removal of fluid from the analyte-bound PMPs, (2) removal of analyte-bound PMPs from the solution, and (3) removal of the substrate (with immobilized analyte-bound PMPs). In this paper, we explore the third and least studied method for PMP-based sample preparation using a platform termed Sliding Lid for Immobilized Droplet Extractions (SLIDE). SLIDE leverages principles of surface tension and patterned hydrophobicity to create a simple-to-operate platform for sample isolation (cells, DNA, RNA, protein) and preparation (cell staining) without the need for time-intensive wash steps, immiscible fluids, or precise pinning geometries. Compared to other standard isolation protocols using PMPs, SLIDE is able to perform rapid sample preparation with low (0.6%) carryover of contaminants from the original sample. The natural recirculation occurring within the pinned droplets of SLIDE makes it possible to perform multistep cell staining protocols within the SLIDE by simply resting the lid over the various sample droplets. SLIDE thus provides a simple, easy-to-use platform for sample preparation on a range of complex biological samples.
Directory of Open Access Journals (Sweden)
Leonard Charles Ferrington Jr
2014-12-01
Full Text Available Relative efficiencies of standard dip-net sampling (SDN) versus collections of surface-floating pupal exuviae (SFPE) were determined for detecting Chironomidae at catchment and site scales and at subfamily/tribe, genus and species levels, based on simultaneous, equal-effort sampling on a monthly basis for one year during a biodiversity assessment of Bear Run Nature Reserve. Results showed that SFPE was more efficient than SDN at catchment scales for detecting both genera and species. At site scales, SDN sampling was more efficient for assessment of a first-order site. No consistent pattern, except for the better efficiency of SFPE in detecting Orthocladiinae genera, was observed at genus level for two second-order sites. However, SFPE was consistently more efficient at detecting species of Orthocladiinae, Chironomini and Tanytarsini at the second-order sites. SFPE was more efficient at detecting both genera and species at two third-order sites. The differential efficiencies of the two methods are concluded to be related to stream order and size, substrate size, flow and water velocity, depth and habitat heterogeneity, and the differential ability to discriminate species among pupal exuviae specimens versus larval specimens. Although both approaches are considered necessary for comprehensive biodiversity assessments of Chironomidae, our results suggest that there is an optimal, but different, allocation of sampling effort for detecting Chironomidae across stream orders and at differing spatial and taxonomic scales. Article submitted 13 August 2014, accepted 31 October 2014, published 22 December 2014.
International Nuclear Information System (INIS)
Ajemba, R.O.
2015-01-01
The adsorption performance of modified Nkalagu bentonite in removing Congo red (CR) from solution was investigated. The raw bentonite was modified by three different physicochemical methods: thermal activation (TA), acid activation (AA), and combined acid and thermal activation (ATA). Congo red adsorption increased with increasing contact time, initial dye concentration, adsorbent dosage, temperature, and pH change. The kinetics analysis of the adsorption data revealed that adsorption follows pseudo-second-order kinetics. Analysis of the equilibrium data showed that the Langmuir isotherm provided a better fit to the data. Evaluation of the thermodynamic parameters revealed that the adsorption process is spontaneous and endothermic. The results from this study suggest that a combination of thermal and acid activation is an effective modification method to improve the adsorption capacity of bentonite and makes bentonite a low-cost adsorbent for the removal of water pollutants. (author)
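The two models named in the abstract have simple closed forms; the following sketch (illustrative only, with made-up parameter values rather than the study's fitted constants) evaluates the Langmuir isotherm and the integrated pseudo-second-order kinetic equation:

```python
def langmuir_uptake(c_eq, q_max, k_l):
    """Langmuir isotherm: q = q_max * K_L * C / (1 + K_L * C).
    q_max is the monolayer capacity (mg/g), K_L the Langmuir constant (L/mg)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

def pseudo_second_order_uptake(t, q_e, k2):
    """Integrated pseudo-second-order kinetics:
    q(t) = k2 * q_e^2 * t / (1 + k2 * q_e * t), approaching q_e as t grows."""
    return (k2 * q_e ** 2 * t) / (1.0 + k2 * q_e * t)

# Hypothetical parameters for illustration (mg/g, L/mg, g/(mg*min)):
print(langmuir_uptake(50.0, q_max=120.0, k_l=0.05))
print(pseudo_second_order_uptake(30.0, q_e=80.0, k2=0.001))
```

Fitting the equilibrium data to the first function and the kinetic data to the second (e.g. by nonlinear least squares) is the standard way to extract q_max, K_L, q_e and k2.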
International Nuclear Information System (INIS)
Malik, Muhammad Arif; Schoenbach, Karl H
2012-01-01
Energetic and scalable non-equilibrium plasma was formed in pure water vapour at atmospheric pressure between wire-to-strip electrodes on a dielectric surface with one of the electrodes extended forming a conductive plane on the back side of the dielectric surface. The energy deposition increased by an order of magnitude compared with the conventional pulsed corona discharges under the same conditions. The scalability was demonstrated by operating two electrode assemblies with a common conductive plane between two dielectric layers. The energy yields for hydrogen and hydrogen peroxide generation were measured as ∼1.2 g H 2 /kWh and ∼4 g H 2 O 2 /kWh. (fast track communication)
Energy Technology Data Exchange (ETDEWEB)
Worms, Isabelle A.M. [CABE - Analytical and Biophysical Environmental Chemistry, University of Geneva, 30 quai Ernest Ansermet 1211 Geneva 4 (Switzerland); Wilkinson, Kevin J. [Department of Chemistry, University of Montreal C.P. 6128, succursale Centre-ville Montreal, H3C 3J7 (Canada)], E-mail: KJ.Wilkinson@umontreal.ca
2008-05-26
In natural waters, the determination of free metal concentrations is a key parameter for studying bioavailability. Unfortunately, few analytical tools are available for determining Ni speciation at the low concentrations found in natural waters. In this paper, an ion exchange technique (IET) that employs a Dowex resin is evaluated for its applicability to measure [Ni{sup 2+}] in freshwaters. The presence of major cations (e.g. Na, Ca and Mg) reduced both the times that were required for equilibration and the partition coefficient to the resin ({lambda}{sup '}{sub Ni}). IET measurements of [Ni{sup 2+}] in the presence of known ligands (citrate, diglycolate, sulfoxine, oxine and diethyldithiocarbamate) were verified by thermodynamic speciation models (MINEQL{sup +} and VisualMINTEQ). Results indicated that the presence of hydrophobic complexes (e.g. Ni(DDC){sub 2}{sup 0}) lead to an overestimation of the Ni{sup 2+} fraction. On the other hand, [Ni{sup 2+}] measurements that were made in the presence of amphiphilic complexes formed with humic substances (standard aquatic humic acid (SRHA) and standard aquatic fulvic acid (SRFA)) were well correlated to free ion concentrations that were calculated using a NICA-DONNAN model. An analytical method is also presented here to reduce the complexity of the calibration (due to the presence of many other cations) for the use of Dowex equilibrium ion exchange technique in natural waters.
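In the ion exchange technique, the free-ion concentration follows from the equilibrium partitioning of Ni onto the resin. A minimal sketch of that relation, assuming the simple linear partitioning {Ni}resin = λ'·[Ni2+] in the resin's linear range and using hypothetical numbers, is:

```python
def free_ion_concentration(ni_on_resin, lambda_ni):
    """[Ni2+] (mol/L) from the measured resin-bound Ni (mol/g) and the
    calibrated partition coefficient lambda' (L/g), assuming the linear
    partitioning relation {Ni}resin = lambda' * [Ni2+]."""
    return ni_on_resin / lambda_ni

# Hypothetical values: 2.0e-6 mol Ni per g resin, lambda' = 400 L/g
print(free_ion_concentration(2.0e-6, 400.0))  # 5e-09 mol/L
```

As the abstract notes, λ' itself must be calibrated in a matrix with the same major cations (Na, Ca, Mg), since they lower both the partition coefficient and the equilibration time.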
Design of sample analysis device for iodine adsorption efficiency test in NPPs
International Nuclear Information System (INIS)
Ji Jinnan
2015-01-01
In nuclear power plants, the iodine adsorption efficiency test is used to check the iodine adsorption efficiency of the iodine adsorber. The iodine adsorption efficiency can be calculated through analysis of the test sample, and thus used to determine whether the performance of the adsorber meets the requirements for equipment operation and emission. Considering the test process and actual demand, this paper presents the design of a special device for the analysis of this kind of test sample. Its application shows that the device offers convenient operation, high reliability and accurate calculation, improves experimental efficiency, and reduces experimental risk. (author)
An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model
Energy Technology Data Exchange (ETDEWEB)
Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)
2016-06-15
Reliability-based design optimization (RBDO) requires a high computational cost owing to its reliability analysis. A surrogate model is introduced to reduce the computational cost in RBDO. The accuracy of the reliability depends on the accuracy of the surrogate model of constraint boundaries in the surrogated-model-based RBDO. In earlier researches, constraint boundary sampling (CBS) was proposed to approximate accurately the boundaries of constraints by locating sample points on the boundaries of constraints. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified by mathematical examples.
International Nuclear Information System (INIS)
Currie, D.R.
1980-02-01
Laboratory tests were made to decontaminate radiocarbon samples containing known amounts of contamination. Results for both acid-alkali treatment and acid hydrolysis indicate that decontamination is not 100% efficient.
Energy Technology Data Exchange (ETDEWEB)
Yang, Y. Isaac [Institute of Theoretical and Computational Chemistry, College of Chemistry and Molecular Engineering, Peking University, Beijing 100871 (China); Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin, E-mail: gaoyq@pku.edu.cn [Institute of Theoretical and Computational Chemistry, College of Chemistry and Molecular Engineering, Peking University, Beijing 100871 (China); Biodynamic Optical Imaging Center, Peking University, Beijing 100871 (China)
2016-03-07
In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and to calculate overall thermodynamic properties using molecular dynamics simulations, we developed and implemented a sampling strategy that combines metadynamics with the (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials, as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulated system. To test the accuracy and efficiency of this method, we first benchmarked it on the calculation of the ϕ-ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C—H activation in solution and to investigate solution conformations of the nonapeptide bradykinin, which involve slow cis-trans isomerizations of three proline residues.
Efficiency comparisons of fish sampling gears for lentic ecosystem health assessments in Korea
Directory of Open Access Journals (Sweden)
Jeong-Ho Han
2016-12-01
Full Text Available The key objective of this study was to analyze the sampling efficiency of various fish sampling gears for lentic ecosystem health assessments. Fish surveys for the lentic ecosystem health assessment model were conducted twice at each of 30 reservoirs during 2008–2012. During the study, fishes of 81 species comprising 53,792 individuals were sampled from the 30 reservoirs. A comparison of sampling gears showed that casting nets were the best sampling gear, with high species richness (69 species), whereas minnow traps were the worst gear, with low richness (16 species). Fish sampling efficiency, based on the number of individuals caught per unit effort, was best for fyke nets (28,028 individuals) and worst for minnow traps (352 individuals). When we compared trammel nets and kick nets with fyke nets and casting nets, the former were useful in terms of the number of fish individuals but not in terms of the number of fish species.
Directory of Open Access Journals (Sweden)
Gaofei Yin
2017-11-01
Full Text Available Spatiotemporally representative Elementary Sampling Units (ESUs) are required for capturing the temporal variations in surface spatial heterogeneity through field measurements. Since inaccessibility often coexists with heterogeneity, a cost-efficient sampling design is mandatory. We propose a sampling strategy to generate spatiotemporally representative and cost-efficient ESUs based on the conditioned Latin hypercube sampling scheme. The proposed strategy is constrained by multi-temporal Normalized Difference Vegetation Index (NDVI) imagery, and the ESUs are limited to a sampling-feasible region established based on accessibility criteria. A novel criterion based on the Overlapping Area (OA) between the NDVI frequency distribution histogram from the sampled ESUs and that from the entire study area is used to assess sampling efficiency. A case study in Wanglang National Nature Reserve in China showed that the proposed strategy improves the spatiotemporal representativeness of sampling (mean annual OA = 74.7%) compared to the single-temporally constrained (OA = 68.7%) and random sampling (OA = 63.1%) strategies. The introduction of the feasible-region constraint significantly reduces labour-intensive in-situ characterization at the expense of about a 9% loss in the spatiotemporal representativeness of the sampling. Our study will support the validation activities at the Wanglang experimental site, providing a benchmark for locating the nodes of automatic observation systems (e.g., LAINet), which need a spatially distributed and temporally fixed sampling design.
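The Overlapping Area criterion described above is the shared area under two normalized frequency histograms (1.0 means the sample reproduces the population's NDVI distribution perfectly). A minimal sketch of that computation, with the bin count and NDVI range as assumptions since the paper's exact binning is not given here:

```python
def overlapping_area(sample, population, bins=20, lo=-1.0, hi=1.0):
    """OA between the normalized histograms of two NDVI value lists.
    Returns a value in [0, 1]: the sum over bins of the smaller of the
    two relative frequencies."""
    width = (hi - lo) / bins

    def rel_freq(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)  # clamp v == hi
            counts[idx] += 1
        n = len(values)
        return [c / n for c in counts]

    return sum(min(a, b) for a, b in zip(rel_freq(sample), rel_freq(population)))

# Identical distributions overlap completely:
ndvi = [0.10, 0.25, 0.40, 0.40, 0.72]
print(overlapping_area(ndvi, ndvi))  # 1.0
```

In the study's setting, `sample` would hold NDVI values at the candidate ESUs and `population` the NDVI values of the whole study area, evaluated per acquisition date and averaged over the year.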
International Nuclear Information System (INIS)
Grenier, M.; Bigu, J.
1982-07-01
The calibration procedures used for a working level meter (WLM) of the grab-sampling type are presented in detail. The WLM tested is a Pylon WL-1000C working level meter, and it was calibrated for radon/thoron daughter counting efficiency (E), sampling pump flow rate (Q) and other variables of interest. For the instrument calibrated at the Elliot Lake Laboratory, E was 0.22 ± 0.01 while Q was 4.50 ± 0.01 L/min.
DIAGNOSIS OF FINANCIAL EQUILIBRIUM
Directory of Open Access Journals (Sweden)
SUCIU GHEORGHE
2013-04-01
Full Text Available The analysis based on the balance sheet tries to identify the state of equilibrium (or disequilibrium) that exists in a company. The easiest way to determine the state of equilibrium is to look at the balance sheet and the information it offers. Because the balance sheet contains elements that do not reflect their real, market-established value, these must be readjusted, and elements not related to ordinary operating activities must be eliminated. The diagnosis of financial equilibrium takes into account 2 components: financing sources (ownership equity, loaned, temporarily attracted). An efficient financial equilibrium must respect 2 fundamental requirements: permanent sources, represented by ownership equity and loans of more than 1 year, should finance permanent needs, and temporary resources should finance the operating cycle.
Directory of Open Access Journals (Sweden)
Rocha, C F D; Van Sluys, M; Hatano, F H; Boquimpani-Freitas, L; Marra, R V; Marques, R V
2004-11-01
Full Text Available Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are more efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and sub-tropical areas. In this study we evaluate which methods are most appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six methods among those usually used for anuran sampling. For each method, we recorded the total amount of time spent (in min), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (the time necessary for a researcher to capture an individual frog) in order to make the data obtained comparable. Of the methods analyzed, the species inventory (9.7 min/searcher/ind. - MSI; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency in the restinga, whereas the plot sampling and pit-fall traps with drift-fence methods resulted in no frog captures. We conclude that there is a considerable difference in the efficiency of methods used in the restinga environment and that the complete species inventory method is highly efficient for sampling frogs in the restinga studied, and may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.
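The capture efficiency index above is simple person-time arithmetic. One plausible reading of "min/searcher/ind." is person-minutes of search effort per individual captured; a small sketch under that assumption, with hypothetical survey numbers rather than the study's raw data:

```python
def capture_efficiency_index(total_minutes, n_searchers, n_individuals):
    """MSI: person-minutes of search effort per captured individual
    (lower = more efficient). Methods with zero captures, such as the
    plot sampling and pit-fall methods in the study, get infinity."""
    if n_individuals == 0:
        return float("inf")
    return (total_minutes * n_searchers) / n_individuals

# Hypothetical effort: a 60 min survey by 2 searchers yielding 12 frogs
print(capture_efficiency_index(60, 2, 12))  # 10.0
```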
International Nuclear Information System (INIS)
Harb, S.; Salahel Din, K.; Abbady, A.
2009-01-01
In this paper, we describe a method for calibrating the efficiency of HPGe gamma-ray spectrometry of bulk environmental samples (tea, crops, water, and soil), a significant part of environmental radioactivity measurements. We discuss the full energy peak efficiency (FEPE) of three HPGe detectors; as a consequence, it is essential that the efficiency be determined for each set-up employed. To take full advantage of gamma-ray spectrometry, a set of efficiencies covering a wide energy range is needed: the wider the range, the larger the number of radionuclides whose concentrations can be determined. To measure the main natural gamma-ray emitters, the efficiency should be known at least from 46.54 keV ( 210 Pb) to 1836 keV ( 88 Y). Radioactive sources were prepared from two different standards: the first, mixed standard QC Y 40, containing 210 Pb, 241 Am, 109 Cd, and 57 Co; the second, QC Y 48, containing 241 Am, 109 Cd, 57 Co, 139 Ce, 113 Sn, 85 Sr, 137 Cs, 88 Y, and 60 Co. Such standards are necessary in order to calculate the activities of the different radionuclides contained in a sample. In this work, we study the efficiency calibration as a function of different parameters: gamma-ray energy from 46.54 keV ( 210 Pb) to 1836 keV ( 88 Y); three different detectors (A, B, and C); container geometry (point source, Marinelli beaker, and 1 L cylindrical bottle); the height of standard soil samples in a 250 ml bottle; and the density of standard environmental samples. These standard environmental samples must be measured before the standard solution is added, because the same environmental samples are used, so that self-absorption and composition are accounted for, especially in the case of volume samples.
International Nuclear Information System (INIS)
Grau Carles, A.; Grau Malonda, A.; Rodriguez Barquero, L.
1993-01-01
The CIEMAT/NIST tracer method has successfully standardized nuclides with diverse quench values and decay schemes in liquid scintillation counting. However, the counting efficiency is computed inaccurately for extremely quenched samples. This article shows that when samples are extremely quenched, the counting efficiency of high-energy beta-ray nuclides depends principally on the Cherenkov effect. A new technique for quench determination is described, which makes the measurement of counting efficiency possible when the scintillation counting efficiency approaches zero. A new efficiency computation model for pure beta-ray nuclides is also described. The results of the model are tested experimentally for 89 Sr, 90 Y, 36 Cl and 204 Tl nuclides, independently of the quench level. (orig.)
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account either by adjusting the total sample size using a designated design effect or by adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
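A common closed form for this kind of comparison is the effective-sample-size formulation built on the design effect 1 + (m - 1)ρ. This is not necessarily the noncentrality-parameter definition the authors use; the sketch below is a standard-textbook version, with the intraclass correlation ρ and all numbers assumed for illustration:

```python
def effective_n(cluster_sizes, icc):
    """Effective sample size of a cluster-randomized arm: each cluster of
    size m contributes m / (1 + (m - 1) * icc) independent-observation
    equivalents (design-effect formulation)."""
    return sum(m / (1.0 + (m - 1) * icc) for m in cluster_sizes)

def relative_efficiency(cluster_sizes, icc):
    """Efficiency of these (possibly unequal) cluster sizes relative to
    equal clusters of the same mean size and the same total N."""
    k = len(cluster_sizes)
    m_bar = sum(cluster_sizes) / k
    equal = k * m_bar / (1.0 + (m_bar - 1) * icc)
    return effective_n(cluster_sizes, icc) / equal

# Hypothetical trial: 3 clusters of very unequal size, assumed ICC = 0.05
print(relative_efficiency([2, 10, 30], 0.05))  # < 1.0: unequal sizes lose power
```

Because m / (1 + (m - 1)ρ) is concave in m, unequal sizes always give a relative efficiency at or below 1, which is the intuition behind inflating the mean cluster size (or the number of clusters) to recover the target power.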
Efficient free energy calculations by combining two complementary tempering sampling methods.
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-14
Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the difficulty of identifying the correct RCs, or requires a high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD was examined on three systems with hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five-fold, even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows further potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.
Energy Technology Data Exchange (ETDEWEB)
Anon.
1984-12-15
From 3-6 September the First International Workshop on Local Equilibrium in Strong Interaction Physics took place in Bad-Honnef at the Physics Centre of the German Physical Society. A number of talks covered the experimental and theoretical investigation of the 'hotspots' effect, both in high energy particle physics and in intermediate energy nuclear physics.
African Journals Online (AJOL)
context of antimicrobial therapy in malnutrition. Dialysis has in the past presented technical problems, being complicated and time-consuming. A new dialysis system based on the equilibrium technique has now become available, and it is the principles and practical application of this apparatus (Kontron Diapack; Kontron.
van Damme, E.E.C.
2000-01-01
An outcome in a noncooperative game is said to be self-enforcing, or a strategic equilibrium, if, whenever it is recommended to the players, no player has an incentive to deviate from it.This paper gives an overview of the concepts that have been proposed as formalizations of this requirement and of
Ismail, M.S.
2014-01-01
We introduce a new concept which extends von Neumann and Morgenstern's maximin strategy solution by incorporating `individual rationality' of the players. Maximin equilibrium, extending Nash's value approach, is based on the evaluation of the strategic uncertainty of the whole game. We show that
Standardized Method for Measuring Collection Efficiency from Wipe-sampling of Trace Explosives.
Verkouteren, Jennifer R; Lawrence, Jeffrey A; Staymates, Matthew E; Sisco, Edward
2017-04-10
One of the limiting steps to detecting traces of explosives at screening venues is effective collection of the sample. Wipe-sampling is the most common procedure for collecting traces of explosives, and standardized measurements of collection efficiency are needed to evaluate and optimize sampling protocols. The approach described here is designed to provide this measurement infrastructure, and controls most of the factors known to be relevant to wipe-sampling. Three critical factors (the applied force, travel distance, and travel speed) are controlled using an automated device. Test surfaces are chosen based on similarity to the screening environment, and the wipes can be made from any material considered for use in wipe-sampling. Particle samples of the explosive 1,3,5-trinitroperhydro-1,3,5-triazine (RDX) are applied in a fixed location on the surface using a dry-transfer technique. The particle samples, recently developed to simulate residues made after handling explosives, are produced by inkjet printing of RDX solutions onto polytetrafluoroethylene (PTFE) substrates. Collection efficiency is measured by extracting collected explosive from the wipe, and then related to critical sampling factors and the selection of wipe material and test surface. These measurements are meant to guide the development of sampling protocols at screening venues, where speed and throughput are primary considerations.
Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems
Xinyu Tang,
2010-01-25
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).
International Nuclear Information System (INIS)
Haddad, Kh.
2009-02-01
Gamma spectrometry forms the most important and capable tool for measuring radioactive materials. Determination of the efficiency and attenuation correction factors is the most tedious problem in the gamma spectrometric assay of bulk samples. A new experimental and easy method for these correction factors determination using self radiation was proposed in this work. An experimental study of the correlation between self attenuation correction factor and sample thickness and its practical application was also introduced. The work was performed on NORM and uranyl nitrate bulk sample. The results of proposed methods agreed with those of traditional ones.(author)
Meloni, Roberto; Camilloni, Carlo; Tiana, Guido
2014-02-11
The denatured state of polypeptides and proteins, stabilized by chemical denaturants like urea and guanidine chloride, displays residual secondary structure when studied by nuclear-magnetic-resonance spectroscopy. However, these experimental techniques are weakly sensitive, and thus molecular-dynamics simulations can be useful to complement the experimental findings. To sample the denatured state, we made use of massively-parallel computers and of a variant of the replica exchange algorithm, in which the different branches, connected with unbiased replicas, favor the formation and disruption of local secondary structure. The algorithm is applied to the second hairpin of GB1 in water, in urea, and in guanidine chloride. We show with the help of different criteria that the simulations converge to equilibrium. The results show that urea and guanidine chloride, besides inducing some polyproline-II structure, have different effects on the hairpin. Urea completely disrupts the native region and stabilizes a state which resembles a random coil, while guanidine chloride has a milder effect.
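The replica-exchange machinery underlying such simulations rests on a Metropolis swap criterion between replicas at neighbouring temperatures. A minimal sketch of the standard exchange test (plain parallel tempering, not the paper's branch-based variant):

```python
import math
import random

def swap_accept(beta_i, beta_j, E_i, E_j, rng=random.random):
    """Metropolis criterion for exchanging the configurations of two
    replicas at inverse temperatures beta_i, beta_j with potential
    energies E_i, E_j: accept with probability
    min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng() < math.exp(delta)
```

A swap that hands the higher-energy configuration to the hotter replica (delta >= 0) is always accepted; the reverse move is accepted with the Boltzmann ratio, which preserves the joint equilibrium distribution.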
Chau, Nancy H.
2009-01-01
This paper presents a capability-augmented model of on-the-job search, in which sweatshop conditions stifle the capability of the working poor to search for a job while on the job. The augmented setting unveils a sweatshop equilibrium in an otherwise archetypal Burdett-Mortensen economy, and reconciles a number of oft-noted yet perplexing features of sweatshop economies. We demonstrate existence of multiple rational expectation equilibria, graduation pathways out of sweatshops in complete abs...
Farhat, A.; Menif, M.; Rezig, H.
2013-09-01
This paper analyses the spectral efficiency of an Optical Code Division Multiple Access (OCDMA) system using the Importance Sampling (IS) technique. We consider three configurations of the OCDMA system, namely Direct Sequence (DS), Spectral Amplitude Coding (SAC) and Fast Frequency Hopping (FFH), that exploit Fiber Bragg Grating (FBG) based encoders/decoders. We evaluate the spectral efficiency of the considered system by taking into consideration the effect of different families of unipolar codes for both coherent and incoherent sources. The results show that the spectral efficiency of the OCDMA system with a coherent source is higher than in the incoherent case. We also demonstrate that DS-OCDMA outperforms the other two configurations in terms of spectral efficiency under all conditions.
Bootstrap-DEA analysis of BRICS’ energy efficiency based on small sample data
International Nuclear Information System (INIS)
Song, Ma-Lin; Zhang, Lin-Ling; Liu, Wei; Fisher, Ron
2013-01-01
Highlights: ► The BRICS’ economies have flourished, with increasing energy consumption. ► Analyses and comparisons of energy efficiency are conducted among the BRICS. ► As a whole, BRICS energy efficiency is low but shows a growing trend. ► The BRICS should adopt relevant energy policies based on their own conditions. - Abstract: As a representative of many emerging economies, BRICS’ economies have been greatly developed in recent years. Meanwhile, the proportion of energy consumption of BRICS in total world consumption has increased. Therefore, it is important to analyze and compare the energy efficiency among them. This paper first utilizes a Super-SBM model to measure and calculate the energy efficiency of BRICS, then analyzes their present status and development trend. Further, Bootstrap is applied to modify the DEA-derived values based on small sample data, and finally the relationship between energy efficiency and carbon emissions is measured. Results show that energy efficiency of BRICS as a whole is low but has a quickly increasing trend. Also, the relationship between energy efficiency and carbon emissions varies from country to country because of their different energy structures. The governments of BRICS should make relevant energy policies according to their own conditions.
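The bootstrap correction for small-sample estimates can be sketched with a simple percentile bootstrap; the efficiency scores below are hypothetical stand-ins, not the paper's Super-SBM/DEA results:

```python
import random
import statistics

def bootstrap_ci(scores, n_boot=2000, alpha=0.10, seed=7):
    """Percentile bootstrap confidence interval for the mean of a small
    sample of efficiency scores: resample with replacement many times,
    take the empirical alpha/2 and 1-alpha/2 quantiles of the means."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(scores, k=len(scores)))
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

With only a handful of observations, the interval width makes explicit how little the point estimate alone can be trusted, which is the motivation for bootstrapping DEA scores.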
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, it can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the
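The contrast between SR and LH sampling is easiest to see in one dimension; a minimal sketch of a Latin hypercube draw from a standard normal (exponentiating mu + sigma * z would then give lognormal conductivities):

```python
import random
from statistics import NormalDist

def latin_hypercube_normal(n, seed=0):
    """One-dimensional Latin hypercube sample of a standard normal:
    stratify [0, 1) into n equal bins, draw one uniform point per bin,
    shuffle, and map through the normal inverse CDF. Every probability
    stratum is hit exactly once, unlike simple random sampling."""
    rng = random.Random(seed)
    u = [(k + rng.random()) / n for k in range(n)]
    rng.shuffle(u)
    inv = NormalDist().inv_cdf
    return [inv(p) for p in u]
```

Because each stratum contributes exactly one point, sample moments stabilize with far fewer realizations than under SR sampling, which is the efficiency argument made above.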
Su, Wei-Chung; Tolchinsky, Alexander D; Chen, Bean T; Sigaev, Vladimir I; Cheng, Yung Sung
2012-09-01
The need to determine occupational exposure to bioaerosols has notably increased in the past decade, especially for microbiology-related workplaces and laboratories. Recently, two new cyclone-based personal bioaerosol samplers were developed by the National Institute for Occupational Safety and Health (NIOSH) in the USA and the Research Center for Toxicology and Hygienic Regulation of Biopreparations (RCT & HRB) in Russia to monitor bioaerosol exposure in the workplace. Here, a series of wind tunnel experiments were carried out to evaluate the physical sampling performance of these two samplers in moving air conditions, which could provide information for personal biological monitoring in a moving air environment. The experiments were conducted in a small wind tunnel facility using three wind speeds (0.5, 1.0 and 2.0 m s(-1)) and three sampling orientations (0°, 90°, and 180°) with respect to the wind direction. Monodispersed particles ranging from 0.5 to 10 μm were employed as the test aerosols. The evaluation of the physical sampling performance was focused on the aspiration efficiency and capture efficiency of the two samplers. The test results showed that the orientation-averaged aspiration efficiencies of the two samplers closely agreed with the American Conference of Governmental Industrial Hygienists (ACGIH) inhalable convention within the particle sizes used in the evaluation tests, and the effect of the wind speed on the aspiration efficiency was found negligible. The capture efficiencies of these two samplers ranged from 70% to 80%. These data offer important information on the insight into the physical sampling characteristics of the two test samplers.
Elsheikh, Ahmed H.
2014-02-01
An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
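The nested sampling loop itself is compact; a bare-bones sketch follows, with the constrained-sampling step done by naive rejection in place of the constrained MCMC step that the paper accelerates with a polynomial-chaos response surface:

```python
import math
import random

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def nested_sampling(loglike, prior_draw, n_live=50, n_iter=200, seed=1):
    """Bare-bones nested sampling estimate of the log-evidence log Z.
    At each step the worst live point is replaced by a fresh prior draw
    constrained to higher likelihood (naive rejection here)."""
    rng = random.Random(seed)
    live = [prior_draw(rng) for _ in range(n_live)]
    logL = [loglike(x) for x in live]
    logZ, logX = -math.inf, 0.0
    log_shrink = math.log(1.0 - math.exp(-1.0 / n_live))  # per-step volume loss
    for _ in range(n_iter):
        i = min(range(n_live), key=lambda k: logL[k])     # worst live point
        logZ = logaddexp(logZ, logX + log_shrink + logL[i])
        threshold = logL[i]
        while True:   # rejection sampling above the likelihood threshold
            x = prior_draw(rng)
            if loglike(x) > threshold:
                break
        live[i], logL[i] = x, loglike(x)
        logX -= 1.0 / n_live
    for l in logL:    # credit the mass still held by the live points
        logZ = logaddexp(logZ, logX - math.log(n_live) + l)
    return logZ
```

For a uniform prior on [0, 1] and likelihood exp(-x), the true evidence is 1 - exp(-1), which the sketch recovers to within its stochastic shrinkage error; the rejection step is exactly the part that becomes expensive in realistic models and motivates the two-stage MCMC above.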
Luca Anderlini; Daniele Terlizzese
2009-01-01
We build a simple model of trust as an equilibrium phenomenon, departing from standard "selfish" preferences in a minimal way. Agents who are on the receiving end of an offer to transact can choose whether to cheat and take away the entire surplus, taking into account a "cost of cheating." The latter has an idiosyncratic component (an agent's type), and a socially determined one. The smaller the mass of agents who cheat, the larger the cost of cheating suffered by those who cheat. Depending o...
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field of view is given a weight (the size) proportional to the total amount of requested image analysis features in it. The fields of view sampled with known probabilities proportional to individual weight are the only ones seen by the observer who provides the correct count. Even though the image analysis … cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas. The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling.
Efficient approach for reliability-based optimization based on weighted importance sampling approach
International Nuclear Information System (INIS)
Yuan, Xiukai; Lu, Zhenzhou
2014-01-01
An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the ‘failure probability function (FPF)’. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
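The weighted-sum idea can be sketched directly: samples drawn once at a reference design are reweighted by a density ratio to evaluate the failure probability at any other design. The Gaussian design variable and limit state below are hypothetical illustrations, not the paper's examples:

```python
import random
from statistics import NormalDist

def failure_prob_function(samples, g, d, d0, sigma=1.0):
    """Weighted-sum approximation of the failure probability as a
    function of the design variable d: samples drawn at reference
    design d0 are reweighted by the importance ratio f(x|d)/f(x|d0),
    so no new simulations are needed when d changes."""
    nd, nd0 = NormalDist(d, sigma), NormalDist(d0, sigma)
    total = 0.0
    for x in samples:
        if g(x) < 0:                          # failure event
            total += nd.pdf(x) / nd0.pdf(x)   # importance weight
    return total / len(samples)

rng = random.Random(42)
samples = [rng.gauss(0.0, 1.0) for _ in range(200000)]
g = lambda x: 2.0 - x                         # fails when x exceeds 2
pf = failure_prob_function(samples, g, d=0.5, d0=0.0)
# analytically, P_f(0.5) = 1 - Phi(1.5), about 0.0668
```

Because the same sample set serves every candidate design, each optimization iteration costs a single reliability analysis, exactly the decoupling benefit described above.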
In Vitro Efficient Expansion of Tumor Cells Deriving from Different Types of Human Tumor Samples
Directory of Open Access Journals (Sweden)
Ilaria Turin
2014-03-01
Full Text Available Obtaining human tumor cell lines from fresh tumors is essential to advance our understanding of antitumor immune surveillance mechanisms and to develop new ex vivo strategies to generate an efficient anti-tumor response. The present study delineates a simple and rapid method for efficiently establishing primary cultures starting from tumor samples of different types, while maintaining the immuno-histochemical characteristics of the original tumor. We compared two different strategies to disaggregate tumor specimens. After short or long term in vitro expansion, cells analyzed for the presence of malignant cells demonstrated their neoplastic origin. Considering that tumor cells may be isolated in a closed system with high efficiency, we propose this methodology for the ex vivo expansion of tumor cells to be used to evaluate suitable new drugs or to generate tumor-specific cytotoxic T lymphocytes or vaccines.
International Nuclear Information System (INIS)
Fukutsu, Kumiko; Yamada, Yuji; Kurihara, Osamu; Akashi, Makoto; Momose, Takumaro; Miyabe, Kenjiro
2008-01-01
In a nuclear emergency involving inhalation intake of an alpha nuclide, the nasal swab method is indispensable, yet it has not been used for precise internal dose estimation. One of the reasons is uncertainty in its radiation measurement, so precise measurement with alpha spectrometry was examined for filter samples simulating nasal swabs. It was confirmed that alpha spectrometry makes possible the distinction between solution and particulate forms in addition to nuclide identification. The alpha activity in a swab sample was precisely evaluated only when the detection efficiency was determined considering the self-absorption with filter fibers. The problem of wiping efficiency in nasal swabbing still remains, but this study certainly raised the usefulness of the nasal swab method for rapid response in emergencies. (author)
Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing
Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.
2018-04-01
We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
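The learnt weights driving such adaptive importance schemes can be illustrated with a toy Wang-Landau-style update on a discrete partition; this is a minimal sketch of the weight-learning idea, not the partially biased algorithm analysed in the paper:

```python
import math
import random

def wang_landau(logpi, n_states, sweeps=20000, seed=3):
    """Toy Wang-Landau-style estimate of the weights (relative
    probabilities) of the sets of a partition, here discrete states.
    The running bias S plays the role of the free energy learnt on the
    fly: visited states are penalised, flattening the biased dynamics."""
    rng = random.Random(seed)
    S = [0.0] * n_states   # running log-weight estimate, up to a constant
    f = 1.0                # penalty added to the current state's weight
    state = 0
    for sweep in range(sweeps):
        j = rng.randrange(n_states)              # uniform proposal
        # Metropolis step on the biased target pi_i * exp(-S_i)
        if math.log(rng.random() + 1e-300) < (logpi(j) - S[j]) - (logpi(state) - S[state]):
            state = j
        S[state] += f      # penalise the visited state
        if (sweep + 1) % 2000 == 0:
            f *= 0.5       # crude decreasing Wang-Landau schedule
    m = max(S)
    w = [math.exp(s - m) for s in S]
    z = sum(w)
    return [x / z for x in w]
```

At convergence the biased target pi_i * exp(-S_i) is flat, so the normalised exp(S) recovers the weights pi; biasing with only a fraction of S, as the paper proposes, trades some flattening for a larger effective sample size.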
DEFF Research Database (Denmark)
Vårdal, Linda; Gjelstad, Astrid; Huang, Chuixiu
2017-01-01
AIM: For the first time, extracts obtained from human plasma samples by electromembrane extraction (EME) were investigated comprehensively with particular respect to phospholipids using ultra-high-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). The purpose … EME was found to be highly efficient for providing phospholipid-free extracts. CONCLUSION: UHPLC-MS/MS analysis of the donor solutions revealed that the phospholipids principally remained in the plasma samples. This proved that the phospholipids did not migrate in the electrical field and they were prevented from …
Vrugt, Jasper A.; Beven, Keith J.
2018-04-01
This essay illustrates some recent developments to the DiffeRential Evolution Adaptive Metropolis (DREAM) MATLAB toolbox of Vrugt (2016) to delineate and sample the behavioural solution space of set-theoretic likelihood functions used within the GLUE (Limits of Acceptability) framework (Beven and Binley, 1992, 2014; Beven and Freer, 2001; Beven, 2006). This work builds on the DREAM(ABC) algorithm of Sadegh and Vrugt (2014) and enhances significantly the accuracy and CPU-efficiency of Bayesian inference with GLUE. In particular it is shown how lack of adequate sampling in the model space might lead to unjustified model rejection.
International Nuclear Information System (INIS)
Yamano, N.; Brockmann, J.E.
1989-05-01
This report describes the features and use of the Aerosol Sampling and Transport Efficiency Calculation (ASTEC) Code. The ASTEC code has been developed to assess aerosol transport efficiency in source term experiments at Sandia National Laboratories. This code also has broad application for aerosol sampling and transport efficiency calculations in general as well as for aerosol transport considerations in nuclear reactor safety issues. 32 refs., 31 figs., 7 tabs
Directory of Open Access Journals (Sweden)
Trejo S
2015-06-01
Full Text Available Salvador Trejo, José J Toscano-Flores, Esmeralda Matute, María de Lourdes Ramírez-Dueñas Laboratorio de Neuropsicología y Neurolingüística, Instituto de Neurociencias CUCBA, Guadalajara, Jalisco, Mexico Abstract: The aim of this study was to obtain the genotype and gene frequency from parents of children with attention-deficit/hyperactivity disorder (ADHD) and then assess the Hardy–Weinberg equilibrium of genotype frequency of the variable number tandem repeat (VNTR) in exon III of the dopamine receptor D4 (DRD4) gene. The genotypes of the exon III 48 bp VNTR repeats of the DRD4 gene were determined by polymerase chain reaction in a sample of 30 parents of ADHD cases. In the 60 chromosomes analyzed, the following frequencies of DRD4 gene polymorphisms were observed: six chromosomes (c) with two repeat alleles (r) (10%); 1c with 3r (1.5%); 36c with 4r (60%); 1c with 5r (1.5%); and 16c with 7r (27%). The genotypic distribution of the 30 parents was: two parents (p) with 2r/2r (6.67%); 1p with 2r/4r (3.33%); 1p with 2r/5r (3.33%); 1p with 3r/4r (3.33%); 15p with 4r/4r (50%); 4p with 4r/7r (13.33%); and 6p with 7r/7r (20%). A Hardy–Weinberg disequilibrium (χ2=13.03, P<0.01) was found due to an over-representation of the 7r/7r genotype. These results suggest that the 7r polymorphism of the DRD4 gene is associated with the ADHD condition in a Mexican population. Keywords: ADHD, parents, DRD4, HWE
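The Hardy-Weinberg test can be reproduced directly from the genotype counts reported in the abstract. The sketch below computes allele frequencies and an unpooled chi-square over all genotype classes; with many rare alleles this unpooled statistic exceeds the (presumably pooled) value of 13.03 reported, but it exhibits the same 7r/7r over-representation (expected about 2.13 parents versus 6 observed):

```python
from collections import Counter
from itertools import combinations_with_replacement

# Genotype counts for the 30 parents, as reported in the abstract
genotypes = {(2, 2): 2, (2, 4): 1, (2, 5): 1, (3, 4): 1,
             (4, 4): 15, (4, 7): 4, (7, 7): 6}
n = sum(genotypes.values())

# Allele frequencies from the 2n = 60 chromosomes
alleles = Counter()
for (a, b), count in genotypes.items():
    alleles[a] += count
    alleles[b] += count
freq = {a: c / (2 * n) for a, c in alleles.items()}

# Hardy-Weinberg expected counts and the (unpooled) chi-square statistic
chi2 = 0.0
for a, b in combinations_with_replacement(sorted(freq), 2):
    expected = n * (freq[a] ** 2 if a == b else 2 * freq[a] * freq[b])
    observed = genotypes.get((a, b), 0)
    chi2 += (observed - expected) ** 2 / expected
```

How rare-allele classes are pooled changes the statistic and its degrees of freedom, so the exact published value depends on choices the abstract does not state.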
International Nuclear Information System (INIS)
Scheele, R.D.; Bredt, P.R.; Sell, R.L.
1997-09-01
Water content plays a crucial role in the strategy developed by Webb et al. to prevent propagating or sustainable chemical reactions in the organic-bearing wastes stored in the 20 Organic Tank Watch List tanks at the U.S. Department of Energy's Hanford Site. Because of water's importance in ensuring that the organic-bearing wastes continue to be stored safely, Duke Engineering and Services Hanford commissioned the Pacific Northwest National Laboratory to investigate the effect of water partial pressure (P_H2O) on the water content of organic-bearing or representative wastes. Of the various interrelated controlling factors affecting the water content in wastes, P_H2O is the most susceptible to being controlled by the Hanford Site's environmental conditions and, if necessary, could be managed to maintain the water content at an acceptable level or could be used to adjust the water content back to an acceptable level. Of the various waste types resulting from weapons production and waste-management operations at the Hanford Site, Webb et al. determined that saltcake wastes are the most likely to require active management to maintain the wastes in a Conditionally Safe condition. Webb et al. identified Tank U-105 as a Conditionally Safe saltcake tank. A Conditionally Safe waste is one that is currently safe based on waste classification criteria but could, if dried, be classified as "Unsafe." To provide information on the behavior of organic-bearing wastes, the Westinghouse Hanford Company provided us with four waste samples taken from Tank 241-U-105 (U-105) to determine the effect of P_H2O on their equilibrium water content
Associations of rumen parameters with feed efficiency and sampling routine in beef cattle.
Lam, S; Munro, J C; Zhou, M; Guan, L L; Schenkel, F S; Steele, M A; Miller, S P; Montanholi, Y R
2017-11-10
Characterizing ruminal parameters in the context of sampling routine and feed efficiency is fundamental to understand the efficiency of feed utilization in the bovine. Therefore, we evaluated microbial and volatile fatty acid (VFA) profiles, rumen papillae epithelial and stratum corneum thickness and rumen pH (RpH) and temperature (RT) in feedlot cattle. In all, 48 cattle (32 steers plus 16 bulls), fed a high moisture corn and haylage-based ration, underwent a productive performance test to determine residual feed intake (RFI) using feed intake, growth, BW and composition traits. Rumen fluid was collected, then the RpH and RT logger was inserted 5.5±1 days before slaughter. At slaughter, the logger was recovered and rumen fluid and rumen tissue were sampled. The relative daily time spent in specific RpH and RT ranges were determined. Polynomial regression analysis was used to characterize RpH and RT circadian patterns. Animals were divided into efficient and inefficient groups based on RFI to compare productive performance and ruminal parameters. Efficient animals consumed 1.8 kg/day less dry matter than inefficient cattle (P⩽0.05) while achieving the same productive performance (P⩾0.10). Ruminal bacteria population was higher (P⩽0.05) (7.6×10^11 v. 4.3×10^11 copy number of 16S rRNA gene/ml rumen fluid) and methanogen population was lower (P⩽0.05) (2.3×10^9 v. 4.9×10^9 copy number of 16S rRNA gene/ml rumen fluid) in efficient compared with inefficient cattle at slaughter with no differences (P⩾0.10) between samples collected on-farm. No differences (P⩾0.10) in rumen fluid VFA were observed between feed efficiency groups either on-farm or at slaughter. However, increased (P⩽0.05) acetate, and decreased (P⩽0.05) propionate, butyrate, valerate and caproate concentrations were observed at slaughter compared with on-farm. Efficient animals had increased (P⩽0.05) rumen epithelium thickness (136 v. 126 µm) compared with inefficient cattle. Efficient animals
Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de
2017-11-05
Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior in saving 5.31% more battery energy.
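The core of such data-driven adaptive sampling is a rule that shortens the sampling interval when the signal (e.g., dissolved oxygen or turbidity) changes quickly and lengthens it when the signal is flat. A hypothetical rule in that spirit, not the published DDASA, with all parameter values illustrative:

```python
def next_interval(readings, base=60.0, lo=15.0, hi=240.0, k=4.0):
    """Choose the next sampling interval in seconds from a recent window
    of readings: a high mean absolute change (relative to the window's
    range) shrinks the interval; a flat signal keeps it at the base
    value. The result is clamped to [lo, hi] to bound power draw."""
    if len(readings) < 2:
        return base
    deltas = [abs(b - a) for a, b in zip(readings, readings[1:])]
    rate = sum(deltas) / len(deltas)            # mean absolute change
    span = (max(readings) - min(readings)) or 1.0
    return min(hi, max(lo, base / (1.0 + k * rate / span)))
```

Energy saving then comes from the long intervals spent in the flat regime, while accuracy is preserved because rapid transients trigger dense sampling.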
Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina
2017-06-13
Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for the use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
Optimal sampling plan for clean development mechanism energy efficiency lighting projects
International Nuclear Information System (INIS)
Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng
2013-01-01
Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size at each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as
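The 90/10 criterion that drives the metering-cost model has a commonly used normal-approximation sample-size formula, n = (z * CV / p)^2 with z = 1.645 for 90% confidence and p = 0.10 relative precision; a minimal sketch (the cost-minimisation over meter types in the paper is not reproduced here):

```python
import math

def sample_size_90_10(cv, z=1.645, precision=0.10):
    """Minimum sample size meeting the CDM 90/10 criterion (90%
    confidence, 10% relative precision) for a group whose energy
    consumption has coefficient of variation cv, under the usual
    normal-approximation formula n = (z * cv / p)^2, rounded up."""
    return math.ceil((z * cv / precision) ** 2)
```

The quadratic dependence on CV is why the paper stratifies the lights into groups by uncertainty: a homogeneous group (small CV) needs far fewer meters than the pooled population would.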
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
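For orientation, the expected SFS has a closed form in the simplest setting; the classical single-population neutral baseline is sketched below (momi generalises this to multiple populations with complex demographic histories):

```python
def expected_sfs(theta, n):
    """Expected site frequency spectrum for a neutral, panmictic,
    constant-size population: E[xi_i] = theta / i for i = 1..n-1,
    where theta is the population-scaled mutation rate and xi_i counts
    sites with i copies of the mutant allele in a sample of size n."""
    return [theta / i for i in range(1, n)]
```

The characteristic 1/i decay of this baseline is what demographic inference methods perturb: growth, bottlenecks, and migration each leave distinct distortions in the observed joint spectrum.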
de Oliveira, Mário J
2017-01-01
This textbook provides an exposition of equilibrium thermodynamics and its applications to several areas of physics with particular attention to phase transitions and critical phenomena. The applications include several areas of condensed matter physics and include also a chapter on thermochemistry. Phase transitions and critical phenomena are treated according to the modern development of the field, based on the ideas of universality and on the Widom scaling theory. For each topic, a mean-field or Landau theory is presented to describe qualitatively the phase transitions. These theories include the van der Waals theory of the liquid-vapor transition, the Hildebrand-Heitler theory of regular mixtures, the Griffiths-Landau theory for multicritical points in multicomponent systems, the Bragg-Williams theory of order-disorder in alloys, the Weiss theory of ferromagnetism, the Néel theory of antiferromagnetism, the Devonshire theory for ferroelectrics and Landau-de Gennes theory of liquid crystals. This new edit...
Efficiency of Airborne Sample Analysis Platform (ASAP) Bioaerosol Sampler for Pathogen Detection
Directory of Open Access Journals (Sweden)
Anurag eSharma
2015-05-01
Full Text Available The threat of bioterrorism and pandemics has highlighted the urgency for rapid and reliable bioaerosol detection in different environments. Safeguarding against such threats requires continuous sampling of the ambient air for pathogen detection. In this study we investigated the efficacy of the Airborne Sample Analysis Platform (ASAP) 2800 bioaerosol sampler to collect representative samples of air and identify specific viruses suspended as bioaerosols. To test this concept, we aerosolized an innocuous replication-defective bovine adenovirus serotype 3 (BAdV3) in a controlled laboratory environment. The ASAP efficiently trapped the surrogate virus at concentrations of 5×10³ plaque-forming units (p.f.u.) [2×10⁵ genome copy equivalents] or more, resulting in the successful detection of the virus using quantitative PCR. These results support the further development of ASAP for bioaerosol pathogen detection.
Sample-efficient Strategies for Learning in the Presence of Noise
DEFF Research Database (Denmark)
Cesa-Bianchi, N.; Dichterman, E.; Fischer, Paul
1999-01-01
In this paper, we prove various results about PAC learning in the presence of malicious noise. Our main interest is the sample size behavior of learning algorithms. We prove the first nontrivial sample complexity lower bound in this model by showing that order of ε/Δ² + d/Δ (up...... to logarithmic factors) examples are necessary for PAC learning any target class of {0,1}-valued functions of VC dimension d, where ε is the desired accuracy and η = ε/(1 + ε) − Δ the malicious noise rate (it is well known that any nontrivial target class cannot be PAC learned...... with accuracy ε and malicious noise rate η ≥ ε/(1 + ε), irrespective of sample complexity). We also show that this result cannot be significantly improved in general by presenting efficient learning algorithms for the class of all subsets of d elements and the class of unions of at most d...
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
An Efficient Approach for Mars Sample Return Using Emerging Commercial Capabilities.
Gonzales, Andrew A; Stoker, Carol R
2016-06-01
Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science [1]. This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation of our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a desirable, simpler and lower cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed with either direct techniques or by using parametric mass estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: A SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth orbit samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle; a fueled Earth Return Vehicle; support equipment; and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using supersonic retropropulsion. After the collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the Earth Return
Efficient and exact sampling of simple graphs with given arbitrary degree sequence.
Directory of Open Access Journals (Sweden)
Charo I Del Genio
Full Text Available Uniform sampling from graphical realizations of a given degree sequence is a fundamental component in simulation-based measurements of network observables, with applications ranging from epidemics, through social networks, to Internet modeling. Existing graph sampling methods are either link-swap based (Markov-Chain Monte Carlo algorithms) or stub-matching based (the Configuration Model). Both types are ill-controlled, with typically unknown mixing times for link-swap methods and uncontrolled rejections for the Configuration Model. Here we propose an efficient, polynomial time algorithm that generates statistically independent graph samples with a given, arbitrary, degree sequence. The algorithm provides a weight associated with each sample, allowing the observable to be measured either uniformly over the graph ensemble, or, alternatively, with a desired distribution. Unlike other algorithms, this method always produces a sample, without back-tracking or rejections. Using a central limit theorem-based reasoning, we argue that for large N, and for degree sequences admitting many realizations, the sample weights are expected to have a lognormal distribution. As examples, we apply our algorithm to generate networks with degree sequences drawn from power-law distributions and from binomial distributions.
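For contrast with the rejection-free algorithm proposed here, the stub-matching baseline (the Configuration Model with rejection of self-loops and multi-edges) can be sketched as follows; the degree sequence is illustrative.

```python
import random

random.seed(42)

def sample_simple_graph(degrees, max_tries=1000):
    """Configuration-model stub matching with rejection of self-loops
    and multi-edges: the baseline whose uncontrolled rejections the
    abstract's weighted algorithm avoids."""
    assert sum(degrees) % 2 == 0, "degree sum must be even"
    for _ in range(max_tries):
        # one stub per unit of degree, paired up after a random shuffle
        stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
        random.shuffle(stubs)
        edges = set()
        ok = True
        for u, v in zip(stubs[::2], stubs[1::2]):
            e = (min(u, v), max(u, v))
            if u == v or e in edges:   # self-loop or multi-edge: reject
                ok = False
                break
            edges.add(e)
        if ok:
            return edges
    raise RuntimeError("no simple realization found")

g = sample_simple_graph([2, 2, 2, 1, 1])
```

For heavy-tailed degree sequences the rejection rate of this baseline grows rapidly, which is exactly the failure mode motivating the paper's sequential, weighted construction.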
A methodology for more efficient tail area sampling with discrete probability distribution
International Nuclear Information System (INIS)
Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon
1988-01-01
The Monte Carlo method is commonly used to observe the overall distribution and to determine the lower or upper bound value in a statistical approach when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method, entitled 'Two Step Tail Area Sampling', is developed; it uses the assumption of a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling at points separated by large intervals is done; second, sampling at points separated by small intervals is done around check points determined in the first step. Comparison with the Monte Carlo method shows that the results obtained from the new method converge to the analytic value faster than the Monte Carlo method for the same number of calculations. This new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of a pressurized light water nuclear reactor
Yu, Quan; Zhang, Qian; Lu, Xinqiong; Qian, Xiang; Ni, Kai; Wang, Xiaohao
2017-12-05
The performance of a miniature mass spectrometer in atmospheric analysis is closely related to the design of its sampling system. In this study, a simplified vacuum electrospray ionization (VESI) source was developed based on a combination of several techniques, including the discontinuous atmospheric pressure interface, direct capillary sampling, and pneumatic-assisted electrospray. Pulsed air was used as a vital factor to facilitate the operation of electrospray ionization in the vacuum chamber. This VESI device can be used as an efficient atmospheric sampling interface when coupled with a miniature rectilinear ion trap (RIT) mass spectrometer. The developed VESI-RIT instrument enables regular ESI analysis of liquids, and its qualitative and quantitative capabilities have been characterized using various solution samples. A limit of detection of 8 ppb could be attained for arginine in a methanol solution. In addition, extractive electrospray ionization of organic compounds can be implemented using the same VESI device, as long as the gas analytes are injected with the pulsed auxiliary air. This methodology can extend the use of the proposed VESI technique to rapid and online analysis of gaseous and volatile samples.
Energy Technology Data Exchange (ETDEWEB)
Giese, U.; Stenner, H.; Kettrup, A.
1989-05-01
When applying diffusive sampling systems to workplace air monitoring, it is necessary to know how the diffusive uptake rate and the collection efficiency depend on concentration, exposure time and the type of pollutant. For mixtures of pollutants in particular, negative influences through competition and mutual displacement are possible. The diffusive uptake rate and recovery for CH₂Cl₂ and CHCl₃ were investigated using two different types of diffusive samplers. For this purpose it was necessary to develop suitable devices for standard gas generation and for the exposure of diffusive samplers to a standard gas mixture. (orig.).
Equilibrium calculations, ch. 6
International Nuclear Information System (INIS)
Deursen, A.P.J. van
1976-01-01
A calculation is presented of dimer intensities obtained in supersonic expansions. There are two possible limiting considerations: either the dimers observed are already present in the source, in thermodynamic equilibrium, and are accelerated in the expansion (destruction during acceleration is neglected, as are processes leading to newly formed dimers); or one can apply a kinetic approach, where formation and destruction processes are followed throughout the expansion. The difficulty of the kinetic approach stems from the fact that the density, temperature and rate constants have to be known at all distances from the nozzle. The simple point of view has been adopted, and the measured dimer intensities are compared with the equilibrium concentration in the source. The comparison is performed under the assumption that the detection efficiency for dimers is twice the detection efficiency for monomers. The experimental evidence against the simple point of view, namely that the dimers of the onset region are formed already in the source under equilibrium conditions, is discussed. (Auth.)
Effects of the number of people on efficient capture and sample collection: A lion case study
Directory of Open Access Journals (Sweden)
Sam M. Ferreira
2013-05-01
Full Text Available Certain carnivore research projects and approaches depend on successful capture of individuals of interest. The number of people present at a capture site may determine success of a capture. In this study 36 lion capture cases in the Kruger National Park were used to evaluate whether the number of people present at a capture site influenced lion response rates and whether the number of people at a sampling site influenced the time it took to process the collected samples. The analyses suggest that when nine or fewer people were present, lions appeared faster at a call-up locality compared with when there were more than nine people. The number of people, however, did not influence the time it took to process the lions. It is proposed that efficient lion capturing should spatially separate capture and processing sites and minimise the number of people at a capture site.
Effects of the number of people on efficient capture and sample collection: a lion case study.
Ferreira, Sam M; Maruping, Nkabeng T; Schoultz, Darius; Smit, Travis R
2013-05-24
Certain carnivore research projects and approaches depend on successful capture of individuals of interest. The number of people present at a capture site may determine success of a capture. In this study 36 lion capture cases in the Kruger National Park were used to evaluate whether the number of people present at a capture site influenced lion response rates and whether the number of people at a sampling site influenced the time it took to process the collected samples. The analyses suggest that when nine or fewer people were present, lions appeared faster at a call-up locality compared with when there were more than nine people. The number of people, however, did not influence the time it took to process the lions. It is proposed that efficient lion capturing should spatially separate capture and processing sites and minimise the number of people at a capture site.
Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus
2012-01-01
Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed form description of the prior...... is available, which is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be considered. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed form expression of the prior can be given....... We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed form description of the prior exists. First, we lay out the theoretical background...
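The key idea, proposing from the prior so that the Metropolis acceptance ratio reduces to a likelihood ratio, can be sketched on a toy problem. Everything below is a simplification: an i.i.d. Gaussian prior (so the conditional prior equals the marginal), an identity forward model, and invented noise levels; it is not the authors' geostatistical setting.

```python
import math
import random

random.seed(1)
ndim, sigma = 10, 0.2
truth = [random.gauss(0, 1) for _ in range(ndim)]
data = [t + random.gauss(0, sigma) for t in truth]   # identity forward model

def loglik(m):
    return -sum((d - mi) ** 2 for d, mi in zip(data, m)) / (2 * sigma ** 2)

m = [random.gauss(0, 1) for _ in range(ndim)]        # start from a prior draw
ll0 = ll = loglik(m)
accepted = 0
for _ in range(5000):
    prop = list(m)
    i = random.randrange(ndim)
    prop[i] = random.gauss(0, 1)   # re-draw one component from its conditional prior
    llp = loglik(prop)
    # proposal density equals the prior, so the prior terms cancel and
    # acceptance depends on the likelihood ratio alone
    if math.log(random.random()) < llp - ll:
        m, ll, accepted = prop, llp, accepted + 1
```

In the paper's setting the prior is far from i.i.d., and the point of sequential Gibbs sampling is that the conditional re-simulation step still works when the prior is known only through an algorithm that generates realizations.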
Non-Uniform Sampling and J-UNIO Automation for Efficient Protein NMR Structure Determination.
Didenko, Tatiana; Proudfoot, Andrew; Dutta, Samit Kumar; Serrano, Pedro; Wüthrich, Kurt
2015-08-24
High-resolution structure determination of small proteins in solution is one of the big assets of NMR spectroscopy in structural biology. Improvements in the efficiency of NMR structure determination through advances in NMR experiments and automation of data handling therefore attract continued interest. Here, non-uniform sampling (NUS) of 3D heteronuclear-resolved [(1)H,(1)H]-NOESY data yielded two- to three-fold savings of instrument time for structure determinations of soluble proteins. With the 152-residue protein NP_372339.1 from Staphylococcus aureus and the 71-residue protein NP_346341.1 from Streptococcus pneumoniae we show that high-quality structures can be obtained with NUS NMR data, which are equally well amenable to robust automated analysis as the corresponding uniformly sampled data. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sandiford, P
1993-09-01
In recent years Lot quality assurance sampling (LQAS), a method derived from production-line industry, has been advocated as an efficient means to evaluate the coverage rates achieved by child immunization programmes. This paper examines the assumptions on which LQAS is based and the effect that these assumptions have on its utility as a management tool. It shows that the attractively low sample sizes used in LQAS are achieved at the expense of specificity unless unrealistic assumptions are made about the distribution of coverage rates amongst the immunization programmes to which the method is applied. Although it is a very sensitive test and its negative predictive value is probably high in most settings, its specificity and positive predictive value are likely to be low. The implications of these strengths and weaknesses with regard to management decision-making are discussed.
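The sensitivity/specificity trade-off the paper analyses can be checked directly from binomial probabilities. The plan parameters below (sample size n = 19, decision value d = 3) are a commonly quoted LQAS plan, used here only for illustration; the coverage thresholds are likewise invented.

```python
from math import comb

def accept_prob(coverage, n, d):
    """P(accept programme) = P(at most d unvaccinated children in a
    sample of n), binomial approximation (infinite-population lot)."""
    p_fail = 1.0 - coverage
    return sum(comb(n, k) * p_fail ** k * coverage ** (n - k)
               for k in range(d + 1))

# example plan: sample n = 19 children, reject if more than d = 3 unvaccinated
n, d = 19, 3
sensitivity = 1 - accept_prob(0.50, n, d)  # chance of flagging a 50%-coverage programme
specificity = accept_prob(0.90, n, d)      # chance of passing a 90%-coverage programme
print(round(sensitivity, 3), round(specificity, 3))
```

The numbers reproduce the paper's qualitative point: the test is extremely sensitive to poor coverage, while a nontrivial fraction of good programmes is still rejected, i.e. specificity is the weaker property.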
DEFF Research Database (Denmark)
Martinez Vega, Mabel Virginia; Wulfsohn, D.; Zamora, I.
2012-01-01
In situ assessment of fruit quality and yield can provide critical data for marketing and for logistical planning of the harvest, as well as for site-specific management. Our objective was to develop and validate efficient field sampling procedures for this purpose. We used the previously reported...... ‘fractionator’ tree sampling procedure and supporting handheld software (Gardi et al., 2007; Wulfsohn et al., 2012) to obtain representative samples of fruit from a 7.6-ha apple orchard (Malus ×domestica ‘Fuji Raku Raku’) in central Chile. The resulting sample consisted of 70 fruit on 56 branch segments...... of yield. Estimated marketable yield was 295.8±50.2 t. Field and packinghouse records indicated that of 348.2 t sent to packing (52.4 t or 15% higher than our estimate), 263.0 t was packed for export (32.8 t less or -12% error compared to our estimate). The estimated distribution of caliber compared very...
DEFF Research Database (Denmark)
Martinez, M.; Wulfsohn, Dvora-Laio; Zamora, I.
2012-01-01
In situ assessment of fruit quality and yield can provide critical data for marketing and for logistical planning of the harvest, as well as for site-specific management. Our objective was to develop and validate efficient field sampling procedures for this purpose. We used the previously reported...... 'fractionator' tree sampling procedure and supporting handheld software (Gardi et al., 2007; Wulfsohn et al., 2012) to obtain representative samples of fruit from a 7.6-ha apple orchard (Malus ×domestica 'Fuji Raku Raku') in central Chile. The resulting sample consisted of 70 fruit on 56 branch segments...... of yield. Estimated marketable yield was 295.8±50.2 t. Field and packinghouse records indicated that of 348.2 t sent to packing (52.4 t or 15% higher than our estimate), 263.0 t was packed for export (32.8 t less or -12% error compared to our estimate). The estimated distribution of caliber compared very...
High efficiency mixed species radioiodine air sampling, readout, and dose assessment system
International Nuclear Information System (INIS)
Distenfeld, C.; Klemish, J.
1976-05-01
Reactor accidents require monitoring to assess the impact on persons in the environment. This implies methods and apparatus to accurately and economically sample and evaluate possibly released activity. The development of a prototype iodine air sampling system that can discriminate against noble gas activity and be evaluated by standard Civil Defense instrumentation is reported. The apparatus can efficiently (95 percent) collect organic or inorganic, particulate or gaseous radioiodine in concentrations below stable atmospheric iodine, and under severe ambient conditions. Response to noble fission gases was reduced to less than 4×10⁻⁴ of an equal iodine airborne activity by heating the collector to approximately 100 °C. Reliable sample size, ±5 percent, was achieved by using a simple air flow regulator. Thyroid dose commitment was mathematically and graphically related to the iodine isotope distribution expected in the environment and to the response of the Civil Defense CDV-700 instrument used to evaluate the sample. Sensitivity of the method allows dose assessment of 1 to 2 rads to a child's thyroid
Directory of Open Access Journals (Sweden)
Pedro Saa
2015-04-01
Full Text Available Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The framework integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > −2 kJ/mol), a transition region (−2 > ΔGr > −20 kJ/mol) and a constant elasticity region (ΔGr < −20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only
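The three elasticity regions can be reproduced with the simplest possible case, a reversible mass-action step v = kf·S − kr·P, for which the scaled substrate elasticity (at fixed P) depends only on ΔGr. This is a textbook special case used for intuition, not GRASP's general formalism.

```python
import math

RT = 8.314e-3 * 298.15   # kJ/mol at 25 °C

def substrate_elasticity(dG):
    """Scaled substrate elasticity of a reversible mass-action reaction
    v = kf*S - kr*P, holding P fixed: since v = kf*S*(1 - exp(dG/RT)),
    eps_S = (S/v) dv/dS = 1 / (1 - exp(dG/RT))."""
    return 1.0 / (1.0 - math.exp(dG / RT))

# steep near equilibrium, transitional, and essentially constant (~1) far from it
for dG in (-0.5, -5.0, -50.0):
    print(dG, substrate_elasticity(dG))
```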
Sampling natural biofilms: a new route to build efficient microbial anodes.
Erable, Benjamin; Roncato, Marie-Anne; Achouak, Wafa; Bergel, Alain
2009-05-01
Electrochemically active biofilms were constructed on graphite anodes under constant polarization at −0.1 V vs saturated calomel reference (SCE) with 10 mM acetate as substrate. The reactors were inoculated with three different microbial samples that were drawn from exactly the same place in a French Atlantic coastal port (i) by scraping the biofilm that had formed naturally on the surface of a floating bridge, (ii) by taking marine sediments just under the floating bridge, and (iii) by taking nearby beach sand. Current densities of 2.0 A/m² were reached using the biofilm sample as inoculum, while only 0.4 A/m² and 0.8 A/m² were obtained using the underlying sediments and the beach sand, respectively. The structure of the bacterial communities forming the biofilms was characterized by denaturing gradient gel electrophoresis (DGGE) analysis, which revealed differences between samples, with an increase in the relative intensities of some bands and the appearance of others. Bacteria closely related to Bacteroidetes, Halomonas, and Marinobacterium were retrieved only from the efficient EA-biofilms formed from natural biofilms, whereas bacteria closely related to Mesoflavibacter were predominant in the biofilm formed from sediments. The marine biofilm was selected as the inoculum to further optimize the microbial anode. Epifluorescence microscopy and SEM confirmed that maintaining the electrode under constant polarization promoted rapid settlement of the electrode surface by a bacterial monolayer film. The microbial anode was progressively adapted to the consumption of acetate by three serial additions of substrate, thus improving the Coulombic efficiency of acetate consumption from 31 to 89%. The possible oxidation of sulfide played only a very small part in the current production, and the biofilm was not able to oxidize hydrogen. Graphite proved to be more efficient than a dimensionally stable anode (DSA) or stainless steel, but this result might be due to differences in the surface roughness
GUIDE TO CALCULATING TRANSPORT EFFICIENCY OF AEROSOLS IN OCCUPATIONAL AIR SAMPLING SYSTEMS
Energy Technology Data Exchange (ETDEWEB)
Hogue, M.; Hadlock, D.; Thompson, M.; Farfan, E.
2013-11-12
This report will present hand calculations for transport efficiency based on aspiration efficiency and particle deposition losses. Because the hand calculations become long and tedious, especially for lognormal distributions of aerosols, an R script (R 2011) will be provided for each element examined. Calculations are provided for the most common elements in a remote air sampling system, including a thin-walled probe in ambient air, straight tubing, bends and a sample housing. One popular alternative approach would be to put such calculations in a spreadsheet, a thorough version of which is shared by Paul Baron via the Aerocalc spreadsheet (Baron 2012). To provide greater transparency and to avoid common spreadsheet vulnerabilities to errors (Burns 2012), this report uses R. The particle size is based on the concept of activity median aerodynamic diameter (AMAD). The AMAD is a particle size in an aerosol where fifty percent of the activity in the aerosol is associated with particles of aerodynamic diameter greater than the AMAD. This concept allows for the simplification of transport efficiency calculations where all particles are treated as spheres with the density of water (1 g cm⁻³). In reality, particle densities depend on the actual material involved. Particle geometries can be very complicated. Dynamic shape factors are provided by Hinds (Hinds 1999). Some example factors are: 1.00 for a sphere, 1.08 for a cube, 1.68 for a long cylinder (10 times as long as it is wide), 1.05 to 1.11 for bituminous coal, 1.57 for sand and 1.88 for talc. Revision 1 is made to correct an error in the original version of this report. The particle distributions are based on activity weighting of particles rather than on the number of particles of each size. Therefore, the mass correction made in the original version is removed from the text and the calculations. Results affected by the change are updated.
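One building block of such transport-efficiency calculations, gravitational deposition in a horizontal run of tubing, can be sketched as follows. The report itself provides R scripts and uses the full laminar/turbulent deposition formulas; the sketch below uses Python, a simple well-mixed approximation, and invented tube geometry and flow, so it is illustrative only.

```python
import math

def settling_velocity(d_um, rho_p=1000.0, mu=1.81e-5, g=9.81):
    """Stokes terminal settling velocity (m/s) of a unit-density sphere
    of aerodynamic diameter d_um (micrometres); the slip correction is
    neglected, which is reasonable above about 1 um."""
    d = d_um * 1e-6
    return rho_p * d * d * g / (18.0 * mu)

def gravitational_penetration(d_um, tube_diam, tube_len, flow):
    """Fraction of particles surviving gravitational settling in a
    horizontal tube under a simple well-mixed approximation:
    P = exp(-v_ts * tube_diam * tube_len / flow), i.e. settling flux
    onto the projected floor area divided by the volumetric flow."""
    return math.exp(-settling_velocity(d_um) * tube_diam * tube_len / flow)

# 5-um AMAD aerosol in a 1 cm diameter, 2 m horizontal run at ~2 cfm
# (all geometry and flow values invented for illustration)
p = gravitational_penetration(5.0, 0.01, 2.0, 9.4e-4)
print(round(p, 3))
```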
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR) for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in a MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure
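The linear-map variant of implicit sampling is easiest to see in one dimension with a Gaussian prior and likelihood, where the map from the MAP point is exact and the importance weights are constant. The numbers below are invented; the study's actual setting (pilot-point permeabilities, TOUGH2 forward runs) is far higher-dimensional.

```python
import math
import random

# 1D toy: prior N(0, 1), one observation d with noise sd s (invented values)
d, s = 1.2, 0.5
F = lambda x: 0.5 * x ** 2 + 0.5 * ((x - d) / s) ** 2   # -log posterior + const

x_map = d / (1.0 + s ** 2)      # minimiser of F (the MAP point)
hess = 1.0 + 1.0 / s ** 2       # F''(x_map)

random.seed(3)
samples = []
for _ in range(2000):
    xi = random.gauss(0, 1)
    # linear map: x = x_map + H^(-1/2) * xi solves F(x) - F(x_map) = xi^2 / 2
    # exactly here because the posterior is Gaussian, so all weights are equal
    samples.append(x_map + xi / math.sqrt(hess))

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

For a non-Gaussian posterior the same map still concentrates samples near the high-probability region, but each sample then carries a nontrivial importance weight.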
Yu, Jing; Zhu, Shukui; Pang, Liling; Chen, Pin; Zhu, Gang-Tian
2018-03-09
Stable and reusable porphyrin-based magnetic nanocomposites were successfully synthesized for efficient extraction of polycyclic aromatic hydrocarbons (PAHs) from environmental water samples. Meso-tetra(4-carboxyphenyl)porphyrin (TCPP), a kind of porphyrin, can connect the copolymer after amidation and was linked to Fe₃O₄@SiO₂ magnetic nanospheres via cross-coupling. Several characterization techniques, such as field emission scanning electron microscopy, transmission electron microscopy, X-ray diffraction, Fourier transform infrared spectrometry, vibrating sample magnetometry and a tensiometer, were used to characterize the as-synthesized materials. The structure of the copolymer was similar to that of graphene, possessing sp²-conjugated carbon rings, but with an appropriate amount of delocalized π-electrons giving rise to higher extraction efficiency for heavy PAHs without sacrificing the performance in the extraction of light PAHs. Six extraction parameters, including the TCPP:Fe₃O₄@SiO₂ (m:m) ratio, the amount of adsorbent, the type of desorption solvent, the desorption solvent volume, the adsorption time and the desorption time, were investigated. After the optimization of extraction conditions, a comparison of the extraction efficiencies of Fe₃O₄@SiO₂-TCPP and Fe₃O₄@SiO₂@GO was carried out. The adsorption mechanism of TCPP towards PAHs was studied by first-principles density functional theory (DFT) calculations. Combining experimental and calculated results, it was shown that the π-π stacking interaction was the main adsorption mechanism of TCPP for PAHs and that the amount of delocalized π-electrons plays an important role in the elution process. Under the optimal conditions, Fe₃O₄@SiO₂-porphyrin showed good precision in intra-day (<8.9%) and inter-day (<13.0%) detection, low method detection limits (2-10 ng L⁻¹), and wide linearity (10-10000 ng L⁻¹). The method was applied to simultaneous analysis of 15 PAHs with
SymPix: A Spherical Grid for Efficient Sampling of Rotationally Invariant Operators
Seljebotn, D. S.; Eriksen, H. K.
2016-02-01
We present SymPix, a special-purpose spherical grid optimized for efficiently sampling rotationally invariant linear operators. This grid is conceptually similar to the Gauss-Legendre (GL) grid, aligning sample points with iso-latitude rings located on Legendre polynomial zeros. Unlike the GL grid, however, the number of grid points per ring varies as a function of latitude, avoiding expensive oversampling near the poles and ensuring nearly equal sky area per grid point. The ratio between the number of grid points in two neighboring rings is required to be a low-order rational number (3, 2, 1, 4/3, 5/4, or 6/5) to maintain a high degree of symmetry. Our main motivation for this grid is to solve linear systems using multi-grid methods, and to construct efficient preconditioners through pixel-space sampling of the linear operator in question. As a benchmark and representative example, we compute a preconditioner for a linear system that involves the operator D̂ + B̂ᵀN⁻¹B̂, where B̂ and D̂ may be described as both local and rotationally invariant operators, and N is diagonal in the pixel domain. For a bandwidth limit of ℓ_max = 3000, we find that our new SymPix implementation yields average speed-ups of 360 and 23 for B̂ᵀN⁻¹B̂ and D̂, respectively, compared with the previous state-of-the-art implementation.
International Nuclear Information System (INIS)
Alfassi, Z.B.; Lavi, N.
2005-01-01
The effect of the density of the radioactive material packed in a Marinelli beaker on the counting efficiency was studied. It was found that for all densities studied (0.4-1.7 g/cm³) the counting efficiency (ε) fits a linear log-log dependence on the photon energy (E) above 200 keV, i.e. it obeys the equation ε = αE^β (α, β: parameters). It was found that for each photon energy the counting efficiency depends linearly on the density (ρ) of the matrix: ε = a - bρ (a, b: parameters). The parameters of the linear dependence are themselves energy dependent (linear log-log dependence), leading to a final equation for the counting efficiency of a Marinelli beaker involving both the density of the matrix and the photon energy: ε = α₁E^β₁ - α₂E^β₂·ρ
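The final expression above is straightforward to evaluate once the four fit parameters are known. A minimal sketch follows; the parameter values are invented for illustration only, not the values fitted in the study:

```python
def counting_efficiency(energy_kev, density, a1, b1, a2, b2):
    """Counting efficiency of a Marinelli beaker, eps = a1*E**b1 - a2*E**b2 * rho.

    energy_kev: photon energy E (the fit holds above ~200 keV)
    density:    matrix density rho in g/cm^3
    a1, b1, a2, b2: fit parameters (hypothetical values used below)
    """
    return a1 * energy_kev ** b1 - a2 * energy_kev ** b2 * density

# Hypothetical fit parameters, e.g. for the 661.7 keV line of Cs-137
eps = counting_efficiency(661.7, 1.0, a1=2.0, b1=-0.8, a2=0.5, b2=-0.8)
```

At fixed energy the expression reduces to the linear form ε = a - bρ with a = α₁E^β₁ and b = α₂E^β₂, matching the intermediate fit.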
An efficient modularized sample-based method to estimate the first-order Sobol' index
International Nuclear Information System (INIS)
Li, Chenzhao; Mahadevan, Sankaran
2016-01-01
The Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to estimate the Sobol' index directly, based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable of computing the first-order index when only input–output samples are available and the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method helps to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimates the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handles both uncorrelated and correlated model inputs.
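The given-data idea can be illustrated with a simple binning estimator: sort the samples of one input, take the variance of per-bin output means as an estimate of Var(E[Y|Xi]), and divide by Var(Y). This is a generic sample-based estimator in the same spirit, not the authors' exact algorithm:

```python
import random

def first_order_sobol(x, y, n_bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) from paired samples only."""
    n = len(x)
    order = sorted(range(n), key=lambda k: x[k])  # sort samples by the input of interest
    mean_y = sum(y) / n
    var_y = sum((v - mean_y) ** 2 for v in y) / n
    size = n // n_bins
    # Variance of the per-bin output means approximates Var(E[Y|X_i])
    bin_means = []
    for b in range(n_bins):
        idx = order[b * size:(b + 1) * size]
        bin_means.append(sum(y[k] for k in idx) / len(idx))
    var_cond = sum((m - mean_y) ** 2 for m in bin_means) / n_bins
    return var_cond / var_y

# Toy model Y = X1 + 0.1*X2: X1 should dominate the first-order indices
rng = random.Random(0)
x1 = [rng.uniform(0, 1) for _ in range(20000)]
x2 = [rng.uniform(0, 1) for _ in range(20000)]
y = [a + 0.1 * b for a, b in zip(x1, x2)]
```

Note that the same (x, y) sample set is reused for every input, mirroring the modularization the paper describes.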
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Run-times are measured on CPU and GPU platforms; dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times when considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization were shown to be possible at constant time complexity
Singh, Varoon; Purohit, Ajay Kumar; Chinthakindi, Sridhar; Goud D, Raghavender; Tak, Vijay; Pardasani, Deepak; Shrivastava, Anchal Roy; Dubey, Devendra Kumar
2016-02-19
Magnetic hydrophilic-lipophilic balance (MHLB) hybrid resin was prepared by precipitation polymerization using N-vinylpyrrolidone (PVP) and divinylbenzene (DVB) as monomers and Fe2O3 nanoparticles as the magnetic material. These resins were successfully applied to the extraction of chemical warfare agents (CWAs) and their markers from water samples through magnetic dispersive solid-phase extraction (MDSPE). By varying the ratios of monomers, resin with the desired hydrophilic-lipophilic balance was prepared for the extraction of CWAs and related esters of varying polarities. Amongst the different composites, Fe2O3 nanoparticles coated with 10% PVP + 90% DVB exhibited the best recoveries, varying between 70.32 and 97.67%. Parameters affecting the extraction efficiencies, such as extraction time, desorption time, nature and volume of desorption solvent, amount of extraction sorbent and the effect of salts on extraction, were investigated. Under the optimized conditions, linearity was obtained in the range of 0.5-500 ng mL(-1) with correlation coefficients ranging from 0.9911 to 0.9980. Limits of detection and limits of quantification were 0.5-1.0 and 3.0-5.0 ng mL(-1), respectively, with RSDs varying from 4.88 to 11.32% for markers of CWAs. Finally, the developed MDSPE method was employed for extraction of analytes from water samples of various sources and the OPCW proficiency test samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Testing the sampling efficiency of a nuclear power station stack monitor
Energy Technology Data Exchange (ETDEWEB)
Stroem, L.H. [Instrumentinvest, Nykoeping (Sweden)
1997-08-01
The test method comprises the injection of known amounts of monodisperse particles into the stack air stream at a suitable point upstream of the sampling installation. To find a suitable injection point, the gas flow was mapped by means of a tracer gas released at various points in the stack base. The resulting concentration distributions at the stack sampler level were observed by means of an array of gas detectors. An injection point that produced a symmetrical distribution over the stack area and low concentrations at the stack walls was selected for the particle tests. Monodisperse particles of 6, 10, and 19 μm aerodynamic diameter, tagged with dysprosium, were dispersed at the selected injection point. Particle concentration at the sampler level was measured. The losses to the stack walls were found to be less than 10%. The particle concentrations at the four sampler inlets were calculated from the observed gas distribution. The amount calculated to be aspirated into the sampler piping was compared with the quantity collected by the sampling train's ordinary filter, to obtain the sampling line transmission efficiency. 1 ref., 2 figs.
A high-efficiency acoustic chamber and the anomalous sample rotation
Wang, Taylor G.; Allen, J. L.
1992-01-01
A high-efficiency acoustic chamber for the levitation and manipulation of liquid or molten samples in a microgravity environment has been developed. The chamber uses two acoustic drivers mounted at opposite corners of the chamber; by driving these at the same frequency, with 18-deg phase shifts, an increase in force by a factor of 3-4 is obtained relative to the force of a single-driver system operated at the same power level. This enhancement is due to the increased coupling between the sound driver and the chamber. An anomalous rotation is noted to be associated with the chamber; it is found to be eliminated by an empirical solution that as yet lacks a physical explanation.
AREA EFFICIENT FRACTIONAL SAMPLE RATE CONVERSION ARCHITECTURE FOR SOFTWARE DEFINED RADIOS
Directory of Open Access Journals (Sweden)
Latha Sahukar
2014-09-01
Modern software defined radios (SDRs) use complex signal processing algorithms to realize efficient wireless communication schemes. Several such algorithms require a specific symbol-to-sample ratio to be maintained. In this context the fractional rate converter (FRC) becomes a crucial block in the receiver part of an SDR. The paper presents an area-optimized dynamic FRC block for low-power SDR applications. The limitations of the conventional cascaded interpolator and decimator architecture for FRC are also presented. Extensions of the sinc-function-interpolation-based architecture towards high area optimization, together with run-time configurability via a time register, are presented. The area and speed analyses are carried out with Xilinx FPGA synthesis tools. Only 15% area occupancy, with a maximum clock speed of 133 MHz, is reported on a Spartan-6 LX45 Field Programmable Gate Array (FPGA).
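A fractional rate change by p/q can be sketched in software with direct truncated-sinc interpolation, evaluating the band-limited reconstruction at the new sample instants. This floating-point illustration shows the interpolation principle only; it is not the paper's fixed-point FPGA architecture:

```python
import math

def sinc_resample(x, p, q, half_width=8):
    """Resample x by the fractional ratio p/q using truncated-sinc interpolation."""
    n_out = (len(x) * p) // q
    y = []
    for m in range(n_out):
        t = m * q / p  # position of output sample m on the input time grid
        k0 = int(math.floor(t))
        acc = 0.0
        for k in range(k0 - half_width, k0 + half_width + 1):
            if 0 <= k < len(x):
                u = t - k
                w = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
                acc += x[k] * w  # weight each input sample by sinc(t - k)
        y.append(acc)
    return y

# A slowly varying sinusoid should survive a 4/3 rate change nearly unchanged
xs = [math.sin(2 * math.pi * 0.01 * n) for n in range(200)]
ys = sinc_resample(xs, 4, 3)
```

A hardware FRC replaces the run-time sinc evaluation with precomputed coefficients and shares multipliers across phases, which is where the area optimization discussed above comes in.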
Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul
2015-01-01
The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of the Equal Gain Combining (EGC) and the Maximum Ratio Combining (MRC) receivers. In this case, it can be seen that this problem turns out to be that of computing the Cumulative Distribution Function (CDF) for the sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods, as naive MC simulations would require high computational complexity. Along these lines, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies to arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Some selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.
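The variance-reduction idea behind IS can be illustrated with the classical exponential-twisting form for a rare tail probability of a sum of random variables; hazard-rate twisting as used in the paper generalizes this to heavy-tailed fading models. Everything below is a generic textbook sketch, not the authors' estimator:

```python
import math, random

def tail_prob_is(n, gamma, theta, n_samples=100000, seed=1):
    """Estimate P(X1+...+Xn > gamma) for X_i ~ Exp(1) by exponential twisting.

    Samples are drawn from the tilted density f_theta(x) = (1-theta)*exp(-(1-theta)*x),
    which pushes mass into the rare region, and each hit is reweighted by the
    likelihood ratio exp(-theta*S) / (1-theta)**n so the estimator stays unbiased.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        s = sum(rng.expovariate(1.0 - theta) for _ in range(n))
        if s > gamma:
            total += math.exp(-theta * s) / (1.0 - theta) ** n
    return total / n_samples

# Exact Erlang(2,1) tail: P(S > 20) = (1 + 20) * exp(-20), about 4.3e-8;
# theta = 0.9 centers the tilted sum on gamma (mean 2/(1-0.9) = 20)
est = tail_prob_is(2, 20.0, theta=0.9)
```

A naive MC run of the same size would almost never see the event S > 20, whereas the twisted estimator resolves it to a few percent relative error.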
Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru
2016-03-30
As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to sparsity in the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers are near the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding the edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas-phase chemical equilibria, we present a semi-analytical, versatile and efficient computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations, namely the law of mass action and the element conservation equations including charge balance, in many variables. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly even in the most demanding chemical situations, which sometimes posed extreme challenges for previous algorithms.
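The decomposition into one-variable equations can be illustrated with the simplest dissociation equilibrium, H2 ⇌ 2 H. The law of mass action plus hydrogen conservation reduce to a single quadratic in the H number density, solvable in closed form, which is the kind of analytical branch such codes exploit where feasible. The equilibrium-constant value below is a made-up illustration, not FastChem data:

```python
import math

def h_h2_equilibrium(n_tot, K):
    """Solve n_H**2 / n_H2 = K with conservation n_H + 2*n_H2 = n_tot.

    Substituting n_H2 = (n_tot - n_H) / 2 gives the quadratic
    2*n_H**2 + K*n_H - K*n_tot = 0, whose positive root is returned.
    """
    n_h = (-K + math.sqrt(K * K + 8.0 * K * n_tot)) / 4.0
    n_h2 = (n_tot - n_h) / 2.0
    return n_h, n_h2

# Illustrative equilibrium constant (arbitrary units)
n_h, n_h2 = h_h2_equilibrium(n_tot=1.0, K=0.1)
```

With many species, each such one-variable solve is coupled to the others through the shared element abundances, and the full system is iterated; that coupling is what the semi-analytical scheme described above organizes.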
Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2018-02-01
The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation so can be expensive in models with a large computational cost.
Zhao, Qun; Fang, Fei; Wu, Ci; Wu, Qi; Liang, Yu; Liang, Zhen; Zhang, Lihua; Zhang, Yukui
2016-03-17
An integrated sample preparation method, termed "imFASP", which combines in-situ filter-aided sample pretreatment and microwave-assisted trypsin digestion, was developed for preparation of microgram and even nanogram amounts of complex protein samples with high efficiency in 1 h. For the imFASP method, proteins dissolved in 8 M urea were loaded onto a filter device with a molecular weight cut-off (MWCO) of 10 kDa, followed by in-situ protein preconcentration, denaturation, reduction, alkylation, and microwave-assisted tryptic digestion. Compared with the traditional in-solution sample preparation method, the imFASP method generated more protein and peptide identifications (IDs) from preparation of 45 μg of Escherichia coli protein sample due to its higher efficiency, and the sample preparation throughput was improved significantly, by 14 times (1 h vs. 15 h). More importantly, when the starting amounts of E. coli cell lysate decreased to the nanogram level (50-500 ng), the numbers of proteins and peptides identified by the imFASP method improved by at least 30% and 44%, respectively, compared with the traditional in-solution preparation method, suggesting dramatically higher peptide recovery of the imFASP method for trace amounts of complex proteome samples. All these results demonstrate that the imFASP method developed here has high potential for highly efficient and high-throughput preparation of trace amounts of complex proteome samples. Copyright © 2016 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Bruggeman, M.; Verheyen, L.; Vidmar, T.; Liu, B.
2016-01-01
We present a numerical fitting method for transmission data that outputs an equivalent sample composition. This output is used as input to a generalised efficiency transfer model based on the EFFTRAN software integrated in a LIMS. The procedural concept allows choosing between efficiency transfer with a predefined sample composition or with an experimentally determined composition based on a transmission measurement. The method can be used for simultaneous quantification of low-energy gamma emitters like 210Pb, 241Am and 234Th in typical environmental samples. - Highlights: • New fitting method for experimentally determined attenuation coefficients. • Generalised efficiency transfer with EFFTRAN based on transmission measurements. • Method of generalised efficiency transfer integrated in a LIMS. • Method applicable to gamma-ray spectrometry of environmental samples.
Carlowitz, Christian; Girg, Thomas; Ghaleb, Hatem; Du, Xuan-Quang
2017-09-01
For ultra-high speed communication systems at high center frequencies above 100 GHz, we propose a disruptive change in system architecture to address major issues regarding amplifier chains with a large number of amplifier stages. They cause a high noise figure and high power consumption when operating close to the frequency limits of the underlying semiconductor technologies. Instead of scaling a classic homodyne transceiver system, we employ repeated amplification in single-stage amplifiers through positive feedback as well as synthesizer-free self-mixing demodulation at the receiver to simplify the system architecture notably. Since the amplitude and phase information for the emerging oscillation is defined by the input signal and the oscillator is only turned on for a very short time, it can be left unstabilized and thus come without a PLL. As soon as gain is no longer the most prominent issue, relaxed requirements for all the other major components allow reconsidering their implementation concepts to achieve further improvements compared to classic systems. This paper provides the first comprehensive overview of all major design aspects that need to be addressed upon realizing a SPARS-based transceiver. At system level, we show how to achieve high data rates and a noise performance comparable to classic systems, backed by scaled demonstrator experiments. Regarding the transmitter, design considerations for efficient quadrature modulation are discussed. For the frontend components that replace PA and LNA amplifier chains, implementation techniques for regenerative sampling circuits based on super-regenerative oscillators are presented. Finally, an analog-to-digital converter with outstanding performance and complete interfaces both to the analog baseband as well as to the digital side completes the set of building blocks for efficient ultra-high speed communication.
Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.
Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G
2016-06-01
This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are utilised here as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat.
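The core operation, interpolating a baseline through non-uniformly spaced anchor points, can be sketched with plain piecewise-linear interpolation; the Letter's segmented variant differs in detail but builds on the same step:

```python
def piecewise_linear(anchors, t):
    """Interpolate at time t through non-uniform, sorted (time, value) anchors.

    Times outside the anchor range are clamped to the first/last anchor value.
    """
    if t <= anchors[0][0]:
        return anchors[0][1]
    if t >= anchors[-1][0]:
        return anchors[-1][1]
    for (t0, v0), (t1, v1) in zip(anchors, anchors[1:]):
        if t0 <= t <= t1:
            # linear segment between the two surrounding anchors
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# e.g. three isoelectric points of one beat, non-uniformly spaced (made-up values)
baseline = [(0.0, 0.0), (0.3, 0.1), (1.0, -0.2)]
drift_estimate = [piecewise_linear(baseline, k / 10) for k in range(11)]
```

Subtracting such a drift estimate from the raw signal is the baseline-removal step the RMS-error figures above evaluate.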
Directory of Open Access Journals (Sweden)
Alejo J Irigoyen
Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated, because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of a UVC method based on diver-towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to the diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which the perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time- and cost-efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in the estimation of perpendicular distances can seriously affect DS density estimates. To assess the occurrence of distance estimation errors and their dependence on the observer's experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method. The accuracy of the estimates was measured comparing the actual
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
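Systematic random sampling as described, a single random start followed by equidistant sites, takes only a few lines to express; this is a generic one-dimensional sketch, not the reported software:

```python
import random

def systematic_random_sites(extent, interval, rng=random):
    """Return sampling positions over [0, extent): one random start drawn
    uniformly from [0, interval), then sites at equidistant steps."""
    start = rng.uniform(0, interval)
    sites = []
    pos = start
    while pos < extent:
        sites.append(pos)
        pos += interval  # pre-determined, equidistant spacing
    return sites

# e.g. sampling sites across a 1000 um section every 150 um
sites = systematic_random_sites(1000.0, 150.0, random.Random(42))
```

The random start is what makes every position in the structure equally likely to be sampled, which underpins the unbiasedness of the design.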
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time by sampling method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggests that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of
Directory of Open Access Journals (Sweden)
Alyssa M Anderson
2012-10-01
Collections of Chironomidae surface-floating pupal exuviae (SFPE) provide an effective means of assessing water quality in streams. Although not widely used in the United States, the technique is not new and has been shown to be more cost-efficient than traditional dip-net sampling techniques in organically enriched streams in an urban landscape. The intent of this research was to document the efficiency of sorting SFPE samples relative to dip-net samples in trout streams with catchments varying in amount of urbanization and impervious surface. Samples of both SFPE and dip-nets were collected from 17 sample sites located on 12 trout streams in Duluth, MN, USA. We quantified the time needed to sort subsamples of 100 macroinvertebrates from dip-net samples, and of fewer or more than 100 chironomid exuviae from SFPE samples. For larger samples of SFPE, the time required to subsample up to 300 exuviae was also recorded. The average time to sort subsamples of 100 specimens was 22.5 minutes for SFPE samples, compared to 32.7 minutes for 100 macroinvertebrates in dip-net samples. Average time to sort up to 300 exuviae was 37.7 minutes. These results indicate that sorting SFPE samples is more time-efficient than traditional dip-net techniques in trout streams with varying catchment characteristics. doi: 10.5324/fn.v31i0.1380. Published online: 17 October 2012.
High-throughput sample adaptive offset hardware architecture for high-efficiency video coding
Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin
2018-03-01
A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meet the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
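For orientation, SAO's edge-offset mode classifies each sample by comparing it with two neighbors along a chosen direction and adds a category-dependent offset; the hardware above parallelizes exactly this per-sample operation. A minimal scalar sketch of the classification (the offset values are illustrative, not from a coded bitstream):

```python
def sao_edge_offset(samples, offsets):
    """Apply HEVC-style SAO edge offsets along a 1-D row of samples.

    Categories: 1 = local minimum, 2 = concave corner, 3 = convex corner,
    4 = local maximum, 0 = none (monotone region, left unchanged).
    offsets maps category -> offset value.
    """
    out = list(samples)
    for i in range(1, len(samples) - 1):
        a, c, b = samples[i - 1], samples[i], samples[i + 1]
        # sign of c vs. each neighbor, summed: -2..2
        sign = (c > a) - (c < a) + (c > b) - (c < b)
        category = {-2: 1, -1: 2, 1: 3, 2: 4}.get(sign, 0)
        out[i] = c + offsets.get(category, 0)
    return out

# Soften an isolated dip and peak with illustrative offsets
row = [10, 10, 7, 10, 10, 13, 10, 10]
filtered = sao_edge_offset(row, {1: 2, 2: 1, 3: -1, 4: -2})
```

In the standard the encoder additionally chooses the edge direction and offsets per coding tree block via the rate-distortion mode decision that the paper's bitrate estimator simplifies.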
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
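The progressive-sampling idea, evaluating candidate configurations on growing data subsets and discarding poor ones early so that full-data training is rarely needed, can be sketched generically. The `evaluate` callback and the halving rule here are placeholder choices for illustration, not the paper's Bayesian-optimization algorithm:

```python
def progressive_selection(configs, data, evaluate, start=100):
    """Halve the candidate set while doubling the evaluation sample size.

    evaluate(config, sample) must return an error measure (lower is better).
    Returns the single surviving configuration.
    """
    n = start
    candidates = list(configs)
    while len(candidates) > 1 and n <= len(data):
        scored = sorted(candidates, key=lambda c: evaluate(c, data[:n]))
        candidates = scored[: max(1, len(scored) // 2)]  # drop the worst half
        n *= 2  # survivors earn a progressively larger sample
    return candidates[0]

# Toy task: pick the candidate closest to the data mean; the data sequence is
# pseudo-shuffled so that every prefix is representative of the whole set
data = [((i * 37) % 1000) / 1000 for i in range(1000)]
best = progressive_selection(
    [0.1, 0.3, 0.5, 0.9],
    data,
    lambda c, sample: abs(c - sum(sample) / len(sample)),
)
```

Most of the evaluation budget is spent on small prefixes, which is the source of the efficiency on large data sets discussed above.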
Efficient Multilevel and Multi-index Sampling Methods in Stochastic Differential Equations
Haji-Ali, Abdul Lateef
2016-05-22
Most problems in engineering and natural sciences involve parametric equations in which the parameters are not known exactly due to measurement errors, lack of measurement data, or even intrinsic variability. In such problems, one objective is to compute point or aggregate values, called “quantities of interest”. A rapidly growing research area that tries to tackle this problem is Uncertainty Quantification (UQ). As the name suggests, UQ aims to accurately quantify the uncertainty in quantities of interest. To that end, the approach followed in this thesis is to describe the parameters using probabilistic measures and then to employ probability theory to approximate the probabilistic information of the quantities of interest. In this approach, the parametric equations must be accurately solved for multiple values of the parameters to explore the dependence of the quantities of interest on these parameters, using various so-called “sampling methods”. In almost all cases, the parametric equations cannot be solved exactly and suitable numerical discretization methods are required. The high computational complexity of these numerical methods coupled with the fact that the parametric equations must be solved for multiple values of the parameters make UQ problems computationally intensive, particularly when the dimensionality of the underlying problem and/or the parameter space is high. This thesis is concerned with optimizing existing sampling methods and developing new ones. Starting with the Multilevel Monte Carlo (MLMC) estimator, we first prove its normality using the Lindeberg-Feller central limit theorem. We then design the Continuation Multilevel Monte Carlo (CMLMC) algorithm that efficiently approximates the parameters required to run MLMC. We also optimize the hierarchies of one-dimensional discretization parameters that are used in MLMC and analyze the tolerance splitting parameter between the statistical error and the bias constraints. An important contribution
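The multilevel idea referred to above couples coarse and fine discretizations so that most samples are cheap: the estimator is a telescoping sum of a crude level-0 term plus correction terms whose variance shrinks with the level. A minimal sketch (our illustration, not the thesis code) for estimating E[X_T] of geometric Brownian motion with Euler-Maruyama, where level l uses 2**l time steps and coarse paths reuse summed pairs of fine Brownian increments:

```python
import numpy as np

def mlmc_gbm_mean(L, n_samples, mu=0.05, sigma=0.2, x0=1.0, T=1.0, seed=0):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = mu*X dt + sigma*X dW.

    Level l uses an Euler-Maruyama grid with 2**l steps; each correction term
    E[P_l - P_{l-1}] couples fine and coarse paths through the same Brownian
    increments, so its variance (and required sample count) shrinks with l.
    """
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for l in range(L + 1):
        n = n_samples[l]
        nf = 2 ** l                       # fine-grid steps on this level
        dt_f = T / nf
        dW = rng.normal(0.0, np.sqrt(dt_f), size=(n, nf))
        xf = np.full(n, x0)               # fine Euler-Maruyama path
        for k in range(nf):
            xf = xf + mu * xf * dt_f + sigma * xf * dW[:, k]
        if l == 0:
            estimate += xf.mean()         # plain Monte Carlo term E[P_0]
        else:
            dt_c = T / (nf // 2)          # coarse path: summed increment pairs
            dWc = dW[:, 0::2] + dW[:, 1::2]
            xc = np.full(n, x0)
            for k in range(nf // 2):
                xc = xc + mu * xc * dt_c + sigma * xc * dWc[:, k]
            estimate += (xf - xc).mean()  # correction term E[P_l - P_{l-1}]
    return estimate
```

For this toy SDE the exact mean x0*exp(mu*T) is available, so the telescoping sum can be checked directly. The per-level sample sizes are arbitrary here; choosing them near-optimally is exactly what the CMLMC algorithm described in the thesis automates.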
International Nuclear Information System (INIS)
Hartwig, Johannes; Kockat, Judit; Schade, Wolfgang; Braungardt, Sibylle
2017-01-01
Energy efficiency is one of the fastest and most cost-effective contributions to a sustainable, secure and affordable energy system. Furthermore, the so-called “non-energy benefits”, “co-benefits” or “multiple benefits” of energy efficiency are receiving increased interest from policy makers and the scientific community. Among the various non-energy benefits of energy efficiency initiatives, the macroeconomic benefits play an important role. Our study presents a detailed analysis of the long-term macroeconomic effects of German energy efficiency policy including the industry and service sectors as well as residential energy demand. We quantify the macroeconomic effects of an ambitious energy efficiency scenario by combining bottom-up models with an extended dynamic input-output model. We study sectoral shifts within the economy regarding value added and employment compared to the baseline scenario. We provide an in-depth analysis of the effects of energy efficiency policy on consumers, individual industry sectors, and the economy as a whole. We find significant positive macroeconomic effects resulting from energy efficiency initiatives, with growth effects for both GDP and employment ranging between 0.88% and 3.38%. Differences in sectoral gains lead to a shift in the economy. Our methodological approach provides a comprehensive framework for analyzing the macroeconomic benefits of energy efficiency. - Highlights: • Integration of detailed sectoral models for energy demand with macroeconomic model. • Detailed assessment of effects of ambitious energy efficiency targets for Germany. • Positive macroeconomic effects can support policymaking and reduce uncertainty.
International Nuclear Information System (INIS)
Wo, Y.M.; Dainee Nor Fardzila Ahmad Tugi; Khairul Nizam Razali
2015-01-01
The effects of sample density on the measurement efficiency of gamma spectrometry systems were studied using four sets of multi-nuclide standard sources with densities between 0.3 and 1.4 g/ml. The study was conducted on seven 25% coaxial HPGe detector gamma spectrometry systems in the Radiochemistry and Environment Laboratory (RAS). Differences in efficiency as a function of gamma-emitting radionuclide energy and measurement system were compared and discussed. Correction factors for self-absorption caused by differences in sample matrix density were estimated for the gamma systems. The correction factors are to be used in the quantification of radionuclide concentrations in service and research samples of various densities in RAS. (author)
Jeong, Yoonah; Schäffer, Andreas; Smith, Kilian
2018-06-15
In this work, Oasis HLB® beads were embedded in a silicone matrix to make a single-phase passive sampler with a higher affinity for polar and ionisable compounds than silicone alone. The applicability of this mixed polymer sampler (MPS) was investigated for 34 aquatic contaminants (log K_OW -0.03 to 6.26) in batch experiments. The influence of flow was investigated by comparing uptake under static and stirred conditions. The sampler characteristics of the MPS were assessed in terms of sampling rates (R_S) and sampler-water partition coefficients (K_SW), and these were compared to those of the polar organic chemical integrative sampler (POCIS) as a reference kinetic passive sampler. The MPS was characterized as an equilibrium sampler for both polar and non-polar compounds, with faster uptake rates and a shorter time to reach equilibrium than the POCIS. Water flow rate impacted sampling rates by up to a factor of 12 when comparing static and stirred conditions. In addition, the relative accumulation of compounds in the polyethersulfone (PES) membranes versus the inner Oasis HLB sorbent was compared for the POCIS, and ranged from <1% to 83% depending on the analyte properties. This is indicative of a potentially significant lag phase for less polar compounds within POCIS. The findings of this study can be used to quantitatively describe the partitioning and kinetic behaviour of MPS and POCIS for a range of aquatic organic contaminants for application in field sampling. Copyright © 2018 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Luca Corazzini
2015-01-01
by error and noise in behavior. Results change when we consider a more general QRE specification with cross-subject heterogeneity in concerns for (group) efficiency. In this case, we find that the majority of the subjects make contributions that are compatible with the hypothesis of preference for (group) efficiency. A likelihood-ratio test confirms the superiority of the more general specification of the QRE model over alternative specifications.
Energy Technology Data Exchange (ETDEWEB)
Radlbauer, Rudolf, E-mail: rudolf.radlbauer@stpoelten.lknoe.a [MR Physics Group, Department of Radiology, Landesklinikum St. Poelten, Propst Fuehrer Strasse 4, 3100 St. Poelten (Austria); Lomoschitz, Friedrich, E-mail: friedrich.lomoschitz@stpoelten.lknoe.a [MR Physics Group, Department of Radiology, Landesklinikum St. Poelten, Propst Fuehrer Strasse 4, 3100 St. Poelten (Austria); Salomonowitz, Erich, E-mail: erich.salomonowitz@stpoelten.lknoe.a [MR Physics Group, Department of Radiology, Landesklinikum St. Poelten, Propst Fuehrer Strasse 4, 3100 St. Poelten (Austria); Eberhardt, Knut E., E-mail: info@mrt-kompetenzzentrum.d [MRT Competence Center Schloss Werneck, Balthasar-Neumann-Platz 2, 97440 Werneck (Germany); Stadlbauer, Andreas, E-mail: andi@nmr.a [MR Physics Group, Department of Radiology, Landesklinikum St. Poelten, Propst Fuehrer Strasse 4, 3100 St. Poelten (Austria); Department of Neurosurgery, University of Erlangen-Nuremberg, Schwabachanlage 6, 91054 Erlangen (Germany)
2010-08-15
The purpose of this study was to assess the effect of a driven equilibrium (DRIVE) pulse incorporated in a standard T1-weighted turbo spin echo (TSE) sequence as used in our routine MRI protocol for examination of pathologies of the knee. Sixteen consecutive patients with knee disorders were examined using the routine MRI protocol, including T1-weighted TSE sequences with and without a DRIVE pulse. Signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) of anatomical structures and pathologies were calculated and compared for both sequences. The differences in diagnostic value of the T1-weighted images with and without the DRIVE pulse were assessed. SNR was significantly higher on images acquired with the DRIVE pulse for fluid, effusion, cartilage, and bone. Differences in the SNR of meniscus and muscle between the two sequences were not statistically significant. CNR was significantly increased between muscle and effusion, fluid and cartilage, fluid and meniscus, cartilage and meniscus, and bone and cartilage on images acquired using the DRIVE pulse. The diagnostic value of the T1-weighted images was found to be improved for delineation of anatomic structures and for diagnosing a variety of pathologies when a DRIVE pulse is incorporated in the sequence. Incorporation of a DRIVE pulse into a standard T1-weighted TSE sequence leads to a significant increase in the SNR and CNR of both anatomical structures and pathologies, and consequently to an increase in diagnostic value within the same acquisition time.
International Nuclear Information System (INIS)
Radlbauer, Rudolf; Lomoschitz, Friedrich; Salomonowitz, Erich; Eberhardt, Knut E.; Stadlbauer, Andreas
2010-01-01
The purpose of this study was to assess the effect of a driven equilibrium (DRIVE) pulse incorporated in a standard T1-weighted turbo spin echo (TSE) sequence as used in our routine MRI protocol for examination of pathologies of the knee. Sixteen consecutive patients with knee disorders were examined using the routine MRI protocol, including T1-weighted TSE sequences with and without a DRIVE pulse. Signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) of anatomical structures and pathologies were calculated and compared for both sequences. The differences in diagnostic value of the T1-weighted images with and without the DRIVE pulse were assessed. SNR was significantly higher on images acquired with the DRIVE pulse for fluid, effusion, cartilage, and bone. Differences in the SNR of meniscus and muscle between the two sequences were not statistically significant. CNR was significantly increased between muscle and effusion, fluid and cartilage, fluid and meniscus, cartilage and meniscus, and bone and cartilage on images acquired using the DRIVE pulse. The diagnostic value of the T1-weighted images was found to be improved for delineation of anatomic structures and for diagnosing a variety of pathologies when a DRIVE pulse is incorporated in the sequence. Incorporation of a DRIVE pulse into a standard T1-weighted TSE sequence leads to a significant increase in the SNR and CNR of both anatomical structures and pathologies, and consequently to an increase in diagnostic value within the same acquisition time.
Non-equilibrium modelling of distillation
Wesselingh, JA; Darton, R
1997-01-01
There are nasty conceptual problems in the classical way of describing distillation columns via equilibrium stages, and efficiencies or HETP's. We can nowadays avoid these problems by simulating the behaviour of a complete column in one go using a non-equilibrium model. Such a model has phase
Equilibrium Droplets on Deformable Substrates: Equilibrium Conditions.
Koursari, Nektaria; Ahmed, Gulraiz; Starov, Victor M
2018-05-15
Equilibrium conditions of droplets on deformable substrates are investigated, and it is proven using Jacobi's sufficient condition that the obtained solutions really provide equilibrium profiles of both the droplet and the deformed support. At equilibrium, the excess free energy of the system should have a minimum value, which means that both necessary and sufficient conditions of the minimum should be fulfilled. Only in this case do the obtained profiles provide the minimum of the excess free energy. The necessary condition of equilibrium is that the first variation of the excess free energy vanishes and the second variation is positive. Unfortunately, these two conditions alone do not prove that the obtained profiles correspond to the minimum of the excess free energy, nor could they. It is necessary to check whether the sufficient condition of equilibrium (Jacobi's condition) is satisfied. To the best of our knowledge, Jacobi's condition has never been verified for any previously published equilibrium profiles of droplets and deformable substrates. A simple model of an equilibrium droplet on a deformable substrate is considered, and it is shown that the deduced profiles of the equilibrium droplet and deformable substrate satisfy Jacobi's condition, that is, they really provide the minimum of the excess free energy of the system. To simplify the calculations, a simplified linear disjoining/conjoining pressure isotherm is adopted. It is shown that both necessary and sufficient conditions for equilibrium are satisfied. For the first time, the validity of Jacobi's condition is verified. The latter proves that the developed model really provides (i) the minimum of the excess free energy of the droplet/deformable substrate system and (ii) equilibrium profiles of both the droplet and the deformable substrate.
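In standard calculus-of-variations notation (ours, not necessarily the paper's), writing the excess free energy as a functional F[h] = ∫_a^b L(x, h, h') dx of the profile h(x), the three conditions invoked above read:

```latex
% Necessary: the first variation vanishes (Euler-Lagrange equation)
\frac{\partial L}{\partial h}-\frac{\mathrm{d}}{\mathrm{d}x}\,\frac{\partial L}{\partial h'}=0,
\qquad
% Necessary: Legendre condition on the second variation
P(x)\equiv\frac{\partial^{2} L}{\partial h'^{2}}\ge 0.
% Sufficient (Jacobi): the accessory equation
\frac{\mathrm{d}}{\mathrm{d}x}\bigl(P\,u'\bigr)-Q\,u=0,
\qquad
Q\equiv\frac{\partial^{2} L}{\partial h^{2}}
      -\frac{\mathrm{d}}{\mathrm{d}x}\,\frac{\partial^{2} L}{\partial h\,\partial h'},
% must admit a solution with u(a)=0,\ u'(a)\neq 0 that does not vanish again
% on (a,b]: no conjugate point lies in the interval.
```

The Jacobi test is precisely the check that no conjugate point lies between the endpoints; verifying it is what distinguishes a genuine minimizer from a mere stationary profile.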
Aladaghlo, Zolfaghar; Fakhari, Alireza; Behbahani, Mohammad
2016-10-01
In this work, an efficient sample preparation method termed solvent-assisted dispersive solid-phase extraction was applied. The sample preparation method was based on the dispersion of the sorbent (benzophenone) into the aqueous sample to maximize the interaction surface. In this approach, the dispersion of the sorbent at a very low milligram level was achieved by inserting a solution of the sorbent and disperser solvent into the aqueous sample. A cloudy solution was created from the dispersion of the sorbent in the bulk aqueous sample. After preconcentration of butachlor, the cloudy solution was centrifuged, and the butachlor in the sediment phase was dissolved in ethanol and determined by gas chromatography with flame ionization detection. Under the optimized conditions (solution pH = 7.0, sorbent: benzophenone, 2%, disperser solvent: ethanol, 500 μL, centrifuged at 4000 rpm for 3 min), the method detection limit for butachlor was 2, 3 and 3 μg/L for distilled water, waste water, and urine samples, respectively. Furthermore, the preconcentration factor was 198.8, 175.0, and 174.2 in distilled water, waste water, and urine samples, respectively. Solvent-assisted dispersive solid-phase extraction was successfully used for the trace monitoring of butachlor in urine and waste water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An energy-efficient adaptive sampling scheme for wireless sensor networks
Masoum, Alireza; Meratnia, Nirvana; Havinga, Paul J.M.
2013-01-01
Wireless sensor networks are new monitoring platforms. To cope with their resource constraints, in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy to reduce the number of sampling nodes and/or sampling frequencies while
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Lautenbach, Ebbing; Santana, Evelyn; Lee, Abby; Tolomeo, Pam; Black, Nicole; Babson, Andrew; Perencevich, Eli N; Harris, Anthony D; Smith, Catherine A; Maslow, Joel
2008-04-01
We assessed the rate of recovery of fluoroquinolone-resistant and fluoroquinolone-susceptible Escherichia coli isolates from culture of frozen perirectal swab samples compared with the results for culture of the same specimen before freezing. Recovery rates for these 2 classes of E. coli were 91% and 83%, respectively. The majority of distinct strains recovered from the initial sample were also recovered from the frozen sample. The strains that were not recovered were typically present only in low numbers in the initial sample. These findings emphasize the utility of frozen surveillance samples.
Ion exchange equilibrium constants
Marcus, Y
2013-01-01
Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and
Bovine milk sampling efficiency for pregnancy-associated glycoproteins (PAG) detection test
Energy Technology Data Exchange (ETDEWEB)
Silva, H. K. da; Cassoli, L.D.; Pantoja, J.F.C.; Cerqueira, P.H.R.; Coitinho, T.B.; Machado, P.F.
2016-07-01
Two experiments were conducted to verify whether the time of day at which a milk sample is collected and possible carryover in the milking system may affect pregnancy-associated glycoprotein (PAG) levels and, consequently, pregnancy test results in dairy cows. In experiment one, we evaluated the effect of the time of day at which the milk sample is collected from 51 cows. In experiment two, which evaluated the possible occurrence of carryover in the milk meter milking system, milk samples from 94 cows belonging to two different farms were used. The samples were subjected to a pregnancy test using ELISA methodology to measure PAG concentrations and to classify the samples as positive (pregnant), negative (nonpregnant), or suspicious (recheck). We found that the time of milking did not affect the PAG levels. As to the occurrence of carryover in the milk meter, the PAG levels of the samples collected from Farm-2 were heavily influenced by a carryover effect compared with the samples from Farm-1. Thus, milk samples submitted to a pregnancy test can be collected during the morning or the evening milking. When samples are collected from milk meters, attention should be paid to periodic equipment maintenance, including whether the milk meter is totally drained between the milking of different animals and whether equipment cleaning between milkings is performed correctly, to minimize the occurrence of carryover and thereby avoid effects on PAG levels and, consequently, on the pregnancy test results. Therefore, a single milk sample can be used for both milk quality tests and the pregnancy test.
Bovine milk sampling efficiency for pregnancy-associated glycoproteins (PAG) detection test
International Nuclear Information System (INIS)
Silva, H. K. da; Cassoli, L.D.; Pantoja, J.F.C.; Cerqueira, P.H.R.; Coitinho, T.B.; Machado, P.F.
2016-01-01
Two experiments were conducted to verify whether the time of day at which a milk sample is collected and possible carryover in the milking system may affect pregnancy-associated glycoprotein (PAG) levels and, consequently, pregnancy test results in dairy cows. In experiment one, we evaluated the effect of the time of day at which the milk sample is collected from 51 cows. In experiment two, which evaluated the possible occurrence of carryover in the milk meter milking system, milk samples from 94 cows belonging to two different farms were used. The samples were subjected to a pregnancy test using ELISA methodology to measure PAG concentrations and to classify the samples as positive (pregnant), negative (nonpregnant), or suspicious (recheck). We found that the time of milking did not affect the PAG levels. As to the occurrence of carryover in the milk meter, the PAG levels of the samples collected from Farm-2 were heavily influenced by a carryover effect compared with the samples from Farm-1. Thus, milk samples submitted to a pregnancy test can be collected during the morning or the evening milking. When samples are collected from milk meters, attention should be paid to periodic equipment maintenance, including whether the milk meter is totally drained between the milking of different animals and whether equipment cleaning between milkings is performed correctly, to minimize the occurrence of carryover and thereby avoid effects on PAG levels and, consequently, on the pregnancy test results. Therefore, a single milk sample can be used for both milk quality tests and the pregnancy test.
Quantity Constrained General Equilibrium
Babenko, R.; Talman, A.J.J.
2006-01-01
In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values. In case of price restrictions a general equilibrium may not exist, and rationing on net demands or supplies is needed to clear the markets. In
Sensitivity to the Sampling Process Emerges From the Principle of Efficiency.
Jara-Ettinger, Julian; Sun, Felix; Schulz, Laura; Tenenbaum, Joshua B
2018-05-01
Humans can seamlessly infer other people's preferences based on what they do. Broadly, two types of accounts have been proposed to explain different aspects of this ability. The first account focuses on spatial information: agents' efficient navigation in space reveals what they like. The second account focuses on statistical information: uncommon choices reveal stronger preferences. Together, these two lines of research suggest that we have two distinct capacities for inferring preferences. Here we propose that this is not the case, and that spatial-based and statistical-based preference inferences can both be explained by the single assumption that agents act efficiently. We show that people's sensitivity to spatial and statistical information when they infer preferences is best predicted by a computational model of the principle of efficiency, and that this model outperforms dual-system models, even when the latter are fit to participant judgments. Our results suggest that, as adults, a unified understanding of agency under the principle of efficiency underlies our ability to infer preferences. Copyright © 2018 Cognitive Science Society, Inc.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2013-04-15
Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.
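The operating characteristics compared above (type I error, expected sample size) can be estimated for any pre-specified design by simulation. The sketch below uses a toy two-stage design with an invented futility boundary, not any design from the paper, to estimate the rejection rate and expected sample size of a sequential z-test:

```python
import math
import random

def two_stage_trial(n1, n2, z_futility, z_final, effect, rng):
    """One run of a pre-specified two-stage design: stop and accept H0 at the
    interim analysis if the z-statistic falls below the futility boundary."""
    s1 = sum(rng.gauss(effect, 1.0) for _ in range(n1))
    if s1 / math.sqrt(n1) < z_futility:
        return False, n1                  # early stop for futility
    s2 = sum(rng.gauss(effect, 1.0) for _ in range(n2))
    return (s1 + s2) / math.sqrt(n1 + n2) > z_final, n1 + n2

def operating_characteristics(effect, n_trials=20_000, seed=2):
    """Monte Carlo estimate of the rejection rate and expected sample size."""
    rng = random.Random(seed)
    rejections = total_n = 0
    for _ in range(n_trials):
        rejected, n = two_stage_trial(50, 50, 0.0, 1.96, effect, rng)
        rejections += rejected
        total_n += n
    return rejections / n_trials, total_n / n_trials
```

Under the null (effect = 0), half the trials stop at the interim, so the expected sample size drops from 100 to about 75, while the futility rule leaves the type I error slightly below the naive 2.5% level. Comparing such curves across fixed, group sequential, and adaptive rules is exactly the kind of evaluation the abstract describes.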
Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems
Xinyu Tang,; Thomas, S.; Coleman, P.; Amato, N. M.
2010-01-01
reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom
Stochastic coupled cluster theory: Efficient sampling of the coupled cluster expansion
Scott, Charles J. C.; Thom, Alex J. W.
2017-09-01
We consider the sampling of the coupled cluster expansion within stochastic coupled cluster theory. Observing the limitations of previous approaches due to the inherently non-linear behavior of a coupled cluster wavefunction representation, we propose new approaches based on an intuitive, well-defined condition for sampling weights and on sampling the expansion in cluster operators of different excitation levels. We term these modifications even and truncated selections, respectively. Utilising both approaches demonstrates dramatically improved calculation stability as well as reduced computational and memory costs. These modifications are particularly effective at higher truncation levels owing to the large number of terms within the cluster expansion that can be neglected, as demonstrated by the reduction of the number of terms to be sampled when truncating at triple excitations by 77% and hextuple excitations by 98%.
National Research Council Canada - National Science Library
Dupuis, Paul; Wang, Hui
2005-01-01
Previous papers by authors establish the connection between importance sampling algorithms for estimating rare-event probabilities, two-person zero-sum differential games, and the associated Isaacs equation...
National Research Council Canada - National Science Library
Dupuis, Paul; Wang, Hui
2005-01-01
It has been established that importance sampling algorithms for estimating rare-event probabilities are intimately connected with two-person zero-sum differential games and the associated Isaacs equation...
Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data
Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information becomes a more and more pressing issue. Recent literature studies tried to deal with it by dictionary learning and sparse representation methods; however, their computational complexities are still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation. First, we sampled the whole brain's signals via different sampling methods; then the sampled signals were aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain's signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can speed up the reconstruction of concurrent brain networks by ten times without losing much information. The experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
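The sample-then-learn pipeline can be sketched with a stand-in for dictionary learning: sample a fraction of the signals, extract a basis from the sample (a truncated SVD here, in place of the sparse dictionary learning the paper uses), and check that representing all signals with the sampled basis loses little accuracy. All sizes and the synthetic data below are invented for illustration:

```python
import numpy as np

def sampled_dictionary(X, frac, n_atoms, rng):
    """Learn n_atoms dictionary atoms from a random fraction of the signals
    (rows of X), using a truncated SVD as a stand-in for dictionary learning."""
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
    return Vt[:n_atoms]                   # orthonormal atoms, one per row

def reconstruction_error(X, D):
    """Relative error of representing every signal in the span of the atoms."""
    coeffs = X @ D.T                      # projection onto orthonormal atoms
    return np.linalg.norm(X - coeffs @ D) / np.linalg.norm(X)

rng = np.random.default_rng(0)
# synthetic stand-in for fMRI signals: 2000 signals near a 10-dim subspace of R^50
basis = rng.normal(size=(10, 50))
X = rng.normal(size=(2000, 10)) @ basis + 0.01 * rng.normal(size=(2000, 50))

D_full = sampled_dictionary(X, 1.0, 10, rng)   # dictionary from all signals
D_samp = sampled_dictionary(X, 0.1, 10, rng)   # dictionary from a 10% sample
```

Because the signals are highly redundant, the dictionary learned from a 10% sample reconstructs the full set almost as well as the one learned from all signals, which is the intuition behind the paper's tenfold speed-up claim.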
International Nuclear Information System (INIS)
Scheele, R.D.; Bredt, P.R.; Sell, R.L.
1997-02-01
Water content plays a crucial role in the strategy developed by Webb et al. to prevent propagating or sustainable chemical reactions in the organic-bearing wastes stored in the 20 Organic Tank Watch List tanks at the US Department of Energy's Hanford Site. Because of water's importance in ensuring that the organic-bearing wastes continue to be stored safely, Duke Engineering and Services Hanford commissioned the Pacific Northwest National Laboratory (PNNL) to investigate the effect of water partial pressure (P_H2O) on the water content of organic-bearing or representative wastes. Of the various interrelated controlling factors affecting the water content in wastes, P_H2O is the most susceptible to being controlled by the Hanford Site's environmental conditions and, if necessary, could be managed to maintain the water content at an acceptable level or could be used to adjust the water content back to an acceptable level. Of the various waste types resulting from weapons production and waste-management operations at the Hanford Site, Webb et al. determined that saltcake wastes are the most likely to require active management to maintain the wastes in a Conditionally Safe condition. A Conditionally Safe waste is one that satisfies the waste classification criteria based on water content alone or a combination of water content and either total organic carbon (TOC) content or waste energetics. To provide information on the behavior of saltcake wastes, two waste samples taken from Tank 241-BY-108 (BY-108) were selected for study because of their ready availability and their similarity to some of the organic-bearing saltcakes, even though BY-108 is not on the Organic Tanks Watch List
An efficient, robust, and inexpensive grinding device for herbal samples like Cinchona bark
DEFF Research Database (Denmark)
Hansen, Steen Honoré; Holmfred, Else Skovgaard; Cornett, Claus
2015-01-01
An effective, robust, and inexpensive grinding device for the grinding of herb samples like bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to be able to provide various particle sizes, to be easy to clean, and to have a minimum of dead volume. The recovery of the sample when grinding as little as 50 mg of crude Cinchona bark was about 60%. Grinding is performed in seconds with no rise in temperature, and the grinder is easily disassembled to be cleaned. The influence of the particle size of the obtained powders on the recovery
An Efficient, Robust, and Inexpensive Grinding Device for Herbal Samples like Cinchona Bark.
Hansen, Steen Honoré; Holmfred, Else; Cornett, Claus; Maldonado, Carla; Rønsted, Nina
2015-01-01
An effective, robust, and inexpensive grinding device for the grinding of herb samples like bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to be able to provide various particle sizes, to be easy to clean, and to have a minimum of dead volume. The recovery of the sample when grinding as little as 50 mg of crude Cinchona bark was about 60%. Grinding is performed in seconds with no rise in temperature, and the grinder is easily disassembled to be cleaned. The influence of the particle size of the obtained powders on the recovery of analytes in extracts of Cinchona bark was investigated using HPLC.
A generalized transmission method for gamma-efficiency determinations in soil samples
International Nuclear Information System (INIS)
Bolivar, J.P.; Garcia-Tenorio, R.; Garcia-Leon, M.
1994-01-01
In this paper, a generalization of the γ-ray transmission method, useful for measurements on soil samples, is presented. The correction factor, f, which is a function of the apparent density of the soil and the γ-ray energy, is given. With this method, the need for individual determinations of f for each energy and apparent soil density is avoided. Although the method has been developed for soils, the general philosophy can be applied to other sample matrices, such as water or vegetables. (author)
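For context, the basic, non-generalized transmission correction that such methods build on can be written in a few lines (our sketch, not the paper's parameterization): measure the transmission T = I/I0 of a gamma line through the filled sample container, then scale the efficiency by a self-absorption factor of the Cutshall type, f = -ln(T) / (1 - T), which the paper's approach replaces with a single function of apparent density and energy.

```python
import math

def self_absorption_factor(transmission):
    """Self-absorption correction from a measured gamma transmission T = I/I0
    through the sample: f = -ln(T) / (1 - T) (a Cutshall-style relation for a
    slab-like geometry). Multiplying the measured activity by f corrects for
    attenuation in the sample; f -> 1 as the sample becomes transparent."""
    T = transmission
    if not 0.0 < T <= 1.0:
        raise ValueError("transmission must lie in (0, 1]")
    return 1.0 if T == 1.0 else -math.log(T) / (1.0 - T)
```

For a half-absorbing sample (T = 0.5) the factor is 2·ln 2 ≈ 1.386, i.e. the measured count rate understates the activity by almost 40%; denser samples at lower energies need larger corrections, which is exactly the density-energy dependence the generalized f captures.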
Functional approximations to posterior densities: a neural network approach to efficient sampling
L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)
2002-01-01
The performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate
Geostatistics for Mapping Leaf Area Index over a Cropland Landscape: Efficiency Sampling Assessment
Directory of Open Access Journals (Sweden)
Javier Garcia-Haro
2010-11-01
This paper evaluates the performance of spatial methods to estimate leaf area index (LAI) fields from ground-based measurements at high spatial resolution over a cropland landscape. Three geostatistical variants of the kriging technique are used: ordinary kriging (OK), collocated cokriging (CKC), and kriging with an external drift (KED). The study focused on the influence of the spatial sampling protocol, auxiliary information, and spatial resolution on the estimates. The main advantage of these models lies in the possibility of considering the spatial dependence of the data and, in the case of the KED and CKC, the auxiliary information at each location used for prediction purposes. A high-resolution NDVI image computed from SPOT TOA reflectance data is used as an auxiliary variable in LAI predictions. The CKC and KED predictions have proven the relevance of the auxiliary information for reproducing the spatial pattern at local scales, the KED model proving to be the best estimator when a non-stationary trend is observed. Advantages and limitations of the methods in LAI field predictions for two systematic and two stratified spatial samplings are discussed for high (20 m), medium (300 m) and coarse (1 km) spatial scales. The KED exhibited the best local accuracy for all the spatial samplings, while the OK model provides comparable results when a well-stratified sampling scheme by land cover is considered.
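As a hedged illustration of the simplest of the three variants, ordinary kriging predicts a value as a weighted average of the samples, with weights obtained from a small linear system; the covariance model, its parameters, and the toy coordinates below are assumptions for illustration, not the study's fitted variogram.

```python
import numpy as np

def exp_cov(h, sill=1.0, range_=50.0):
    """Exponential covariance model; sill and range are assumed toy values."""
    return sill * np.exp(-h / range_)

def ordinary_kriging(xy, z, x0, cov=exp_cov):
    """Predict z at x0 from samples (xy, z), mean unknown but constant (OK)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0                        # Lagrange row/column enforcing sum(w) = 1
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)[:n]        # kriging weights
    return float(w @ z)

# Toy LAI samples at the corners of a 20 m grid cell; OK is an exact
# interpolator at sample points, and by symmetry the cell center gets
# equal weights (the mean of the four samples)
xy = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
z = np.array([1.2, 2.3, 0.8, 1.9])
z_corner = ordinary_kriging(xy, z, np.array([0.0, 0.0]))    # reproduces 1.2
z_center = ordinary_kriging(xy, z, np.array([10.0, 10.0]))  # mean of z
```

CKC and KED extend this system with an auxiliary covariate (here, NDVI) as a secondary variable or as an external trend, respectively.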
Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling
Czech Academy of Sciences Publication Activity Database
Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír
2015-01-01
Roč. 49, č. 4 (2015), s. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords : filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.953, year: 2015
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes
2017-01-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...
Simple and efficient importance sampling scheme for a tandem queue with server slow-down
Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.
2008-01-01
This paper considers importance sampling as a tool for rare-event simulation. The system at hand is a so-called tandem queue with slow-down, which essentially means that the server of the first queue (or: upstream queue) switches to a lower speed when the second queue (downstream queue) exceeds some
Efficient Sequential Monte Carlo Sampling for Continuous Monitoring of a Radiation Situation
Czech Academy of Sciences Publication Activity Database
Šmídl, Václav; Hofman, Radek
2014-01-01
Roč. 56, č. 4 (2014), s. 514-527 ISSN 0040-1706 R&D Projects: GA MV VG20102013018 Institutional support: RVO:67985556 Keywords : radiation protection * atmospheric dispersion model * importance sampling Subject RIV: BD - Theory of Information Impact factor: 1.814, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/smidl-0433631.pdf
Range-efficient consistent sampling and locality-sensitive hashing for polygons
DEFF Research Database (Denmark)
Gudmundsson, Joachim; Pagh, Rasmus
2017-01-01
Locality-sensitive hashing (LSH) is a fundamental technique for similarity search and similarity estimation in high-dimensional spaces. The basic idea is that similar objects should produce hash collisions with probability significantly larger than objects with low similarity. We consider LSH for … or union of a set of preprocessed polygons. Curiously, our consistent sampling method uses transformation to a geometric problem.
Kim, Stephan D.; Luo, Jiajun; Buchholz, D. Bruce; Chang, R. P. H.; Grayson, M.
2016-09-01
A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.
An efficient method of randomly sampling the coherent angular scatter distribution
International Nuclear Information System (INIS)
Williamson, J.F.; Morin, R.L.
1983-01-01
Monte Carlo simulations of photon transport phenomena require random selection of an interaction process at each collision site along the photon track. Possible choices are usually limited to photoelectric absorption and incoherent scatter as approximated by the Klein-Nishina distribution. A technique is described for sampling the coherent angular scatter distribution, for the benefit of workers in medical physics. (U.K.)
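A minimal sketch of the kind of angular sampling involved, using the Thomson distribution p(μ) ∝ 1 + μ², μ = cos θ, as a stand-in (the coherent, i.e. Rayleigh, distribution additionally weights each angle by the squared atomic form factor, which the paper's technique accounts for):

```python
import random

def sample_cos_theta(rng=random):
    """Rejection sampling of mu = cos(theta) from p(mu) proportional to 1 + mu^2.
    Proposal: mu uniform on [-1, 1]; acceptance probability (1 + mu^2)/2 <= 1.
    Overall acceptance rate is 2/3."""
    while True:
        mu = rng.uniform(-1.0, 1.0)
        if rng.random() < 0.5 * (1.0 + mu * mu):
            return mu
```

At each collision site the sampled μ gives the photon's new direction; for this normalized density E[μ] = 0 and E[μ²] = 0.4.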
CASP10-BCL::Fold efficiently samples topologies of large proteins.
Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens
2015-03-01
During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs), omitting the more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation for CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in the clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed, as β-strands were not properly aligned to form hydrogen bonds, removing otherwise accurate models from the pool. Further, BCL::Fold frequently samples non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Liu, Zhenjiang, E-mail: lzj1984@ujs.edu.cn [School of the Environment and Safety Engineering, Jiangsu University, Zhenjiang 212013 (China); Wei, Xi [School of the Environment and Safety Engineering, Jiangsu University, Zhenjiang 212013 (China); The Affiliated First People's Hospital of Jiangsu University, Zhenjiang 212002 (China); Ren, Kewei; Zhu, Gangbing; Zhang, Zhen; Wang, Jiagao; Du, Daolin [School of the Environment and Safety Engineering, Jiangsu University, Zhenjiang 212013 (China)
2016-11-01
A fast and ultrasensitive indirect competitive time-resolved fluoroimmunoassay (TRFIA) was developed for the analysis of paclobutrazol in environmental water and soil samples. Paclobutrazol hapten was synthesized and conjugated to bovine serum albumin (BSA) for producing polyclonal antibodies. Under optimal conditions, the 50% inhibitory concentration (IC50) and limit of detection (LOD, IC20) were 1.09 μg L−1 and 0.067 μg L−1, respectively. The LOD of TRFIA was improved 30-fold compared to the already reported ELISA. There was almost no cross-reactivity of the antibody with the other structural analogues of triazole compounds, indicating that the antibody had high specificity. The average recoveries from spiked samples were in the range from 80.2% to 104.7% with a relative standard deviation of 1.0–9.5%. The TRFIA results for the real samples were in good agreement with those obtained by high-performance liquid chromatography analyses. The results indicate that the established TRFIA has potential application for screening paclobutrazol in environmental samples. - Highlights: • The approach to design and synthesize the PBZ hapten was more straightforward. • A rapid and ultrasensitive TRFIA was developed and applied to the screening of PBZ. • The TRFIA for real soil samples showed reliability and high correlation with HPLC. • The PBZ TRFIA showed high sensitivity, simple operation, a wide range of quantitative analyses and no radioactive hazards.
The FIREBall-2 UV sample grating efficiency at 200-208nm
Quiret, S.; Milliard, B.; Grange, R.; Lemaitre, G. R.; Caillat, A.; Belhadi, M.; Cotel, A.
2014-07-01
The FIREBall-2 (Faint Intergalactic Redshifted Emission Balloon-2) is a balloon-borne ultraviolet spectro-imaging mission optimized for the study of faint diffuse emission around galaxies. A key optical component of the new spectrograph design is the high-throughput, cost-effective holographic 2400 ℓ/mm, 110x130 mm aspherized reflective grating used in the range 200-208 nm, near a 28° deviation angle. In order to anticipate the efficiency in flight conditions, we have developed a PCGrate model for the FIREBall grating, calibrated on linearly polarized measurements at a 12° deviation angle in the range 240-350 nm of a 50x50 mm replica of the same master selected for the flight grating. This model predicts an efficiency within [64.7, 64.9] +/- 0.7% (S-polarization) and [38.3, 45] +/- 2.2% (P-polarization) for the baseline aluminum-coated grating with an Al2O3 natural oxidation layer, and within [63.5, 65] +/- 1% (S-polarization) and [51.3, 54.8] +/- 2.8% (P-polarization) for aluminum plus a 70 nm MgF2 coating, in the range 200-208 nm and for a 28° deviation angle. The model also shows there is room for significant improvements at shorter wavelengths, of interest for future deep UV spectroscopic missions.
Brignole, Esteban Alberto
2013-01-01
Traditionally, the teaching of phase equilibria emphasizes the relationships between the thermodynamic variables of each phase in equilibrium rather than its engineering applications. This book changes the focus from the use of thermodynamics relationships to compute phase equilibria to the design and control of the phase conditions that a process needs. Phase Equilibrium Engineering presents a systematic study and application of phase equilibrium tools to the development of chemical processes. The thermodynamic modeling of mixtures for process development, synthesis, simulation, design and
International Nuclear Information System (INIS)
Balter, H.S.
1994-01-01
This work studies the behaviour of radionuclides as they undergo disintegration and decay, producing activity and creating stable isotopes. It gives definitions of the equilibrium between the activity of the parent and the activity of the daughter, radioactive decay, stable isotopes, transient equilibrium, and the time of maximum activity. Some consideration is given to generators that permit the separation of two radioisotopes in equilibrium and to their good performance. Tabs
Thermodynamic theory of equilibrium fluctuations
International Nuclear Information System (INIS)
Mishin, Y.
2015-01-01
The postulational basis of classical thermodynamics has been expanded to incorporate equilibrium fluctuations. The main additional elements of the proposed thermodynamic theory are the concept of quasi-equilibrium states, a definition of non-equilibrium entropy, a fundamental equation of state in the entropy representation, and a fluctuation postulate describing the probability distribution of macroscopic parameters of an isolated system. Although these elements introduce a statistical component that does not exist in classical thermodynamics, the logical structure of the theory is different from that of statistical mechanics and represents an expanded version of thermodynamics. Based on this theory, we present a regular procedure for calculations of equilibrium fluctuations of extensive parameters, intensive parameters and densities in systems with any number of fluctuating parameters. The proposed fluctuation formalism is demonstrated by four applications: (1) derivation of the complete set of fluctuation relations for a simple fluid in three different ensembles; (2) fluctuations in finite-reservoir systems interpolating between the canonical and micro-canonical ensembles; (3) derivation of fluctuation relations for excess properties of grain boundaries in binary solid solutions, and (4) derivation of the grain boundary width distribution for pre-melted grain boundaries in alloys. The last two applications offer an efficient fluctuation-based approach to calculations of interface excess properties and extraction of the disjoining potential in pre-melted grain boundaries. Possible future extensions of the theory are outlined.
Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S
2015-02-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
Non-Equilibrium Properties from Equilibrium Free Energy Calculations
Pohorille, Andrew; Wilson, Michael A.
2012-01-01
Calculating free energy in computer simulations is of central importance in statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity that characterizes the equilibrium state, but also because it often provides an efficient route to access dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it might be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested on the example of the electrodiffusion equation. Conductance of model ion channels has been calculated directly by counting the number of ion crossing events observed during long molecular dynamics simulations and has been compared with the conductance obtained from solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, thus demonstrating that the assumptions underlying the diffusion equation are fulfilled. Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments.
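The counting estimate mentioned above reduces to a one-line formula: with N net crossing events of ions of charge q during simulation time T under applied voltage V, the average current is I = Nq/T and the conductance is g = I/V. The numbers below are illustrative, not taken from the paper.

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def conductance_from_crossings(n_crossings, sim_time_s, voltage_v, charge=E_CHARGE):
    """Single-channel conductance from direct counting of ion crossing events."""
    current = n_crossings * charge / sim_time_s   # average current, A
    return current / voltage_v                    # conductance, S

# Illustrative: 250 monovalent-ion crossings in 1 us of simulated time at 100 mV
g = conductance_from_crossings(250, 1e-6, 0.1)    # ≈ 4.0e-10 S, i.e. ~0.4 nS
```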
Improved importance sampling technique for efficient simulation of digital communication systems
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no memory and no signals is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communication systems.
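A hedged sketch of the translation idea in a standard textbook setting (a Gaussian tail probability, not the paper's communication system): shift the sampling density's mean toward the rare region and reweight each hit by the likelihood ratio φ(z)/φ(z−θ) = exp(−θz + θ²/2).

```python
import math
import random

def tail_prob_is(threshold, n=20000, shift=None, rng=None):
    """Estimate P(Z > threshold) for Z ~ N(0,1) by mean-translated importance
    sampling: draw from N(theta, 1) and weight by the likelihood ratio."""
    rng = rng or random.Random(0)
    theta = threshold if shift is None else shift   # translation parameter
    total = 0.0
    for _ in range(n):
        z = rng.gauss(theta, 1.0)
        if z > threshold:
            total += math.exp(-theta * z + 0.5 * theta * theta)  # phi(z)/phi(z-theta)
    return total / n

estimate = tail_prob_is(4.0)   # close to 1 - Phi(4) ≈ 3.17e-5
```

Shifting the mean to the threshold makes roughly half the samples land in the rare region, so the variance is orders of magnitude below that of plain Monte Carlo at the same sample count.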
High efficiency environmental sampling with UV-cured peelable coatings (aka NuGoo project)
Energy Technology Data Exchange (ETDEWEB)
Henzl, Vladimir [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Junghans, Sylvia Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lakis, Rollin Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-11-21
This report presents slides on CA Related Project (FY13-17); Environmental sampling by IAEA (not only) during CA; Decontamination gels; Cotton swipes vs. decon gel (FY15); Contamination removal study; The origins of the NuGoo; NuGoo – proof of concept; NuGoo – FY17 project ($250K); LED lamp – which one works and why; Selecting photoinitiator; Monomers and oligomers; Results.
Research on How to Remove Efficiently the Condensate Water of Sampling System
International Nuclear Information System (INIS)
Cho, SungHwan; Kim, MinSoo; Choi, HoYoung; In, WonHo
2015-01-01
Corrosion occurred in the measurement chamber inside the O2 and H2 analyzer, and thus measuring the concentration of O2 and H2 was not possible. It was confirmed that the condensate water originates from the temperature difference arising as the internal gas of the disposal and degasifier tank is brought into the analyzer. Thus, a heating system was installed inside and outside of the gas sampling panel to remove condensate water generated in the analyzer and pipe. For the case where condensate water is not removed by the heating system, a drain port was also installed in the gas sampling panel to collect the condensate water of the sampling system. It was verified that a great volume of condensate water was present in the pipe line during the purging process after installing the manufactured goods. The condensate water was fully removed by the installed heating cable and drain port. The heating cable was operated constantly at a temperature of 80 to 90 °C, which allows precise measurement of gas concentration and longer maintenance intervals by preventing condensate water from forming. When installing instruments for measuring gas, such as an O2 and H2 analyzer, consideration of whether condensate water is present due to the temperature difference between the measuring system and analyzer is required.
Research on How to Remove Efficiently the Condensate Water of Sampling System
Energy Technology Data Exchange (ETDEWEB)
Cho, SungHwan; Kim, MinSoo; Choi, HoYoung; In, WonHo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
Corrosion occurred in the measurement chamber inside the O2 and H2 analyzer, and thus measuring the concentration of O2 and H2 was not possible. It was confirmed that the condensate water originates from the temperature difference arising as the internal gas of the disposal and degasifier tank is brought into the analyzer. Thus, a heating system was installed inside and outside of the gas sampling panel to remove condensate water generated in the analyzer and pipe. For the case where condensate water is not removed by the heating system, a drain port was also installed in the gas sampling panel to collect the condensate water of the sampling system. It was verified that a great volume of condensate water was present in the pipe line during the purging process after installing the manufactured goods. The condensate water was fully removed by the installed heating cable and drain port. The heating cable was operated constantly at a temperature of 80 to 90 °C, which allows precise measurement of gas concentration and longer maintenance intervals by preventing condensate water from forming. When installing instruments for measuring gas, such as an O2 and H2 analyzer, consideration of whether condensate water is present due to the temperature difference between the measuring system and analyzer is required.
Efficient Sampling of the Structure of Crypto Generators' State Transition Graphs
Keller, Jörg
Cryptographic generators, e.g. stream cipher generators like the A5/1 used in GSM networks or pseudo-random number generators, are widely used in cryptographic network protocols. Basically, they are finite state machines with deterministic transition functions. Their state transition graphs typically cannot be analyzed analytically, nor can they be explored completely because of their size, which typically is at least n = 2^64. Yet, their structure, i.e. number and sizes of weakly connected components, is of interest because a structure deviating significantly from expected values for random graphs may form a distinguishing attack that indicates a weakness or backdoor. By sampling, one randomly chooses k nodes, derives their distribution onto connected components by graph exploration, and extrapolates these results to the complete graph. In known algorithms, the computational cost to determine the component for one randomly chosen node is up to O(√n), which severely restricts the sample size k. We present an algorithm where the computational cost to find the connected component for one randomly chosen node is O(1), so that a much larger sample size k can be analyzed in a given time. We report on the performance of a prototype implementation, and about preliminary analysis for several generators.
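A hedged toy sketch of the sampling idea: in a functional graph (every node has out-degree one), each weakly connected component contains exactly one cycle, so the cycle's minimum node can serve as a component identifier. A tiny quadratic map stands in for a real generator's transition function; this naive exploration costs O(tail + cycle) per node, not the paper's O(1).

```python
from collections import Counter
import random

def component_id(f, x):
    """Component identifier for node x in the functional graph of f:
    the minimum node on the unique cycle its trajectory falls into."""
    slow, fast = f(x), f(f(x))          # Floyd cycle detection
    while slow != fast:
        slow, fast = f(slow), f(f(fast))
    m, y = slow, f(slow)                # 'slow' is on the cycle; find its minimum
    while y != slow:
        m, y = min(m, y), f(y)
    return m

n = 10_000                              # toy state space (real generators: >= 2**64)
f = lambda x: (x * x + 1) % n           # assumed stand-in transition, not a real cipher
rng = random.Random(1)
sample = [component_id(f, rng.randrange(n)) for _ in range(500)]
# Extrapolate: fraction of sampled nodes per component estimates its relative size
est_sizes = {cid: c / len(sample) for cid, c in Counter(sample).items()}
```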
A rapid and efficient DNA extraction protocol from fresh and frozen human blood samples.
Guha, Pokhraj; Das, Avishek; Dutta, Somit; Chaudhuri, Tapas Kumar
2018-01-01
Different methods available for extraction of human genomic DNA suffer from one or more drawbacks including low yield, compromised quality, cost, time consumption, use of toxic organic solvents, and many more. Herein, we aimed to develop a method to extract DNA from 500 μL of fresh or frozen human blood. Five hundred microliters of fresh and frozen human blood samples were used for standardization of the extraction procedure. Absorbances at 260 and 280 nm (A260/A280) were estimated to check the quality and quantity of the extracted DNA sample. Qualitative assessment of the extracted DNA was performed by Polymerase Chain Reaction and double digestion of the DNA sample. Our protocol resulted in average yields of 22±2.97 μg and 20.5±3.97 μg from 500 μL of fresh and frozen blood, respectively, which were comparable to many reference protocols and kits. Besides yielding a bulk amount of DNA, our protocol is rapid, economical, and avoids toxic organic solvents such as phenol. Owing to its unaffected quality, the DNA is suitable for downstream applications. The protocol may also be useful for pursuing basic molecular research in laboratories having limited funds. © 2017 Wiley Periodicals, Inc.
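The spectrophotometric bookkeeping behind such yield figures can be sketched as follows; the 50 µg/mL-per-absorbance-unit factor is the conventional value for double-stranded DNA, and the example numbers are illustrative, not the paper's actual readings.

```python
def dsdna_amount_ug(a260, dilution_factor, elution_volume_ml):
    """Total double-stranded DNA in micrograms.
    Convention: 1 absorbance unit at 260 nm corresponds to ~50 ug/mL dsDNA."""
    return a260 * 50.0 * dilution_factor * elution_volume_ml

def purity_ratio(a260, a280):
    """A260/A280 ratio; values near 1.8 suggest protein-free DNA."""
    return a260 / a280

# Illustrative: A260 = 0.44 read at 10x dilution, DNA eluted in 0.1 mL
total_ug = dsdna_amount_ug(0.44, 10, 0.1)   # about 22 ug total
```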
Ghasemi, Ensieh; Sillanpää, Mika
2015-01-01
A novel type of magnetic nanosorbent, hydroxyapatite-coated Fe2O3 nanoparticles was synthesized and used for the adsorption and removal of nitrite and nitrate ions from environmental samples. The properties of synthesized magnetic nanoparticles were characterized by scanning electron microscopy, Fourier transform infrared spectroscopy, and X-ray powder diffraction. After the adsorption process, the separation of γ-Fe2O3@hydroxyapatite nanoparticles from the aqueous solution was simply achieved by applying an external magnetic field. The effects of different variables on the adsorption efficiency were studied simultaneously using an experimental design. The variables of interest were amount of magnetic hydroxyapatite nanoparticles, sample volume, pH, stirring rate, adsorption time, and temperature. The experimental parameters were optimized using a Box-Behnken design and response surface methodology after a Plackett-Burman screening design. Under the optimum conditions, the adsorption efficiencies of magnetic hydroxyapatite nanoparticles adsorbents toward NO3(-) and NO2(-) ions (100 mg/L) were in the range of 93-101%. The results revealed that the magnetic hydroxyapatite nanoparticles adsorbent could be used as a simple, efficient, and cost-effective material for the removal of nitrate and nitrite ions from environmental water and soil samples. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm, on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
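The two initial-design choices compared above differ only in where each point sits inside its stratum; a minimal sketch of both (plain Python, without the subsequent space-filling optimization step):

```python
import random

def latin_hypercube(n, dims, midpoint=True, rng=None):
    """n points in [0,1)^dims with exactly one point per 1/n stratum in every
    dimension. midpoint=True gives midpoint LHS; False gives random-point LHS."""
    rng = rng or random.Random(0)
    columns = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)             # random pairing of strata across dimensions
        column = [(s + (0.5 if midpoint else rng.random())) / n for s in strata]
        columns.append(column)
    return list(zip(*columns))          # n points, each a dims-tuple

initial_design = latin_hypercube(10, 2, midpoint=True)
```

An OLHS scheme would feed such a design into its optimizer; the study's finding is that starting from the midpoint variant yields better space-filling designs than starting from the random variant.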
Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators
International Nuclear Information System (INIS)
Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens
2012-01-01
Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations
Directory of Open Access Journals (Sweden)
Mohammad Osama
2014-06-01
Pleurotus ostreatus, a white rot fungus, is capable of bioremediating a wide range of organic contaminants including Polycyclic Aromatic Hydrocarbons (PAHs). Ergosterol is produced by living fungal biomass and used as a measure of fungal biomass. The first part of this work deals with the extraction and quantification of PAHs from contaminated sediments by the Lipid Extraction Method (LEM). The second part consists of the development of a novel extraction method, the Ergosterol Extraction Method (EEM), quantification, and bioremediation. The novelty of this method is the simultaneous extraction and quantification of two different types of compounds, a sterol (ergosterol) and PAHs, and it is more efficient than LEM. EEM has been successful in extracting ergosterol from the fungus grown on barley at concentrations of 17.5-39.94 µg g-1 ergosterol, and the PAHs are quantified in much greater numbers and amounts as compared to LEM. In addition, cholesterol, usually found in animals, has also been detected in the fungus P. ostreatus at easily detectable levels.
Kleppe, J.; Borm, P.E.M.; Hendrickx, R.L.P.
2008-01-01
Fall back equilibrium is a refinement of the Nash equilibrium concept. In the underlying thought experiment each player faces the possibility that, after all players have decided on their action, his chosen action turns out to be blocked. Therefore, each player has to decide beforehand on a back-up action.
International Nuclear Information System (INIS)
Wren, J.C.; Moore, C.J.; Rasmussenn, M.T.; Weaver, K.R.
1999-01-01
Charcoal filters are installed in the emergency filtered air discharge system (EFADS) of multiunit stations to control the release of airborne radioiodine in the event of a reactor accident. These filters use highly activated charcoal impregnated with triethylenediamine (TEDA). TEDA-impregnated charcoal is highly efficient in removing radioiodine from flowing airstreams. The iodine-removal efficiency of the charcoal is presumed to deteriorate slowly with age, but current knowledge of this effect is insufficient to predict with confidence the performance of aged charcoal following an accident. Experiments were performed to determine the methyl iodide removal efficiency of aged charcoal samples taken from the EFADS of Ontario Hydro's Bruce-A nuclear generating station. The charcoal had been in service for ~4 yr. The adsorption rate constant and capacity were measured under post-loss-of-coolant accident conditions to determine the efficiency of the aged charcoal. The adsorption rate constants of the aged charcoal samples were observed to be extremely high, yielding a decontamination factor (DF) greater than 1 × 10¹⁵ for a 20-cm-deep bed of the aged charcoal. The results show that essentially no CH₃I would escape from a 20-cm-deep bed of the aged charcoal and that the requirement for a DF of 1000 for organic iodides in the EFADS filters would be exceeded by a tremendous margin. With such high DFs, the release of iodine from a 20-cm-deep bed would be virtually impossible to detect. The adsorption capacities observed for the aged charcoal samples approach the theoretical chemisorption capacity of 5 wt% TEDA charcoal, indicating that aging in the EFADS for 4 yr has had a negligible impact on the adsorption capacity. The results indicate that the short- and long-term performance of the aged charcoal in the EFADS of Bruce-A following an accident would still far exceed performance requirements.
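As a hedged aside, the connection between an adsorption rate constant and a decontamination factor of this magnitude can be illustrated with a simple first-order adsorption model. The model and every numerical value below are illustrative assumptions, not the authors' measured data.

```python
import math

def decontamination_factor(k_ads, depth_cm, velocity_cm_s):
    """DF of a charcoal bed under a first-order adsorption model (an
    assumed simplification): DF = exp(k * t_res), where the residence
    time is t_res = bed depth / face velocity."""
    t_res = depth_cm / velocity_cm_s
    return math.exp(k_ads * t_res)

# Illustrative numbers: a rate constant of ~45 s^-1 with a 0.8 s residence
# time in a 20-cm bed already gives DF > 1e15, i.e. essentially no
# methyl iodide penetrates the bed.
df_deep = decontamination_factor(k_ads=45.0, depth_cm=20.0, velocity_cm_s=25.0)
df_shallow = decontamination_factor(k_ads=45.0, depth_cm=10.0, velocity_cm_s=25.0)
```

The exponential dependence on bed depth is why halving the bed still leaves an enormous, but vastly smaller, DF.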
Hartwig, Carla Andrade; Pereira, Rodrigo Mendes; Novo, Diogo La Rosa; Oliveira, Dirce Taina Teixeira; Mesko, Marcia Foster
2017-11-01
Responding to the need for green and efficient methods to determine catalyst residues with suitable precision and accuracy in samples with high fat content, the present work evaluates a microwave-assisted ultraviolet digestion (MW-UV) system for margarines and the subsequent determination of Ni, Pd and Pt using inductively coupled plasma mass spectrometry (ICP-MS). It was possible to digest up to 500 mg of margarine using only 10 mL of 4 mol L⁻¹ HNO₃, with a digestion efficiency higher than 98%. This allowed the determination of catalyst residues by ICP-MS free of interferences. For this purpose, the following experimental parameters were evaluated: concentration of the digestion solution, sample mass and microwave irradiation program. The residual carbon content was used as a parameter to evaluate the efficiency of digestion and to select the most suitable experimental conditions. Accuracy was evaluated by recovery tests using a standard solution and a certified reference material, and recoveries ranging from 94% to 99% were obtained for all analytes. The limits of detection for Ni, Pd and Pt using the proposed method were 35.6, 0.264 and 0.302 ng g⁻¹, respectively. When compared to microwave-assisted digestion (MW-AD) in closed vessels using concentrated HNO₃ (used as a reference method for sample digestion), the proposed MW-UV method is an excellent alternative for the digestion of margarine, as it requires only a diluted nitric acid solution for efficient digestion. In addition, MW-UV provides solutions appropriate for subsequent ICP-MS determination with suitable precision (relative standard deviation < 7%) and accuracy for all evaluated analytes. The proposed method was applied to margarines from different brands produced in Brazil, and the concentrations of catalyst residues were in agreement with current legislation and recommendations.
High density FTA plates serve as efficient long-term sample storage for HLA genotyping.
Lange, V; Arndt, K; Schwarzelt, C; Boehme, I; Giani, A S; Schmidt, A H; Ehninger, G; Wassmuth, R
2014-02-01
Storage of dried blood spots (DBS) on high-density FTA® plates could constitute an appealing alternative to frozen storage. However, it remains controversial whether DBS are suitable for high-resolution sequencing of human leukocyte antigen (HLA) alleles. Therefore, we extracted DNA from DBS that had been stored for up to 4 years, using six different methods. We identified the extraction methods that recovered sufficient high-quality DNA for reliable high-resolution HLA sequencing. Furthermore, we confirmed that frozen whole blood samples that had been stored for several years can be transferred to filter paper without compromising HLA genotyping upon extraction. In conclusion, DNA derived from high-density FTA® plates is suitable for high-resolution HLA sequencing, provided that appropriate extraction protocols are employed.
Efficient Sample Delay Calculation for 2-D and 3-D Ultrasound Imaging.
Ibrahim, Aya; Hager, Pascal A; Bartolini, Andrea; Angiolini, Federico; Arditi, Marcel; Thiran, Jean-Philippe; Benini, Luca; De Micheli, Giovanni
2017-08-01
Ultrasound imaging is a reference medical diagnostic technique, thanks to its blend of versatility, effectiveness, and moderate cost. The core computation of all ultrasound imaging methods is based on simple formulae, except for those required to calculate acoustic propagation delays with high precision and throughput. Unfortunately, advanced three-dimensional (3-D) systems require the calculation or storage of billions of such delay values per frame, which is a challenge. In 2-D systems, this requirement can be four orders of magnitude lower, but efficient computation is still crucial in view of low-power implementations that can be battery-operated, enabling usage in numerous additional scenarios. In this paper, we explore two smart designs of the delay generation function. To quantify their hardware cost, we implement them on FPGA and study their footprint and performance. We evaluate how these architectures scale to different ultrasound applications, from a low-power 2-D system to a next-generation 3-D machine. When using numerical approximations, we demonstrate the ability to generate delay values with sufficient throughput to support 10 000-channel 3-D imaging at up to 30 fps while using 63% of a Virtex 7 FPGA, requiring 24 MB of external memory accessed at about 32 GB/s bandwidth. Alternatively, with similar FPGA occupation, we show an exact calculation method that reaches 24 fps on 1225-channel 3-D imaging and does not require external memory at all. Both designs can be scaled to use a negligible amount of resources for 2-D imaging in low-power applications and for ultrafast 2-D imaging at hundreds of frames per second.
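The delay formula at the core of the computation described above can be sketched as follows. The array geometry, sound speed, sampling clock, and the choice of the array origin as the transmit reference are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def focus_delays(elements, focus, c=1540.0, fs=50e6):
    """Round-trip propagation delay, in samples, from the array origin to a
    focal point and back to each element. Sound speed c (m/s) and sample
    clock fs (Hz) are illustrative values."""
    tx = np.linalg.norm(focus)                      # origin -> focus
    rx = np.linalg.norm(elements - focus, axis=1)   # focus -> each element
    return np.rint((tx + rx) / c * fs).astype(int)

# Hypothetical 128-element linear array, 0.3 mm pitch, centred on the
# origin, focused 30 mm straight ahead.
x = (np.arange(128) - 63.5) * 0.3e-3
elements = np.stack([x, np.zeros(128), np.zeros(128)], axis=1)
delays = focus_delays(elements, focus=np.array([0.0, 0.0, 0.03]))
```

In hardware these integers index per-channel sample buffers; the paper's contribution is computing or approximating them cheaply enough for thousands of channels and billions of focal points per frame.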
International Nuclear Information System (INIS)
Boulyga, Sergei F.; Heumann, Klaus G.
2006-01-01
A method based on inductively coupled plasma mass spectrometry (ICP-MS) was developed which allows the measurement of ²³⁶U at concentrations down to 3 × 10⁻¹⁴ g g⁻¹ and extremely low ²³⁶U/²³⁸U isotope ratios in soil samples, down to 10⁻⁷. By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5000 counts fg⁻¹ uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions, UH⁺/U⁺, to a level of 10⁻⁶. An abundance sensitivity of 3 × 10⁻⁷ was observed for ²³⁶U/²³⁸U isotope ratio measurements at a mass resolution of 4000. The detection limit for ²³⁶U and the lowest detectable ²³⁶U/²³⁸U isotope ratio were improved by more than two orders of magnitude compared with the corresponding values for alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the ²³⁶U/²³⁸U isotope ratio is a much more sensitive and accurate marker of environmental contamination by spent uranium than the ²³⁵U/²³⁸U isotope ratio. The ICP-MS technique allowed for the first time the detection of irradiated uranium in soil samples even at distances of more than 200 km to the north of the Chernobyl NPP (Mogilev region). The concentration of ²³⁶U in the upper 0-10 cm soil layers varied from 2 × 10⁻⁹ g g⁻¹ within radioactive spots close to the Chernobyl NPP to 3 × 10⁻¹³ g g⁻¹ at a sampling site located >200 km from Chernobyl.
Efendiev, Y.
2009-11-01
The Markov chain Monte Carlo (MCMC) method is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, MCMC usually requires many flow and transport simulations to evaluate the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions, which are used for efficient sampling within the MCMC framework. We propose a two-stage MCMC in which inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is decided on the basis of a statistical model developed off line. The proposed method is an extension of approaches considered earlier, where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and less expensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in history matching.
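The two-stage screening idea can be illustrated with a toy sampler. This is a generic sketch, not the authors' subsurface setup: the 1-D targets, proposal width, and chain length are illustrative. The coarse posterior filters proposals, the fine posterior is evaluated only for survivors, and the second acceptance ratio divides out the coarse screening so the chain still targets the fine posterior exactly.

```python
import math, random

def two_stage_mcmc(log_fine, log_coarse, x0, n_steps, prop_sd=0.5, seed=1):
    """Two-stage Metropolis: a cheap coarse model screens each proposal, and
    only survivors pay for a fine-model evaluation. The stage-2 ratio
    corrects for the coarse screening, preserving exactness."""
    rng = random.Random(seed)
    x, lf, lc = x0, log_fine(x0), log_coarse(x0)
    samples, fine_calls = [], 0
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, prop_sd)
        lcy = log_coarse(y)
        # Stage 1: ordinary Metropolis test on the coarse model only.
        if math.log(rng.random()) < lcy - lc:
            fine_calls += 1
            lfy = log_fine(y)
            # Stage 2: correct with the fine/coarse likelihood ratio.
            if math.log(rng.random()) < (lfy - lf) - (lcy - lc):
                x, lf, lc = y, lfy, lcy
        samples.append(x)
    return samples, fine_calls

# Toy targets: fine = standard normal, coarse = broader surrogate.
samples, fine_calls = two_stage_mcmc(
    lambda x: -0.5 * x * x,
    lambda x: -0.5 * (x / 1.5) ** 2,
    x0=0.0, n_steps=20000)
```

Every proposal rejected at stage 1 saves one fine-model evaluation, which is where the efficiency gain comes from when the fine model is an expensive flow simulation.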
Introducing etch kernels for efficient pattern sampling and etch bias prediction
Weisbuch, François; Lutich, Andrey; Schatz, Jirka
2018-01-01
Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, purely empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to obtaining a robust etch model. This work proposes to define a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to represent the finest details of the resist geometry and to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch biases derived from scanning electron microscope contours to train artificial neural networks to predict the etch bias. The method, applied to contact and line/space layers, shows an improvement in prediction accuracy over the standard etch model. This work emphasizes the importance of the etch kernel definition for characterizing and predicting complex etch effects.
Liao, Shu Y.; Lee, Myungwoon; Wang, Tuo; Sergeyev, Ivan V.; Hong, Mei
2016-01-01
Although dynamic nuclear polarization (DNP) has dramatically enhanced solid-state NMR spectral sensitivities of many synthetic materials and some biological macromolecules, recent studies of membrane-protein DNP using exogenously doped paramagnetic radicals as polarizing agents have reported varied and sometimes surprisingly limited enhancement factors. This motivated us to carry out a systematic evaluation of sample preparation protocols for optimizing the sensitivity of DNP NMR spectra of membrane-bound peptides and proteins at cryogenic temperatures of ~110 K. We show that mixing the radical with the membrane by direct titration instead of centrifugation gives a significant boost to DNP enhancement. We quantify the relative sensitivity enhancement between AMUPol and TOTAPOL, two commonly used radicals, and between deuterated and protonated lipid membranes. AMUPol shows ~4 fold higher sensitivity enhancement than TOTAPOL, while deuterated lipid membrane does not give net higher sensitivity for the membrane peptides than protonated membrane. Overall, a ~100 fold enhancement between the microwave-on and microwave-off spectra can be achieved on lipid-rich membranes containing conformationally disordered peptides, and absolute sensitivity gains of 105–160 can be obtained between low-temperature DNP spectra and high-temperature non-DNP spectra. We also measured the paramagnetic relaxation enhancement of lipid signals by TOTAPOL and AMUPol, to determine the depths of these two radicals in the lipid bilayer. Our data indicate a bimodal distribution of both radicals, a surface-bound fraction and a membrane-bound fraction where the nitroxides lie at ~10 Å from the membrane surface. TOTAPOL appears to have a higher membrane-embedded fraction than AMUPol. These results should be useful for membrane-protein solid-state NMR studies under DNP conditions and provide insights into how biradicals interact with phospholipid membranes. PMID:26873390
Energy Technology Data Exchange (ETDEWEB)
Yu, Yuqi; Wang, Jinan; Shao, Qiang; Zhu, Weiliang (ACS Key Laboratory of Receptor Research, Drug Discovery and Design Center, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, 555 Zuchongzhi Road, Shanghai 201203, China); Shi, Jiye (UCB Pharma, 216 Bath Road, Slough SL1 4EN, United Kingdom)
2015-03-28
The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge computational cost, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD can greatly increase computational efficiency and thus expand the application of REMD simulation to larger protein systems.
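The exchange step at the heart of any REMD variant, including the hybrid scheme above, is the standard Metropolis swap criterion between neighbouring temperatures. A minimal Monte Carlo sketch on a double-well potential follows; the potential, temperatures, and move sizes are illustrative, and this shows only the exchange machinery, not the paper's velocity-scaling hybrid-solvent scheme.

```python
import math, random

def remd_double_well(betas, n_sweeps=4000, seed=2):
    """Minimal temperature replica-exchange Monte Carlo on U(x) = (x^2 - 1)^2.
    Each replica makes local Metropolis moves at its own inverse temperature;
    neighbouring replicas then attempt a configuration swap, accepted with
    probability min(1, exp(dbeta * dE)), the standard REMD criterion."""
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2
    xs = [1.0] * len(betas)              # all replicas start in one well
    accepted = tried = 0
    for sweep in range(n_sweeps):
        for i, b in enumerate(betas):    # local Metropolis moves
            y = xs[i] + rng.gauss(0.0, 0.4)
            if math.log(rng.random()) < -b * (U(y) - U(xs[i])):
                xs[i] = y
        j = sweep % (len(betas) - 1)     # alternate neighbouring pairs
        d = (betas[j] - betas[j + 1]) * (U(xs[j]) - U(xs[j + 1]))
        tried += 1
        if math.log(rng.random()) < d:
            xs[j], xs[j + 1] = xs[j + 1], xs[j]
            accepted += 1
    return accepted / tried

swap_rate = remd_double_well([5.0, 3.0, 1.8, 1.0])
```

A swap rate comfortably between 0 and 1 indicates the temperature ladder is spaced usefully; reducing the number of replicas while keeping this rate healthy is precisely what the hybrid solvent scheme aims at.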
Meng, Yilin; Roux, Benoît
2015-08-11
The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
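The regression idea can be sketched on a one-dimensional toy problem. This is a generic illustration with simulated histograms, not the authors' formulation or data: each umbrella window contributes linear equations relating its biased log-histogram to the global free energy surface, and a single weighted least-squares solve replaces the WHAM iteration.

```python
import numpy as np

def regression_pmf(counts, bias, min_count=5):
    """Combine umbrella-sampling windows by weighted linear least squares
    instead of iterating the WHAM equations. counts[i, b] is the histogram
    of window i; bias[i, b] is beta * U_i at bin b. The linear model
        log rho_i(b) + bias[i, b] = g_b + f_i
    is solved in one shot for bin log-densities g_b and per-window offsets
    f_i; the PMF is -g_b up to an additive constant."""
    n_win, n_bin = counts.shape
    tot = counts.sum(axis=1)
    rows, rhs = [], []
    for i in range(n_win):
        for b in range(n_bin):
            if counts[i, b] >= min_count:
                w = np.sqrt(counts[i, b])          # Poisson-like weighting
                row = np.zeros(n_bin + n_win)
                row[b], row[n_bin + i] = w, w
                rows.append(row)
                rhs.append(w * (np.log(counts[i, b] / tot[i]) + bias[i, b]))
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    pmf = -sol[:n_bin]
    return pmf - pmf.min()

# Toy data: true PMF F(x) = 10 (x - 0.5)^2 sampled under 9 harmonic
# umbrella biases; window samples are drawn exactly (both are quadratic,
# so each biased density is a Gaussian).
rng = np.random.default_rng(3)
alpha, k = 10.0, 50.0
centers = np.linspace(0.1, 0.9, 9)
edges = np.linspace(0.0, 1.0, 41)
mids = 0.5 * (edges[:-1] + edges[1:])
bias = 0.5 * k * (mids[None, :] - centers[:, None]) ** 2
counts = np.zeros((9, 40))
for i, c in enumerate(centers):
    a = alpha + 0.5 * k                  # combined quadratic stiffness
    m = (alpha * 0.5 + 0.5 * k * c) / a  # mean of the biased distribution
    x = rng.normal(m, np.sqrt(0.5 / a), size=20000)
    counts[i] = np.histogram(x[(x >= 0) & (x < 1)], bins=edges)[0]
pmf = regression_pmf(counts, bias)
```

The per-window offsets `f_i` absorb the unknown normalizations, which is exactly the role the free-energy constants play in the iterative WHAM equations.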
Equilibrium and non equilibrium in fragmentation
International Nuclear Information System (INIS)
Dorso, C.O.; Chernomoretz, A.; Lopez, J.A.
2001-01-01
In this communication we present recent results regarding the interplay of equilibrium and non-equilibrium in the process of fragmentation of excited finite Lennard-Jones drops. Because the general features of such a potential resemble those of the nuclear interaction (a fact reinforced by the similarity between the EOS of the two systems), these studies are not only relevant from a fundamental point of view but also shed light on the problem of nuclear multifragmentation. We focus on the microscopic analysis of the state of the fragmenting system at fragmentation time. We show that the caloric curve (i.e. the functional relationship between the temperature of the system and the excitation energy) is of the rise-plateau type, with no vapor branch. The usual rise-plateau-rise pattern is only recovered when equilibrium is artificially imposed. This result casts serious doubt on the validity of the freeze-out hypothesis. This feature is independent of the dimensionality or excitation mechanism. Moreover, we explore the behavior of magnitudes which can help determine the degree of the assumed phase transition. It is found that no clear-cut criterion is presently available.
Chemical Principles Revisited: Chemical Equilibrium.
Mickey, Charles D.
1980-01-01
Describes: (1) Law of Mass Action; (2) equilibrium constant and ideal behavior; (3) general form of the equilibrium constant; (4) forward and reverse reactions; (5) factors influencing equilibrium; (6) Le Chatelier's principle; (7) effects of temperature, changing concentration, and pressure on equilibrium; and (8) catalysts and equilibrium.
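As a worked illustration of points (2) and (6) above, the equilibrium position of a reaction A + B ⇌ C + D can be computed numerically from its equilibrium constant, and adding reactant shifts the equilibrium toward products, as Le Chatelier's principle predicts. The reaction and the value of K below are illustrative textbook numbers, not taken from the article.

```python
def equilibrium_extent(K, a0, b0, tol=1e-12):
    """Extent of reaction x for A + B <=> C + D starting from a0 and b0 of
    the reactants and no products, so K = x^2 / ((a0 - x)(b0 - x)).
    f is increasing on [0, min(a0, b0)], so bisection finds the unique root."""
    f = lambda x: x * x - K * (a0 - x) * (b0 - x)
    lo, hi = 0.0, min(a0, b0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

x1 = equilibrium_extent(K=4.0, a0=1.0, b0=1.0)  # analytic answer: x = 2/3
x2 = equilibrium_extent(K=4.0, a0=2.0, b0=1.0)  # extra A shifts equilibrium right
```

Here x2 > x1: increasing the initial amount of A drives more of B into products, the quantitative content of Le Chatelier's principle for concentration changes.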
Equilibrium and non-equilibrium phenomena in arcs and torches
Mullen, van der J.J.A.M.
2000-01-01
A general treatment of non-equilibrium plasma aspects is obtained by relating transport fluxes to equilibrium-restoring processes in so-called disturbed Bilateral Relations. The (non-)equilibrium stage of a small microwave-induced plasma serves as a case study.
Directory of Open Access Journals (Sweden)
Katalin Martinás
2007-02-01
An agent-based microeconomic framework for dynamic economics is formulated in a materialist approach, and an axiomatic foundation of a non-equilibrium microeconomics is outlined. Economic activity is modelled as the transformation and transport of commodities (materials) owned by the agents. The rate of transformation (production intensity) and the rate of transport (trade) are set by the agents. Economic decision rules are derived from observed economic behaviour. The non-linear equations are solved numerically for a model economy. Numerical solutions for simple model economies suggest that some of the results of general equilibrium economics are consequences only of the equilibrium hypothesis. We show that perfect competition of selfish agents does not guarantee the stability of economic equilibrium; cooperativity is needed, too.
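The flavour of such agent-based exchange dynamics can be sketched with a toy barter economy. This is a minimal illustration, not the authors' model: the Cobb-Douglas utility, trade size, price range, and endowments are all assumed. Selfish agents accept only trades that strictly raise their own utility, commodities are conserved, and total utility rises monotonically without any guarantee of reaching a competitive equilibrium.

```python
import random

def trade_round(agents, rng, size=0.05):
    """One random bilateral barter attempt between two selfish agents with
    Cobb-Douglas utility u = x * y over two commodities: agent i offers
    `size` of good 1 against `size * price` of good 2 at a random price,
    and the trade executes only if it strictly raises BOTH utilities."""
    u = lambda a: a[0] * a[1]
    i, j = rng.sample(range(len(agents)), 2)
    price = rng.uniform(0.5, 2.0)
    dx, dy = size, size * price
    ai, aj = agents[i], agents[j]
    if ai[0] >= dx and aj[1] >= dy:
        ni = (ai[0] - dx, ai[1] + dy)
        nj = (aj[0] + dx, aj[1] - dy)
        if u(ni) > u(ai) and u(nj) > u(aj):
            agents[i], agents[j] = ni, nj

rng = random.Random(4)
agents = [(rng.uniform(0.5, 2.0), rng.uniform(0.5, 2.0)) for _ in range(20)]
goods0 = (sum(a[0] for a in agents), sum(a[1] for a in agents))
u0 = sum(a[0] * a[1] for a in agents)
for _ in range(5000):
    trade_round(agents, rng)
u1 = sum(a[0] * a[1] for a in agents)
```

Because every executed trade is mutually beneficial, total utility never decreases, yet nothing in the dynamics enforces convergence to a single market-clearing price, which echoes the abstract's point about stability requiring more than selfish exchange.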
Directory of Open Access Journals (Sweden)
Muhammed Alamgir Zaman Chowdhury
2014-01-01
In the present study, residual pesticide levels were determined in eggplants (Solanum melongena) (n=16) purchased from four different markets in Dhaka, Bangladesh. The carbamate and organophosphorus pesticide residue levels were determined by high performance liquid chromatography (HPLC), and the efficiency of gamma radiation for pesticide removal in three different types of vegetables was also studied. Half (50%) of the samples contained pesticides, and three samples had residual levels above the maximum residue limits set by the World Health Organisation. Three carbamates (carbaryl, carbofuran, and pirimicarb) and six organophosphates (phenthoate, diazinon, parathion, dimethoate, phosphamidon, and pirimiphos-methyl) were detected in the eggplant samples; the highest carbofuran level detected was 1.86 mg/kg, while phenthoate was detected at 0.311 mg/kg. Gamma radiation decreased pesticide levels proportionately with increasing radiation dose. Diazinon, chlorpyrifos, and phosphamidon were reduced by 40-48%, 35-43%, and 30-45%, respectively, when a radiation dose of 0.5 kGy was used. However, when the radiation dose was increased to 1.0 kGy, the levels of these pesticides were reduced by 85-90%, 80-91%, and 90-95%, respectively. In summary, our study revealed that pesticide residues are present at high levels in vegetable samples and that gamma radiation at 1.0 kGy can remove 80-95% of some pesticides.
Equilibrium statistical mechanics
Mayer, J E
1968-01-01
The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight t
Computing Equilibrium Chemical Compositions
Mcbride, Bonnie J.; Gordon, Sanford
1995-01-01
The Chemical Equilibrium With Transport Properties, 1993 (CET93) computer program provides data on chemical-equilibrium compositions and aids the calculation of thermodynamic properties of chemical systems. This information is essential in the design and analysis of equipment such as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93/PC is a version of CET93 specifically designed to run within the 640K memory limit of the MS-DOS operating system. CET93/PC is written in FORTRAN.
International Nuclear Information System (INIS)
Jiang Bowei; Xiang Fawei; Zhao Xingchun; Wang Lihua; Fan Chunhai
2013-01-01
Deoxyribonucleic acid (DNA) damage arising from radiation has become widespread along with the development of nuclear weapons and the clinically wide application of computed tomography (CT) scans and nuclear medicine. All ionizing radiations (X-rays, γ-rays, alpha particles, etc.) and ultraviolet (UV) radiation lead to DNA damage. The polymerase chain reaction (PCR) is one of the most widely used techniques for detecting DNA damage, as the amplification stops at the site of the damage. Improvements to enhance the efficiency of PCR are always required and remain a great challenge. Here we establish a multiplex PCR assay system (MPAS) that serves as a robust and efficient method for the direct detection of target DNA sequences in genomic DNA. The system is established by adding a combination of PCR enhancers to a standard PCR buffer. The performance of MPAS was demonstrated by carrying out direct PCR amplification on 1.2 mm human blood punches using commercially available primer sets that include multiple primer pairs. The optimized PCR system produced high-quality genotyping results without any indication of inhibitory effects and led to a full-profile success rate of 98.13%. Our studies demonstrate that MPAS provides an efficient and robust method for obtaining sensitive, reliable and reproducible PCR results from human blood samples.
Computation of Phase Equilibrium and Phase Envelopes
DEFF Research Database (Denmark)
Ritschel, Tobias Kasper Skovborg; Jørgensen, John Bagterp
In this technical report, we describe the computation of phase equilibrium and phase envelopes based on expressions for the fugacity coefficients. We derive those expressions from the residual Gibbs energy. We consider 1) ideal gases and liquids modeled with correlations from the DIPPR database and 2) nonideal gases and liquids modeled with cubic equations of state. Next, we derive the equilibrium conditions for an isothermal-isobaric (constant temperature, constant pressure) vapor-liquid equilibrium process (PT flash), and we present a method for the computation of phase envelopes. We formulate the involved equations in terms of the fugacity coefficients. We present expressions for the first-order derivatives. Such derivatives are necessary in computationally efficient gradient-based methods for solving the vapor-liquid equilibrium equations and for computing phase envelopes.
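For fixed equilibrium ratios K_i = y_i/x_i, the PT-flash equilibrium and material-balance conditions reduce to the classic Rachford-Rice equation for the vapor fraction. A minimal sketch follows; holding the K-values constant is an assumption made here for illustration, whereas a full flash of the kind described in the report updates them from the fugacity coefficients at each iteration. The feed composition and K-values are made-up numbers.

```python
def rachford_rice(z, K, tol=1e-12):
    """Vapor fraction V for a PT flash with fixed K-values K_i = y_i / x_i:
    solve g(V) = sum_i z_i (K_i - 1) / (1 + V (K_i - 1)) = 0 by bisection.
    g is monotonically decreasing in V, so the root on (0, 1) is unique."""
    g = lambda V: sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                      for zi, Ki in zip(z, K))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    V = 0.5 * (lo + hi)
    x = [zi / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid
    y = [Ki * xi for Ki, xi in zip(K, x)]                      # vapor
    return V, x, y

# Illustrative three-component feed with one light, one mid, one heavy key.
V, x, y = rachford_rice(z=[0.5, 0.3, 0.2], K=[3.0, 1.2, 0.3])
```

At the solution both phase compositions sum to one and the component material balances V·y_i + (1 - V)·x_i = z_i hold by construction.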
Local equilibrium in bird flocks
Mora, Thierry; Walczak, Aleksandra M.; Del Castello, Lorenzo; Ginelli, Francesco; Melillo, Stefania; Parisi, Leonardo; Viale, Massimiliano; Cavagna, Andrea; Giardina, Irene
2016-12-01
The correlated motion of flocks is an example of global order emerging from local interactions. An essential difference with respect to analogous ferromagnetic systems is that flocks are active: animals move relative to each other, dynamically rearranging their interaction network. This non-equilibrium characteristic has been studied theoretically, but its impact on actual animal groups remains to be fully explored experimentally. Here, we introduce a novel dynamical inference technique, based on the principle of maximum entropy, which accommodates network rearrangements and overcomes the problem of slow experimental sampling rates. We use this method to infer the strength and range of alignment forces from data of starling flocks. We find that local bird alignment occurs on a much faster timescale than neighbour rearrangement. Accordingly, equilibrium inference, which assumes a fixed interaction network, gives results consistent with dynamical inference. We conclude that bird orientations are in a state of local quasi-equilibrium over the interaction length scale, providing firm ground for the applicability of statistical physics in certain active systems.
Directory of Open Access Journals (Sweden)
Yang Hsin-Chou
2012-07-01
accuracies and a reduced number of selected markers in AIM panels. Conclusions: Integrative analysis of SNP and GE markers provides high-accuracy and/or cost-effective classification results for assigning samples from closely related or distantly related ancestral lineages to their original ancestral populations. The user-friendly BIASLESS (Biomarkers Identification and Samples Subdivision) software was developed as an efficient tool for selecting key SNP and/or GE markers and then building models for sample subdivision. BIASLESS was programmed in R and R-GUI and is available online at http://www.stat.sinica.edu.tw/hsinchou/genetics/prediction/BIASLESS.htm.
Folsom, R. E.; Weber, J. H.
Two sampling designs were compared for the planned 1978 national longitudinal survey of high school seniors with respect to statistical efficiency and cost. The 1972 survey used a stratified two-stage sample of high schools and seniors within schools. In order to minimize interviewer travel costs, an alternate sampling design was proposed,…
Pressl, B.; Laiho, K.; Chen, H.; Günthner, T.; Schlager, A.; Auchter, S.; Suchomel, H.; Kamp, M.; Höfling, S.; Schneider, C.; Weihs, G.
2018-04-01
Semiconductor alloys of aluminum gallium arsenide (AlGaAs) exhibit strong second-order optical nonlinearities. This makes them prime candidates for the integration of devices for classical nonlinear optical frequency conversion or photon-pair production, for example, through the parametric down-conversion (PDC) process. Within this material system, Bragg-reflection waveguides (BRW) are a promising platform, but the specifics of the fabrication process and the peculiar optical properties of the alloys require careful engineering. Previously, BRW samples have been mostly derived analytically from design equations using a fixed set of aluminum concentrations. This approach limits the variety and flexibility of the device design. Here, we present a comprehensive guide to the design and analysis of advanced BRW samples and show how to automate these tasks. Then, nonlinear optimization techniques are employed to tailor the BRW epitaxial structure towards a specific design goal. As a demonstration of our approach, we search for the optimal effective nonlinearity and mode overlap, which indicate an improved conversion efficiency or PDC pair-production rate. However, the methodology itself is much more versatile, as any parameter related to the optical properties of the waveguide, for example the phasematching wavelength or modal dispersion, may be incorporated as a design goal. Further, we use the developed tools to gain reliable insight into the fabrication tolerances and challenges of real-world sample imperfections. One such example is the common thickness gradient along the wafer, which strongly influences the photon-pair rate and spectral properties of the PDC process. Detailed models and a better understanding of the optical properties of a realistic BRW structure are not only useful for investigating current samples, but also provide important feedback for the design and fabrication of potential future turn-key devices.
Eberl, Gérard
2016-08-01
The classical model of immunity posits that the immune system reacts to pathogens and injury and restores homeostasis. Indeed, a century of research has uncovered the means and mechanisms by which the immune system recognizes danger and regulates its own activity. However, this classical model does not fully explain complex phenomena, such as tolerance, allergy, the increased prevalence of inflammatory pathologies in industrialized nations and immunity to multiple infections. In this Essay, I propose a model of immunity that is based on equilibrium, in which the healthy immune system is always active and in a state of dynamic equilibrium between antagonistic types of response. This equilibrium is regulated both by the internal milieu and by the microbial environment. As a result, alteration of the internal milieu or microbial environment leads to immune disequilibrium, which determines tolerance, protective immunity and inflammatory pathology.
Equilibrium shoreface profiles
DEFF Research Database (Denmark)
Aagaard, Troels; Hughes, Michael G
2017-01-01
Large-scale coastal behaviour models use the shoreface profile of equilibrium as a fundamental morphological unit that is translated in space to simulate coastal response to, for example, sea level oscillations and variability in sediment supply. Despite a longstanding focus on the shoreface… profile and its relevance to predicting coastal response to changing environmental conditions, the processes and dynamics involved in shoreface equilibrium are still not fully understood. Here, we apply a process-based empirical sediment transport model, combined with morphodynamic principles, to provide…; there is no tuning or calibration and computation times are short. It is therefore easily implemented with repeated iterations to manage uncertainty…
O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.
2013-04-01
Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compression applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes the implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that analysis on the compressed data remains accurate.
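The CS premise above, that the target signal has a sparse representation in a known basis, can be illustrated with a short numpy sketch (ours, not from the study; the signal and basis choice are assumptions): a signal built from two cosine modes concentrates essentially all of its energy in two DCT coefficients.

```python
import numpy as np

N = 64
n = np.arange(N)

# Orthonormal DCT-II analysis matrix (C @ C.T == identity)
C = np.sqrt(2.0 / N) * np.cos(np.pi * n[:, None] * (n[None, :] + 0.5) / N)
C[0, :] /= np.sqrt(2.0)

# Toy stand-in for a structural response: two cosine modes, exactly 2-sparse in DCT
x = np.cos(np.pi * (n + 0.5) * 5 / N) + 0.3 * np.cos(np.pi * (n + 0.5) * 12 / N)

coeffs = C @ x
sorted_sq = np.sort(coeffs**2)[::-1]
energy_fraction = sorted_sq[:2].sum() / sorted_sq.sum()
print(f"energy in 2 of {N} DCT coefficients: {energy_fraction:.6f}")
```

Because almost all the energy lives in a few coefficients, far fewer than N random measurements suffice for recovery, which is what makes CS attractive on bandwidth-constrained wireless nodes.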
Differential Equation of Equilibrium
African Journals Online (AJOL)
user
ABSTRACT. Analysis of an underground circular cylindrical shell is carried out in this work. The fourth-order differential equation of equilibrium, comparable to that of a beam on an elastic foundation, was derived from static principles on the assumptions of P. L. Pasternak. Laplace transformation was used to solve the governing ...
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then proposed on this basis. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
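The core estimator, partitioning the output samples by subintervals of one input and applying the law of total variance, can be sketched as follows (a toy additive model of our own choosing, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_bins = 200_000, 50

def main_effect(xi, y, bins):
    # Partition the output samples into successive non-overlapping
    # subintervals of the input xi, then estimate the first-order index
    # S_i = Var(E[Y|X_i]) / Var(Y) from the between-bin variance of the means.
    idx = np.minimum((xi * bins).astype(int), bins - 1)
    counts = np.bincount(idx, minlength=bins)
    means = np.bincount(idx, weights=y, minlength=bins) / counts
    w = counts / len(y)
    grand = np.sum(w * means)
    return np.sum(w * (means - grand) ** 2) / y.var()

# Toy model Y = X1 + 0.5*X2 with X1, X2 ~ U(0, 1):
# exact indices are S1 = 1/1.25 = 0.8 and S2 = 0.25/1.25 = 0.2
X = rng.random((N, 2))
Y = X[:, 0] + 0.5 * X[:, 1]
S1 = main_effect(X[:, 0], Y, n_bins)
S2 = main_effect(X[:, 1], Y, n_bins)
print(f"S1 ~ {S1:.3f} (exact 0.8), S2 ~ {S2:.3f} (exact 0.2)")
```

Note that both indices come from the same single group of sample points, which is the efficiency gain the abstract describes.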
Comments on equilibrium, transient equilibrium, and secular equilibrium in serial radioactive decay
International Nuclear Information System (INIS)
Prince, J.R.
1979-01-01
Equations describing serial radioactive decay are reviewed along with published descriptions of transient and secular equilibrium. It is shown that terms describing equilibrium are not used in the same way by various authors. Specific definitions are proposed; they suggest that secular equilibrium is a subset of transient equilibrium.
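The two-member Bateman solution makes the distinction concrete; a small sketch (the isotope choices are ours, and branching ratios are ignored): for 99Mo → 99mTc the daughter/parent activity ratio settles at λ2/(λ2 − λ1) ≈ 1.10 (transient equilibrium), while for a very long-lived parent such as 137Cs → 137mBa it settles at ≈ 1, the secular special case.

```python
import numpy as np

def activity_ratio(lam1, lam2, t):
    # Daughter/parent activity ratio A2/A1 from the Bateman equations for a
    # two-member chain with a pure parent at t = 0 (branching ignored):
    # A2/A1 = lam2/(lam2 - lam1) * (1 - exp(-(lam2 - lam1) * t))
    return lam2 / (lam2 - lam1) * (1.0 - np.exp(-(lam2 - lam1) * t))

ln2 = np.log(2.0)

# Transient equilibrium: 99Mo (66 h) -> 99mTc (6.01 h), after 60 h
r_transient = activity_ratio(ln2 / 66.0, ln2 / 6.01, t=60.0)

# Secular limit: 137Cs (~30 y) -> 137mBa (2.55 min), after 1 h
r_secular = activity_ratio(ln2 / 263_000.0, ln2 / 0.0425, t=1.0)

print(f"transient ratio -> {r_transient:.3f}, secular ratio -> {r_secular:.4f}")
```

Setting λ1 → 0 in the same formula reproduces the secular result, which is one way to read the proposal that secular equilibrium is a subset of transient equilibrium.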
Achieving Radiation Tolerance through Non-Equilibrium Grain Boundary Structures.
Vetterick, Gregory A; Gruber, Jacob; Suri, Pranav K; Baldwin, Jon K; Kirk, Marquis A; Baldo, Pete; Wang, Yong Q; Misra, Amit; Tucker, Garritt J; Taheri, Mitra L
2017-09-25
Many methods used to produce nanocrystalline (NC) materials leave behind non-equilibrium grain boundaries (GBs) containing excess free volume and higher energy than their equilibrium counterparts with identical 5 degrees of freedom. Since non-equilibrium GBs have increased amounts of both strain and free volume, these boundaries may act as more efficient sinks for the excess interstitials and vacancies produced in a material under irradiation as compared to equilibrium GBs. The relative sink strengths of equilibrium and non-equilibrium GBs were explored by comparing the behavior of annealed (equilibrium) and as-deposited (non-equilibrium) NC iron films on irradiation. These results were coupled with atomistic simulations to better reveal the underlying processes occurring on timescales too short to capture using in situ TEM. After irradiation, NC iron with non-equilibrium GBs contains both a smaller number density of defect clusters and a smaller average defect cluster size. Simulations showed that excess free volume contributes to a decreased survival rate of point defects in cascades occurring adjacent to the GB, and that these boundaries undergo less dramatic changes in structure upon irradiation. These results suggest that non-equilibrium GBs act as more efficient sinks for defects and could be utilized to create more radiation-tolerant materials in the future.
Phase equilibrium condition of marine carbon dioxide hydrate
International Nuclear Information System (INIS)
Sun, Shi-Cai; Liu, Chang-Ling; Ye, Yu-Guang
2013-01-01
Highlights: ► CO2 hydrate phase equilibrium was studied in simulated marine sediments. ► CO2 hydrate equilibrium temperature in NaCl solution and submarine pore water was depressed. ► Coarse-grained silica sand does not affect CO2 hydrate phase equilibrium. ► The relationship between equilibrium temperature and freezing point was discussed. - Abstract: The phase equilibrium of marine carbon dioxide hydrate should be understood for ocean storage of carbon dioxide. In this paper, the isochoric multi-step heating dissociation method was employed to investigate the phase equilibrium of carbon dioxide hydrate in a variety of systems (NaCl solution, submarine pore water, silica sand + NaCl solution mixture). The experimental results show that the depression of the phase equilibrium temperature of carbon dioxide hydrate in NaCl solution is caused mainly by the Cl− ion. The relationship between the equilibrium temperature and the freezing point in NaCl solution was discussed. The phase equilibrium temperature of carbon dioxide hydrate in submarine pore water is shifted by −1.1 K relative to that in pure water. However, the phase equilibrium temperature of carbon dioxide hydrate in mixture samples of coarse-grained silica sand and NaCl solution agrees with that in NaCl solution of the corresponding concentration. The relationship between the equilibrium temperature and the freezing point in the mixture samples was also discussed.
International Nuclear Information System (INIS)
Baccouche, S.; Al-Azmi, D.; Karunakara, N.; Trabelsi, A.
2012-01-01
Gamma-ray measurements in terrestrial/environmental samples require the use of highly efficient detectors because of the low level of the radionuclide activity concentrations in the samples; thus scintillators are suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples. This work describes a Monte Carlo method for making the full-energy efficiency calibration curves for both detectors using gamma-ray energies associated with the decay of 137Cs (661 keV), 40K (1460 keV), 238U (214Bi, 1764 keV) and 232Th (208Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence summing effect occurring for the 2614 keV emission of 208Tl is assessed by simulation. The method provides an efficient tool to make the full-energy efficiency calibration curve for scintillation detectors for any sample geometry and volume in order to determine accurate activity concentrations in terrestrial samples. - Highlights: ► CsI(Tl) and NaI(Tl) detectors were studied for the measurement of terrestrial samples. ► A Monte Carlo method was used for efficiency calibration using natural gamma-emitting terrestrial radionuclides. ► The coincidence summing effect occurring for the 2614 keV emission of 208Tl is assessed by simulation.
Richardson, Peter M; Jackson, Scott; Parrott, Andrew J; Nordon, Alison; Duckett, Simon B; Halse, Meghan E
2018-07-01
Signal amplification by reversible exchange (SABRE) is a hyperpolarisation technique that catalytically transfers nuclear polarisation from parahydrogen, the singlet nuclear isomer of H2, to a substrate in solution. The SABRE exchange reaction is carried out in a polarisation transfer field (PTF) of tens of gauss before transfer to a stronger magnetic field for nuclear magnetic resonance (NMR) detection. In the simplest implementation, polarisation transfer is achieved by shaking the sample in the stray field of a superconducting NMR magnet. Although convenient, this method suffers from limited reproducibility and cannot be used with NMR spectrometers that do not have appreciable stray fields, such as benchtop instruments. Here, we use a simple hand-held permanent magnet array to provide the necessary PTF during sample shaking. We find that the use of this array provides a 25% increase in SABRE enhancement over the stray field approach, while also providing improved reproducibility. Arrays with a range of PTFs were tested, and the PTF-dependent SABRE enhancements were found to be in excellent agreement with comparable experiments carried out using an automated flow system where an electromagnet is used to generate the PTF. We anticipate that this approach will improve the efficiency and reproducibility of SABRE experiments carried out using manual shaking and will be particularly useful for benchtop NMR, where a suitable stray field is not readily accessible. The ability to construct arrays with a range of PTFs will also enable the rapid optimisation of SABRE enhancement as a function of PTF for new substrate and catalyst systems. © 2017 The Authors Magnetic Resonance in Chemistry Published by John Wiley & Sons Ltd.
Standardization of 125Sb in equilibrium and non-equilibrium situations with 125mTe
International Nuclear Information System (INIS)
Rodriguez Barquero, L.; Jimenez de Mingo, A.; Grau Carles, A.
1997-10-01
We study the stability of 125Sb in the following scintillators: HiSafeIII™, Insta-Gel® Plus and Ultima-Gold™. Since 125mTe requires more than one year to reach secular equilibrium with 125Sb, we cannot be sure, for a given sample, whether equilibrium has been reached or not. In this report we present a new procedure that permits one to calibrate mixtures of 125Sb + 125mTe out of equilibrium. The steps required for the radiochemical separation of the components are indicated. Finally, we study the evolution of the counting rate when column yields are less than 100%. (Author)
International Nuclear Information System (INIS)
Li, Gang; Liang, Yongfei; Xu, Jiayun; Bai, Lixin
2015-01-01
The determination of 137Cs inventory is widely used to estimate the soil erosion or deposition rate. The generally used method to determine the activity of volumetric samples is the relative measurement method, which employs a calibration standard sample with accurately known activity. This method has great advantages in accuracy and operation only when there is a small difference in elemental composition, sample density and geometry between the measured samples and the calibration standard. Otherwise it needs additional efficiency corrections in the calculation process. Monte Carlo simulations can handle these correction problems easily with lower financial cost and higher accuracy. This work presents a detailed description of the simulation and calibration procedure for a conventionally used commercial P-type coaxial HPGe detector with cylindrical sample geometry. The effects of sample elemental composition, density and geometry were discussed in detail and calculated in terms of efficiency correction factors. The effect of sample placement was also analyzed; the results indicate that the radioactive nuclides and sample density are not absolutely uniformly distributed along the axial direction. Finally, a unified binary quadratic functional relationship of the efficiency correction factors as a function of sample density and height was obtained by the least-squares fitting method. This function covers the sample density and height ranges of 0.8–1.8 g/cm3 and 3.0–7.25 cm, respectively. The efficiency correction factors calculated by the fitted function are in good agreement with those obtained by the GEANT4 simulations, with the determination coefficient greater than 0.9999. The results obtained in this paper make the above-mentioned relative measurements more accurate and efficient in the routine radioactive analysis of environmental cylindrical soil samples. - Highlights: • Determination of 137Cs inventory in environmental soil samples by using the relative measurement method
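The last step, fitting a binary quadratic in sample density and height to the correction factors, is easy to reproduce with numpy. The surface below is synthetic (assumed coefficients, not the paper's data), but the least-squares fitting procedure is the same idea:

```python
import numpy as np

# Grid covering the stated ranges: density 0.8-1.8 g/cm^3, height 3.0-7.25 cm
rho, h = np.meshgrid(np.linspace(0.8, 1.8, 6), np.linspace(3.0, 7.25, 6))
r, z = rho.ravel(), h.ravel()

def design(r, z):
    # Binary quadratic basis: 1, rho, rho^2, h, h^2, rho*h
    return np.column_stack([np.ones_like(r), r, r**2, z, z**2, r * z])

# Synthetic "efficiency correction factors" (coefficients are assumptions)
true_coef = np.array([1.2, -0.30, 0.05, 0.04, -0.002, -0.01])
f = design(r, z) @ true_coef

coef, *_ = np.linalg.lstsq(design(r, z), f, rcond=None)
pred = design(r, z) @ coef
r2 = 1.0 - np.sum((f - pred) ** 2) / np.sum((f - f.mean()) ** 2)
print(f"determination coefficient R^2 = {r2:.6f}")
```

On exact quadratic data the fit recovers the coefficients and the determination coefficient is essentially 1, mirroring the >0.9999 agreement the abstract reports for its fitted function.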
Zhang, Zhijun; Liu, Xinzijian; Chen, Zifei; Zheng, Haifeng; Yan, Kangyu; Liu, Jian
2017-07-01
We show a unified second-order scheme for constructing simple, robust, and accurate algorithms for typical thermostats for configurational sampling for the canonical ensemble. When Langevin dynamics is used, the scheme leads to the BAOAB algorithm that has been recently investigated. We show that the scheme is also useful for other types of thermostats, such as the Andersen thermostat and Nosé-Hoover chain, regardless of whether the thermostat is deterministic or stochastic. In addition to analytical analysis, two 1-dimensional models and three typical real molecular systems that range from the gas phase, clusters, to the condensed phase are used in numerical examples for demonstration. Accuracy may be increased by an order of magnitude for estimating coordinate-dependent properties in molecular dynamics (when the same time interval is used), irrespective of which type of thermostat is applied. The scheme is especially useful for path integral molecular dynamics because it consistently improves the efficiency for evaluating all thermodynamic properties for any type of thermostat.
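A minimal version of the Langevin/BAOAB case is easy to write down; the sketch below (our toy, a 1D harmonic oscillator with m = k = kT = 1, not a system from the paper) checks that the sampled configurational variance approaches the exact kT/k:

```python
import numpy as np

rng = np.random.default_rng(1)
m = k = kT = 1.0
gamma, dt, nsteps = 1.0, 0.1, 200_000

c1 = np.exp(-gamma * dt)                # O-step damping factor
c2 = np.sqrt((1.0 - c1 * c1) * m * kT)  # O-step noise amplitude

x, p = 0.0, 0.0
xs = np.empty(nsteps)
for i in range(nsteps):
    p -= 0.5 * dt * k * x                        # B: half momentum kick (force = -k x)
    x += 0.5 * dt * p / m                        # A: half position drift
    p = c1 * p + c2 * rng.standard_normal()      # O: Ornstein-Uhlenbeck thermostat update
    x += 0.5 * dt * p / m                        # A: half position drift
    p -= 0.5 * dt * k * x                        # B: half momentum kick
    xs[i] = x

var_x = xs.var()
print(f"sampled Var(x) = {var_x:.3f}, exact kT/k = {kT / k:.3f}")
```

The B-A-O-A-B ordering is exactly the splitting named in the abstract; swapping the O step for an Andersen-style velocity resampling gives the deterministic/stochastic thermostat variants the scheme unifies.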
Equilibrium and pre-equilibrium emissions in proton-induced ...
Indian Academy of Sciences (India)
necessary for the domain of fission-reactor technology for the calculation of nuclear transmutation … reactions occur in three stages: INC, pre-equilibrium and equilibrium (or compound) … In the evaporation phase of the reaction, the …
Gated equilibrium bloodpool scintigraphy
International Nuclear Information System (INIS)
Reinders Folmer, S.C.C.
1981-01-01
This thesis deals with the clinical applications of gated equilibrium bloodpool scintigraphy, performed with either a gamma camera or a portable detector system, the nuclear stethoscope. The main goal has been to define the value and limitations of noninvasive measurements of left ventricular ejection fraction as a parameter of cardiac performance in various disease states, both for diagnostic purposes as well as during follow-up after medical or surgical intervention. Secondly, it was attempted to extend the use of the equilibrium bloodpool techniques beyond the calculation of ejection fraction alone by considering the feasibility to determine ventricular volumes and by including the possibility of quantifying valvular regurgitation. In both cases, it has been tried to broaden the perspective of the observations by comparing them with results of other, invasive and non-invasive, procedures, in particular cardiac catheterization, M-mode echocardiography and myocardial perfusion scintigraphy. (Auth.)
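The count-based ejection fraction underlying these measurements is a one-line formula; a sketch with illustrative numbers (assumed values, not patient data):

```python
def ejection_fraction(ed_counts, es_counts, bg_counts):
    # Count-based LVEF from a gated equilibrium bloodpool study:
    # EF = (ED - ES) / (ED - BG), since background-corrected counts
    # are proportional to chamber blood volume
    ed = ed_counts - bg_counts
    es = es_counts - bg_counts
    return (ed - es) / ed

# Illustrative end-diastolic, end-systolic and background counts
ef = ejection_fraction(ed_counts=12500, es_counts=7500, bg_counts=2500)
print(f"LVEF = {ef:.0%}")
```

Because only count ratios enter, no geometric assumption about ventricular shape is needed, which is one reason the equilibrium technique is attractive for serial follow-up.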
Problems in equilibrium theory
Aliprantis, Charalambos D
1996-01-01
In studying General Equilibrium Theory the student must first master the theory and then apply it to solve problems. At the graduate level there is no book devoted exclusively to teaching problem solving. This book teaches for the first time the basic methods of proof and problem solving in General Equilibrium Theory. The problems cover the entire spectrum of difficulty; some are routine, some require a good grasp of the material involved, and some are exceptionally challenging. The book presents complete solutions to two hundred problems. In searching for the basic required techniques, the student will find a wealth of new material incorporated into the solutions. The student is challenged to produce solutions which are different from the ones presented in the book.
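A representative exercise of the kind such a book poses, computing the Walrasian equilibrium price of a two-agent, two-good Cobb-Douglas exchange economy, has a closed form; the sketch below uses preferences and endowments of our own choosing:

```python
def equilibrium_price(alpha_a, alpha_b, endow_a, endow_b):
    # Two-good exchange economy with Cobb-Douglas utilities x**alpha * y**(1-alpha)
    # and numeraire p_y = 1. Agent i's demand for x is alpha_i * wealth_i / p_x,
    # so clearing the x market gives the linear equation
    #   p * (wx_total - sum_i alpha_i * wx_i) = sum_i alpha_i * wy_i
    wx_total = endow_a[0] + endow_b[0]
    num = alpha_a * endow_a[1] + alpha_b * endow_b[1]
    den = wx_total - alpha_a * endow_a[0] - alpha_b * endow_b[0]
    return num / den

# Agent A: alpha = 0.5, endowment (1, 0); agent B: alpha = 0.25, endowment (0, 1)
p = equilibrium_price(0.5, 0.25, (1.0, 0.0), (0.0, 1.0))
print(f"equilibrium price of good x: {p}")
```

At p = 0.5 the y market clears as well (demands 0.25 and 0.75 against a total endowment of 1), as Walras' law requires once the x market clears.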
Equilibrium statistical mechanics
Jackson, E Atlee
2000-01-01
Ideal as an elementary introduction to equilibrium statistical mechanics, this volume covers both classical and quantum methodology for open and closed systems. Introductory chapters familiarize readers with probability and microscopic models of systems, while additional chapters describe the general derivation of the fundamental statistical mechanics relationships. The final chapter contains 16 sections, each dealing with a different application, ordered according to complexity, from classical through degenerate quantum statistical mechanics. Key features include an elementary introduction t
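The canonical-ensemble machinery such an introduction begins with fits in a few lines; a sketch for a two-level system (our own example, not taken from the book):

```python
import numpy as np

def two_level(E, kT):
    # Canonical ensemble for energy levels {0, E}: Z = 1 + exp(-E/kT)
    boltz = np.exp(-E / kT)
    Z = 1.0 + boltz
    p_excited = boltz / Z          # occupation probability of the upper level
    mean_energy = E * p_excited    # ensemble-average energy
    return p_excited, mean_energy

p_cold, _ = two_level(E=1.0, kT=0.1)    # kT << E: frozen into the ground state
p_hot, _ = two_level(E=1.0, kT=100.0)   # kT >> E: both levels nearly equally occupied
print(f"p_excited: cold {p_cold:.2e}, hot {p_hot:.4f}")
```

The two limits show the generic behaviour an elementary treatment derives: Boltzmann suppression at low temperature and equipartition of occupations at high temperature.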
DEFF Research Database (Denmark)
Bollerslev, Tim; Sizova, Natalia; Tauchen, George
Stock market volatility clusters in time, carries a risk premium, is fractionally integrated, and exhibits asymmetric leverage effects relative to returns. This paper develops a first internally consistent equilibrium-based explanation for these longstanding empirical facts. The model is cast in…, and the dynamic cross-correlations of the volatility measures with the returns calculated from actual high-frequency intra-day data on the S&P 500 aggregate market and VIX volatility indexes.
Molecular equilibrium with condensation
International Nuclear Information System (INIS)
Sharp, C.M.; Huebner, W.F.
1990-01-01
Minimization of the Gibbs energy of formation for species of chemical elements and compounds in their gas and condensed phases determines their relative abundances in a mixture in chemical equilibrium. The procedure is more general and more powerful than previous abundance determinations in multiphase astrophysical mixtures. Some results for astrophysical equations of state are presented, and the effects of condensation on opacity are briefly indicated. 18 refs
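For a single ideal-mixture isomerization A ⇌ B the Gibbs-minimization principle reduces to a closed form, which a short sketch can verify against a direct numerical scan (toy numbers of our own, not the paper's astrophysical mixtures):

```python
import numpy as np

def equilibrium_fraction(dG0, RT):
    # Minimize G(x) = x*dG0 + RT*[x ln x + (1-x) ln(1-x)]  (mu_A set to 0,
    # dG0 = mu_B - mu_A, ideal mixing). Setting dG/dx = 0 gives
    # x/(1-x) = exp(-dG0/RT), i.e. the familiar equilibrium constant.
    K = np.exp(-dG0 / RT)
    return K / (1.0 + K)

dG0, RT = 2.0, 1.0
x_closed = equilibrium_fraction(dG0, RT)

# Cross-check by scanning G(x) on a fine grid and taking the minimizer
x_grid = np.linspace(1e-6, 1.0 - 1e-6, 100_001)
G = x_grid * dG0 + RT * (x_grid * np.log(x_grid) + (1 - x_grid) * np.log(1 - x_grid))
x_numeric = x_grid[np.argmin(G)]
print(f"closed form x = {x_closed:.5f}, grid minimum x = {x_numeric:.5f}")
```

Multi-species, multi-phase mixtures replace this one-line minimization with a constrained optimization over all species abundances, but the governing idea (lowest total Gibbs energy subject to element conservation) is the same.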
Module description of TOKAMAK equilibrium code MEUDAS
Energy Technology Data Exchange (ETDEWEB)
Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
2002-01-01
The analysis of an axisymmetric MHD equilibrium serves as a foundation of TOKAMAK research, such as the design of devices, theoretical research, and the analysis of experimental results. For this reason, an efficient MHD analysis code has also been developed at JAERI from the start of TOKAMAK research. The free-boundary equilibrium code "MEUDAS", which uses both the DCR method (Double-Cyclic-Reduction method) and a Green's function, can specify the pressure and current distributions arbitrarily, and has been applied to the analysis of a broad range of physical subjects as a fast and highly precise code. The MHD convergence calculation technique in "MEUDAS" has also been built into various newly developed codes. This report explains in detail each module in "MEUDAS" for performing the convergence calculation in solving the MHD equilibrium. (author)
Non-equilibrium quantum heat machines
Alicki, Robert; Gelbwaser-Klimovsky, David
2015-11-01
Standard heat machines (engine, heat pump, refrigerator) are composed of a system (working fluid) coupled to at least two equilibrium baths at different temperatures and periodically driven by an external device (piston or rotor) sometimes called the work reservoir. The aim of this paper is to go beyond this scheme by considering environments which are stationary but cannot be decomposed into a few baths at thermal equilibrium. Such situations are important, for example in solar cells, chemical machines in biology, various realizations of laser cooling or nanoscopic machines driven by laser radiation. We classify non-equilibrium baths depending on their thermodynamic behavior and show that the efficiency of heat machines powered by them is limited by the generalized Carnot bound.
Noncompact Equilibrium Points and Applications
Directory of Open Access Journals (Sweden)
Zahra Al-Rumaih
2012-01-01
We prove an equilibrium existence result for vector functions defined on a noncompact domain and give some applications in optimization and to Nash equilibria in noncooperative games.
Equilibrium thermodynamics - Callen's postulational approach
Jongschaap, R.J.J.; Öttinger, Hans Christian
2001-01-01
In order to provide the background for nonequilibrium thermodynamics, we outline the fundamentals of equilibrium thermodynamics. Equilibrium thermodynamics must not only be obtained as a special case of any acceptable nonequilibrium generalization but, through its shining example, it also elucidates
MHD equilibrium with toroidal rotation
International Nuclear Information System (INIS)
Li, J.
1987-03-01
The present work attempts to formulate the equilibrium of an axisymmetric plasma with purely toroidal flow within ideal MHD theory. In general, the inertial term ρ(v·∇)v caused by plasma flow is so complicated that the equilibrium equation is completely different from the Grad-Shafranov equation. However, in the case of purely toroidal flow the equilibrium equation can be simplified so that it resembles the Grad-Shafranov equation. Generally one arbitrary two-variable function and two arbitrary single-variable functions, instead of only four single-variable functions, are allowed in the new equilibrium equation. Also, the boundary conditions of the rotating equilibrium (with purely toroidal fluid flow) are the same as those of the static equilibrium (without any fluid flow). So numerically one can calculate the rotating equilibrium as a static equilibrium. (author)
Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW
Olsen, Mike E.; Liu, Yen; Vinokur, M.; Olsen, Tom
2004-01-01
An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.
Aerospace Applications of Non-Equilibrium Plasma
Blankson, Isaiah M.
2016-01-01
Nonequilibrium plasma/non-thermal plasma/cold plasmas are being used in a wide range of new applications in aeronautics, active flow control, heat transfer reduction, plasma-assisted ignition and combustion, noise suppression, and power generation. Industrial applications may be found in pollution control, materials surface treatment, and water purification. In order for these plasma processes to become practical, efficient means of ionization are necessary. A primary challenge for these applications is to create a desired non-equilibrium plasma in air by preventing the discharge from transitioning into an arc. Of particular interest is the impact on simulations and experimental data with and without detailed consideration of non-equilibrium effects, and the consequences of neglecting non-equilibrium. This presentation will provide an assessment of the presence and influence of non-equilibrium phenomena for various aerospace needs and applications. Specific examples to be considered will include the forward energy deposition of laser-induced non-equilibrium plasmoids for sonic boom mitigation, weakly ionized flows obtained from pulsed nanosecond discharges for an annular Hall type MHD generator duct for turbojet energy bypass, and fundamental mechanisms affecting the design and operation of novel plasma-assisted reactive systems in dielectric liquids (water purification, in-pipe modification of fuels, etc.).
International Nuclear Information System (INIS)
Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man
2014-01-01
The uncertainty in the sampling-based method is evaluated by repeating transport calculations with a number of cross section sets sampled from the covariance uncertainty data. In a transport calculation with the sampling-based method the transport equation is not modified; therefore, the uncertainties of all responses, such as k_eff, reaction rates, flux and power distribution, can be obtained directly at one time without code modification. A major drawback, however, is the expensive computational load required for statistically reliable results (within a 0.95 confidence level). The purpose of this study is to improve the computational efficiency and obtain highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method, based on the central limit theorem, reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation: each sampled set is assigned to one active cycle group, which reduces the number of repeated Monte Carlo transport calculations required for reliable results. The method was verified on the GODIVA benchmark problem and the results were compared with those of the conventional sampling-based method. The criticality uncertainty evaluated for the GODIVA problem shows that the proposed sampling-based method efficiently decreases the number of Monte Carlo simulations required to evaluate the uncertainty of k_eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method.
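The sampling-based propagation idea can be sketched with a toy one-group model (all numbers and the response function below are illustrative assumptions, not evaluated nuclear data or an actual transport solver): draw many cross-section sets from a covariance matrix, evaluate the response once per set, and take the sample spread as the propagated uncertainty.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical one-group data: mean cross sections and a covariance matrix
# (illustrative numbers, not evaluated nuclear data).
mean_xs = np.array([1.85, 0.30])             # [nu*sigma_f, sigma_c], arbitrary units
cov_xs = np.array([[4.0e-4, 1.0e-5],
                   [1.0e-5, 9.0e-5]])

def k_inf(nu_sigma_f, sigma_c):
    """Toy response: one-group infinite-medium multiplication factor with a
    mocked-up fission/absorption split (the factor 0.4 is an assumption)."""
    return nu_sigma_f / (0.4 * nu_sigma_f + sigma_c)

# One 'transport calculation' per sampled cross-section set; the spread of
# the responses is the propagated uncertainty.
xs = rng.multivariate_normal(mean_xs, cov_xs, size=5000)
k = k_inf(xs[:, 0], xs[:, 1])

print(f"mean k_inf = {k.mean():.4f}")
print(f"std  k_inf = {k.std(ddof=1):.4f}  (relative {k.std(ddof=1)/k.mean():.2%})")
```

The paper's contribution amounts to amortizing these repeated evaluations inside a single Monte Carlo run rather than looping over independent simulations.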
Non-equilibrium thermodynamics
De Groot, Sybren Ruurds
1984-01-01
The study of thermodynamics is especially timely today, as its concepts are being applied to problems in biology, biochemistry, electrochemistry, and engineering. This book treats irreversible processes and phenomena: non-equilibrium thermodynamics. S. R. de Groot and P. Mazur, Professors of Theoretical Physics, present a comprehensive and insightful survey of the foundations of the field, providing the only complete discussion of the fluctuating linear theory of irreversible thermodynamics. The applications cover a wide range of topics: the theory of diffusion and heat conduction, fluid dyn
Maia, Alex S C; Nascimento, Sheila T; Nascimento, Carolina C N; Gebremedhin, Kifle G
2016-05-01
The effects of air temperature and relative humidity on the thermal equilibrium of goats in a tropical region were evaluated. Nine non-pregnant Anglo Nubian nanny goats were used in the study. An indirect calorimeter was designed and developed to measure oxygen consumption, carbon dioxide production, methane production and the water vapour pressure of the air exhaled by the goats. Physiological parameters (rectal temperature, skin temperature, hair-coat temperature, expired air temperature, and respiratory rate and volume) as well as environmental parameters (air temperature, relative humidity and mean radiant temperature) were measured. The results show that the respiration rate, ventilation rate and latent heat loss did not change significantly for air temperatures between 22 and 26°C. In this temperature range, metabolic heat was lost mainly by convection and long-wave radiation. For temperatures greater than 30°C, the goats maintained thermal equilibrium mainly by evaporative heat loss; at these higher air temperatures, the respiration and ventilation rates as well as body temperatures were significantly elevated. It can be concluded that for Anglo Nubian goats the upper limit of air temperature for comfort is around 26°C when the goats are protected from direct solar radiation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Sabadini, Edvaldo; Silva, Marcelo Alves da [Universidade Estadual de Campinas (UNICAMP), SP (Brazil); Ziglio, Claudio Marcos; Carvalho, Carlos Henrique Monteiro de; Rocha, Nelson de Oliveira [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas (CENPES)
2008-07-01
In this work the efficiency of five commercial additives that produce drag reduction in petroleum was determined and compared. The studies were carried out in a rheometer using samples of petroleum from Bacia de Campos diluted with 50% toluene. For this purpose the rheometer acts as a 'torquemeter', in which the magnitude of the drag reduction promoted by an additive is directly proportional to the difference in torque required to maintain the sample at a specific flow rate. The results show an excellent capability of the additives to promote drag reduction (up to 20%), with only small differences in efficiency among them. (author)
Energy Technology Data Exchange (ETDEWEB)
Baccouche, S., E-mail: souad.baccouche@cnstn.rnrt.tn [UR-MDTN, National Center for Nuclear Sciences and Technology, Technopole Sidi Thabet, 2020 Sidi Thabet (Tunisia); Al-Azmi, D., E-mail: ds.alazmi@paaet.edu.kw [Department of Applied Sciences, College of Technological Studies, Public Authority for Applied Education and Training, Shuwaikh, P.O. Box 42325, Code 70654 (Kuwait); Karunakara, N., E-mail: karunakara_n@yahoo.com [University Science Instrumentation Centre, Mangalore University, Mangalagangotri 574199 (India); Trabelsi, A., E-mail: adel.trabelsi@fst.rnu.tn [UR-MDTN, National Center for Nuclear Sciences and Technology, Technopole Sidi Thabet, 2020 Sidi Thabet (Tunisia); UR-UPNHE, Faculty of Sciences of Tunis, El-Manar University, 2092 Tunis (Tunisia)
2012-01-15
Gamma-ray measurements in terrestrial/environmental samples require highly efficient detectors because of the low radionuclide activity concentrations in the samples; scintillators are therefore suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples. This work describes a Monte Carlo method for constructing the full-energy efficiency calibration curves of both detectors using gamma-ray energies associated with the decay of naturally occurring radionuclides: ¹³⁷Cs (661 keV), ⁴⁰K (1460 keV), ²³⁸U (²¹⁴Bi, 1764 keV) and ²³²Th (²⁰⁸Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence summing effect occurring for the 2614 keV emission of ²⁰⁸Tl is assessed by simulation. The method provides an efficient tool to construct the full-energy efficiency calibration curve of scintillation detectors for any sample geometry and volume, in order to determine accurate activity concentrations in terrestrial samples. - Highlights: • CsI(Tl) and NaI(Tl) detectors were studied for the measurement of terrestrial samples. • A Monte Carlo method was used for efficiency calibration with natural gamma-emitting terrestrial radionuclides. • The coincidence summing effect occurring for the 2614 keV emission of ²⁰⁸Tl is assessed by simulation.
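The geometric part of such an efficiency calibration can be illustrated with a minimal Monte Carlo sketch (the geometry below is assumed for illustration and is unrelated to the detectors of the paper): sample isotropic emission directions from a point source and count the fraction intersecting the detector face, which can be checked against the closed-form solid-angle result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy geometry: point source on the axis of a flat circular detector
# face of radius 3.8 cm, at 10 cm source-detector distance.
r_det, dist = 3.8, 10.0
n_photons = 200_000

# Isotropic emission: cos(theta) uniform on [-1, 1].
cos_t = rng.uniform(-1.0, 1.0, n_photons)
sin_t = np.sqrt(1.0 - cos_t**2)

# A photon moving toward the detector (cos_t > 0) hits the face when
# dist*tan(theta) < r_det, written division-free as dist*sin < r_det*cos.
hits = (cos_t > 0) & (dist * sin_t < r_det * cos_t)

geometric_eff = hits.mean()
analytic = 0.5 * (1.0 - dist / np.hypot(dist, r_det))
print(f"MC geometric efficiency: {geometric_eff:.4f}")
print(f"solid-angle formula:     {analytic:.4f}")
```

A full-energy calibration then multiplies this geometric factor by energy-dependent intrinsic and peak-to-total efficiencies, which is what detailed Monte Carlo photon transport provides.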
A method for high accuracy determination of equilibrium relative humidity
DEFF Research Database (Denmark)
Jensen, O.M.
2012-01-01
This paper treats a new method for measuring equilibrium relative humidity and equilibrium dew-point temperature of a material sample. The developed measuring device is described – a Dew-point Meter – which by means of so-called Dynamic Dew-point Analysis permits quick and very accurate...
Efendiev, Y.; Datta-Gupta, A.; Ma, X.; Mallick, B.
2009-01-01
the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
Jovanović Filip P.; Berić Ivana M.; Jovanović Petar M.; Jovanović Aca D.
2016-01-01
This paper analyses the applicability of well-known risk management methodologies in energy efficiency projects in the industry. The possibilities of application of the selected risk management methodology are demonstrated within the project of the plants for injecting pulverized coal into blast furnaces nos. 1 and 2, implemented by the company US STEEL SERBIA d.o.o. in Smederevo. The aim of the project was to increase energy efficiency through the reductio...
Nanostructured energy devices equilibrium concepts and kinetics
Bisquert, Juan
2014-01-01
Due to the pressing needs of society, low cost materials for energy devices have experienced an outstanding development in recent times. In this highly multidisciplinary area, chemistry, material science, physics, and electrochemistry meet to develop new materials and devices that perform required energy conversion and storage processes with high efficiency, adequate capabilities for required applications, and low production cost. Nanostructured Energy Devices: Equilibrium Concepts and Kinetics introduces the main physicochemical principles that govern the operation of energy devices. It inclu
Statistical approach to partial equilibrium analysis
Wang, Yougui; Stanley, H. E.
2009-04-01
A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, named the willingness price, is highlighted and underpins the whole theory. The supply and demand functions are formulated as the distributions of the corresponding willing exchange over the willingness price, and the laws of supply and demand can be derived directly from these distributions. The characteristics of the excess demand function are analyzed and the necessary conditions for the existence and uniqueness of the equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach it can be proved rigorously that a market is efficient in the state of equilibrium.
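The willingness-price picture lends itself to a small numerical sketch (the distributions below are arbitrary assumptions, not taken from the paper): supply at a price p counts sellers whose willingness price is at most p, demand counts buyers whose willingness price is at least p, and the equilibrium price is where excess demand vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented willingness prices: sellers accept any price at or above theirs,
# buyers pay any price at or below theirs.
seller_wp = rng.normal(10.0, 2.0, 10_000)
buyer_wp = rng.normal(12.0, 2.0, 10_000)

def supply(p):           # willing sales at price p
    return int(np.sum(seller_wp <= p))

def demand(p):           # willing purchases at price p
    return int(np.sum(buyer_wp >= p))

def excess_demand(p):
    return demand(p) - supply(p)

# Bisection for the price where excess demand changes sign.
lo, hi = 0.0, 25.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if excess_demand(mid) > 0:
        lo = mid
    else:
        hi = mid
p_eq = 0.5 * (lo + hi)

print(f"equilibrium price ~ {p_eq:.2f}, "
      f"realized exchange ~ {min(supply(p_eq), demand(p_eq))}")
```

At the equilibrium price supply and demand nearly coincide, so the rationing rates of buyers and sellers are both close to one, consistent with the efficiency claim of the abstract.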
Energy Technology Data Exchange (ETDEWEB)
Rugi, F.; Becagli, S.; Ghedini, C.; Marconi, M.; Severi, M.; Traversi, R.; Udisti, R. [Dep. of Chemistry, University of Florence, Sesto F.no (Fl) (Italy); Calzolai, G.; Chiari, M.; Lucarelli, F.; Nava, S. [Dep. of Physics and Astronomy , University of Florence and INFN, Sesto F. no (Fl) (Italy)
2013-07-01
Full text: A recent EU regulation (EN 14902:2005) requires the quantification of selected metals in atmospheric particulate by mineralization with H₂O₂ and HNO₃ in a microwave oven. This method may conflict with the determination of the total metal content: the more the aerosol is enriched in crustal elements, the larger the difference between the two methods is expected to be, since H₂O₂ + HNO₃ extraction is not reliable for metals in silicate form. In order to evaluate the extracted fraction, PIXE and ICP-AES measurements were carried out on the two halves of a series of PM10 and PM2.5 samples collected on Teflon filters at an urban site near Florence (Italy). An ICP-AES (Inductively Coupled Plasma - Atomic Emission Spectroscopy) method was optimized with an ultrasonic nebuliser (CETAC 5000 AT+) in order to improve reproducibility and detection limits. Under these conditions it was possible to quantify Al, As, Cr, Cu, Fe, Mn, Ni, Pb and V at sub-ppb levels. PIXE analysis, using the external beam set-up at LABEC and a 3 MeV proton beam, was carried out to measure the total elemental content of the metals. By comparing the ICP-AES and PIXE results, a preliminary evaluation of the efficiency of the H₂O₂ and HNO₃ extraction method was performed. The obtained results (the mean values of the ICP-AES/PIXE ratio are reported in Table 1) show that the extraction procedure following the EN 14902 directive allows quantitative recoveries (80-120%, including the analytical uncertainties) for the majority of the analysed metals, especially for those mainly emitted by anthropic sources. This result points out that anthropic metals are present in the atmosphere as relatively available species (free metals, labile complexes, carbonates, oxides). On the contrary, lower recoveries were obtained for Al (mean value around 75%), a metal that has a relevant crustal fraction. Percentage of recovery of
International Nuclear Information System (INIS)
Kramer, S.J.; Milton, G.M.; Repta, C.J.W.
1995-06-01
The effect of variations in sample preparation and storage on the counting efficiency for ¹⁴C using a Carbo-Sorb/Permafluor E+ liquid scintillation cocktail has been studied, and optimum conditions are recommended. (author). 2 refs., 2 tabs., 4 figs
Equilibrium models and variational inequalities
Konnov, Igor
2007-01-01
The concept of equilibrium plays a central role in various applied sciences, such as physics (especially mechanics), economics, engineering, transportation, sociology, chemistry, biology and other fields. If one can formulate the equilibrium problem in the form of a mathematical model, solutions of the corresponding problem can be used for forecasting the future behavior of very complex systems and also for correcting the current state of the system under control. This book presents a unifying look at different equilibrium concepts in economics, including several models from related sciences.- Presents a unifying look at different equilibrium concepts and also the present state of investigations in this field- Describes static and dynamic input-output models, Walras, Cassel-Wald, spatial price, auction market, oligopolistic equilibrium models, transportation and migration equilibrium models- Covers the basics of theory and solution methods both for the complementarity and variational inequality probl...
Grinding kinetics and equilibrium states
Opoczky, L.; Farnady, F.
1984-01-01
The temporary and permanent equilibrium occurring during the initial stage of cement grinding does not indicate the end of comminution, but rather an increased energy consumption during grinding. The constant dynamic equilibrium occurs after a long grinding period indicating the end of comminution for a given particle size. Grinding equilibrium curves can be constructed to show the stages of comminution and agglomeration for certain particle sizes.
Mental Equilibrium and Rational Emotions
Eyal Winter; Ignacio Garcia-Jurado; Jose Mendez-Naya; Luciano Mendez-Naya
2009-01-01
We introduce emotions into an equilibrium notion. In a mental equilibrium each player "selects" an emotional state which determines the player's preferences over the outcomes of the game. These preferences typically differ from the players' material preferences. The emotional states interact to play a Nash equilibrium and in addition each player's emotional state must be a best response (with respect to material preferences) to the emotional states of the others. We discuss the concept behind...
Para-equilibrium phase diagrams
International Nuclear Information System (INIS)
Pelton, Arthur D.; Koukkari, Pertti; Pajarre, Risto; Eriksson, Gunnar
2014-01-01
Highlights: • A rapidly cooled system may attain a state of para-equilibrium. • In this state rapidly diffusing elements reach equilibrium but others are immobile. • Application of the Phase Rule to para-equilibrium phase diagrams is discussed. • A general algorithm to calculate para-equilibrium phase diagrams is described. - Abstract: If an initially homogeneous system at high temperature is rapidly cooled, a temporary para-equilibrium state may result in which rapidly diffusing elements have reached equilibrium but more slowly diffusing elements have remained essentially immobile. The best known example occurs when homogeneous austenite is quenched. A para-equilibrium phase assemblage may be calculated thermodynamically by Gibbs free energy minimization under the constraint that the ratios of the slowly diffusing elements are the same in all phases. Several examples of calculated para-equilibrium phase diagram sections are presented and the application of the Phase Rule is discussed. Although the rules governing the geometry of these diagrams may appear at first to be somewhat different from those for full equilibrium phase diagrams, it is shown that in fact they obey exactly the same rules with the following provision. Since the molar ratios of non-diffusing elements are the same in all phases at para-equilibrium, these ratios act, as far as the geometry of the diagram is concerned, like “potential” variables (such as T, pressure or chemical potentials) rather than like “normal” composition variables which need not be the same in all phases. A general algorithm to calculate para-equilibrium phase diagrams is presented. In the limit, if a para-equilibrium calculation is performed under the constraint that no elements diffuse, then the resultant phase diagram shows the single phase with the minimum Gibbs free energy at any point on the diagram; such calculations are of interest in physical vapor deposition when deposition is so rapid that phase
Directory of Open Access Journals (Sweden)
Salime Jafari
2012-10-01
Full Text Available Background and Aim: Due to the limitations of standardized tests for Persian speakers with language disorders, spontaneous language sample collection is an important part of a language assessment protocol. Therefore, the selection of a language sampling method that provides information on linguistic competence in a short time is important. In this study, we compared language samples elicited with picture description and storytelling methods in order to determine the effectiveness of the two methods. Methods: Thirty first-grade elementary school girls were selected by simple sampling. To investigate the picture description method, we used two illustrated stories with four pictures each. Language samples for the storytelling method were collected by having the children tell a famous children's story. To determine the effectiveness of the two methods, two indices, duration of sampling and mean length of utterance (MLU), were compared. Results: There was no significant difference in MLU between the description and storytelling methods (p>0.05). However, the duration of sampling was shorter with the picture description method than with the storytelling method (p<0.05). Conclusion: The findings show that the two methods of picture description and storytelling have the same potential for language sampling. Since the picture description method can provide language samples of the same complexity in a shorter time than storytelling, it can be used as a beneficial method for clinical purposes.
International Nuclear Information System (INIS)
Esaka, Fumitaka; Watanabe, Kazuo; Fukuyama, Hiroyasu; Onodera, Takashi; Esaka, Konomi T.; Magara, Masaaki; Sakurai, Satoshi; Usuda, Shigekazu
2004-01-01
A new particle recovery method and a sensitive screening method were developed for subsequent isotope ratio analysis of uranium particles in safeguards swipe samples. The particles in the swipe sample were recovered onto a carrier by means of a vacuum suction-impact collection method. When a grease coating was applied to the carrier, the recovery efficiency improved to 48±9%, which is superior to that of the conventionally used ultrasonication method. Prior to isotope ratio analysis with secondary ion mass spectrometry (SIMS), total reflection X-ray fluorescence spectrometry (TXRF) was applied to screen the samples for the presence of uranium particles. By the use of Si carriers in TXRF analysis, a detection limit of 22 pg was achieved for uranium. By combining these methods with SIMS, the ²³⁵U/²³⁸U isotope ratios of individual uranium particles were efficiently determined. (author)
Oyarzún, Bernardo; Mognetti, Bortolo Matteo
2018-03-01
We present a new simulation technique to study systems of polymers functionalized by reactive sites that bind/unbind forming reversible linkages. Functionalized polymers feature self-assembly and responsive properties that are unmatched by the systems lacking selective interactions. The scales at which the functional properties of these materials emerge are difficult to model, especially in the reversible regime where such properties result from many binding/unbinding events. This difficulty is related to large entropic barriers associated with the formation of intra-molecular loops. In this work, we present a simulation scheme that sidesteps configurational costs by dedicated Monte Carlo moves capable of binding/unbinding reactive sites in a single step. Cross-linking reactions are implemented by trial moves that reconstruct chain sections attempting, at the same time, a dimerization reaction between pairs of reactive sites. The model is parametrized by the reaction equilibrium constant of the reactive species free in solution. This quantity can be obtained by means of experiments or atomistic/quantum simulations. We use the proposed methodology to study the self-assembly of single-chain polymeric nanoparticles, starting from flexible precursors carrying regularly or randomly distributed reactive sites. We focus on understanding differences in the morphology of chain nanoparticles when linkages are reversible as compared to the well-studied case of irreversible reactions. Intriguingly, we find that the size of regularly functionalized chains, in good solvent conditions, is non-monotonous as a function of the degree of functionalization. We clarify how this result follows from excluded volume interactions and is peculiar of reversible linkages and regular functionalizations.
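The role of the equilibrium constant as the sole parameter can be illustrated with a heavily simplified sketch (independent two-state reactive pairs; the chain-reconstruction moves of the actual scheme are not modeled, and K_eq is an invented value): Metropolis moves that open and close a linkage with a free-energy change of -ln K_eq reproduce the expected bound fraction K_eq/(1 + K_eq).

```python
import math
import random

random.seed(42)

# Dimensionless equilibrium constant of the free reactive species (assumed).
K_eq = 3.0
E_bound = -math.log(K_eq)        # bound-state energy in kT; exp(-E_bound) = K_eq
M, sweeps, burn_in = 500, 4000, 500

bound = [False] * M
acc = []
for sweep in range(sweeps):
    for i in range(M):
        # Propose flipping linkage i; Metropolis acceptance on the energy change.
        dE = E_bound if not bound[i] else -E_bound
        if dE <= 0 or random.random() < math.exp(-dE):
            bound[i] = not bound[i]
    if sweep >= burn_in:
        acc.append(sum(bound) / M)

frac_bound = sum(acc) / len(acc)
print(f"bound fraction {frac_bound:.3f} vs theory {K_eq / (1 + K_eq):.3f}")
```

In the actual method the same acceptance rule is coupled to chain-section regrowth, so that the entropic cost of loop formation is paid inside a single composite move rather than through many diffusive steps.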
Directory of Open Access Journals (Sweden)
Alinune N Kabaghe
Full Text Available In the context of malaria elimination, interventions will need to target high-burden areas to further reduce transmission. Current tools to monitor and report disease burden lack the capacity to continuously detect the fine-scale spatial and temporal variations of disease distribution exhibited by malaria. These tools use random sampling techniques that are inefficient for capturing underlying heterogeneity, while health facility data in resource-limited settings are inaccurate. Continuous community surveys of malaria burden provide real-time results of local spatio-temporal variation. Adaptive geostatistical design (AGD) improves prediction of the outcome of interest compared to current random sampling techniques. We present findings of continuous malaria prevalence surveys using an adaptive sampling design. We conducted repeated cross-sectional surveys guided by an adaptive sampling design to monitor the prevalence of malaria parasitaemia and anaemia in children below five years old in the communities living around Majete Wildlife Reserve in Chikwawa district, Southern Malawi. AGD sampling uses previously collected data to sample new locations of high prediction variance or where the prediction exceeds a set threshold. We fitted a geostatistical model to predict malaria prevalence in the area. We conducted five rounds of sampling and tested 876 children aged 6-59 months from 1377 households over a 12-month period. Malaria prevalence prediction maps showed spatial heterogeneity and the presence of hotspots where predicted malaria prevalence was above 30%; predictors of malaria included age, socio-economic status and ownership of insecticide-treated mosquito nets. Continuous malaria prevalence surveys using adaptive sampling increased malaria prevalence prediction accuracy. Results from the surveys were readily available after data collection. The tool can assist local managers to target malaria control interventions in areas with the greatest health impact and is
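A single adaptive-design step can be caricatured as follows (a toy proxy, not the AGD algorithm of the study; all locations and outcomes are invented): among candidate locations, sample next where a binomial prediction-variance proxy, built from distance-weighted nearby observations, is largest.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: surveyed household locations on a 10x10 km square and their
# binary parasitaemia outcomes.
obs_xy = rng.uniform(0, 10, size=(30, 2))
obs_pos = rng.random(30) < 0.25

# Candidate grid of possible next survey locations.
cand = np.array([(x, y) for x in range(11) for y in range(11)], dtype=float)

def variance_proxy(c):
    """Crude prediction-variance proxy: smoothed local prevalence with a
    distance-decayed effective sample size (decay length 2 km, assumed)."""
    d = np.linalg.norm(obs_xy - c, axis=1)
    w = np.exp(-d / 2.0)
    n_eff = w.sum()
    p_hat = (w @ obs_pos + 0.5) / (n_eff + 1.0)
    return p_hat * (1.0 - p_hat) / (n_eff + 1.0)

scores = np.array([variance_proxy(c) for c in cand])
next_site = cand[scores.argmax()]
print("next sampling location:", next_site)
```

The real design replaces this proxy with the prediction variance of a fitted geostatistical model, but the selection logic, sample where uncertainty is highest, is the same.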
Czech Academy of Sciences Publication Activity Database
Jůza, T.; Čech, Martin; Kubečka, Jan; Vašek, Mojmír; Peterka, Jiří; Matěna, Josef
2010-01-01
Roč. 105, č. 3 (2010), s. 125-133 ISSN 0165-7836 R&D Projects: GA ČR(CZ) GP206/09/P266; GA AV ČR(CZ) 1QS600170504 Institutional research plan: CEZ:AV0Z60170517 Keywords : trawl efficiency * fry density * perch Subject RIV: EH - Ecology, Behaviour Impact factor: 1.656, year: 2010
International Nuclear Information System (INIS)
Pop, O.M.; Stets, M.V.; Maslyuk, V.T.
2015-01-01
We consider the gamma-spectrometric complex of the IEP of the NAS of Ukraine, which uses passive multilayer external shielding (the complex was built in 1989). We have developed and investigated a system for stabilizing and lowering the background of the gamma-spectrometric complex. The efficiency factors of the shielding are considered as metrological coefficients; their calculation and analysis show that their values differ for different gamma-quantum energies and gamma-active nuclides.
International Nuclear Information System (INIS)
Roh, Heui-Seol
2015-01-01
Chemical energy transfer mechanisms at finite temperature are explored by a chemical energy transfer theory which is capable of investigating various chemical mechanisms under non-equilibrium, quasi-equilibrium, and equilibrium conditions. Gibbs energy fluxes are obtained as a function of chemical potential, time, and displacement. Diffusion, convection, internal convection, and internal equilibrium chemical energy fluxes are demonstrated. The theory reveals that there are chemical energy flux gaps and broken discrete symmetries at the activation chemical potential, time, and displacement. The statistical, thermodynamic theory is the unification of diffusion and internal convection chemical reactions, which reduces to the non-equilibrium generalization beyond the quasi-equilibrium theories of migration and diffusion processes. The relationship between the kinetic theories of chemical and electrochemical reactions is also explored. The theory is applied to explore non-equilibrium chemical reactions as an illustration. Three variable separation constants indicate particle number constants and play key roles in describing the distinct chemical reaction mechanisms. The kinetics of chemical energy transfer accounts for four control mechanisms of chemical reactions, namely activation, concentration, transition, and film chemical reactions. - Highlights: • Chemical energy transfer theory is proposed for non-equilibrium, quasi-equilibrium, and equilibrium. • Gibbs energy fluxes are expressed by chemical potential, time, and displacement. • The relationship between chemical and electrochemical reactions is discussed. • The theory is applied to explore non-equilibrium energy transfer in chemical reactions. • The kinetics of non-equilibrium chemical reactions shows the four control mechanisms.
Energy Technology Data Exchange (ETDEWEB)
Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Khawaja, M. Sami [The Cadmus Group, Portland, OR (United States); Rushton, Josh [The Cadmus Group, Portland, OR (United States); Keeling, Josh [The Cadmus Group, Portland, OR (United States)
2017-09-01
Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
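A standard sample-size calculation of the kind used in such evaluations can be sketched as follows (the 90/10 convention and cv ≈ 0.5 are common planning assumptions, not values taken from this chapter):

```python
import math

def sample_size(cv, rel_precision=0.10, z=1.645, population=None):
    """Participants to sample so the estimated mean saving meets the target
    relative precision at the given confidence (z = 1.645 for 90%).
    cv is the assumed coefficient of variation of savings across participants."""
    n0 = (z * cv / rel_precision) ** 2
    if population is not None:
        n0 = n0 / (1.0 + n0 / population)   # finite-population correction
    return math.ceil(n0)

print(sample_size(cv=0.5))                  # common 90/10 planning value
print(sample_size(cv=0.5, population=400))  # small program: fewer needed
```

The finite-population correction is what makes a census unnecessary even for modest programs: once the sample is a sizable fraction of the participant population, the required n drops well below the population size.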
Directory of Open Access Journals (Sweden)
Mao-Jung Lee
2018-01-01
Full Text Available Tocopherols and tocotrienols, collectively known as vitamin E, have received a great deal of attention because of their interesting biological activities. In the present study, we reexamined and improved previous methods of sample preparation and the conditions of high-performance liquid chromatography for more accurate quantification of tocopherols, tocotrienols and their major chain-degradation metabolites. For the analysis of serum tocopherols/tocotrienols, we reconfirmed our method of mixing serum with ethanol followed by hexane extraction. For the analysis of tissue samples, we improved our methods by extracting tocopherols/tocotrienols directly from tissue homogenate with hexane. For the analysis of total amounts (conjugated and unconjugated forms of side-chain degradation metabolites, the samples need to be deconjugated by incubating with β-glucuronidase and sulfatase; serum samples can be directly used for the incubation, whereas for tissue homogenates a pre-deproteination step is needed. The present methods are sensitive, convenient and are suitable for the determination of different forms of vitamin E and their metabolites in animal and human studies. Results from the analysis of serum, liver, kidney, lung and urine samples from mice that had been treated with mixtures of tocotrienols and tocopherols are presented as examples.
Equilibrium and shot noise in mesoscopic systems
Energy Technology Data Exchange (ETDEWEB)
Martin, T.
1994-10-01
Within the last decade, there has been a resurgence of interest in the study of noise in mesoscopic devices, both experimentally and theoretically. Noise in solid state devices can have different origins: there is 1/f noise, which is believed to arise from fluctuations in the resistance of the sample due to the motion of impurities. On top of this contribution is a frequency-independent component associated with the stochastic nature of electron transport, which will be the focus of this paper. If the sample considered is small enough that dephasing and inelastic effects can be neglected, equilibrium (thermal) and excess noise can be completely described in terms of the elastic scattering properties of the sample. As mentioned above, noise arises as a consequence of the random processes governing the transport of electrons. Here, there are two sources of randomness: first, electrons incident on the sample occupy a given energy state with a probability given by the Fermi-Dirac distribution function. Secondly, electrons can be transmitted across the sample or reflected back into the reservoir from which they came, with probabilities given by the quantum mechanical transmission/reflection coefficients. Equilibrium noise refers to the case where no bias voltage is applied between the leads connected to the sample, where thermal agitation alone allows the electrons close to the Fermi level to tunnel through the sample. In general, equilibrium noise is related to the conductance of the sample via the Johnson-Nyquist formula. In the presence of a bias, in the classical regime, one expects to recover the full shot noise ⟨ΔI²⟩ = 2eIΔν (with Δν the measurement bandwidth), as was observed a long time ago in vacuum diodes. In the mesoscopic regime, however, excess noise is reduced below the shot noise level. The author introduces a more intuitive picture, in which the current passing through the device is a superposition of pulses, or electron wave packets, which can be transmitted or reflected.
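The suppression below the Poisson value can be made concrete with the scattering-theory expression for the Fano factor, F = Σₙ Tₙ(1 − Tₙ) / Σₙ Tₙ over the transmission eigenvalues (a standard result of mesoscopic noise theory, sketched here rather than quoted from this paper):

```python
import numpy as np

def fano_factor(T):
    """Zero-temperature shot-noise suppression factor F = sum T(1-T) / sum T
    for a set of transmission eigenvalues T_n; F = 1 is full (Poissonian)
    shot noise, F = 0 is noiseless."""
    T = np.asarray(T, dtype=float)
    return float(np.sum(T * (1.0 - T)) / np.sum(T))

print(fano_factor([0.01, 0.02]))  # tunnel-junction limit: F close to 1
print(fano_factor([1.0, 1.0]))    # fully open channels: F = 0, noiseless
print(fano_factor([0.5]))         # half-transmitting channel: F = 0.5
```

The factor T(1 − T) is exactly the partition noise of the wave-packet picture above: a pulse that is surely transmitted (T = 1) or surely reflected (T = 0) carries no randomness and hence no noise.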
Fundamental functions in equilibrium thermodynamics
Horst, H.J. ter
In the standard presentations of the principles of Gibbsian equilibrium thermodynamics one can find several gaps in the logic. For a subject that is as widely used as equilibrium thermodynamics, it is of interest to clear up such questions of mathematical rigor. In this paper it is shown that using
Enhanced conformational sampling using enveloping distribution sampling.
Lin, Zhixiong; van Gunsteren, Wilfred F
2013-10-14
Alleviating insufficient conformational sampling in biomolecular simulations is still a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge, and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10/12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to the neglect of relevant helical configurations. In simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, while sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations, with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.
A Multiperiod Equilibrium Pricing Model
Directory of Open Access Journals (Sweden)
Minsuk Kwak
2014-01-01
We propose an equilibrium pricing model in a dynamic multiperiod stochastic framework with uncertain income. The market contains one tradable risky asset (stock/commodity), one nontradable underlying (temperature), and a contingent claim (a weather derivative) written on the tradable risky asset and the nontradable underlying. The contingent claim is priced in equilibrium by the optimal strategies of a representative agent and the market-clearing condition. The risk preferences are of exponential type with a stochastic coefficient of risk aversion. Both the subgame-perfect strategy and the naive strategy are considered and the corresponding equilibrium prices are derived. From the numerical results we examine how the equilibrium prices vary in response to changes in model parameters and highlight the importance of our equilibrium pricing principle.
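Equilibrium pricing under exponential preferences is closely related to certainty-equivalent (utility-indifference) valuation. As an illustrative one-period sketch only (the paper's multiperiod equilibrium model is richer; the function name and sample payoffs are hypothetical), the indifference price of a nontradable payoff X under exponential utility with risk aversion γ is p = −(1/γ) ln E[exp(−γX)]:

```python
import math

def exp_utility_indifference_price(payoffs, gamma):
    """Certainty-equivalent price of a payoff X under exponential
    utility: p = -(1/gamma) * ln E[exp(-gamma * X)], with the
    expectation taken over equally likely scenarios."""
    mean_exp = sum(math.exp(-gamma * x) for x in payoffs) / len(payoffs)
    return -math.log(mean_exp) / gamma
```

For a riskless payoff the price equals the payoff itself; for a risky payoff the price lies below the mean, and it decreases further as the risk aversion γ grows.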
Non-equilibrium phase transitions
Henkel, Malte; Lübeck, Sven
2009-01-01
This book describes two main classes of non-equilibrium phase transitions: (a) the statics and dynamics of transitions into an absorbing state, and (b) dynamical scaling in far-from-equilibrium relaxation behaviour and ageing. The first volume begins with an introductory chapter which recalls the main concepts of phase transitions, set for the convenience of the reader in an equilibrium context. The extension to non-equilibrium systems is made by using directed percolation as the main paradigm of absorbing phase transitions, and in view of the richness of the known results an entire chapter is devoted to it, including a discussion of recent experimental results. Scaling theories and a large set of both numerical and analytical methods for the study of non-equilibrium phase transitions are thoroughly discussed. The techniques used for directed percolation are then extended to other universality classes, and many important results on model parameters are provided for easy reference.
Thermochemical equilibrium modelling of a gasifying process
International Nuclear Information System (INIS)
Melgar, Andres; Perez, Juan F.; Laget, Hannes; Horillo, Alfonso
2007-01-01
This article discusses a mathematical model for the thermochemical processes in a downdraft biomass gasifier. The model combines the chemical equilibrium and the thermodynamic equilibrium of the global reaction, predicting the final composition of the producer gas as well as its reaction temperature. Once the composition of the producer gas is obtained, a range of parameters can be derived, such as the cold gas efficiency of the gasifier, the amount of dissociated water in the process and the heating value and engine fuel quality of the gas. The model has been validated experimentally. This work includes a parametric study of the influence of the gasifying relative fuel/air ratio and the moisture content of the biomass on the characteristics of the process and the producer gas composition. The model helps to predict the behaviour of different biomass types and is a useful tool for optimizing the design and operation of downdraft biomass gasifiers.
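As a toy illustration of coupling chemical equilibrium to temperature (not the gasifier model of the paper; the function names are illustrative), one can solve a single gas-phase equilibrium, the water-gas shift reaction CO + H2O ⇌ CO2 + H2, for its equilibrium extent by bisection, using Moe's widely quoted correlation for the equilibrium constant:

```python
import math

def k_wgs(T):
    """Water-gas shift equilibrium constant, Moe's correlation (T in K)."""
    return math.exp(4577.8 / T - 4.33)

def wgs_extent(n_co, n_h2o, n_co2, n_h2, T, tol=1e-12):
    """Equilibrium extent x of CO + H2O <=> CO2 + H2 from initial mole
    numbers, found by bisection (total moles are conserved, so mole
    numbers can be used in place of mole fractions)."""
    K = k_wgs(T)

    def f(x):
        # Positive when the reaction quotient exceeds K.
        return (n_co2 + x) * (n_h2 + x) - K * (n_co - x) * (n_h2o - x)

    lo, hi = -min(n_co2, n_h2), min(n_co, n_h2o)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The exothermic shift converts more CO at lower temperature, which a full gasifier model captures by iterating such equilibria together with an energy balance.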
Instability of quantum equilibrium in Bohm's dynamics.
Colin, Samuel; Valentini, Antony
2014-11-08
We consider Bohm's second-order dynamics for arbitrary initial conditions in phase space. In principle, Bohm's dynamics allows for 'extended' non-equilibrium, with initial momenta not equal to the gradient of phase of the wave function (as well as initial positions whose distribution departs from the Born rule). We show that extended non-equilibrium does not relax in general and is in fact unstable. This is in sharp contrast with de Broglie's first-order dynamics, for which non-standard momenta are not allowed and which shows an efficient relaxation to the Born rule for positions. On this basis, we argue that, while de Broglie's dynamics is a tenable physical theory, Bohm's dynamics is not. In a world governed by Bohm's dynamics, there would be no reason to expect to see an effective quantum theory today (even approximately), in contradiction with observation.
Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes
2012-03-01
Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.
International Nuclear Information System (INIS)
Meng, Ming; Shang, Wei; Zhao, Xiaoli; Niu, Dongxiao; Li, Wei
2015-01-01
The coordinated actions of the central and the provincial governments are important in improving China's energy efficiency. This paper uses a three-dimensional decomposition model to measure the contribution of each province in improving the country's energy efficiency and a small-sample hybrid model to forecast this contribution. Empirical analysis draws the following conclusions, which are useful for the central government in adjusting its provincial energy-related policies. (a) There are two important areas for the Chinese government to improve its energy efficiency: adjusting the provincial economic structure and controlling the number of small-scale private industrial enterprises; (b) Except for a few outliers, the energy efficiency growth rates of the northern provinces are higher than those of the southern provinces; provinces with high growth rates tend to converge geographically; (c) With regard to the energy sustainable development level, Beijing, Tianjin, Jiangxi, and Shaanxi are the best performers and Heilongjiang, Shanxi, Shanghai, and Guizhou are the worst performers; (d) By 2020, China's energy efficiency may reach 24.75 thousand yuan per ton of standard coal; and (e) Three development scenarios are designed to forecast China's energy consumption in 2012–2020. - Highlights: • Decomposition and forecasting models are used to analyze China's energy efficiency. • China should focus on the small industrial enterprises and local protectionism. • The energy sustainable development level of each province is evaluated. • Geographic distribution characteristics of energy efficiency changes are revealed. • Future energy efficiency and energy consumption are forecast.
Directory of Open Access Journals (Sweden)
Jovanović Filip P.
2016-01-01
This paper analyses the applicability of well-known risk management methodologies to energy efficiency projects in industry. The application of the selected risk management methodology is demonstrated on the project of plants for injecting pulverized coal into blast furnaces nos. 1 and 2, implemented by the company US STEEL SERBIA d.o.o. in Smederevo. The aim of the project was to increase energy efficiency by reducing the quantity of coke, whose production requires large amounts of energy, reducing harmful exhaust emissions, and increasing the productivity of the blast furnaces through lower production costs. The project was complex and costly, so it was necessary to predict risk events and plan responses to identified risks at an early stage of implementation, during project design, in order to minimise losses and complete the project within the defined time and cost limits. [Project of the Ministry of Science of the Republic of Serbia, no. 179081: Researching contemporary tendencies of strategic management using specialized management disciplines in function of competitiveness of Serbian economy]
Grand, I.; Bellon-Fontaine, M.-N.; Herry, J.-M.; Hilaire, D.; Moriconi, F.-X.; Naïtali, M.
2011-01-01
The standard test methods used to assess the efficiency of a disinfectant applied to surfaces are often based on counting the microbial survivors sampled in a liquid, but total cell removal from surfaces is seldom achieved. One might therefore wonder whether evaluations of microbial survivors in liquid-sampled cells are representative of the levels of survivors in whole populations. The present study was thus designed to determine the “damaged/undamaged” status induced by a peracetic acid disinfection for Bacillus atrophaeus spores deposited on glass coupons directly on this substrate and to compare it to the status of spores collected in liquid by a sampling procedure. The method utilized to assess the viability of both surface-associated and liquid-sampled spores included fluorescence labeling with a combination of Syto 61 and Chemchrome V6 dyes and quantifications by analyzing the images acquired by confocal laser scanning microscopy. The principal result of the study was that the viability of spores sampled in the liquid was found to be poorer than that of surface-associated spores. For example, after 2 min of peracetic acid disinfection, less than 17% ± 5% of viable cells were detected among liquid-sampled cells compared to 79% ± 5% or 47% ± 4%, respectively, when the viability was evaluated on the surface after or without the sampling procedure. Moreover, assessments of the survivors collected in the liquid phase, evaluated using the microscopic method and standard plate counts, were well correlated. Evaluations based on the determination of survivors among the liquid-sampled cells can thus overestimate the efficiency of surface disinfection procedures. PMID:21742922
Wei, Shih-Chun; Fan, Shen; Lien, Chia-Wen; Unnikrishnan, Binesh; Wang, Yi-Sheng; Chu, Han-Wei; Huang, Chih-Ching; Hsu, Pang-Hung; Chang, Huan-Tsung
2018-03-20
A graphene oxide (GO) nanosheet-modified N⁺-nylon membrane (GOM) has been prepared and used as an extraction and spray-ionization substrate for robust mass spectrometric detection of malachite green (MG), a highly toxic disinfectant, in liquid samples and fish meat. The GOM is prepared by self-deposition of a GO thin film onto an N⁺-nylon membrane, which has been used for efficient extraction of MG from aquaculture water samples or homogenized fish meat samples. Having a dissociation constant of 2.17 × 10⁻⁹ M⁻¹, the GOM allows extraction of approximately 98% of 100 nM MG. Coupling of the GOM-spray with an ion-trap mass spectrometer allows quantitation of MG in aquaculture freshwater and seawater samples down to nanomolar levels. Furthermore, the system possesses high selectivity and sensitivity for the quantitation of MG and its metabolite (leucomalachite green) in fish meat samples. With the easy extraction and efficient spray-ionization properties of the GOM, this membrane spray-mass spectrometry technique is relatively simple and fast in comparison to traditional LC-MS/MS methods for the quantitation of MG and its metabolite in aquaculture products.
Directory of Open Access Journals (Sweden)
Jonathan A Scolnick
Fusion genes are known to be key drivers of tumor growth in several types of cancer. Traditionally, detecting fusion genes has been a difficult task based on fluorescence in situ hybridization to detect chromosomal abnormalities. More recently, RNA sequencing has enabled an increased pace of fusion gene identification. However, RNA-Seq is inefficient for the identification of fusion genes due to the high number of sequencing reads needed to detect the small number of fusion transcripts present in cells of interest. Here we describe a method, Single Primer Enrichment Technology (SPET), for targeted RNA sequencing that is customizable to any target genes, is simple to use, and efficiently detects gene fusions. Using SPET to target 5701 exons of 401 known cancer fusion genes for sequencing, we were able to identify known and previously unreported gene fusions from both fresh-frozen and formalin-fixed paraffin-embedded (FFPE) tissue RNA in both normal tissue and cancer cells.
Non-equilibrium supramolecular polymerization.
Sorrenti, Alessandro; Leira-Iglesias, Jorge; Markvoort, Albert J; de Greef, Tom F A; Hermans, Thomas M
2017-09-18
Supramolecular polymerization has been traditionally focused on the thermodynamic equilibrium state, where one-dimensional assemblies reside at the global minimum of the Gibbs free energy. The pathway and rate to reach the equilibrium state are irrelevant, and the resulting assemblies remain unchanged over time. In the past decade, the focus has shifted to kinetically trapped (non-dissipative non-equilibrium) structures that heavily depend on the method of preparation (i.e., pathway complexity), and where the assembly rates are of key importance. Kinetic models have greatly improved our understanding of competing pathways, and shown how to steer supramolecular polymerization in the desired direction (i.e., pathway selection). The most recent innovation in the field relies on energy or mass input that is dissipated to keep the system away from the thermodynamic equilibrium (or from other non-dissipative states). This tutorial review aims to provide the reader with a set of tools to identify different types of self-assembled states that have been explored so far. In particular, we aim to clarify the often unclear use of the term "non-equilibrium self-assembly" by subdividing systems into dissipative, and non-dissipative non-equilibrium states. Examples are given for each of the states, with a focus on non-dissipative non-equilibrium states found in one-dimensional supramolecular polymerization.
Directory of Open Access Journals (Sweden)
C. Fountoukis
2007-09-01
This study presents ISORROPIA II, a thermodynamic equilibrium model for the K⁺–Ca²⁺–Mg²⁺–NH₄⁺–Na⁺–SO₄²⁻–NO₃⁻–Cl⁻–H₂O aerosol system. A comprehensive evaluation of its performance is conducted against water uptake measurements for laboratory aerosol and predictions of the SCAPE2 thermodynamic module over a wide range of atmospherically relevant conditions. The two models agree well, to within 13% for aerosol water content and total PM mass, 16% for aerosol nitrate and 6% for aerosol chloride and ammonium. The largest discrepancies were found under conditions of low RH, primarily from differences in the treatment of water uptake and solid-state composition. In terms of computational speed, ISORROPIA II was more than an order of magnitude faster than SCAPE2, with robust and rapid convergence under all conditions. The addition of crustal species does not slow down the thermodynamic calculations (compared to the older ISORROPIA code) because of optimizations in the activity coefficient calculation algorithm. Based on its computational rigor and performance, ISORROPIA II appears to be a highly attractive alternative for use in large-scale air quality and atmospheric transport models.
Spontaneity and Equilibrium: Why "ΔG &lt; 0 Denotes a Spontaneous Process" and "ΔG = 0 Means the System Is at Equilibrium" Are Incorrect
Raff, Lionel M.
2014-01-01
The fundamental criteria for chemical reactions to be spontaneous in a given direction are generally incorrectly stated as ΔG &lt; 0. The criteria for equilibrium are likewise misstated as being ΔG = 0 or ΔA = 0. Following a brief review of the…
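Raff's point can be stated compactly: at constant T and P (with only PV work), the direction of reaction is governed by the slope of G along the extent of reaction ξ at the current composition, not by a finite difference ΔG between tabulated end states. A hedged sketch of the corrected statements:

```latex
% Correct criteria at constant T and P (PV work only): the slope of G
% along the extent of reaction \xi, evaluated at the current
% composition, governs the direction -- not a finite \Delta G.
\left(\frac{\partial G}{\partial \xi}\right)_{T,P} < 0
  \quad\text{(spontaneous in the forward direction)},
\qquad
\left(\frac{\partial G}{\partial \xi}\right)_{T,P} = 0
  \quad\text{(equilibrium, } \xi = \xi_{\mathrm{eq}}\text{)}.
```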
Directory of Open Access Journals (Sweden)
Arzu İrvem
2016-10-01
Background: E. histolytica is among the common causes of acute gastroenteritis. The pathogenic species E. histolytica and the nonpathogenic species E. dispar cannot be morphologically differentiated, although correct identification of these protozoans is important for treatment and public health. In many laboratories, the screening of leukocytes, erythrocytes, amoebic cysts, trophozoites and parasite eggs is performed using Native-Lugol's iodine for pre-diagnosis. Aims: In this study, we aimed to investigate the frequency of E. histolytica in stool samples collected from 788 patients residing in the Anatolian region of İstanbul who presented with gastrointestinal complaints. We used the information obtained to evaluate the effectiveness of microscopic examination when used in combination with the E. histolytica adhesin antigen test. Study Design: Retrospective cross-sectional study. Methods: Preparations of stool samples stained with Native-Lugol's iodine were evaluated using the E. histolytica adhesin test and examined using standard light microscopy at ×40 magnification. Pearson's chi-square and Fisher's exact tests were used for statistical analysis. Logistic regression analysis was used for multivariate analysis. Results: Of 788 samples, 38 (4.8%) were positive for E. histolytica adhesin antigens. When evaluated together with the presence of erythrocytes, leukocytes, cysts, and trophozoites using logistic regression analysis, leukocyte positivity was significantly higher: leukocyte positivity increased the odds of adhesin test-positivity 2.530-fold (95% CI = 1.01–6.330). Adhesin test-positivity was significant (p = 0.047). Conclusion: In line with these findings, the consistency between the presence of cysts and erythrocytes and adhesin test-positivity was found to be highly significant, but that of higher levels of leukocytes was found to be discordant. It was concluded that leukocytes and trophozoites were…
Helical axis stellarator equilibrium model
International Nuclear Information System (INIS)
Koniges, A.E.; Johnson, J.L.
1985-02-01
An asymptotic model is developed to study MHD equilibria in toroidal systems with a helical magnetic axis. Using a characteristic coordinate system based on the vacuum field lines, the equilibrium problem is reduced to a two-dimensional generalized partial differential equation of the Grad-Shafranov type. A stellarator-expansion free-boundary equilibrium code is modified to solve the helical-axis equations. The expansion model is used to predict the equilibrium properties of Asperators NP-3 and NP-4. Numerically determined flux surfaces, magnetic well, transform, and shear are presented. The equilibria show a toroidal Shafranov shift.
The nuclear quantum liquid off equilibrium
International Nuclear Information System (INIS)
Bjoernholm, S.
1986-01-01
Fusion, fission, quasifission and deep inelastic scattering of heavy ions sample the behaviour of the nuclear quantum liquid when it is far from equilibrium. This considerably augments the picture of nuclei obtained on the basis of specific perturbative disturbances of the equilibrium configurations, and from compound nucleus decay. Some peculiar properties of a quantum liquid composed of fermions (³He or nucleons) with a mean free path that exceeds the dimensions of the system are reviewed and discussed in relation to measurements of mass asymmetry relaxation in quasifission. It is concluded that heavy ion reactions are especially well suited for studying quantum liquids in the limit where interactions between the particles and the self-consistent surface dominate the dissipative behaviour and where dissipation-fluctuation correlations are important. (orig.)
Numerical Verification Of Equilibrium Chemistry
International Nuclear Information System (INIS)
Piro, Markus; Lewis, Brent; Thompson, William T.; Simunovic, Srdjan; Besmann, Theodore M.
2010-01-01
A numerical tool is in an advanced state of development to compute the equilibrium compositions of phases and their proportions in multi-component systems of importance to the nuclear industry. The resulting software is being conceived for direct integration into large multi-physics fuel performance codes, particularly for providing boundary conditions in heat and mass transport modules. However, any numerical errors produced in equilibrium chemistry computations will be propagated in subsequent heat and mass transport calculations, thus falsely predicting nuclear fuel behaviour. The necessity for a reliable method to numerically verify chemical equilibrium computations is emphasized by the requirement to handle the very large number of elements necessary to capture the entire fission product inventory. A simple, reliable and comprehensive numerical verification method is presented which can be invoked by any equilibrium chemistry solver for quality assurance purposes.
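One simple verification idea of the kind described can be sketched as follows (a hedged illustration, not the authors' actual method; the ideal-mixture Gibbs function and all names are illustrative): a candidate equilibrium composition is checked by confirming that small perturbations along a mass-conserving reaction direction never lower the total Gibbs energy.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def gibbs(n, mu0, T):
    """Total Gibbs energy of an ideal mixture: sum of
    n_i * (mu0_i + R T ln x_i) over species with n_i > 0."""
    ntot = sum(n)
    return sum(ni * (m + R * T * math.log(ni / ntot))
               for ni, m in zip(n, mu0) if ni > 0)

def verify_equilibrium(n, mu0, T, nu, eps=1e-5, tol=1e-6):
    """Verify a candidate equilibrium: perturbing the composition by
    +/- eps along the stoichiometric direction nu (which conserves the
    element balance) must not decrease G beyond the tolerance."""
    g0 = gibbs(n, mu0, T)
    for s in (+eps, -eps):
        n_pert = [ni + s * v for ni, v in zip(n, nu)]
        if any(ni <= 0 for ni in n_pert):
            continue  # perturbation leaves the feasible region
        if gibbs(n_pert, mu0, T) < g0 - tol:
            return False  # a lower-G state exists: not an equilibrium
    return True
```

For an isomerization A ⇌ B with equal standard chemical potentials, the equimolar composition passes this check while a skewed one fails, which is exactly the kind of falsely converged solution such a verifier is meant to flag before it propagates into heat and mass transport calculations.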
Equilibrium ignition for ICF capsules
International Nuclear Information System (INIS)
Lackner, K.S.; Colgate, S.A.; Johnson, N.L.; Kirkpatrick, R.C.; Menikoff, R.; Petschek, A.G.
1993-01-01
There are two fundamentally different approaches to igniting DT fuel in an ICF capsule, which can be described as equilibrium and hot-spot ignition. In both cases, a capsule which can be thought of as a pusher containing the DT fuel is imploded until the fuel reaches ignition conditions. In comparing high-gain ICF targets using cryogenic DT for a pusher with equilibrium-ignition targets using high-Z pushers which contain the radiation, the authors point to the intrinsic advantages of the latter. Equilibrium or volume ignition sacrifices high gain for lower losses, lower ignition temperature, lower implosion velocity and lower sensitivity of the more robust capsule to small fluctuations and asymmetries in the drive system. The reduction in gain is about a factor of 2.5, which is small enough to make the more robust equilibrium ignition an attractive alternative.
On the local equilibrium condition
International Nuclear Information System (INIS)
Hessling, H.
1994-11-01
A physical system is in local equilibrium if it cannot be distinguished from a global equilibrium by "infinitesimally localized measurements". This should be a natural characterization of local equilibrium, but the problem is to give a precise meaning to the qualitative phrase "infinitesimally localized measurements". A solution is suggested in the form of a Local Equilibrium Condition (LEC), which can be applied to linear relativistic quantum field theories but not directly to self-interacting quantum fields. The concept of local temperature resulting from the LEC is compared to an old approach to local temperature based on the principle of maximal entropy. It is shown that the principle of maximal entropy does not always lead to physical states if it is applied to relativistic quantum field theories. (orig.)
Skarphedinsson, Gudmundur; Villabø, Marianne A; Lauth, Bertrand
2015-01-01
The Multidimensional Anxiety Scale for Children (MASC) is a widely used self-report questionnaire for the assessment of anxiety symptoms in children and adolescents, with well-documented predictive validity of the total score and subscales in internalizing and mixed clinical samples. However, no data exist on its screening efficiency in an inpatient sample of adolescents. Our aim was to examine the psychometric properties and screening efficiency of the MASC in a highly comorbid inpatient sample. The current study used receiver operating characteristic (ROC) analyses to investigate the predictive value of the MASC total and subscale scores for the Schedule for Affective Disorders and Schizophrenia for School-age Children-Present and Lifetime version (K-SADS-PL) DSM-IV diagnoses of generalized anxiety disorder (GAD), separation anxiety disorder (SAD) and social phobia (SoP) in a highly comorbid inpatient sample of adolescents (11-18 years). The MASC total score predicted any anxiety disorder (AD) and GAD moderately well. Physical symptoms predicted GAD moderately well. Social anxiety and separation anxiety/panic did not predict SoP or SAD, respectively. Physical symptoms and harm avoidance also predicted the presence of major depressive disorder. The findings support the utility of the MASC total score to predict the presence of any AD and GAD. However, the social anxiety and separation anxiety/panic subscales showed limited utility to predict the presence of SoP and SAD, respectively. The MASC probably has a more limited function in screening for AD in a highly comorbid inpatient sample of severely affected adolescents. Our results should be interpreted in the light of the small, mixed sample of inpatient adolescents.
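The cutoff-free summary behind ROC analyses like those above is the area under the ROC curve, which equals the probability that a randomly chosen case scores higher than a randomly chosen control (ties counted as one half). A minimal rank-based sketch (the function name and toy scores are illustrative, not data from the study):

```python
def roc_auc(case_scores, control_scores):
    """Area under the ROC curve computed as the Mann-Whitney
    probability P(case score > control score), ties counted 1/2."""
    wins = 0.0
    for c in case_scores:
        for n in control_scores:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))
```

An AUC of 1.0 corresponds to perfect separation and 0.5 to chance, which frames statements such as "predicted GAD moderately well" (typically AUC in the 0.7-0.8 range).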
Almutairy, Meznah; Torng, Eric
2017-01-01
One of the most common ways to search a sequence database for sequences similar to a query sequence is to use a k-mer index, such as the one used by BLAST. A big problem with k-mer indexes is the space required to store the lists of all occurrences of all k-mers in the database. One method for reducing the space needed, and also query time, is sampling, in which only some k-mer occurrences are stored. Most previous work uses hard sampling, in which enough k-mer occurrences are retained so that all similar sequences are guaranteed to be found. In contrast, we study soft sampling, which further reduces the number of stored k-mer occurrences at the cost of decreased query accuracy. We focus on finding highly similar local alignments (HSLAs) over nucleotide sequences, an operation that is fundamental to biological applications such as cDNA sequence mapping. For our comparison, we use the NCBI BLAST tool with the human genome and human ESTs. When identifying HSLAs, we find that soft sampling significantly reduces both index size and query time with relatively small losses in query accuracy. For the human genome and HSLAs of length at least 100 bp, soft sampling reduces index size 4-10 times more than hard sampling and processes queries 2.3-6.8 times faster, while still achieving retention rates of at least 96.6%. When we apply soft sampling to the problem of mapping ESTs against the genome, we map more than 98% of ESTs perfectly while reducing the index size by a factor of 4 and query time by 23.3%. These results demonstrate that soft sampling is a simple but effective strategy for performing efficient searches for HSLAs. We also provide a new model for sampling with BLAST that predicts empirical retention rates with reasonable accuracy by modeling two key problem factors.
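The sampling idea can be sketched in a few lines: keep only a subset of k-mer occurrence positions when building the index, shrinking it at the cost of some recall. This toy index (illustrative only; BLAST's actual index and the paper's sampling scheme are far more sophisticated) samples database positions at a fixed stride:

```python
from collections import defaultdict

def build_index(db, k, step):
    """Sampled k-mer index: keep only every `step`-th database position,
    trading index size (and recall, for step > 1) for space."""
    index = defaultdict(list)
    for i in range(len(db) - k + 1):
        if i % step == 0:  # sampling: drop the remaining occurrences
            index[db[i:i + k]].append(i)
    return index

def candidate_hits(index, query, k):
    """Implied alignment start positions in the database that share a
    sampled k-mer with the query (seeds for extension)."""
    hits = set()
    for j in range(len(query) - k + 1):
        for pos in index.get(query[j:j + k], ()):
            hits.add(pos - j)
    return hits
```

With step = 1 this is an exhaustive seed index; larger steps shrink the occurrence lists proportionally, and a long enough similar region still tends to share at least one sampled seed with the query, which is why retention degrades gracefully.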
Directory of Open Access Journals (Sweden)
Gabriel J. Turbay
2011-03-01
Full Text Available The strategic equilibrium of an N-person cooperative game with transferable utility is a system composed of a cover collection of subsets of N and a set of extended imputations attainable through such an equilibrium cover. The system describes a state of coalitional bargaining stability where every player has a bargaining alternative against any other player to support his corresponding equilibrium claim. Any coalition in the stable system may form and divide the characteristic value function of the coalition as prescribed by the equilibrium payoffs. If syndicates are allowed to form, a formed coalition may become a syndicate using the equilibrium payoffs as disagreement values in bargaining for a part of the complementary coalition's incremental value to the grand coalition when formed. The emergent, well-known constant-sum derived game in partition function form is described in terms of parameters that result from incumbent binding agreements. The strategic equilibrium corresponding to the derived game gives an equal value claim to all players. This surprising result is alternatively explained in terms of strategic-equilibrium-based possible outcomes by a sequence of bargaining stages in which, when the binding agreements are made in the right sequential order, von Neumann and Morgenstern (vN-M) non-discriminatory solutions emerge. In these solutions a branch preferred by a sufficient number of players is identified: the weaker players syndicate against the stronger player. This condition is referred to as the stronger player paradox. A strategic alternative available to the stronger player to overcome the anticipated undesirable results is to voluntarily lower his bargaining equilibrium claim. In doing so, the original strategic equilibrium is modified and vN-M discriminatory solutions may occur, but a different stronger player may also emerge that eventually will have to lower his equilibrium claim. A sequence of such measures converges to the equal
Quick plasma equilibrium reconstruction based on GPU
International Nuclear Information System (INIS)
Xiao Bingjia; Huang, Y.; Luo, Z.P.; Yuan, Q.P.; Lao, L.
2014-01-01
A parallel code named P-EFIT, which can complete an equilibrium reconstruction iteration in 250 μs, is described. It is built on the CUDA TM architecture using a Graphical Processing Unit (GPU). The optimization of middle-scale matrix multiplication on the GPU is described, along with an algorithm that solves block tri-diagonal linear systems efficiently in parallel. Benchmark tests are conducted. Static tests prove the accuracy of P-EFIT, and simulation tests prove the feasibility of using P-EFIT for real-time reconstruction on 65x65 computation grids. (author)
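The abstract does not give the solver itself; as a serial reference sketch (NumPy, invented function name) of the block tri-diagonal elimination that a GPU implementation would parallelize, for example via block cyclic reduction:

```python
import numpy as np

def block_thomas(lower, diag, upper, rhs):
    """Solve a block tri-diagonal system by block forward elimination and
    back substitution. lower/upper hold the n-1 off-diagonal blocks,
    diag the n diagonal blocks, rhs the n right-hand-side segments."""
    n = len(diag)
    Bp = [diag[0].copy()]          # modified diagonal blocks
    dp = [rhs[0].copy()]           # modified right-hand sides
    for i in range(1, n):
        m = lower[i - 1] @ np.linalg.inv(Bp[i - 1])   # elimination multiplier
        Bp.append(diag[i] - m @ upper[i - 1])
        dp.append(rhs[i] - m @ dp[i - 1])
    x = [np.linalg.solve(Bp[-1], dp[-1])]
    for i in range(n - 2, -1, -1):                    # back substitution
        x.insert(0, np.linalg.solve(Bp[i], dp[i] - upper[i] @ x[0]))
    return np.array(x)
```

This serial recurrence is inherently sequential across blocks, which is precisely why a real-time GPU code would restructure it (cyclic reduction or similar) to expose parallelism.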
Thermodynamic evolution far from equilibrium
Khantuleva, Tatiana A.
2018-05-01
The presented model of the thermodynamic evolution of an open system far from equilibrium is based on modern results of nonequilibrium statistical mechanics, the nonlocal theory of nonequilibrium transport developed by the author, and the Speed Gradient principle introduced in the theory of adaptive control. Transition to a description of the evolution of the system's internal structure at the mesoscopic level allows new insight into the stability problem of non-equilibrium processes. The new model is applied to a number of specific problems.
Relevance of equilibrium in multifragmentation
International Nuclear Information System (INIS)
Furuta, Takuya; Ono, Akira
2009-01-01
The relevance of equilibrium in a multifragmentation reaction of very central 40Ca + 40Ca collisions at 35 MeV/nucleon is investigated by using simulations of antisymmetrized molecular dynamics (AMD). Two types of ensembles are compared. One is the reaction ensemble of the states at each reaction time t in collision events simulated by AMD, and the other is the equilibrium ensemble prepared by solving the AMD equation of motion for a many-nucleon system confined in a container for a long time. The comparison of the ensembles is performed for the fragment charge distribution and the excitation energies. Our calculations show that there exists an equilibrium ensemble that well reproduces the reaction ensemble at each reaction time t for the investigated period 80≤t≤300 fm/c. However, there are some other observables that show discrepancies between the reaction and equilibrium ensembles. These may be interpreted as dynamical effects in the reaction. The usual static equilibrium at each instant is not realized, since no equilibrium ensemble with the same volume as that of the reaction system can reproduce the fragment observables.
International Nuclear Information System (INIS)
Martin-Casallo, M. T.; Los Arcos, J. M.; Grau, A.
1989-01-01
Two calculation procedures have been tested for the application of the efficiency tracing method to the activity determination of 3H and 14C dual-labelled samples in liquid scintillation metrology. One procedure leads to the statement of a linear equation system as a function of the quenching parameter, while the other uses a least-squares algorithm to fit the total count rate against the quenching parameter. The first procedure is strongly sensitive to the statistical uncertainty of the partial efficiencies and produces discrepancies which may reach more than 100% compared to the real values. The second procedure leads to more reliable results, showing discrepancies between 0.1% and 0.6% for the 3H activity and between 0.6% and 5% for the 14C activity, so that the efficiency tracing method can be applied to the metrology of dual-labelled samples of 3H and 14C by means of this procedure. (Author) 7 refs
Energy Technology Data Exchange (ETDEWEB)
Tian, Z; Folkerts, M; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States); Li, Y [Beihang University, Beijing (China)
2016-06-15
Purpose: We have previously developed a GPU-OpenCL-based MC dose engine named goMC with a built-in analytical linac beam model. To move goMC towards routine clinical use, we have developed an automatic beam-commissioning method and an efficient source sampling strategy to facilitate dose calculations for real treatment plans. Methods: Our commissioning method automatically adjusts the relative weights among the sub-sources through an optimization process minimizing the discrepancies between calculated dose and measurements. Six models built for Varian Truebeam linac photon beams (6MV, 10MV, 15MV, 18MV, 6MVFFF, 10MVFFF) were commissioned using measurement data acquired at our institution. To facilitate dose calculations for real treatment plans, we employed an inverse sampling method to efficiently incorporate MLC leaf-sequencing into source sampling. Specifically, instead of sampling source particles control-point by control-point and rejecting the particles blocked by the MLC, we assigned a control-point index to each sampled source particle, according to the MLC leaf-open duration of each control point at the pixel where the particle intersects the iso-center plane. Results: Our auto-commissioning method decreased the distance-to-agreement (DTA) of depth dose at build-up regions by 36.2% on average, bringing it within 1 mm. Lateral profiles were better matched for all beams, with the biggest improvement found at 15MV, for which the root-mean-square difference was reduced from 1.44% to 0.50%. Maximum differences of output factors were reduced to less than 0.7% for all beams, with the largest decrease, from 1.70% to 0.37%, found at 10MVFFF. Our new sampling strategy was tested on a head-and-neck VMAT patient case. Achieving clinically acceptable accuracy, the new strategy could reduce the required history number by a factor of ∼2.8 for a given statistical uncertainty level and hence achieve a similar speed-up factor. Conclusion: Our studies have demonstrated the feasibility and effectiveness of
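The control-point assignment described above can be sketched as plain inverse-transform sampling over leaf-open durations (illustrative names and numbers, not the goMC code): a particle's control-point index is drawn with probability proportional to how long the leaves are open at the pixel it crosses, so blocked control points are never sampled and no rejection step is needed.

```python
import numpy as np

def sample_control_points(open_durations, n, rng):
    """Inverse-transform sampling: draw control-point indices with probability
    proportional to the MLC leaf-open duration at a given pixel, instead of
    sampling per control point and rejecting blocked particles."""
    cdf = np.cumsum(open_durations, dtype=float)
    u = rng.random(n) * cdf[-1]
    return np.searchsorted(cdf, u, side="right")

rng = np.random.default_rng(0)
durations = [0.0, 2.0, 1.0, 0.0, 1.0]   # hypothetical open times per control point
idx = sample_control_points(durations, 100_000, rng)
print(np.bincount(idx, minlength=5) / len(idx))  # ≈ [0, 0.5, 0.25, 0, 0.25]
```

Control points with zero open duration (fully blocked at that pixel) are never drawn, which is where the rejection-free speed-up comes from.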
Yingst, R. A.; Bartley, J. K.; Chidsey, T. C.; Cohen, B. A.; Gilleaudeau, G. J.; Hynek, B. M.; Kah, L. C.; Minitti, M. E.; Williams, R. M. E.; Black, S.; Gemperline, J.; Schaufler, R.; Thomas, R. J.
2018-05-01
The GHOST field tests are designed to isolate and test science-driven rover operations protocols to determine best practices. During a recent field test at a potential Mars 2020 landing site analog, we tested two Mars Science Laboratory data-acquisition and decision-making methods to assess the resulting science return and sample quality: a linear method, where sites of interest are studied in the order encountered, and a "walkabout-first" method, where sites of interest are examined remotely before down-selecting to a subset of sites that are interrogated with more resource-intensive instruments. The walkabout method cost less time and fewer resources, while increasing confidence in interpretations. Contextual data critical to evaluating site geology was acquired earlier than for the linear method, and given a higher priority, which resulted in the development of more mature hypotheses earlier in the analysis process. Combined, this saved time and energy in the collection of data with more limited spatial coverage. Based on these results, we suggest that the walkabout method be used where doing so would provide early context and time for the science team to develop hypotheses and critical tests, and that in gathering context, coverage may be more important than higher resolution.
Shape characteristics of equilibrium and non-equilibrium fractal clusters.
Mansfield, Marc L; Douglas, Jack F
2013-07-28
It is often difficult in practice to discriminate between equilibrium and non-equilibrium nanoparticle or colloidal-particle clusters that form through aggregation in gas or solution phases. Scattering studies often permit the determination of an apparent fractal dimension, but both equilibrium and non-equilibrium clusters in three dimensions frequently have fractal dimensions near 2, so that it is often not possible to discriminate on the basis of this geometrical property. A survey of the anisotropy of a wide variety of polymeric structures (linear and ring random and self-avoiding random walks, percolation clusters, lattice animals, diffusion-limited aggregates, and Eden clusters) based on the principal components of both the radius of gyration and electric polarizability tensor indicates, perhaps counter-intuitively, that self-similar equilibrium clusters tend to be intrinsically anisotropic at all sizes, while non-equilibrium processes such as diffusion-limited aggregation or Eden growth tend to be isotropic in the large-mass limit, providing a potential means of discriminating these clusters experimentally if anisotropy could be determined along with the fractal dimension. Equilibrium polymer structures, such as flexible polymer chains, are normally self-similar due to the existence of only a single relevant length scale, and are thus anisotropic at all length scales, while non-equilibrium polymer structures that grow irreversibly in time eventually become isotropic if there is no difference in the average growth rates in different directions. There is apparently no proof of these general trends and little theoretical insight into what controls the universal anisotropy in equilibrium polymer structures of various kinds. This is an obvious topic of theoretical investigation, as well as a matter of practical interest. To address this general problem, we consider two experimentally accessible ratios, one between the hydrodynamic and gyration radii, the other
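The gyration-tensor anisotropy discussed here is easy to probe numerically. A small sketch (random walks standing in for equilibrium chain configurations; parameters are illustrative) computes the ratio of the largest to smallest principal component of the gyration tensor:

```python
import numpy as np

def gyration_eigenvalues(path):
    """Eigenvalues of the gyration tensor S = (1/N) sum (r - rcm)(r - rcm)^T,
    sorted ascending; their spread measures shape anisotropy."""
    centered = path - path.mean(axis=0)
    S = centered.T @ centered / len(path)
    return np.sort(np.linalg.eigvalsh(S))

rng = np.random.default_rng(2)
ratios = []
for _ in range(200):
    # 3-D random walk of 1000 steps with +/-1 increments per coordinate
    walk = np.cumsum(rng.choice([-1, 1], size=(1000, 3)), axis=0)
    lam = gyration_eigenvalues(walk)
    ratios.append(lam[2] / lam[0])
print(np.mean(ratios))  # well above 1: individual walk configurations are anisotropic
```

Even though the ensemble is isotropic on average, each configuration is strongly elongated, which is the self-similar-equilibrium anisotropy the abstract contrasts with the large-mass isotropy of DLA or Eden growth.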
Calculation of Multiphase Chemical Equilibrium by the Modified RAND Method
DEFF Research Database (Denmark)
Tsanas, Christos; Stenby, Erling Halfdan; Yan, Wei
2017-01-01
A robust and efficient algorithm for simultaneous chemical and phase equilibrium calculations is proposed. It combines two individual nonstoichiometric solving procedures: a nested-loop method with successive substitution for the first steps and final convergence with the second-order modified RAND method. The modified RAND extends the classical RAND method from single-phase chemical reaction equilibrium of ideal systems to multiphase chemical equilibrium of nonideal systems. All components in all phases are treated in the same manner and the system Gibbs energy can be used to monitor convergence. This is the first time that modified RAND was applied to multiphase chemical equilibrium systems. The combined algorithm was tested using nine examples covering vapor-liquid (VLE) and vapor-liquid-liquid equilibria (VLLE) of ideal and nonideal reaction systems. Successive substitution provided good initial estimates.
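The nonstoichiometric idea, minimizing total Gibbs energy rather than solving stoichiometric equilibrium relations directly, can be shown on a deliberately tiny example: an ideal-gas isomerization A <-> B with an invented standard Gibbs energy of reaction. This is not the modified RAND algorithm, which handles multiphase nonideal systems; it only illustrates that the stationary point of G reproduces the equilibrium-constant result.

```python
import math

R, T = 8.314, 298.15
dG0 = -2000.0  # J/mol, hypothetical standard Gibbs energy of reaction A <-> B

def dG_dxi(xi):
    # derivative of total Gibbs energy w.r.t. extent of reaction (ideal mixture):
    # dG/dxi = dG0 + R*T*ln(x_B/x_A) with x_B = xi, x_A = 1 - xi
    return dG0 + R * T * math.log(xi / (1.0 - xi))

# bisection for the stationary point of G on (0, 1)
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if dG_dxi(mid) < 0:
        lo = mid
    else:
        hi = mid
xi_eq = 0.5 * (lo + hi)

K = math.exp(-dG0 / (R * T))      # equilibrium constant
print(xi_eq, K / (1 + K))         # both ≈ 0.691
```

In a real multiphase, multireaction system the same stationarity condition becomes a large coupled set of equations in all phase compositions, which is what the modified RAND step solves with second-order convergence.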
Aliakbarpour, H; Rawi, Che Salmah Md
2010-06-01
Thrips cause considerable economic loss to mango, Mangifera indica L., in Penang, Malaysia. Three nondestructive sampling techniques--shaking mango panicles over a moist plastic tray, washing the panicles with ethanol, and immobilization of thrips by using CO2--were evaluated for their precision to determine the most effective technique to capture mango flower thrips (Thysanoptera: Thripidae) in an orchard located at Balik Pulau, Penang, Malaysia, during two flowering seasons, from December 2008 to February 2009 and from August to September 2009. The efficiency of each of the three sampling techniques was compared with absolute population counts on whole panicles as a reference. Diurnal flight activity of thrips species was assessed using yellow sticky traps. All three sampling methods and sticky traps were used at two-hour intervals from 0800 to 1800 hours to gain insight into the diurnal periodicity of thrips abundance in the orchard. Based on pooled data for the two seasons, the CO2 method was the most efficient procedure, extracting 80.7% of adults and 74.5% of larvae. The CO2 method had the lowest relative variation and was the most accurate procedure compared with the absolute method, as shown by regression analysis. All collection techniques showed that the numbers of all thrips species in mango panicles increased after 0800 hours, reaching a peak between 1200 and 1400 hours. Adult thrips captured on the sticky traps were most abundant between 0800-1000 and 1400-1600 hours. According to the results of this study, the CO2 method is recommended for sampling of thrips in the field. It is a nondestructive sampling procedure that neither damages flowers nor diminishes fruit production. Management of thrips populations in mango orchards with insecticides would be most effectively carried out during their peak population abundance on the flower panicles, from midday to 1400 hours.
International Nuclear Information System (INIS)
Zhang, R; Baer, E; Jee, K; Sharp, G; Flanz, J; Lu, H
2016-01-01
Purpose: For proton therapy, an accurate model of the CT HU to relative stopping power (RSP) conversion is essential. In current practice, validation of these models relies solely on measurements of tissue substitutes with standard compositions. Validation based on real tissue samples would be much more direct and can address variations between patients. This study intends to develop an efficient and accurate system based on the concept of dose extinction to measure WEPL and retrieve RSP in many types of biological tissue. Methods: A broad AP proton beam delivering a spread-out Bragg peak (SOBP) is used to irradiate the samples, with a Matrixx detector positioned immediately below. A water tank was placed on top of the samples, with the water level controllable to sub-millimeter precision by a remotely controlled dosing pump. While gradually lowering the water level with the beam on, the transmission dose was recorded at 1 frame/sec. The WEPLs were determined as the difference between the known beam range of the delivered SOBP (80%) and the water level corresponding to 80% of the measured dose profiles in time. A Gammex 467 phantom was used to test the system, and various types of biological tissue were measured. Results: RSPs for all Gammex inserts, except the one made with lung-450 material (<2% error), were determined within ±0.5% error. Depending on the WEPL of the investigated phantom, a measurement takes around 10 min, which can be accelerated by a faster pump. Conclusion: Based on the concept of dose extinction, a system was explored to measure WEPL efficiently and accurately for a large number of samples. This allows the validation of CT HU to stopping power conversions based on a large number of samples and real tissues. It also allows the assessment of beam uncertainties due to variations between patients, an issue that has never been sufficiently studied before.
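The 80% crossing used above can be recovered from the recorded dose-versus-water-level curve by simple interpolation. A sketch with synthetic numbers (not the actual detector data; the 15.0 cm beam range below is hypothetical):

```python
import numpy as np

def level_at_fraction(levels, dose, frac=0.8):
    """Water level at which the measured transmission dose crosses
    frac * max(dose), by linear interpolation on the falling edge."""
    d = np.asarray(dose, float) / np.max(dose)
    i = np.argmax(d < frac)                    # first sample below threshold
    x0, x1, y0, y1 = levels[i - 1], levels[i], d[i - 1], d[i]
    return x0 + (frac - y0) * (x1 - x0) / (y1 - y0)

levels = np.arange(10.0)                                        # water level (cm)
dose = [1.0, 1.0, 1.0, 0.9, 0.7, 0.4, 0.1, 0.0, 0.0, 0.0]      # synthetic dose trace
x80 = level_at_fraction(levels, dose, 0.8)
print(x80)                    # ≈ 3.5
wepl = 15.0 - x80             # WEPL = known 80% SOBP range minus crossing level
```

The sub-millimeter water-level control matters here because the interpolation accuracy is bounded by the level step between recorded frames.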
International Nuclear Information System (INIS)
Velikhov, E.P.; Golubev, V.S.; Dykhne, A.M.
1976-01-01
The paper assesses the position in 1975 of theoretical and experimental work on the physics of a magnetohydrodynamic generator with non-equilibrium plasma conductivity. This research started at the beginning of the 1960s; as work on the properties of thermally non-equilibrium plasma in magnetic fields and also in MHD generator ducts progressed, a number of phenomena were discovered and investigated that had either been unknown in plasma physics or had remained uninvestigated until that time: ionization instability and ionization turbulence of plasma in a magnetic field, acoustic instability of a plasma with anisotropic conductivity, the non-equilibrium ionization wave and the energy balance of a non-equilibrium plasma. At the same time, it was discovered what physical requirements an MHD generator with non-equilibrium conductivity must satisfy to achieve high efficiency in converting the thermal or kinetic energy of the gas flow into electric energy. The experiments on MHD power generation with thermally non-equilibrium plasma carried out up to 1975 indicated that it should be possible to achieve conversion efficiencies of up to 20-30%. (author)
Li, Changqiao; The ATLAS collaboration
2017-01-01
The $b$-tagging efficiency of the MV2c10 discriminant for track-jets and calorimeter-jets containing $b$-hadrons is measured using 36.5~fb$^{-1}$ of $pp$ collisions collected in 2015 and 2016 by ATLAS at $\\sqrt{s}$=13~TeV. The measurements are performed using a tag-and-probe method to select a control sample of jets enriched in $b$-jets, by keeping events with a final state consistent with the process $pp\\to t\\bar{t}\\to W^+bW^-\\bar{b} \\to e^\\pm \\mu^\\mp\\ldots$
Serra, Antonio; Monteduro, Anna Grazia; Padmanabhan, Sanosh Kunjalukkal; Licciulli, Antonio; Bonfrate, Valentina; Salvatore, Luca; Calcagnile, Lucio
2017-01-01
Mixed iron-manganese oxide nanoparticles, synthesized by a simple procedure, were used to remove nickel ions from aqueous solutions. Nanostructures, prepared by using different weight percentages of manganese, were characterized by transmission electron microscopy, selected area diffraction, X-ray diffraction, Raman spectroscopy, and vibrating sample magnetometry. Adsorption/desorption isotherm curves demonstrated that manganese inclusions enhance the specific surface area three times and the pore volume ten times. This feature was crucial for decontaminating both aqueous samples and food extracts from nickel ions. Efficient removal of Ni2+ was highlighted by the well-known dimethylglyoxime test and by ICP-MS analysis, and the possibility of regenerating the nanostructure was demonstrated by a washing treatment in disodium ethylenediaminetetraacetate solution. PMID:28804670
Local Equilibrium and Retardation Revisited.
Hansen, Scott K; Vesselinov, Velimir V
2018-01-01
In modeling solute transport with mobile-immobile mass transfer (MIMT), it is common to use an advection-dispersion equation (ADE) with a retardation factor, or retarded ADE. This is commonly referred to as making the local equilibrium assumption (LEA). Assuming local equilibrium, Eulerian textbook treatments derive the retarded ADE, ostensibly exactly. However, other authors have presented rigorous mathematical derivations of the dispersive effect of MIMT, applicable even in the case of arbitrarily fast mass transfer. We resolve the apparent contradiction between these seemingly exact derivations by adopting a Lagrangian point of view. We show that local equilibrium constrains the expected time immobile, whereas the retarded ADE actually embeds a stronger, nonphysical, constraint: that all particles spend the same amount of every time increment immobile. Eulerian derivations of the retarded ADE thus silently commit the gambler's fallacy, leading them to ignore dispersion due to mass transfer that is correctly modeled by other approaches. We then present a particle tracking simulation illustrating how poor an approximation the retarded ADE may be, even when mobile and immobile plumes are continually near local equilibrium. We note that classic "LEA" (actually, retarded ADE validity) criteria test for insignificance of MIMT-driven dispersion relative to hydrodynamic dispersion, rather than for local equilibrium. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
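The Lagrangian argument can be reproduced with a few lines of particle tracking (a sketch under first-order exchange with invented rates, no hydrodynamic dispersion): the ensemble mean follows v t / R, but trajectory-to-trajectory variability in time spent immobile produces spreading that the retarded ADE, which in effect assigns every particle the same immobile fraction, omits entirely.

```python
import numpy as np

rng = np.random.default_rng(3)
v, lam_mi, lam_im, T, N = 1.0, 0.5, 0.5, 100.0, 2000
R = 1.0 + lam_mi / lam_im          # retardation factor of the equivalent retarded ADE

x = np.zeros(N)
for p in range(N):
    t, mobile, pos = 0.0, True, 0.0
    while t < T:
        # exponential sojourn in the current state (first-order mass transfer)
        dt = min(rng.exponential(1.0 / (lam_mi if mobile else lam_im)), T - t)
        if mobile:
            pos += v * dt          # advection acts only while mobile
        t += dt
        mobile = not mobile
    x[p] = pos

print(x.mean(), v * T / R)         # ensemble mean matches the retarded ADE...
print(x.var())                     # ...but the variance is far from zero
```

Since the simulation has no local dispersion at all, every bit of the plume variance is MIMT-driven spreading that the retarded ADE cannot represent.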
Magiera, Sylwia; Kwietniowska, Ewelina
2016-11-15
In this study, an easy, simple and efficient method for the determination of naringenin enantiomers in fruit juices after salting-out-assisted liquid-liquid extraction (SALLE) and high-performance liquid chromatography (HPLC) with diode-array detection (DAD) was developed. The sample treatment is based on the use of water-miscible acetonitrile as the extractant and acetonitrile phase separation under high-salt conditions. After extraction, juice samples were incubated with hydrochloric acid in order to achieve hydrolysis of naringin to naringenin. The hydrolysis parameters were optimized by using a half-fraction factorial central composite design (CCD). After sample preparation, chromatographic separation was obtained on a Chiralcel® OJ-RH column using the mobile phase consisting of 10mM aqueous ammonium acetate:methanol:acetonitrile (50:30:20; v/v/v) with detection at 288nm. The average recovery of the analyzed compounds ranged from 85.6 to 97.1%. The proposed method was satisfactorily used for the determination of naringenin enantiomers in various fruit juices samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mauri-Aucejo, Adela; Amorós, Pedro; Moragues, Alaina; Guillem, Carmen; Belenguer-Sapiña, Carolina
2016-08-15
Solid-phase extraction is one of the most important techniques for sample purification and concentration. A wide variety of solid phases have been used for sample preparation over time. In this work, the efficiency of a new kind of solid-phase extraction adsorbent, a microporous material made from modified cyclodextrin bound to a silica network, is evaluated through an analytical method which combines solid-phase extraction with high-performance liquid chromatography to determine polycyclic aromatic hydrocarbons in water samples. Several parameters that affect analyte recovery, such as the amount of solid phase, the nature and volume of the eluent, and the sample volume and concentration, have been evaluated. The experimental results indicate that the material possesses adsorption ability for the tested polycyclic aromatic hydrocarbons. Under the optimum conditions, the quantification limits of the method were in the range of 0.09-2.4 μg L(-1), and good linear correlations between peak height and concentration were found over 1.3-70 μg L(-1). The method has good repeatability and reproducibility, with coefficients of variation under 8%. Given these results, this material may represent an alternative for trace analysis of polycyclic aromatic hydrocarbons in water through solid-phase extraction. Copyright © 2016 Elsevier B.V. All rights reserved.
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
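The core idea, replacing an expensive forward model with a fast approximation and folding the approximation error into the likelihood as Gaussian noise, can be sketched on a one-parameter toy problem. All functions and numbers below are invented for illustration; the paper trains a neural network on traveltime data rather than using an analytic shortcut.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_exact(m):      # stands in for an expensive numerical forward model
    return np.sin(m) + 0.05 * m**2

def forward_fast(m):       # cheap approximation (the trained network in the paper)
    return np.sin(m)

# 1) Quantify the modeling error d = g(m) - g_hat(m) over prior samples
prior = rng.uniform(-2, 2, 500)
err = forward_exact(prior) - forward_fast(prior)
mu_T, var_T = err.mean(), err.var()

# 2) Metropolis sampling with the fast model; the data variance is inflated
#    by the modeling-error variance, and the mean error mu_T is subtracted
m_true, sigma_d = 1.0, 0.05
d_obs = forward_exact(m_true)             # noise-free synthetic observation
def log_post(m):
    if not -2 <= m <= 2:                  # uniform prior support
        return -np.inf
    r = d_obs - (forward_fast(m) + mu_T)
    return -0.5 * r**2 / (sigma_d**2 + var_T)

chain, m = [], 0.0
lp = log_post(m)
for _ in range(20000):
    m_new = m + 0.3 * rng.standard_normal()
    lp_new = log_post(m_new)
    if np.log(rng.random()) < lp_new - lp:
        m, lp = m_new, lp_new
    chain.append(m)
print(np.mean(chain[5000:]))  # close to m_true = 1.0
```

The inflated variance widens the posterior just enough to keep it consistent despite the biased fast model, which is the probabilistic accounting of the modeling error the abstract refers to.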
Equilibrium Arrival Times to Queues
DEFF Research Database (Denmark)
Breinbjerg, Jesper; Østerdal, Lars Peter
We consider a non-cooperative queueing environment where a finite number of customers independently choose when to arrive at a queueing system that opens at a given point in time and serves customers on a last-come first-serve preemptive-resume (LCFS-PR) basis. Each customer has a service time requirement which is identically and independently distributed according to some general probability distribution, and they want to complete service as early as possible while minimizing the time spent in the queue. In this setting, we establish the existence of an arrival time strategy that constitutes a symmetric (mixed) Nash equilibrium, and show that there is at most one symmetric equilibrium. We provide a numerical method to compute this equilibrium and demonstrate by a numerical example that the social efficiency can be lower than the efficiency induced by a similar queueing system that serves customers
Wen, Congying; Li, Mengmeng; Li, Wangbo; Li, Zizhou; Duan, Wei; Li, Yulong; Zhou, Jie; Li, Xiyou; Zeng, Jingbin
2017-12-29
The content of the gasoline fraction in oil samples is not only an important indicator of oil quality, but also indispensable fundamental data for oil refining and processing. Before its determination, efficient preconcentration and separation of gasoline fractions from complicated matrices is essential. In this work, a thin layer of graphene (G) was deposited onto oriented ZnO nanorods (ZNRs) as a SPME coating. By this approach, the surface area of the G was greatly enhanced by the aligned ZNRs, and the surface polarity of the ZNRs was changed from polar to less polar, both of which were beneficial for the extraction of gasoline fractions. In addition, the ZNRs were well protected by the mechanically and chemically stable G, making the coating highly durable. With the headspace SPME (HS-SPME) mode, the G/ZNRs coating can effectively extract gasoline fractions from various oil samples, with extraction efficiencies 1.5-5.4 and 2.1-8.2 times higher than those of a G coating and a commercial 7-μm PDMS coating, respectively. Coupled with GC-FID, the developed method is sensitive, simple, cost-effective and easily accessible for the analysis of gasoline fractions. Moreover, the method is also feasible for the detection of gasoline markers in simulated oil-polluted water, which provides an option for the monitoring of oil spill accidents. Copyright © 2017 Elsevier B.V. All rights reserved.
Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Janin, Amélie; Gastonguay, Louis
2014-01-01
In recent years, an efficient and economically attractive leaching process has been developed to remove metals from copper-based treated wood wastes. This study explored the applicability of this leaching process using chromated copper arsenate (CCA) treated wood samples with different initial metal loading and elapsed time between wood preservation treatment and remediation. The sulfuric acid leaching process resulted in the solubilization of more than 87% of the As, 70% of the Cr, and 76% of the Cu from CCA-chips and in the solubilization of more than 96% of the As, 78% of the Cr and 91% of the Cu from CCA-sawdust. The results showed that the performance of this leaching process might be influenced by the initial metal loading of the treated wood wastes and the elapsed time between preservation treatment and remediation. The effluents generated during the leaching steps were treated by precipitation-coagulation to satisfy the regulations for effluent discharge in municipal sewers. Precipitation using ferric chloride and sodium hydroxide was highly efficient, removing more than 99% of the As, Cr, and Cu. It appears that this leaching process can be successfully applied to remove metals from different CCA-treated wood samples and then from the effluents. Copyright © 2013 Elsevier Ltd. All rights reserved.
Equilibrium deuterium isotope effect of surprising magnitude
International Nuclear Information System (INIS)
Goldstein, M.J.; Pressman, E.J.
1981-01-01
Seemingly large deuterium isotope effects are reported for the preference of deuterium for the α-chloro site over the bridgehead or vinyl site in samples of anti-7-chlorobicyclo[4.3.2]undecatetraene-d₁. Studies of molecular models did not provide a basis for these large equilibrium deuterium isotope effects. The possibility is proposed that these isotope effects only appear to be large for want of comparison with isotope effects measured for molecules that might provide even greater contrasts in local force fields.
Spontaneity and Equilibrium: Why "ΔG &lt; 0 Denotes a Spontaneous Process" and "ΔG = 0 Means the System Is at Equilibrium" Are Incorrect
Raff, Lionel M.
2014-01-01
The fundamental criteria for chemical reactions to be spontaneous in a given direction are generally incorrectly stated as ΔG &lt; 0 or ΔA &lt; 0 in chemistry textbooks and even in some more advanced texts. Similarly, the criteria for equilibrium are also misstated as being ΔG = 0 or ΔA = 0. Following a brief review of the…
Equilibrium problems for Raney densities
Forrester, Peter J.; Liu, Dang-Zheng; Zinn-Justin, Paul
2015-07-01
The Raney numbers are a class of combinatorial numbers generalising the Fuss-Catalan numbers. They are indexed by a pair of positive real numbers (p, r) with p > 1 and 0 &lt; r ≤ p. The equilibrium problem for the corresponding densities is identified by two methods for a first family of parameters with θ > 0, and similarly both methods are used to identify the equilibrium problem for (p, r) = (θ/q + 1, 1/q), θ > 0 and q ∈ ℤ⁺. The Wiener-Hopf method is used to extend the latter to parameters (p, r) = (θ/q + 1, m + 1/q) for m a non-negative integer, and also to identify the equilibrium problem for a family of densities with moments given by certain binomial coefficients.
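The combinatorial objects in this abstract can be computed directly; a minimal sketch, assuming the standard Fuss-Catalan form of the Raney numbers, A_n(p, r) = r/(np + r) · C(np + r, n), which for (p, r) = (2, 1) recovers the Catalan numbers:

```python
from math import comb

def raney(p: int, r: int, n: int) -> int:
    # Raney (Fuss-Catalan) number A_n(p, r) = r/(n*p + r) * C(n*p + r, n);
    # the division is exact, so integer arithmetic suffices.
    return r * comb(n * p + r, n) // (n * p + r)

# p = 2, r = 1 recovers the Catalan numbers 1, 1, 2, 5, 14, ...
print([raney(2, 1, n) for n in range(5)])  # → [1, 1, 2, 5, 14]
```

The abstract's densities are those whose n-th moments equal these numbers; only the integer-parameter case is sketched here.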
Equilibrium in a Production Economy
Energy Technology Data Exchange (ETDEWEB)
Chiarolla, Maria B., E-mail: maria.chiarolla@uniroma1.it [Università di Roma 'La Sapienza', Dipartimento di Metodi e Modelli per l'Economia, il Territorio e la Finanza, Facoltà di Economia (Italy)]; Haussmann, Ulrich G., E-mail: uhaus@math.ubc.ca [University of British Columbia, Department of Mathematics (Canada)]
2011-06-15
Consider a closed production-consumption economy with multiple agents and multiple resources. The resources are used to produce the consumption good. The agents derive utility from holding resources as well as consuming the good produced. They aim to maximize their utility while the manager of the production facility aims to maximize profits. With the aid of a representative agent (who has a multivariable utility function) it is shown that an Arrow-Debreu equilibrium exists. In so doing we establish technical results that will be used to solve the stochastic dynamic problem (a case with infinite dimensional commodity space so the General Equilibrium Theory does not apply) elsewhere.
Incentives in Supply Function Equilibrium
DEFF Research Database (Denmark)
Vetter, Henrik
2014-01-01
The author analyses delegation in homogenous duopoly under the assumption that the firm-managers compete in supply functions. In supply function equilibrium, managers' decisions are strategic complements. This reverses earlier findings in that the author finds that owners give managers incentives to act in an accommodating way. As a result, optimal delegation reduces per-firm output and increases profits to above-Cournot profits. Moreover, in supply function equilibrium the mode of competition is endogenous. This means that the author avoids results that are sensitive with respect to assuming…
The Equilibrium Rule--A Personal Discovery
Hewitt, Paul G.
2016-01-01
Examples of equilibrium are evident everywhere and the equilibrium rule provides a reasoned way to view all things, whether in static (balancing rocks, steel beams in building construction) or dynamic (airplanes, bowling balls) equilibrium. Interestingly, the equilibrium rule applies not just to objects at rest but whenever any object or system of…
Non equilibrium atomic processes and plasma spectroscopy
International Nuclear Information System (INIS)
Kato, Takako
2003-01-01
Along with technical progress in plasma spectroscopy, non-equilibrium ionization processes have recently been observed. We study non-local thermodynamic equilibrium and non-ionization equilibrium for various kinds of plasmas. Specifically, we discuss non-equilibrium atomic processes in magnetically confined plasmas, solar flares and laser-produced plasmas using a collisional-radiative model based on plasma spectroscopic data. (author)
Learning efficient correlated equilibria
Borowski, Holly P.; Marden, Jason R.; Shamma, Jeff S.
2014-01-01
The majority of distributed learning literature focuses on convergence to Nash equilibria. Correlated equilibria, on the other hand, can often characterize more efficient collective behavior than even the best Nash equilibrium. However, there are no existing distributed learning algorithms that converge to specific correlated equilibria. In this paper, we provide one such algorithm which guarantees that the agents' collective joint strategy will constitute an efficient correlated equilibrium with high probability. The key to attaining efficient correlated behavior through distributed learning involves incorporating a common random signal into the learning environment.
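The claim that correlated equilibria can beat the best Nash equilibrium is easy to make concrete in the classic game of Chicken; a minimal sketch (the payoff numbers are a textbook convention, not from the paper) that checks the obedience constraints defining a correlated equilibrium:

```python
# Payoff matrices for the game of Chicken, actions: 0 = Dare, 1 = Chicken.
U1 = [[0, 7], [2, 6]]  # row player
U2 = [[0, 2], [7, 6]]  # column player

# A public signal recommends (C,C), (C,D), (D,C) with probability 1/3 each.
dist = {(1, 1): 1/3, (1, 0): 1/3, (0, 1): 1/3}

def is_correlated_eq(dist, U1, U2, tol=1e-9):
    """Check obedience: no player gains in expectation by deviating from a
    recommended action, conditional on having received that recommendation."""
    for player, U in ((0, U1), (1, U2)):
        for rec in (0, 1):           # recommended action
            for dev in (0, 1):       # candidate deviation
                gain = 0.0
                for (a1, a2), p in dist.items():
                    own = a1 if player == 0 else a2
                    if own != rec:
                        continue
                    obey = U[a1][a2]
                    devi = U[dev][a2] if player == 0 else U[a1][dev]
                    gain += p * (devi - obey)
                if gain > tol:
                    return False
    return True

print(is_correlated_eq(dist, U1, U2))  # → True: the 1/3-1/3-1/3 signal is obedient
```

With these payoffs each player earns 5 in expectation under the signal, which illustrates the role of the common random signal emphasized in the abstract.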
A Metastable Equilibrium Model for the Relative Abundances of Microbial Phyla in a Hot Spring
Dick, Jeffrey M.; Shock, Everett L.
2013-01-01
Many studies link the compositions of microbial communities to their environments, but the energetics of organism-specific biomass synthesis as a function of geochemical variables have rarely been assessed. We describe a thermodynamic model that integrates geochemical and metagenomic data for biofilms sampled at five sites along a thermal and chemical gradient in the outflow channel of the hot spring known as “Bison Pool” in Yellowstone National Park. The relative abundances of major phyla in individual communities sampled along the outflow channel are modeled by computing metastable equilibrium among model proteins with amino acid compositions derived from metagenomic sequences. Geochemical conditions are represented by temperature and activities of basis species, including pH and oxidation-reduction potential quantified as the activity of dissolved hydrogen. By adjusting the activity of hydrogen, the model can be tuned to closely approximate the relative abundances of the phyla observed in the community profiles generated from BLAST assignments. The findings reveal an inverse relationship between the energy demand to form the proteins at equal thermodynamic activities and the abundance of phyla in the community. The distance from metastable equilibrium of the communities, assessed using an equation derived from energetic considerations that is also consistent with the information-theoretic entropy change, decreases along the outflow channel. Specific divergences from metastable equilibrium, such as an underprediction of the relative abundances of phototrophic organisms at lower temperatures, can be explained by considering additional sources of energy and/or differences in growth efficiency. Although the metabolisms used by many members of these communities are driven by chemical disequilibria, the results support the possibility that higher-level patterns of chemotrophic microbial ecosystems are shaped by metastable equilibrium states that depend on both the
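The metastable equilibrium computation at the heart of this model can be illustrated with Boltzmann weighting of per-phylum energy demands; a minimal sketch in which the phylum names are real but the energy values, temperature, and variable names are invented for illustration and are not from the study:

```python
import math

# Hypothetical relative energy demands (kJ/mol) to form model "phylum" proteins
# at equal thermodynamic activities -- illustrative numbers only.
dG = {"Aquificae": 0.0, "Thermotogae": 0.4, "Proteobacteria": 1.1}

RT = 8.314e-3 * 353.0  # kJ/mol at ~80 °C, typical of a hot-spring outflow channel

# Metastable equilibrium: Boltzmann weights of the formation energies,
# normalized to give predicted relative abundances.
w = {k: math.exp(-v / RT) for k, v in dG.items()}
Z = sum(w.values())
abund = {k: v / Z for k, v in w.items()}
print(abund)  # lower energy demand -> higher predicted relative abundance
```

This reproduces only the inverse relationship between energy demand and abundance described in the abstract; the actual model additionally tunes hydrogen activity and other basis-species activities.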
The Inefficiency of the Stock Market Equilibrium under Moral Hazard
Calcagno, R.; Wagner, W.B.
2003-01-01
In this paper we study the constrained efficiency of a stock market equilibrium under moral hazard. We extend a standard general equilibrium framework (Magill and Quinzii (1999) and (2002)) to allow for a more general initial ownership distribution. We show that the market allocation is constrained
Financial Intermediation, Competition, and Risk : A General Equilibrium Exposition
Di Nicolo, G.; Lucchetta, M.
2010-01-01
We study a simple general equilibrium model in which investment in a risky technology is subject to moral hazard and banks can extract market power rents. We show that more bank competition results in lower economy-wide risk, lower bank capital ratios, more efficient production plans and
ALARA in diagnosis and therapy: for an equilibrium medically reasonable
International Nuclear Information System (INIS)
Aurengo, A.; Fraboulet, P.
1998-01-01
After a review of the differing conceptions of radiation protection in the medical field, it is shown how an optimization approach can be carried out efficiently, leading towards a medically acceptable equilibrium between therapeutic necessity and the protection of medical personnel. The radiation protection of patients falls outside this scope. (N.C.)
Institutions, Equilibria and Efficiency
DEFF Research Database (Denmark)
Competition and efficiency are at the core of economic theory. This volume collects papers of leading scholars, which extend the conventional general equilibrium model in important ways. Efficiency and price regulation are studied when markets are incomplete, and existence of equilibria in such settings is examined, as are equilibria in OLG, learning in OLG and in games, optimal pricing of derivative securities, and the impact of heterogeneity…
Explicit integration of extremely stiff reaction networks: partial equilibrium methods
International Nuclear Information System (INIS)
Guidry, M W; Hix, W R; Billings, J J
2013-01-01
In two preceding papers (Guidry et al 2013 Comput. Sci. Disc. 6 015001 and Guidry and Harris 2013 Comput. Sci. Disc. 6 015002), we have shown that when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper, we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that such explicit methods may offer alternatives to implicit integration of even extremely stiff systems and that these methods may permit integration of much larger networks than have been possible before in a number of fields. (paper)
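The explicit asymptotic approximation that this abstract builds on can be illustrated on a single linear rate equation; a minimal sketch, assuming a common form of the asymptotic update (the function name and test problem are mine, not from the papers):

```python
def asymptotic_step(y, Fplus, k, dt):
    # Explicit asymptotic update for dy/dt = F+ - k*y (production minus depletion):
    # y_{n+1} = (y_n + dt*F+) / (1 + dt*k), which stays stable even when dt >> 1/k.
    return (y + dt * Fplus) / (1.0 + dt * k)

# Stiff test problem: k = 1e6, F+ = 1e6, so the equilibrium is y_eq = F+/k = 1,
# integrated with a time step a million times larger than the 1/k timescale.
y, dt = 0.0, 1.0
for _ in range(10):
    y = asymptotic_step(y, 1e6, 1e6, dt)
print(abs(y - 1.0) < 1e-6)  # → True: relaxes to equilibrium without instability
```

A standard explicit (forward Euler) step would diverge violently at this step size; the failure mode the abstract analyzes arises when many such equations couple and fast equilibration timescales appear between them, which is what the partial equilibrium method addresses.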
Deviations from thermal equilibrium in plasmas
International Nuclear Information System (INIS)
Burm, K.T.A.L.
2004-01-01
A plasma system in local thermal equilibrium can usually be described with only two parameters. To describe deviations from equilibrium, two extra parameters are needed. However, it will be shown that deviations from temperature equilibrium and deviations from Saha equilibrium depend on one another. As a result, non-equilibrium plasmas can be described with three parameters. This reduction in parameter space greatly eases the effort of describing the plasma.
Non-equilibrium fluctuation-induced interactions
International Nuclear Information System (INIS)
Dean, David S
2012-01-01
We discuss non-equilibrium aspects of fluctuation-induced interactions. While the equilibrium behavior of such interactions has been extensively studied and is relatively well understood, the study of these interactions out of equilibrium is relatively new. We discuss recent results on the non-equilibrium behavior of systems whose dynamics is of the dissipative stochastic type and identify a number of outstanding problems concerning non-equilibrium fluctuation-induced interactions.
Florentina Xhelili Krasniqi; Rahmie Topxhiu; Donat Rexha
2016-01-01
Nobel Laureates, with their contributions to the development of the theory of general equilibrium, have enabled this theory to become one of the most important for theoretical and practical analysis of the overall economy and the efficient use of economic resources. The results of the research show that the contributions of Nobel Laureates belong to two main frameworks in the development of general equilibrium theory: one was the mathematical model of general equilibrium developed by J...
Understanding Thermal Equilibrium through Activities
Pathare, Shirish; Huli, Saurabhee; Nachane, Madhura; Ladage, Savita; Pradhan, Hemachandra
2015-01-01
Thermal equilibrium is a basic concept in thermodynamics. In India, this concept is generally introduced at the first year of undergraduate education in physics and chemistry. In our earlier studies (Pathare and Pradhan 2011 "Proc. episteme-4 Int. Conf. to Review Research on Science Technology and Mathematics Education" pp 169-72) we…
Equilibrium theory : A salient approach
Schalk, S.
1999-01-01
Whereas the neoclassical models in General Equilibrium Theory focus on the existence of separate commodities, this thesis regards 'bundles of trade' as the unit objects of exchange. Apart from commodities and commodity bundles in the neoclassical sense, the term `bundle of trade' includes, for
Essays in general equilibrium theory
Konovalov, A.
2001-01-01
The thesis focuses on various issues of general equilibrium theory and can approximately be divided into three parts. The first part of the thesis studies generalized equilibria in the Arrow-Debreu model in the situation where the strong survival assumption is not satisfied. Chapter four deals with
Financial equilibrium with career concerns
Directory of Open Access Journals (Sweden)
Amil Dasgupta
2006-03-01
What are the equilibrium features of a financial market where a sizeable proportion of traders face reputational concerns? This question is central to our understanding of financial markets, which are increasingly dominated by institutional investors. We construct a model of delegated portfolio management that captures key features of the US mutual fund industry and embed it in an asset pricing framework. We thus provide a formal model of financial equilibrium with career concerned agents. Fund managers differ in their ability to understand market fundamentals, and in every period investors choose a fund. In equilibrium, the presence of career concerns induces uninformed fund managers to churn, i.e., to engage in trading even when they face a negative expected return. Churners act as noise traders and enhance the level of trading volume. The equilibrium relationship between fund return and net fund flows displays a skewed shape that is consistent with stylized facts. The robustness of our core results is probed from several angles.
Equilibrium with arbitrary market structure
DEFF Research Database (Denmark)
Grodal, Birgit; Vind, Karl
2005-01-01
The complete market predicted by this theory is clearly unrealistic, and Radner [10] formulated and proved existence of equilibrium in a multiperiod model with incomplete markets. In this paper the Radner result is extended. Radner assumed a specific structure of markets, independence of preferences...
Nash equilibrium with lower probabilities
DEFF Research Database (Denmark)
Groes, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte
1998-01-01
We generalize the concept of Nash equilibrium in mixed strategies for strategic form games to allow for ambiguity in the players' expectations. In contrast to other contributions, we model ambiguity by means of so-called lower probability measures or belief functions, which makes it possible...
International Nuclear Information System (INIS)
Olson, K.W.; Haas, W.J. Jr.; Fassel, V.A.
1977-01-01
Two important aspects of the analytical performance of a newly developed ultrasonic nebulizer and a specially designed pneumatic nebulizer have been compared for use in inductively coupled plasma--atomic emission spectroscopy (ICP-AES). The ultrasonic nebulizer, when combined with a conventional aerosol desolvation apparatus, provided an order of magnitude or more improvement in simultaneous multielement detection limits as compared to those obtained when the pneumatic nebulizer was used either with or without desolvation. Application of a novel method for direct measurement of the overall efficiency of nebulization to the two systems showed that an approximately tenfold greater rate of sample delivery to the plasma torch was primarily responsible for the superior detection limits afforded by the ultrasonic nebulizer. A unique feature of the ultrasonic nebulizer described is the protection against chemical attack which is achieved by completely enclosing the transducer in an acoustically coupled borosilicate glass cylinder. Direct sample introduction, convenient sample change, and rapid cleanout are other important characteristics of the system which make it an attractive alternate to pneumatic nebulizer systems
Energy Technology Data Exchange (ETDEWEB)
Boulyga, Sergei F. [Institute of Inorganic Chemistry and Analytical Chemistry, Johannes Gutenberg-University Mainz, Duesbergweg 10-14, 55099 Mainz (Germany)]. E-mail: sergei.boulyga@univie.ac.at; Heumann, Klaus G. [Institute of Inorganic Chemistry and Analytical Chemistry, Johannes Gutenberg-University Mainz, Duesbergweg 10-14, 55099 Mainz (Germany)
2006-07-01
A method by inductively coupled plasma mass spectrometry (ICP-MS) was developed which allows the measurement of ²³⁶U at concentrations down to 3 × 10⁻¹⁴ g g⁻¹ and extremely low ²³⁶U/²³⁸U isotope ratios, down to 10⁻⁷, in soil samples. By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5000 counts fg⁻¹ uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions UH⁺/U⁺ down to a level of 10⁻⁶. An abundance sensitivity of 3 × 10⁻⁷ was observed for ²³⁶U/²³⁸U isotope ratio measurements at mass resolution 4000. The detection limit for ²³⁶U and the lowest detectable ²³⁶U/²³⁸U isotope ratio were improved by more than two orders of magnitude compared with the corresponding values obtained by alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the ²³⁶U/²³⁸U isotope ratio is a much more sensitive and accurate marker of environmental contamination by spent uranium than the ²³⁵U/²³⁸U isotope ratio. The ICP-MS technique allowed for the first time the detection of irradiated uranium in soil samples even at distances of more than 200 km north of the Chernobyl NPP (Mogilev region). The concentration of ²³⁶U in the upper 0-10 cm soil layers varied from 2 × 10⁻⁹ g g⁻¹ within radioactive spots close to the Chernobyl NPP to 3 × 10⁻¹³ g g⁻¹ at a sampling site located >200 km from Chernobyl.
Barache, Umesh B; Shaikh, Abdul B; Lokhande, Tukaram N; Kamble, Ganesh S; Anuse, Mansing A; Gaikwad, Shashikant H
2018-01-15
The aim of the present work is to develop an efficient, simple, selective and cost-effective method for the extractive spectrophotometric determination of copper(II) using the Schiff base 4-(4'-chlorobenzylideneimino)-3-methyl-5-mercapto-1,2,4-triazole [CBIMMT]. This chromogenic reagent forms a yellow coloured complex with copper(II) in acetate buffer at pH 4.2. The copper(II) complex with the ligand is instantly extracted into chloroform and shows a maximum absorbance at 414 nm which remains stable for >48 h. The composition of the extracted complex is found to be 1:2 [copper(II):reagent], which was ascertained using Job's method of continuous variation, the mole ratio method and the slope ratio method. Under optimal conditions, the copper(II) complex in chloroform adheres to Beer's law up to 17.5 μg mL⁻¹ of copper(II). The optimum concentration range obtained from Ringbom's plot is from 5 μg mL⁻¹ to 17.5 μg mL⁻¹. The molar absorptivity, Sandell's sensitivity and enrichment factor of the extracted copper(II) chelate are 0.33813 × 10⁴ L mol⁻¹ cm⁻¹, 0.01996 μg cm⁻² and 2.49, respectively. In the extraction of copper(II), several affecting factors, including the solution pH, ligand concentration, equilibrium time and the effect of foreign ions, are optimized. The interfering effects of various cations and anions were also studied, and the use of masking agents enhances the selectivity of the method. The chromogenic sulphur-containing reagent, 4-(4'-chlorobenzylideneimino)-3-methyl-5-mercapto-1,2,4-triazole, has been synthesized in a single step with high purity and yield. The synthesized reagent has been applied for the first time to the determination of copper(II). The reagent instantly forms a stable chelate with copper(II) in buffer medium, which is quantitatively extracted into chloroform within a minute. The method is successfully applied to the determination of copper(II) in various synthetic mixtures, complexes, fertilizers, and environmental samples such as food samples, leafy
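The figures of merit quoted above follow from Beer's law; a minimal sketch of the arithmetic, where the absorbance and concentration values are hypothetical readings chosen for illustration (only the copper molar mass and the S = M/ε convention for Sandell's sensitivity are standard):

```python
# Beer's law: A = eps * c * l, with c in mol/L and path length l in cm.
def molar_absorptivity(A, c_mol_per_L, l_cm=1.0):
    return A / (c_mol_per_L * l_cm)

def sandell_sensitivity(molar_mass_g_mol, eps):
    # Sandell's sensitivity (ug/cm^2): analyte surface density giving A = 0.001,
    # which with these units reduces to molar mass / molar absorptivity.
    return molar_mass_g_mol / eps

M_CU = 63.55  # g/mol, molar mass of copper
eps = molar_absorptivity(A=0.53, c_mol_per_L=1.57e-4)  # hypothetical reading
print(round(sandell_sensitivity(M_CU, eps), 4))  # → 0.0188 ug/cm^2
```

The result is of the same order as the 0.01996 μg cm⁻² reported in the abstract; the small difference simply reflects the invented absorbance/concentration pair.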
Equilibrium moisture content of OSB panels made from Eucalyptus urophylla clones
Directory of Open Access Journals (Sweden)
Lourival Marin Mendes
2014-12-01
This work aimed to verify the efficiency of Nelson's equation in estimating the equilibrium moisture content of this material, to provide a model for determining panel moisture content as a function of air relative humidity, and to evaluate the effect of some processing variables on the equilibrium moisture content of OSB (Oriented Strand Board) panels. The 25 × 25 mm samples were placed in an acclimation room, kept at 30 °C, and had their mass determined after stabilization at relative air humidities of 40, 50, 60, 70, 80 and 90%. From the results it was possible to conclude that Nelson's equation tended to underestimate the moisture values of the panel; that the polynomial model adjusted as a function of air relative humidity showed great potential for use; and that, although different behavior may be observed for the isotherms of the treatments, there was no significant effect of panel density, wood basic density, mat type or pressing temperature on mean equilibrium moisture content in desorption 1, adsorption or desorption 2.
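The polynomial model described above can be sketched with an ordinary least-squares fit; the relative humidities are the ones listed in the abstract, but the equilibrium moisture contents below are invented illustrative values, not the study's data:

```python
import numpy as np

# Relative humidities used in the acclimation room (from the abstract) and
# hypothetical equilibrium moisture contents (%) -- illustrative values only.
rh = np.array([40, 50, 60, 70, 80, 90], dtype=float)
emc = np.array([6.1, 7.3, 8.8, 10.6, 13.0, 16.4])

# Fit a second-degree polynomial EMC(RH), in the spirit of the abstract's
# polynomial model based on air relative humidity.
coeffs = np.polyfit(rh, emc, deg=2)
predict = np.poly1d(coeffs)

print(round(float(predict(65)), 2))  # interpolated EMC at 65% RH
```

Any smooth monotone model would serve here; the point is only that a low-degree polynomial in RH is easy to fit and evaluate between the measured humidity levels.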
International Nuclear Information System (INIS)
Yang, Xin-an; Chi, Miao-bin; Wang, Qing-qing; Zhang, Wang-bing
2015-01-01
Highlights: • We develop a modified chemical vapor generation method coupled with AFS for the determination of cadmium. • The response of Cd could be increased at least four-fold compared to the conventional thiourea and Co(II) system. • A simple mixing-sequence experiment is designed to study the reaction mechanism. • The interference of transition metal ions can be easily eliminated by adding DDTC. • The method is successfully applied to seafood and rice samples. - Abstract: A vapor generation procedure to determine Cd by atomic fluorescence spectrometry (AFS) has been established. Volatile species of Cd are generated by the reaction of an acidified sample containing Fe(II) and L-cysteine (Cys) with sodium tetrahydroborate (NaBH₄). The presence of 5 mg L⁻¹ Fe(II) and 0.05% m/v Cys improves the efficiency of Cd vapor generation substantially, about four-fold compared with the conventional thiourea and Co(II) system. Three experiments with different mixing sequences and reaction times are designed to study the reaction mechanism. The results document that the stability of the Cd(II)–Cys complexes is better than that of the Cys–THB complexes (THB means NaBH₄), while the Cys–THB complexes contribute more to improving the Cd vapor generation efficiency than the Cd(II)–Cys complexes. Meanwhile, the addition of Fe(II) can catalyze the Cd vapor generation. Under the optimized conditions, the detection limit of Cd is 0.012 μg L⁻¹; relative standard deviations vary between 0.8% and 5.5% for replicate measurements of the standard solution. In the presence of 0.01% DDTC, Cu(II), Pb(II) and Zn(II) have no significant influence up to 5 mg L⁻¹, 10 mg L⁻¹ and 10 mg L⁻¹, respectively. The accuracy of the method is verified through analysis of certified reference materials, and the proposed method has been applied to the determination of Cd in seafood and rice samples.
Xie, Lijun; Liu, Shuqin; Han, Zhubing; Jiang, Ruifen; Zhu, Fang; Xu, Weiqin; Su, Chengyong; Ouyang, Gangfeng
2017-09-01
The fiber coating is the key part of the solid-phase microextraction (SPME) technique, and it determines the sensitivity, selectivity, and repeatability of the analytical method. In this work, amine (NH₂)-functionalized Material of Institute Lavoisier (MIL)-53(Al) nanoparticles were successfully synthesized, characterized, and applied as an SPME fiber coating for efficient sample pretreatment owing to their unique structures and excellent adsorption properties. Under optimized conditions, the NH₂-MIL-53(Al)-coated fiber showed good precision, low limits of detection (LODs) [0.025-0.83 ng L⁻¹ for synthetic musks (SMs) and 0.051-0.97 ng L⁻¹ for organochlorine pesticides (OCPs)], and good linearity. Experimental results showed that the NH₂-MIL-53(Al) SPME coating was solvent resistant and thermostable. In addition, the extraction efficiencies of the NH₂-MIL-53(Al) coating for SMs and OCPs were higher than those of commercially available SPME fiber coatings such as polydimethylsiloxane, polydimethylsiloxane-divinylbenzene, and polyacrylate. The reasons may be that the analytes are adsorbed on NH₂-MIL-53(Al) primarily through π-π interactions, electron donor-electron acceptor interactions, and hydrogen bonds between the analytes and the organic linkers of the material. Direct immersion (DI) SPME-gas chromatography-mass spectrometry methods based on NH₂-MIL-53(Al) were successfully applied for the analysis of tap and river water samples. The recoveries were 80.3-115% for SMs and 77.4-117% for OCPs. These results indicate that the NH₂-MIL-53(Al) coating may be a promising alternative SPME coating for the enrichment of SMs and OCPs.
On generalized operator quasi-equilibrium problems
Kum, Sangho; Kim, Won Kyu
2008-09-01
In this paper, we introduce the generalized operator equilibrium problem and the generalized operator quasi-equilibrium problem, which extend the operator equilibrium problem due to Kazmi and Raouf [K.R. Kazmi, A. Raouf, A class of operator equilibrium problems, J. Math. Anal. Appl. 308 (2005) 554-564] to multi-valued and quasi-equilibrium settings. Using a Fan-Browder type fixed point theorem in [S. Park, Foundations of the KKM theory via coincidences of composites of upper semicontinuous maps, J. Korean Math. Soc. 31 (1994) 493-519] and an existence theorem of equilibrium for 1-person games in [X.-P. Ding, W.K. Kim, K.-K. Tan, Equilibria of non-compact generalized games with L*-majorized preferences, J. Math. Anal. Appl. 164 (1992) 508-517] as basic tools, we prove new existence theorems for the generalized operator equilibrium problem and the generalized operator quasi-equilibrium problem, which include operator equilibrium problems as special cases.
Equilibrium studies of helical axis stellarators
International Nuclear Information System (INIS)
Hender, T.C.; Carreras, B.A.; Garcia, L.; Harris, J.H.; Rome, J.A.; Cantrell, J.L.; Lynch, V.E.
1984-01-01
The equilibrium properties of helical axis stellarators are studied with a 3-D equilibrium code and with an average method (2-D). The helical axis ATF is shown to have a toroidally dominated equilibrium shift and good equilibria up to at least 10% peak beta. Low aspect ratio heliacs, with relatively large toroidal shifts, are shown to have low equilibrium beta limits (approx. 5%). Increasing the aspect ratio and the number of field periods proportionally is found to improve the equilibrium beta limit. Alternatively, increasing the number of field periods at fixed aspect ratio, which lowers the toroidal shift, also improves the equilibrium beta limit.
International Nuclear Information System (INIS)
Nasri, F.; Hatami, T.
2012-01-01
Interest in supercritical fluid extraction (SFE) is increasing throughout many scientific and industrial fields. The common solvent for use in SFE is carbon dioxide. However, pure carbon dioxide frequently fails to efficiently extract the essential oil from a sample matrix, and modifier fluids such as methanol should be used to enhance the extraction yield. A more efficient use of SFE requires quantitative prediction of the phase equilibrium of this binary system, carbon dioxide-methanol. The purpose of the current research is to model the carbon dioxide-methanol system using an artificial neural network (ANN). The results of the ANN modeling have been compared with experimental data as well as with thermodynamic equations of state. The comparison shows that the ANN model has a higher accuracy than the thermodynamic models. (author)
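The ANN approach described above can be sketched with a minimal one-hidden-layer network fitted by gradient descent. The synthetic bubble-pressure curve, network size, and learning rate below are illustrative assumptions, not the authors' data or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bubble-pressure data for CO2-methanol at fixed temperature:
# a smooth synthetic curve plus noise stands in for the measurements.
x = np.linspace(0.05, 0.95, 40).reshape(-1, 1)   # CO2 mole fraction
y = 8.0 * x / (1.0 + 2.5 * x) + 0.02 * rng.standard_normal(x.shape)

# One-hidden-layer feedforward network (tanh), trained by full-batch
# gradient descent on the mean squared error.
W1 = 0.5 * rng.standard_normal((1, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

mse0 = float(np.mean((forward(x)[1] - y) ** 2))  # error before training

for _ in range(3000):
    h, yhat = forward(x)
    err = yhat - y
    dh = (err @ W2.T) * (1.0 - h ** 2)           # backprop through tanh
    W2 -= lr * (h.T @ err) / len(x); b2 -= lr * err.mean(0)
    W1 -= lr * (x.T @ dh) / len(x); b1 -= lr * dh.mean(0)

mse1 = float(np.mean((forward(x)[1] - y) ** 2))  # error after training
print(mse0, "->", mse1)
```

In the paper the trained network would be compared against equation-of-state predictions on held-out experimental points; the sketch only shows the fitting step.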
De Grazia, Selenia; Gionfriddo, Emanuela; Pawliszyn, Janusz
2017-05-15
The current work presents the optimization of a protocol enabling direct extraction of avocado samples by a new solid-phase microextraction (SPME) matrix-compatible coating. In order to further extend the coating lifetime, pre-desorption and post-desorption washing steps were optimized for solvent type, time, and degree of agitation employed. Using the optimized conditions, lifetime profiles of the coating were obtained for the extraction of a group of analytes bearing different physical-chemical properties. Over 80 successive extractions were carried out to establish coating efficiency, using the commercial 65 µm PDMS/DVB coating for comparison with the PDMS/DVB/PDMS coating. The PDMS/DVB coating was more prone to irreversible matrix attachment on its surface, with a consequent reduction of its extractive performance after 80 consecutive extractions. Conversely, the PDMS/DVB/PDMS coating showed enhanced inertness towards matrix fouling due to its outer smooth PDMS layer. This work represents the first step towards the development of robust SPME methods for the quantification of contaminants in avocado as well as other fatty matrices, with minimal sample pre-treatment prior to extraction. In addition, an evaluation of matrix-component attachment on the coating surface, and of related artifacts created by desorption of the coating at high temperatures in the GC injector port, has been performed by GCxGC-ToF/MS. Copyright © 2017 Elsevier B.V. All rights reserved.
Students’ misconceptions on solubility equilibrium
Setiowati, H.; Utomo, S. B.; Ashadi
2018-05-01
This study investigated students' misconceptions of solubility equilibrium. The participants consisted of 164 students in the science class of the second year of high school. The instrument used was a two-tier diagnostic test consisting of 15 items. Responses were marked and coded into four categories: understanding, misconception, partial understanding without misconception, and not understanding. Semi-structured interviews were carried out with 45 students, selected according to written responses that reflected different perspectives, to obtain a more elaborated source of data. Data collected from the multiple methods were analyzed qualitatively and quantitatively. The data analysis showed that the students held misconceptions in all areas of solubility equilibrium, most frequently in the relation between solubility and the solubility product, the common-ion effect, the influence of pH on solubility, and the concept of precipitation.
Simulations of NMR pulse sequences during equilibrium and non-equilibrium chemical exchange
International Nuclear Information System (INIS)
Helgstrand, Magnus; Haerd, Torleif; Allard, Peter
2000-01-01
The McConnell equations combine the differential equations for a simple two-state chemical exchange process with the Bloch differential equations for a classical description of the behavior of nuclear spins in a magnetic field. This equation system provides a useful starting point for the analysis of slow, intermediate and fast chemical exchange studied using a variety of NMR experiments. The McConnell equations are in the mathematical form of an inhomogeneous system of first-order differential equations. Here we rewrite the McConnell equations in a homogeneous form in order to facilitate fast and simple numerical calculation of the solution to the equation system. The McConnell equations can only treat equilibrium chemical exchange. We therefore also present a homogeneous equation system that can handle both equilibrium and non-equilibrium chemical processes correctly, as long as the kinetics is of first order. Finally, the same method of rewriting the inhomogeneous form of the McConnell equations into a homogeneous form is applied to a quantum mechanical treatment of a spin system in chemical exchange. In order to illustrate the homogeneous McConnell equations, we have simulated pulse sequences useful for measuring exchange rates in slow, intermediate and fast chemical exchange processes. A stopped-flow NMR experiment was simulated using the equations for non-equilibrium chemical exchange. The quantum mechanical treatment was tested by the simulation of a sensitivity-enhanced 15N-HSQC with pulsed field gradients during slow chemical exchange and by the simulation of the transfer efficiency of a two-dimensional heteronuclear cross-polarization based experiment as a function of both chemical shift difference and exchange rate constants.
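The homogeneous rewrite described above can be illustrated numerically: an inhomogeneous linear system dM/dt = AM + b becomes homogeneous by augmenting the state with a constant 1, so d/dt [M; 1] = H [M; 1] with H = [[A, b], [0, 0]]. The matrix and source term below are arbitrary illustrative values, not actual Bloch-McConnell parameters:

```python
import numpy as np

# Schematic stand-in for the Bloch-McConnell system: A collects
# relaxation/exchange rates, b the thermal-equilibrium source term.
A = np.array([[-2.0, 0.5],
              [1.0, -1.5]])
b = np.array([1.0, 0.3])

# Homogeneous rewrite: H acts on the augmented state [M; 1].
H = np.zeros((3, 3))
H[:2, :2] = A
H[:2, 2] = b

M = np.array([0.2, 0.1])          # some non-equilibrium start
v = np.array([0.2, 0.1, 1.0])     # same start, augmented with a constant 1

dt, steps = 1e-3, 5000
for _ in range(steps):
    M = M + dt * (A @ M + b)      # inhomogeneous form
    v = v + dt * (H @ v)          # homogeneous form: identical trajectory

# Both forms relax to the fixed point M* solving A M* + b = 0.
Mstar = np.linalg.solve(A, -b)
print(M, v[:2], Mstar)
```

The practical payoff of the homogeneous form is that the solution over a pulse-sequence delay is a single matrix exponential of H, rather than a particular-plus-homogeneous decomposition.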
Rajabi, Maryam; Arghavani-Beydokhti, Somayeh; Barfi, Behruz; Asghari, Alireza
2017-03-08
In the present work, a novel nanosorbent, namely a layered double hydroxide with 4-amino-5-hydroxyl-2,7-naphthalenedisulfonic acid monosodium salt as the interlayer anion (Mg-Al-AHNDA-LDH), was synthesized and applied as a dissolvable nanosorbent in a centrifugeless ultrasound-enhanced air-agitated dispersive solid-phase extraction (USE-AA-D-SPE) method. This method was used for the separation and preconcentration of the metal ions Cd2+, Cr6+, Pb2+, Co2+, and Ni2+ prior to their determination by micro-sampling flame atomic absorption spectrometry (MS-FAAS). The most interesting aspect of this nanosorbent is its immediate dissolvability at pH values lower than 4. This capability eliminates the elution step, leading to a great improvement in the extraction efficiency and a decrease in the extraction time. Also, the use of a syringe nanofilter in this method eliminates the need for the centrifugation step, which is time-consuming and essentially forces the analysis to be off-line. Several parameters governing the extraction efficiency, including the sample solution pH, amount of nanosorbent, eluent condition, number of air-agitation cycles, and sonication time, were investigated and optimized. Under the optimized conditions, good linear dynamic ranges of 2-70, 6-360, 7-725, 7-370, and 8-450 ng mL-1 for the Cd2+, Cr6+, Pb2+, Co2+, and Ni2+ ions, respectively, with coefficients of determination (R2) higher than 0.997, were obtained. The limits of detection (LODs) were found to be 0.6, 1.7, 2.0, 2.1, and 2.4 ng mL-1 for the Cd2+, Cr6+, Pb2+, Co2+, and Ni2+ ions, respectively. The intra-day and inter-day precisions (percent relative standard deviations (%RSDs), n = 5) were below 7.8%. The proposed method was also successfully applied to the extraction and determination of the target ions in different biological fluid and tap water samples. Copyright © 2017 Elsevier B.V. All rights reserved.
An introduction to equilibrium thermodynamics
Morrill, Bernard; Hartnett, James P; Hughes, William F
1973-01-01
An Introduction to Equilibrium Thermodynamics discusses classical thermodynamics and irreversible thermodynamics. It introduces the laws of thermodynamics and the connection between statistical concepts and observable macroscopic properties of a thermodynamic system. Chapter 1 discusses the first law of thermodynamics while Chapters 2 through 4 deal with statistical concepts. The succeeding chapters describe the link between entropy and the reversible heat process concept of entropy; the second law of thermodynamics; Legendre transformations and Jacobian algebra. Finally, Chapter 10 provides a
Money Inventories in Search Equilibrium
Berentsen, Aleksander
1998-01-01
The paper relaxes the one unit storage capacity imposed in the basic search-theoretic model of fiat money with indivisible real commodities and indivisible money. Agents can accumulate as much money as they want. It characterizes the stationary distributions of money and shows that for reasonable parameter values (e.g. production cost, discounting, degree of specialization) a monetary equilibrium exists. There are multiple stationary distributions of a given amount of money, which differ in t...
Vázquez, P Parrilla; Lozano, A; Uclés, S; Ramos, M M Gómez; Fernández-Alba, A R
2015-12-24
Several clean-up methods were evaluated for 253 pesticides in pollen samples, focusing on efficient clean-up and the highest number of pesticides satisfying the recovery and precision criteria. These were: (a) modified QuEChERS using dSPE with PSA+C18; (b) freeze-out prior to QuEChERS using dSPE with PSA+C18; (c) freeze-out prior to QuEChERS using dSPE with PSA+C18+Z-Sep; and (d) freeze-out followed by QuEChERS using dSPE with PSA+C18 and SPE with Z-Sep. Determinations were made using LC-MS/MS and GC-MS/MS. The modified QuEChERS protocol applying a freeze-out followed by dSPE with PSA+C18 and SPE clean-up with Z-Sep was selected because it provided the highest number of pesticides with mean recoveries in the 70-120% range as well as relative standard deviations (RSDs) typically below 20% (12.2% on average), and because it ensured much better removal of co-extracted matrix compounds, which is of paramount importance in routine analysis. Limits of quantification as low as 5 μg kg(-1) were obtained for the majority of the pesticides. The proposed methodology was applied to the analysis of 41 bee pollen samples from different areas in Spain. Pesticides considered potentially toxic to bees (DL50bee) were detected in some samples at concentrations up to 72.7 μg kg(-1), which could negatively affect honeybee health. Copyright © 2015 Elsevier B.V. All rights reserved.
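The selection criteria quoted above (mean recovery within 70-120% and RSD below 20%) can be expressed as a small filter. The pesticide names and validation numbers below are invented for illustration, not values from the study:

```python
# Hypothetical per-pesticide validation results: name -> (mean recovery %, RSD %).
results = {
    "imidacloprid":    (94.0, 8.1),
    "chlorpyrifos":    (68.2, 9.5),    # fails: recovery below 70%
    "coumaphos":       (105.3, 12.4),
    "tau-fluvalinate": (88.7, 23.0),   # fails: RSD above 20%
}

def passes(recovery_pct, rsd_pct):
    """Acceptance criteria used to compare the clean-up protocols."""
    return 70.0 <= recovery_pct <= 120.0 and rsd_pct < 20.0

accepted = sorted(name for name, (rec, rsd) in results.items()
                  if passes(rec, rsd))
print(accepted)  # → ['coumaphos', 'imidacloprid']
```

In the study this count, taken over all 253 pesticides, is what ranked protocol (d) above the alternatives.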
Quantum dynamical semigroups and approach to equilibrium
International Nuclear Information System (INIS)
Frigerio, A.
1977-01-01
For a quantum dynamical semigroup possessing a faithful normal stationary state, some conditions are discussed, which ensure the uniqueness of the equilibrium state and/or the approach to equilibrium for arbitrary initial condition. (Auth.)
Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration
Gasda, S. E.; Nordbotten, J. M.; Celia, M. A.
2009-01-01
The vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid
The geometry of finite equilibrium sets
DEFF Research Database (Denmark)
Balasko, Yves; Tvede, Mich
2009-01-01
We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is pathconnected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely noncollinear.
Accelerating Multiagent Reinforcement Learning by Equilibrium Transfer.
Hu, Yujing; Gao, Yang; An, Bo
2015-07-01
An important approach in multiagent reinforcement learning (MARL) is equilibrium-based MARL, which adopts equilibrium solution concepts in game theory and requires agents to play equilibrium strategies at each state. However, most existing equilibrium-based MARL algorithms cannot scale due to a large number of computationally expensive equilibrium computations (e.g., computing Nash equilibria is PPAD-hard) during learning. For the first time, this paper finds that during the learning process of equilibrium-based MARL, the one-shot games corresponding to each state's successive visits often have the same or similar equilibria (for some states more than 90% of games corresponding to successive visits have similar equilibria). Inspired by this observation, this paper proposes to use equilibrium transfer to accelerate equilibrium-based MARL. The key idea of equilibrium transfer is to reuse previously computed equilibria when each agent has a small incentive to deviate. By introducing transfer loss and transfer condition, a novel framework called equilibrium transfer-based MARL is proposed. We prove that although equilibrium transfer brings transfer loss, equilibrium-based MARL algorithms can still converge to an equilibrium policy under certain assumptions. Experimental results in widely used benchmarks (e.g., grid world game, soccer game, and wall game) show that the proposed framework: 1) not only significantly accelerates equilibrium-based MARL (up to 96.7% reduction in learning time), but also achieves higher average rewards than algorithms without equilibrium transfer and 2) scales significantly better than algorithms without equilibrium transfer when the state/action space grows and the number of agents increases.
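The transfer condition described above can be sketched as an ε-equilibrium check: a previously computed joint action is reused in a new state's one-shot game whenever no agent can gain more than ε by deviating unilaterally. The two-player, pure-strategy payoff matrices below are illustrative assumptions, not games from the paper's benchmarks:

```python
import numpy as np

def max_gain_by_deviation(R1, R2, a1, a2):
    """Largest unilateral payoff improvement either player can obtain
    from the joint action (a1, a2) in the game (R1, R2)."""
    gain1 = R1[:, a2].max() - R1[a1, a2]   # player 1 deviates, a2 fixed
    gain2 = R2[a1, :].max() - R2[a1, a2]   # player 2 deviates, a1 fixed
    return max(gain1, gain2)

def can_transfer(R1, R2, a1, a2, eps):
    """Transfer condition: reuse (a1, a2) if it is an eps-equilibrium."""
    return max_gain_by_deviation(R1, R2, a1, a2) <= eps

# New state's game, slightly perturbed from one where (0, 0) was a Nash
# equilibrium; player 1 can now gain 0.08 by deviating.
R1_new = np.array([[1.00, 0.0], [1.08, 0.0]])
R2_new = np.array([[1.00, 0.0], [0.00, 0.0]])

print(can_transfer(R1_new, R2_new, 0, 0, eps=0.1))   # reuse allowed
print(can_transfer(R1_new, R2_new, 0, 0, eps=0.05))  # recompute instead
```

The deviation gain is exactly the transfer loss the framework bounds; when it exceeds ε the algorithm falls back to a fresh (expensive) equilibrium computation.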
The Geometry of Finite Equilibrium Datasets
DEFF Research Database (Denmark)
Balasko, Yves; Tvede, Mich
We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is pathconnected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely noncollinear.
Optimal Dispatching of Active Distribution Networks Based on Load Equilibrium
Directory of Open Access Journals (Sweden)
Xiao Han
2017-12-01
This paper focuses on the optimal intraday scheduling of a distribution system that includes renewable energy (RE) generation, energy storage systems (ESSs), and thermostatically controlled loads (TCLs). This system also provides time-of-use pricing to customers. Unlike previous studies, this study attempts to examine how to optimize the allocation of electric energy and to improve the equilibrium of the load curve. Accordingly, we propose a concept of load equilibrium entropy to quantify the overall equilibrium of the load curve and reflect the allocation optimization of electric energy. Based on this entropy, we built a novel multi-objective optimal dispatching model to minimize the operational cost and maximize the load curve equilibrium. To aggregate TCLs into the optimization objective, we introduced the concept of a virtual power plant (VPP) and proposed a calculation method for VPP operating characteristics based on the equivalent thermal parameter model and the state-queue control method. The Particle Swarm Optimization algorithm was employed to solve the optimization problems. The simulation results illustrated that the proposed dispatching model can achieve cost reductions of system operations, peak load curtailment, and efficiency improvements, and also verified that the load equilibrium entropy can be used as a novel index of load characteristics.
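One plausible reading of the load equilibrium entropy is the Shannon entropy of the normalized load curve, which is maximized by a perfectly flat curve; this is an assumed formulation for illustration, and the paper's exact definition may differ:

```python
import math

def load_entropy(load):
    """Shannon entropy of a load curve normalized to a probability vector.
    A flat curve over T periods attains the maximum value log(T)."""
    total = sum(load)
    p = [x / total for x in load]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

flat = [100.0] * 24                  # perfectly balanced hourly curve
peaky = [20.0] * 23 + [2000.0]       # strong single-hour peak

print(load_entropy(flat))   # equals log(24) for the flat curve
print(load_entropy(peaky))  # strictly smaller for the peaky curve
```

Maximizing such an entropy alongside minimizing cost pushes the dispatch toward valley filling and peak shaving, which is the behavior the simulations report.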
Open problems in non-equilibrium physics
International Nuclear Information System (INIS)
Kusnezov, D.
1997-01-01
The report contains viewgraphs on the following: approaches to non-equilibrium statistical mechanics; classical and quantum processes in chaotic environments; classical fields in non-equilibrium situations: real time dynamics at finite temperature; and phase transitions in non-equilibrium conditions
The concept of equilibrium in organization theory
Gazendam, H.W.M.
1998-01-01
Many organization theories consist of an interpretation frame and an idea about the ideal equilibrium state. This article explains how the equilibrium concept is used in four organization theories: the theories of Fayol, Mintzberg, Morgan, and Volberda. Equilibrium can be defined as balance, fit or
International Nuclear Information System (INIS)
Nagel, Armin Michael
2009-01-01
A 3D radial k-space acquisition technique with homogeneous distribution of the sampling density (DA-3D-RAD) is presented. This technique enables short echo times (TE) for 23Na-MRI and provides a high SNR efficiency. The gradients of the DA-3D-RAD sequence are designed such that the average sampling density in each spherical shell of k-space is constant. The DA-3D-RAD sequence provides 34% more SNR than a conventional 3D radial sequence (3D-RAD) if T2*-decay is neglected. This SNR gain is enhanced if T2*-decay is present, so a 1.5- to 1.8-fold higher SNR is measured in brain tissue with the DA-3D-RAD sequence. Simulations and experimental measurements show that the DA-3D-RAD sequence yields a better resolution in the presence of T2*-decay and fewer image artefacts when B0-inhomogeneities exist. Using the developed sequence, T1-, T2*- and inversion-recovery 23Na-image contrasts were acquired for several organs and 23Na relaxation times were measured (brain tissue: T1 = 29.0±0.3 ms; T2s* ∼ 4 ms; T2l* ∼ 31 ms; cerebrospinal fluid: T1 = 58.1±0.6 ms; T2* = 55±3 ms (B0 = 3 T)). T1 and T2* relaxation times of cerebrospinal fluid are independent of the selected magnetic field strength (B0 = 3 T / 7 T), whereas the relaxation times of brain tissue increase with field strength. Furthermore, 23Na signals of oedemata were suppressed in patients and thus signals from different tissue compartments were selectively measured. (orig.)
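The constant average sampling density per spherical shell can be checked with a short sketch: if a radial spoke spends equal readout time per unit k-space volume, its radius must grow as k(t) ∝ t^(1/3) in the density-adapted part (so the gradient falls as t^(-2/3)). The numbers below are illustrative, not the sequence's actual gradient parameters:

```python
import numpy as np

t = np.linspace(1e-3, 1.0, 10000)   # normalized readout time samples
k = t ** (1.0 / 3.0)                # density-adapted spoke radius, k(1) = 1

# Count samples landing in each radial shell and divide by shell volume
# (up to the common factor 4*pi/3): the density should come out constant.
edges = np.linspace(0.1, 1.0, 10)   # radial shell boundaries
density = []
for lo, hi in zip(edges[:-1], edges[1:]):
    n = int(((k >= lo) & (k < hi)).sum())
    density.append(n / (hi ** 3 - lo ** 3))
print(density)
```

A conventional 3D radial readout (k ∝ t) would instead pile samples into the inner shells, wasting SNR at low spatial frequencies, which is the imbalance the DA-3D-RAD gradient design removes.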
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hutchison, Janine R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Deatherage Kaiser, Brooke L [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sydor, Michael A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Barrett, Christopher A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-03-31
The performance of a macrofoam-swab sampling method was evaluated using Bacillus anthracis Sterne (BAS) and Bacillus atrophaeus Nakamura (BG) spores applied at nine low target amounts (2-500 spores) to positive-control plates and test coupons (2 in. × 2 in.) of four surface materials (glass, stainless steel, vinyl tile, and plastic). Test results from cultured samples were used to evaluate the effects of surrogate, surface concentration, and surface material on recovery efficiency (RE), false negative rate (FNR), and limit of detection. For RE, surrogate and surface material had statistically significant effects, but concentration did not. Mean REs were the lowest for vinyl tile (50.8% with BAS, 40.2% with BG) and the highest for glass (92.8% with BAS, 71.4% with BG). FNR values ranged from 0 to 0.833 for BAS and 0 to 0.806 for BG, with values increasing as concentration decreased in the range tested (0.078 to 19.375 CFU/cm^{2}, where CFU denotes ‘colony forming units’). Surface material also had a statistically significant effect. An FNR-concentration curve was fit for each combination of surrogate and surface material. For both surrogates, the FNR curves tended to be the lowest for glass and highest for vinyl tile. The FNR curves for BG tended to be higher than for BAS at lower concentrations, especially for glass. Results using a modified Rapid Viability-Polymerase Chain Reaction (mRV-PCR) analysis method were also obtained. The mRV-PCR results and comparisons to the culture results will be discussed in a subsequent report.
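A simple mechanistic model illustrates why FNR rises as concentration falls: if spores land on the coupon according to a Poisson law and each is recovered independently with probability RE, a false negative occurs when zero spores are recovered. This Poisson-capture form is an assumption for illustration, not the curve actually fitted in the report:

```python
import math

def fnr(C, RE=0.71, A=25.8):
    """False negative rate under a Poisson-capture model: FNR = exp(-RE*A*C).

    C  -- surface concentration in CFU/cm^2
    RE -- per-spore recovery probability (0.71 = mean RE for BG on glass
          quoted in the abstract, used here as an example value)
    A  -- coupon area in cm^2 (2 in. x 2 in. is about 25.8 cm^2)
    """
    return math.exp(-RE * A * C)

for C in (0.078, 0.5, 2.0):
    print(C, round(fnr(C), 4))
```

Such a model reproduces the qualitative findings: FNR decays rapidly with concentration, and surfaces with lower RE (e.g., vinyl tile) have uniformly higher FNR curves.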
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hutchison, Janine R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kaiser, Brooke L. D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sydor, Michael A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Barrett, Christopher A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2016-06-16
The performance of a macrofoam-swab sampling method was evaluated using Bacillus anthracis Sterne (BAS) and Bacillus atrophaeus Nakamura (BG) spores applied at nine low target amounts (2-500 spores) to positive-control plates and test coupons (2 in × 2 in) of four surface materials (glass, stainless steel, vinyl tile, and plastic). Test results from cultured samples were used to evaluate the effects of surrogate, surface concentration, and surface material on recovery efficiency (RE), false negative rate (FNR), and limit of detection. For RE, surrogate and surface material had statistically significant effects, but concentration did not. Mean REs were the lowest for vinyl tile (50.8% with BAS, 40.2% with BG) and the highest for glass (92.8% with BAS, 71.4% with BG). FNR values ranged from 0 to 0.833 for BAS and 0 to 0.806 for BG, with values increasing as concentration decreased in the range tested (0.078 to 19.375 CFU/cm2, where CFU denotes ‘colony forming units’). Surface material also had a statistically significant effect. An FNR-concentration curve was fit for each combination of surrogate and surface material. For both surrogates, the FNR curves tended to be the lowest for glass and highest for vinyl tile. The FNR curves for BG tended to be higher than for BAS at lower concentrations, especially for glass. Results using a modified Rapid Viability-Polymerase Chain Reaction (mRV-PCR) analysis method were also obtained. The mRV-PCR results and comparisons to the culture results are discussed in a separate report.
Equilibrium and non-equilibrium metal-ceramic interfaces
International Nuclear Information System (INIS)
Gao, Y.; Merkle, K.L.
1992-01-01
Metal-ceramic interfaces in thermodynamic equilibrium (Au/ZrO 2 ) and non-equilibrium (Au/MgO) have been studied by TEM and HREM. In the Au/ZrO 2 system, ZrO 2 precipitates formed by internal oxidation of a 7%Zr-Au alloy show a cubic ZrO 2 phase. It appears that formation of the cubic ZrO 2 is facilitated by alignment with the Au matrix. Most of the ZrO 2 precipitates have a perfect cube-on-cube orientation relationship with the Au matrix. The large number of interfacial steps observed in a short-time annealing experiment indicates that the precipitates are formed by the ledge growth mechanism. The lowest interfacial energy is indicated by the dominance of closed-packed [111] Au/ZrO 2 interfaces. In the Au/MgO system, composite films with small MgO smoke particles embedded in a Au matrix were prepared by a thin film technique. HREM observations show that most of the Au/MgO interfaces have a strong tendency to maintain a dense lattice structure across the interface, irrespective of whether the interfaces are incoherent or semi-coherent. This indicates that there may be a relatively strong bond between MgO and Au.
Thermal equilibrium in Einstein's elevator.
Sánchez-Rey, Bernardo; Chacón-Acosta, Guillermo; Dagdug, Leonardo; Cubero, David
2013-05-01
We report fully relativistic molecular-dynamics simulations that verify the appearance of thermal equilibrium of a classical gas inside a uniformly accelerated container. The numerical experiments confirm that the local momentum distribution in this system is very well approximated by the Jüttner function, originally derived for a flat spacetime, via the Tolman-Ehrenfest effect. Moreover, it is shown that when the acceleration or the container size is large enough, the global momentum distribution can be described by the so-called modified Jüttner function, which was initially proposed as an alternative to the Jüttner function.
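The Jüttner function and its modified variant can be compared numerically. The one-dimensional, unit-mass form below (m = c = k_B = 1) is a standard textbook parameterization assumed here for illustration; the modified variant divides by the Lorentz factor:

```python
import math

theta = 0.5  # dimensionless temperature k_B T / (m c^2), illustrative

def gamma(p):
    """Lorentz factor as a function of momentum (m = c = 1)."""
    return math.sqrt(1.0 + p * p)

def unnorm_J(p):   # Jüttner: f_J(p) ∝ exp(-gamma(p)/theta)
    return math.exp(-gamma(p) / theta)

def unnorm_MJ(p):  # modified Jüttner: f_MJ(p) ∝ f_J(p)/gamma(p)
    return unnorm_J(p) / gamma(p)

def integrate(f, lo=-30.0, hi=30.0, n=40001):
    """Plain trapezoid rule; the tails are negligible beyond |p| = 30."""
    h = (hi - lo) / (n - 1)
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n - 1))
    return s * h

ZJ, ZMJ = integrate(unnorm_J), integrate(unnorm_MJ)
# Mean Lorentz factor under each distribution: dividing by gamma shifts
# weight toward slow particles, so the modified variant is "colder".
mean_gJ = integrate(lambda p: gamma(p) * unnorm_J(p)) / ZJ
mean_gMJ = integrate(lambda p: gamma(p) * unnorm_MJ(p)) / ZMJ
print(mean_gJ, mean_gMJ)
```

In the simulations the local histograms match f_J while the global one, at large acceleration or container size, follows f_MJ; the mean-Lorentz-factor gap above is one simple statistic that separates the two.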
International Nuclear Information System (INIS)
Diaz-Guerra, J.P.; Roca, M.
1984-01-01
The volatilization-excitation mechanisms of Li2CO3, SrCO3 and GeO2 as buffers for the determination of different major constituents in geological samples have been investigated, considering the phenomena taking place in the electrode, the anodic load and the arc plasma. The present paper deals with the evaluation of fundamental parameters and processes in the d.c. arc, first applied to the study of a Li2CO3:graphite (1:1) mixture. A second paper is devoted to ascertaining the action of each of the other two species. Intensity-time curves, the variation of voltage between the electrodes, vapour diffusion through the electrode wall, load depletion, reaction-product formation, and the temperature, electron pressure and ionization degree in the arc plasma have been studied. The measurement of plasma parameters has been performed by introducing thermometric and manometric species in both the anode and the cathode electrodes. A procedure for calculating the relative emission efficiencies of the analytical lines, taking into account the transportation process, has been developed. (Author) 21 refs
Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J
2014-05-01
In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
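The temporal stage described above can be sketched as a pixel-wise scalar Kalman filter that fuses successive frames and tracks the residual estimation variance used later by the spatial adaptive Wiener stage. This is a minimal sketch under simplifying assumptions (static background, so no affine motion model, and known constant noise variances); the names and defaults are illustrative, not the authors' implementation.

```python
import numpy as np

def temporal_kalman(frames, q=1e-3, r=1e-2):
    """Pixel-wise scalar Kalman filter over a stack of frames.

    frames : (T, H, W) array of noisy observations of a (near-)static scene
    q      : process-noise variance (scene change between frames)
    r      : measurement-noise variance
    Returns the filtered estimate of the scene and its posterior variance map.
    """
    x = frames[0].astype(float)      # state estimate, initialized to frame 0
    p = np.full_like(x, r)           # estimate variance, initialized to r
    for z in frames[1:]:
        p_pred = p + q               # predict step (identity motion model)
        k = p_pred / (p_pred + r)    # Kalman gain per pixel
        x = x + k * (z - x)          # update with the new frame
        p = (1.0 - k) * p_pred      # posterior variance
    return x, p
```

In the paper's pipeline the returned variance map would drive how aggressively the spatial Wiener filter deconvolves each region: low residual variance permits strong deconvolution, high variance shifts the balance toward spatial noise reduction.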
International Nuclear Information System (INIS)
Behbahani, Mohammad; Abandansari, Hamid Sadeghi; Babapour, Meysam; Bagheri, Akbar; Nabid, Mohammad Reza; Salarian, Mani
2014-01-01
We have designed and synthesized a thermosensitive tri-block copolymer for selective trace extraction of Pb(II) ions from biological and food samples. The polymer was characterized by Fourier transform IR and NMR spectroscopy, and by gel permeation chromatography. The critical aggregation concentration and lower critical solution temperature were determined via fluorescence and UV spectrophotometry, respectively. The effects of solution pH, amount of copolymer, temperature (on both extraction and phase separation), and sample matrix on the extraction of Pb(II) were optimized. Pb(II) ions were then quantified by FAAS. The use of this copolymer resulted in excellent figures of merit, including a calibration plot extending from 0.5 to 160 μg L⁻¹ (with an R² of >0.99), a limit of detection (LOD) as low as 90 pg L⁻¹, an extraction efficiency of >98%, and relative standard deviations of <4% for eight separate extraction experiments. (author)
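The figures of merit quoted above (calibration slope, R², and a 3σ-based detection limit) follow from an ordinary least-squares fit of the calibration plot. A minimal sketch, using illustrative numbers rather than the paper's data, might look like this:

```python
import numpy as np

def calibration_merit(conc, signal):
    """Fit signal = a*conc + b; return slope, intercept, R^2 and the
    3*sigma/slope limit of detection (sigma = residual standard error)."""
    a, b = np.polyfit(conc, signal, 1)
    fit = a * conc + b
    ss_res = np.sum((signal - fit) ** 2)
    ss_tot = np.sum((signal - signal.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    sigma = np.sqrt(ss_res / (len(conc) - 2))   # residual standard error
    lod = 3.0 * sigma / a                        # 3-sigma detection limit
    return a, b, r2, lod
```

Note that the 3σ/slope convention is one common definition of the LOD; the abstract does not state which convention the authors used.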
Silverberg, Lee J.; Raff, Lionel M.
2015-01-01
Thermodynamic spontaneity-equilibrium criteria require that in a single-reaction system, reactions in either the forward or reverse direction at equilibrium be nonspontaneous. Conversely, the concept of dynamic equilibrium holds that forward and reverse reactions both occur at equal rates at equilibrium to the extent allowed by kinetic…
Approach to transverse equilibrium in axial channeling
International Nuclear Information System (INIS)
Fearick, R.W.
2000-01-01
Analytical treatments of channeling rely on the assumption of equilibrium on the transverse energy shell. The approach to equilibrium, and the nature of the equilibrium achieved, is examined using solutions of the equations of motion in the continuum multi-string model. The results show that the motion is chaotic in the absence of dissipative processes, and a complicated structure develops in phase space which prevents the development of the simple equilibrium usually assumed. The role of multiple scattering in smoothing out the equilibrium distribution is investigated.
Pre-equilibrium plasma dynamics
Energy Technology Data Exchange (ETDEWEB)
Heinz, U.
1986-01-01
Approaches towards understanding and describing the pre-equilibrium stage of quark-gluon plasma formation in heavy-ion collisions are reviewed. Focus is on a kinetic theory approach to non-equilibrium dynamics, its extension to include the dynamics of color degrees of freedom when applied to the quark-gluon plasma, its quantum field theoretical foundations, and its relationship to both the particle formation stage at the very beginning of the nuclear collision and the hydrodynamic stage at late collision times. The usefulness of this approach to obtain the transport coefficients in the quark-gluon plasma and to derive the collective mode spectrum and damping rates in this phase are discussed. Comments are made on the general difficulty of finding appropriate initial conditions to get the kinetic theory started, and a specific model is given that demonstrates that, once given such initial conditions, the system can be followed all the way through into the hydrodynamical regime. 39 refs., 7 figs. (LEW)
Non-equilibrium phase transition
International Nuclear Information System (INIS)
Mottola, E.; Cooper, F.M.; Bishop, A.R.; Habib, S.; Kluger, Y.; Jensen, N.G.
1998-01-01
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Non-equilibrium phase transitions play a central role in a very broad range of scientific areas, ranging from nuclear, particle, and astrophysics to condensed matter physics and the material and biological sciences. The aim of this project was to explore the path to a deeper and more fundamental understanding of the common physical principles underlying the complex real time dynamics of phase transitions. The main emphasis was on the development of general theoretical tools to deal with non-equilibrium processes, and of numerical methods robust enough to capture the time-evolving structures that occur in actual experimental situations. Specific applications to Laboratory multidivisional efforts in relativistic heavy-ion physics (transition to a new phase of nuclear matter consisting of a quark-gluon plasma) and layered high-temperature superconductors (critical currents and flux flow at the National High Magnetic Field Laboratory) were undertaken
Equilibrium: two-dimensional configurations
International Nuclear Information System (INIS)
Anon.
1987-01-01
In Chapter 6, the problem of toroidal force balance is addressed in the simplest, nontrivial two-dimensional geometry, that of an axisymmetric torus. A derivation is presented of the Grad-Shafranov equation, the basic equation describing axisymmetric toroidal equilibrium. The solutions to these equations provide a complete description of ideal MHD equilibria: radial pressure balance, toroidal force balance, equilibrium Beta limits, rotational transform, shear, magnetic wall, etc. A wide number of configurations are accurately modeled by the Grad-Shafranov equation. Among them are all types of tokamaks, the spheromak, the reversed field pinch, and toroidal multipoles. An important aspect of the analysis is the use of asymptotic expansions, with an inverse aspect ratio serving as the expansion parameter. In addition, an equation similar to the Grad-Shafranov equation, but for helically symmetric equilibria, is presented. This equation represents the leading-order description of low-Beta and high-Beta stellarators, heliacs, and the Elmo bumpy torus. The solutions all correspond to infinitely long straight helices. Bending such a configuration into a torus requires a full three-dimensional calculation and is discussed in Chapter 7
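The Grad-Shafranov equation referred to above can be written in its standard textbook form, in cylindrical coordinates $(R, \phi, Z)$ with poloidal flux function $\psi$:

```latex
R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right)
+ \frac{\partial^2 \psi}{\partial Z^2}
\;=\; -\,\mu_0 R^2\,\frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi},
\qquad F(\psi) = R\,B_\phi .
```

The two free flux functions, the pressure profile $p(\psi)$ and the poloidal current function $F(\psi)$, are what distinguish the tokamak, spheromak, reversed field pinch, and multipole solutions mentioned in the chapter.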
Cao, Yupin; Deng, Biyang; Yan, Lizhen; Huang, Hongli
2017-05-15
An environmentally friendly and highly efficient gas pressure-assisted sample introduction system (GPASIS) was developed for inductively-coupled plasma mass spectrometry. A GPASIS consisting of a gas-pressure control device, a customized nebulizer, and a custom-made spray chamber was fabricated. The advantages of this GPASIS derive from its high nebulization efficiency, small sample volume requirements, low memory effects, good precision, and zero waste emission. The GPASIS can continuously and stably nebulize 10% NaCl solution for more than an hour without clogging. Sensitivity, detection limits, precision, long-term stability, double charge and oxide ion levels, nebulization efficiencies, and matrix effects of the sample introduction system were evaluated. Experimental results indicated that the performance of this GPASIS was equivalent to, or better than, that obtained by conventional sample introduction systems. The GPASIS was successfully used to determine Cd and Pb in human plasma by ICP-MS. Copyright © 2017 Elsevier B.V. All rights reserved.
The Karl Fischer Titration (KFT) reference method is specific for water in lint cotton and was designed for samples conditioned to moisture equilibrium, thus limiting its biases. There is a standard method for moisture content – weight loss – by oven drying (OD), just not for equilibrium moisture c...
International Nuclear Information System (INIS)
Ucar, Murat; Guryildirim, Melike; Tokgoz, Nil; Kilic, Koray; Borcek, Alp; Oner, Yusuf; Akkan, Koray; Tali, Turgut
2014-01-01
To compare the accuracy of diagnosing aqueductal patency and image quality between high spatial resolution three-dimensional (3D) high-sampling-efficiency technique (sampling perfection with application optimized contrast using different flip angle evolutions [SPACE]) and T2-weighted (T2W) two-dimensional (2D) turbo spin echo (TSE) at 3-T in patients with hydrocephalus. This retrospective study included 99 patients diagnosed with hydrocephalus. T2W 3D-SPACE was added to the routine sequences, which consisted of T2W 2D-TSE, 3D-constructive interference steady state (CISS), and cine phase-contrast MRI (PC-MRI). Two radiologists independently evaluated the patency of the cerebral aqueduct and image quality on the T2W 2D-TSE and T2W 3D-SPACE. PC-MRI and 3D-CISS were used as the reference for aqueductal patency and image quality, respectively. Inter-observer agreement was calculated using kappa statistics. The evaluation of the aqueductal patency by T2W 3D-SPACE and T2W 2D-TSE were in agreement with PC-MRI in 100% (99/99; sensitivity, 100% [83/83]; specificity, 100% [16/16]) and 83.8% (83/99; sensitivity, 100% [67/83]; specificity, 100% [16/16]), respectively (p < 0.001). There was no significant difference in image quality between T2W 2D-TSE and T2W 3D-SPACE (p = 0.056). The kappa values for inter-observer agreement were 0.714 for T2W 2D-TSE and 0.899 for T2W 3D-SPACE. Three-dimensional-SPACE is superior to 2D-TSE for the evaluation of aqueductal patency in hydrocephalus. T2W 3D-SPACE may hold promise as a highly accurate alternative to PC-MRI for the physiological and morphological evaluation of aqueductal patency.
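The inter-observer kappa values quoted above (0.714 and 0.899) are Cohen's kappa, which corrects raw agreement for chance agreement. A minimal implementation for two raters over the same items (illustrative, not the authors' code) might look like this:

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical labels to the same items."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    labels = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                         # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)  # chance agreement
    return (po - pe) / (1.0 - pe)
```

On the usual qualitative scale, 0.714 is "substantial" and 0.899 "almost perfect" agreement, consistent with the paper's conclusion that 3D-SPACE readings were more reproducible.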
Energy Technology Data Exchange (ETDEWEB)
Arizono, Shigeki [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507 (Japan)], E-mail: arizono@kuhp.kyoto-u.ac.jp; Isoda, Hiroyoshi [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507 (Japan)], E-mail: sayuki@kuhp.kyoto-u.ac.jp; Maetani, Yoji S. [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507 (Japan)], E-mail: mbo@kuhp.kyoto-u.ac.jp; Hirokawa, Yuusuke [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507 (Japan)], E-mail: yuusuke@kuhp.kyoto-u.ac.jp; Shimada, Kotaro [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507 (Japan)], E-mail: kotaro@kuhp.kyoto-u.ac.jp; Nakamoto, Yuji [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507 (Japan)], E-mail: ynakamo1@kuhp.kyoto-u.ac.jp; Shibata, Toshiya [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507 (Japan)], E-mail: ksj@kuhp.kyoto-u.ac.jp; Togashi, Kaori [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507 (Japan)], E-mail: ktogashi@kuhp.kyoto-u.ac.jp
2010-01-15
Purpose: The aim of this study was to evaluate image quality of 3D MR cholangiography (MRC) using high sampling efficiency technique (SPACE) at 3 T compared with 1.5 T. Methods and materials: An IRB approved prospective study was performed with 17 healthy volunteers using both 3 and 1.5 T MR scanners. MRC images were obtained with free-breathing navigator-triggered 3D T2-weighted turbo spin-echo sequence with SPACE (TR, >2700 ms; TE, 780 ms at 3 T and 801 ms at 1.5 T; echo-train length, 121; voxel size, 1.1 mm x 1.0 mm x 0.84 mm). The common bile duct (CBD) to liver contrast-to-noise ratios (CNRs) were compared between 3 and 1.5 T. A five-point scale was used to compare overall image quality and visualization of the third branches of bile duct (B2, B6, and B8). The depiction of cystic duct insertion and the highest order of bile duct visible were also compared. The results were compared using the Wilcoxon signed-ranks test. Results: CNR between the CBD and liver was significantly higher at 3 T than 1.5 T (p = 0.0006). MRC at 3 T showed a significantly higher overall image quality (p = 0.0215) and clearer visualization of B2 (p = 0.0183) and B6 (p = 0.0106) than at 1.5 T. In all analyses of duct visibility, 3 T showed higher scores than 1.5 T. Conclusion: 3 T MRC using SPACE offered better image quality than 1.5 T. SPACE technique facilitated high-resolution 3D MRC with excellent image quality at 3 T.
Feng, Rui-Hong; Hou, Jin-Jun; Zhang, Yi-Bei; Pan, Hui-Qin; Yang, Wenzhi; Qi, Peng; Yao, Shuai; Cai, Lu-Ying; Yang, Min; Jiang, Bao-Hong; Liu, Xuan; Wu, Wan-Ying; Guo, De-An
2015-08-28
An efficient and target-oriented sample enrichment method was established to increase the content of minor alkaloids in a crude extract by using the corresponding two-phase solvent system applied in pH-zone-refining counter-current chromatography. The enrichment and separation of seven minor indole alkaloids from Uncaria rhynchophylla (Miq.) Miq. ex Havil. (UR) were selected as an example to show the advantage of this method. An optimized two-phase solvent system composed of n-hexane-ethyl acetate-methanol-water (3:7:1:9, v/v) was used in this study, where triethylamine (TEA) as the retainer and hydrochloric acid (HCl) as the eluter were added at an equimolar concentration of 10 mM. Crude alkaloids of UR dissolved in the corresponding upper phase (containing 10 mM TEA) were extracted twice, with lower phase containing 10 mM TEA and with lower phase containing 10 mM HCl, respectively; the second lower-phase extract was subjected to pH-zone-refining CCC separation after alkalization and desalination. Finally, from 10 g of crude alkaloids, 4 g of refined alkaloids was obtained and the total content of the seven target indole alkaloids was increased from 4.64% to 15.78%. Seven indole alkaloids, including 54 mg isocorynoxeine, 21 mg corynoxeine, 46 mg isorhynchophylline, 35 mg rhynchophylline, 65 mg hirsutine, 51 mg hirsuteine and 27 mg geissoschizine methyl ether, were simultaneously separated from 2.5 g of refined alkaloids, with purities of 86.4%, 97.5%, 90.3%, 92.1%, 98.5%, 92.3%, and 92.8%, respectively. The total content and purities of the seven minor indole alkaloids were determined by HPLC and their chemical structures were elucidated by ESI-HRMS and ¹H NMR. Copyright © 2015 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Ginkel, G. van
1980-01-01
Various water-soluble wavelength-shifting compounds were investigated to assess their suitability for the improvement of counting efficiency when Cerenkov radiation from phosphorus-32 is measured in a liquid scintillation counter. Of these compounds esculin, β-methyl-umbelliferon and sodium salicylate led to the greatest improvement in counting efficiency. Esculin and β-methyl-umbelliferon in particular are fairly stable under a variety of experimental conditions and improve counting efficiencies by factors of about 1.3 and 1.2, respectively. The use of ethanol as a water-miscible solvent combined with wavelength shifters soluble in both solvents does not improve counting efficiency. (author)
Equilibrium calculations and mode analysis
International Nuclear Information System (INIS)
Herrnegger, F.
1987-01-01
The STEP asymptotic stellarator expansion procedure was used to study the MHD equilibrium and stability properties of stellarator configurations without longitudinal net-current, which also apply to advanced stellarators. The effects of toroidal curvature and magnetic well, and the Shafranov shift were investigated. A classification of unstable modes in toroidal stellarators is given. For WVII-A coil-field configurations having a β value of 1% and a parabolic pressure profile, no free-boundary modes are found. This agrees with the experimental fact that unstable behavior of the plasma column is not observed for this parameter range. So a theoretical β-limit for stability against ideal MHD modes can be estimated by mode analysis for the WVII-A device
Stellar Equilibrium in Semiclassical Gravity.
Carballo-Rubio, Raúl
2018-02-09
The phenomenon of quantum vacuum polarization in the presence of a gravitational field is well understood and is expected to have a physical reality, but studies of its backreaction on the dynamics of spacetime are practically nonexistent outside of the specific context of homogeneous cosmologies. Building on previous results of quantum field theory in curved spacetimes, in this Letter we first derive the semiclassical equations of stellar equilibrium in the s-wave Polyakov approximation. It is highlighted that incorporating the polarization of the quantum vacuum leads to a generalization of the classical Tolman-Oppenheimer-Volkoff equation. Despite the complexity of the resulting field equations, it is possible to find exact solutions. Aside from being the first known exact solutions that describe relativistic stars including the nonperturbative backreaction of semiclassical effects, these are identified as a nontrivial combination of the black star and gravastar proposals.
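For reference, the classical Tolman-Oppenheimer-Volkoff equation that the semiclassical field equations generalize reads, in its standard textbook form (quoted for context, not from the Letter):

```latex
\frac{dp}{dr} \;=\; -\,\frac{G\left(\rho + p/c^2\right)\left(m(r) + 4\pi r^3 p/c^2\right)}
{r^2\left(1 - \dfrac{2G\,m(r)}{r c^2}\right)},
\qquad
\frac{dm}{dr} \;=\; 4\pi r^2 \rho .
```

The semiclassical corrections described in the abstract add vacuum-polarization terms to the right-hand side, which is what permits the black star and gravastar-like exact solutions.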
Risk premia in general equilibrium
DEFF Research Database (Denmark)
Posch, Olaf
This paper shows that non-linearities can generate time-varying and asymmetric risk premia over the business cycle. These (empirical) key features become relevant and asset market implications improve substantially when we allow for non-normalities in the form of rare disasters. We employ explicit solutions of dynamic stochastic general equilibrium models, including a novel solution with endogenous labor supply, to obtain closed-form expressions for the risk premium in production economies. We find that the curvature of the policy functions affects the risk premium through controlling the individual's effective risk aversion.
Neoclassical equilibrium in gyrokinetic simulations
International Nuclear Information System (INIS)
Garbet, X.; Dif-Pradalier, G.; Nguyen, C.; Sarazin, Y.; Grandgirard, V.; Ghendrih, Ph.
2009-01-01
This paper presents a set of model collision operators, which reproduce the neoclassical equilibrium and comply with the constraints of a full-f global gyrokinetic code. The assessment of these operators is based on an entropy variational principle, which allows one to perform a fast calculation of the neoclassical diffusivity and poloidal velocity. It is shown that the force balance equation is recovered at lowest order in the expansion parameter, the normalized gyroradius, hence allowing one to calculate correctly the radial electric field. Also, the conventional neoclassical transport and the poloidal velocity are reproduced in the plateau and banana regimes. The advantages and drawbacks of the various model operators are discussed in view of the requirements for neoclassical and turbulent transport.
QUIL: a chemical equilibrium code
International Nuclear Information System (INIS)
Lunsford, J.L.
1977-02-01
A chemical equilibrium code QUIL is described, along with two support codes FENG and SURF. QUIL is designed to allow calculations on a wide range of chemical environments, which may include surface phases. QUIL was written specifically to calculate distributions associated with complex equilibria involving fission products in the primary coolant loop of the high-temperature gas-cooled reactor. QUIL depends upon an energy-data library called ELIB. This library is maintained by FENG and SURF. FENG enters into the library all reactions having standard free energies of reaction that are independent of concentration. SURF enters all surface reactions into ELIB. All three codes are interactive codes written to be used from a remote terminal, with paging control provided. Plotted output is also available
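Codes like QUIL work from the standard free energies of reaction stored in their library. The basic thermodynamic relation they rest on, K = exp(-ΔG°/RT), together with the resulting equilibrium composition of a simple A ⇌ B reaction, can be sketched as follows (an illustrative sketch of the underlying chemistry, not QUIL's actual algorithm):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def equilibrium_constant(delta_g, temperature):
    """K = exp(-dG0 / RT) for a reaction with standard free energy dG0 (J/mol)."""
    return math.exp(-delta_g / (R * temperature))

def fraction_converted(k):
    """Equilibrium mole fraction of B for an ideal A <-> B reaction:
    x/(1-x) = K  =>  x = K/(1+K)."""
    return k / (1.0 + k)
```

A full multi-reaction, multi-phase code such as QUIL must instead minimize total free energy over all species simultaneously, but each individual equilibrium constant it uses comes from this relation.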
Pre-equilibrium gamma emissions
International Nuclear Information System (INIS)
Ghosh, Sudip
1993-01-01
Together with the direct reaction and the compound nuclear emissions, the pre-equilibrium (PEQ) or pre-compound processes give a fairly complete picture of nuclear reactions induced by light ions at energies of some tens of MeV. PEQ particle emissions covering the higher-energy continuum spectra have been investigated in detail both experimentally and theoretically. In contrast, very little work has been done on PEQ γ-emissions. The reason is that, in spite of extensive work done on PEQ particle emissions, the mechanism is not yet fully understood. Also, the PEQ γ-emission cross-sections (∼ microbarns) are very small compared to the PEQ particle emission cross-sections (∼ millibarns). Yet, apart from the academic interest, an understanding of PEQ γ-emissions is important for applied fusion research etc. In this paper PEQ γ-emissions are discussed and the work done in this field is reviewed. (author). 14 refs
Equilibrium Analysis in Cake Cutting
DEFF Research Database (Denmark)
Branzei, Simina; Miltersen, Peter Bro
2013-01-01
Cake cutting is a fundamental model in fair division; it represents the problem of fairly allocating a heterogeneous divisible good among agents with different preferences. The central criteria of fairness are proportionality and envy-freeness, and many of the existing protocols are designed to guarantee proportional or envy-free allocations when the participating agents follow the protocol. However, typically, all agents following the protocol is not guaranteed to result in a Nash equilibrium. In this paper, we initiate the study of equilibria of classical cake cutting protocols. We consider one of the simplest and most elegant continuous algorithms, the Dubins-Spanier procedure, which guarantees a proportional allocation of the cake, and study its equilibria when the agents use simple threshold strategies. We show that given a cake cutting instance with strictly positive value density functions...
Equilibrium Solubility of CO2 in Alkanolamines
DEFF Research Database (Denmark)
Waseem Arshad, Muhammad; Fosbøl, Philip Loldrup; von Solms, Nicolas
2014-01-01
Equilibrium solubility of CO2 was measured in aqueous solutions of monoethanolamine (MEA) and N,N-diethylethanolamine (DEEA). Equilibrium cells are generally used for these measurements; in this study, the equilibrium data were instead obtained by calorimetry, using a reaction calorimeter (model CPA 122 from ChemiSens AB, Sweden). The advantage of this method is that both heats of absorption and equilibrium solubility data of CO2 are measured at the same time. The measurements were performed for 30 mass % MEA and 5 M DEEA solutions as a function of CO2 loading at three different temperatures: 40, 80 and 120 ºC. The measured 30 mass % MEA and 5 M DEEA data were compared with literature data obtained from different equilibrium cells, which validated the use of calorimeters for equilibrium solubility measurements.
Du, Xinzhong; Shrestha, Narayan Kumar; Ficklin, Darren L.; Wang, Junye
2018-04-01
Stream temperature is an important indicator for biodiversity and sustainability in aquatic ecosystems. The stream temperature model currently in the Soil and Water Assessment Tool (SWAT) only considers the impact of air temperature on stream temperature, while the hydroclimatological stream temperature model developed within the SWAT model considers hydrology and the impact of air temperature in simulating the water-air heat transfer process. In this study, we modified the hydroclimatological model by including the equilibrium temperature approach to model heat transfer processes at the water-air interface, which reflects the influences of air temperature, solar radiation, wind speed and streamflow conditions on the heat transfer process. The thermal capacity of the streamflow is modeled by the variation of the stream water depth. An advantage of this equilibrium temperature model is the simple parameterization, with only two parameters added to model the heat transfer processes. The equilibrium temperature model proposed in this study is applied and tested in the Athabasca River basin (ARB) in Alberta, Canada. The model is calibrated and validated at five stations throughout different parts of the ARB, where close to monthly samplings of stream temperatures are available. The results indicate that the equilibrium temperature model proposed in this study provided better and more consistent performance for the different regions of the ARB, with values of the Nash-Sutcliffe Efficiency coefficient (NSE) greater than those of the original SWAT model and the hydroclimatological model. To test the model performance for different hydrological and environmental conditions, the equilibrium temperature model was also applied to the North Fork Tolt River Watershed in Washington, United States. The results indicate a reasonable simulation of stream temperature using the model proposed in this study, with minimum relative error values compared to the other two models.
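The core of the equilibrium temperature approach is a relaxation of water temperature toward an equilibrium value, at a rate set by a bulk exchange coefficient and the thermal capacity of the water column (here represented by depth). The sketch below is a generic one-step explicit update of that idea; the parameter names are illustrative, and the two calibration parameters the paper adds are not specified in the abstract, so they do not appear here.

```python
def equilibrium_temp_step(t_water, t_equilibrium, k_exchange, depth,
                          dt=86400.0, rho=1000.0, cp=4180.0):
    """One explicit time step of dT/dt = K * (Te - T) / (rho * cp * d).

    t_water       : current stream temperature (deg C)
    t_equilibrium : equilibrium temperature from meteorological forcing (deg C)
    k_exchange    : bulk water-air heat-exchange coefficient (W m^-2 K^-1)
    depth         : mean water depth (m), the thermal-capacity term
    dt            : time step (s), daily by default
    """
    rate = k_exchange / (rho * cp * depth)          # relaxation rate, 1/s
    return t_water + dt * rate * (t_equilibrium - t_water)
```

A deeper stream (larger `depth`) relaxes more slowly toward the equilibrium temperature, which is how the model captures the damped thermal response of larger flows.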
Mathematical models and equilibrium in irreversible microeconomics
Directory of Open Access Journals (Sweden)
Anatoly M. Tsirlin
2010-07-01
A set of equilibrium states in a system consisting of economic agents, economic reservoirs, and firms is considered. Methods of irreversible microeconomics are used. We show that direct sale/purchase leads to an equilibrium state which depends upon the coefficients of the supply/demand functions. To reach the unique equilibrium state it is necessary to add either monetary exchange or an intermediate firm.
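The dependence of the equilibrium state on the coefficients of the supply/demand functions can be illustrated with the simplest textbook case of linear supply and demand; this toy sketch is not the paper's irreversible-microeconomics formalism, and the coefficient names are hypothetical.

```python
def linear_equilibrium(a, b, c, d):
    """Equilibrium of linear demand q = a - b*p and supply q = c + d*p (b + d > 0).

    Setting demand equal to supply gives p* = (a - c) / (b + d),
    and the equilibrium quantity follows from either curve.
    """
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity
```

Any change in the four coefficients shifts the equilibrium, which is the sense in which the equilibrium state "depends upon the coefficients of the supply/demand functions."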
Modeling of the equilibrium of a tokamak plasma
International Nuclear Information System (INIS)
Grandgirard, V.
1999-12-01
The simulation and the control of a plasma discharge in a tokamak require an efficient and accurate solving of the equilibrium, because this equilibrium needs to be recalculated every microsecond to simulate discharges that can last up to 1000 seconds. The purpose of this thesis is to propose numerical methods in order to calculate these equilibria with acceptable computer time and memory size. Chapter 1 deals with hydrodynamics equations and sets up the problem. Chapter 2 gives a method to take into account the boundary conditions. Chapter 3 is dedicated to the optimization of the inversion of the system matrix. This matrix being quasi-symmetric, the Woodbury method combined with the Cholesky method has been used. This direct method has been compared with 2 iterative methods: GMRES (generalized minimal residual) and BCG (bi-conjugate gradient). The 2 last chapters study the control of the plasma equilibrium; this work is presented in the formalism of the optimized control of distributed systems and leads to non-linear equations of state and quadratic functionals that are solved numerically by a quadratic sequential method. This method is based on the replacement of the initial problem with a series of control problems involving linear equations of state. (A.C.)
Collapse and equilibrium of rotating, adiabatic clouds
International Nuclear Information System (INIS)
Boss, A.P.
1980-01-01
A numerical hydrodynamics computer code has been used to follow the collapse and establishment of equilibrium of adiabatic gas clouds restricted to axial symmetry. The clouds are initially uniform in density and rotation, with adiabatic exponents γ=5/3 and 7/5. The numerical technique allows, for the first time, a direct comparison to be made between the dynamic collapse and approach to equilibrium of unconstrained clouds on the one hand, and the results for incompressible, uniformly rotating equilibrium clouds and the equilibrium structures of differentially rotating polytropes on the other.
Static Equilibrium Configurations of Charged Metallic Bodies
African Journals Online (AJOL)
Key words: Static equilibrium, charged metallic body, potential energy, projected gradient method. ... television, radio, internet, microwave ovens, mobile telephones, satellite communication systems, radar systems, electrical motors, electrical.
International Nuclear Information System (INIS)
Singh, Sarbjit; Agarwal, Chhavi; Ramaswami, A.; Manchanda, V.K.
2007-01-01
Regular monitoring of off-gases released to the environment from a nuclear reactor is mandatory. The gaseous fission products are estimated by gamma-ray spectrometry using an HPGe detector coupled to a multichannel analyser. In view of the lack of availability of gaseous fission product standards, an indirect method based on the charcoal adsorption technique was developed for the efficiency calibration of the HPGe detector system using 133Ba and 152Eu standards. Known activities of 133Ba and 152Eu are uniformly distributed in a vial containing activated charcoal and counted on the HPGe detector system at liquid nitrogen temperature to determine the gamma-ray efficiency for the vial containing activated charcoal. The ratio of the gamma-ray efficiencies of off-gas present in the normal vial and in the vial containing activated charcoal at liquid nitrogen temperature is used to determine the gamma-ray efficiency of off-gas present in the normal vial. (author)
Oppositely charged colloids out of equilibrium
Vissers, T.
2010-11-01
Colloids are particles with a size in the range of a few nanometers up to several micrometers. Similar to atomic and molecular systems, they can form gases, liquids, solids, gels and glasses. Colloids can be used as model systems because, unlike molecules, they are sufficiently large to be studied directly with light microscopy and move sufficiently slowly for their dynamics to be followed. In this thesis, we study binary systems of polymethylmethacrylate (PMMA) colloidal particles suspended in low-polar solvent mixtures. Since the ions can still partially dissociate, a surface charge builds up which causes electrostatic interactions between the colloids. By carefully tuning the conditions inside the suspension, we make the two kinds of particles oppositely charged. To study our samples, we use Confocal Laser Scanning Microscopy (CLSM). The positively and negatively charged particles can be distinguished by a different fluorescent dye. Colloids constantly experience random motion resulting from random kicks of the surrounding solvent molecules. When the attractions between the oppositely charged particles are weak, the particles can attach and detach many times and explore many possible configurations, and the system can reach thermodynamic equilibrium. For example, colloidal 'ionic' crystals consisting of thousands to millions of particles can form under the right conditions. When the attractions are strong, the system can become kinetically trapped in a gel-like state. We observe that when the interactions change again, crystals can even emerge from this gel-like phase. By using local order parameters, we quantitatively study the crystallization of colloidal particles and identify growth defects inside the crystals. We also study the effect of gravity on the growth of ionic crystals by using a rotating stage. We find that sedimentation can completely inhibit crystal growth and plays an important role in crystallization from the gel-like state. The surface
Energy Technology Data Exchange (ETDEWEB)
Anelli, M.; Bertolucci, S. [Laboratori Nazionali di Frascati, INFN (Italy); Bini, C. [Dipartimento di Fisica dell'Università 'La Sapienza', Roma (Italy); INFN Sezione di Roma, Roma (Italy); Branchini, P. [INFN Sezione di Roma Tre, Roma (Italy); Corradi, G.; Curceanu, C. [Laboratori Nazionali di Frascati, INFN (Italy); De Zorzi, G.; Di Domenico, A. [Dipartimento di Fisica dell'Università 'La Sapienza', Roma (Italy); INFN Sezione di Roma, Roma (Italy); Di Micco, B. [Dipartimento di Fisica dell'Università 'Roma Tre', Roma (Italy); INFN Sezione di Roma Tre, Roma (Italy); Ferrari, A. [Fondazione CNAO, Milano (Italy); Fiore, S.; Gauzzi, P. [Dipartimento di Fisica dell'Università 'La Sapienza', Roma (Italy); INFN Sezione di Roma, Roma (Italy); Giovannella, S.; Happacher, F. [Laboratori Nazionali di Frascati, INFN (Italy); Iliescu, M. [Laboratori Nazionali di Frascati, INFN (Italy); IFIN-HH, Bucharest (Romania); Luca, A.; Martini, M. [Laboratori Nazionali di Frascati, INFN (Italy); Miscetti, S., E-mail: stefano.miscetti@lnf.infn.i [Laboratori Nazionali di Frascati, INFN (Italy); Nguyen, F. [Dipartimento di Fisica dell'Università 'Roma Tre', Roma (Italy); INFN Sezione di Roma Tre, Roma (Italy); Passeri, A. [INFN Sezione di Roma Tre, Roma (Italy)
2010-05-21
We exposed a prototype of the lead-scintillating fiber KLOE calorimeter to neutron beams of 21, 46 and 174 MeV at The Svedberg Laboratory, Uppsala, to study its neutron detection efficiency. This efficiency was found to be larger than expected from the scintillator thickness of the prototype. We also show preliminary measurements carried out with a different prototype with a larger lead/fiber ratio, which prove the relevance of the passive material to the neutron detection efficiency in this kind of calorimeter.
Directory of Open Access Journals (Sweden)
Yanwei Li
2018-02-01
Full Text Available Autism is a neurodevelopmental disorder with dimensional behavioral symptoms and various impairments in brain structure and function. Previous neuroimaging studies focused on exploring the differences in brain development between individuals with and without autism spectrum disorders (ASD). However, few of them have attempted to investigate individual differences in brain features among subjects within the autism spectrum. Our main goal was to explore individual differences in neurodevelopment in young children with autism by testing for the association between functional network efficiency and levels of autistic behaviors, as well as the association between functional network efficiency and age. Forty-six children with autism (ages 2.0-8.9 years) participated in the current study, with levels of autistic behaviors evaluated by their parents. The network efficiency (global and local) was obtained from the functional networks based on the oxy-, deoxy-, and total-hemoglobin series, respectively. Results indicated that network efficiency decreased with age in young children with autism in the deoxy- and total-hemoglobin-based networks, and children with a relatively higher level of autistic behaviors showed decreased network efficiency in the oxy-hemoglobin-based network. Results suggest individual differences in brain development in young children within the autism spectrum, providing new insights into the psychopathology of ASD.
RINGED ACCRETION DISKS: EQUILIBRIUM CONFIGURATIONS
Energy Technology Data Exchange (ETDEWEB)
Pugliese, D.; Stuchlík, Z., E-mail: d.pugliese.physics@gmail.com, E-mail: zdenek.stuchlik@physics.cz [Institute of Physics and Research Centre of Theoretical Physics and Astrophysics, Faculty of Philosophy and Science, Silesian University in Opava, Bezručovo náměstí 13, CZ-74601 Opava (Czech Republic)
2015-12-15
We investigate a model of a ringed accretion disk, made up by several rings rotating around a supermassive Kerr black hole attractor. Each toroid of the ringed disk is governed by the general relativity hydrodynamic Boyer condition of equilibrium configurations of rotating perfect fluids. Properties of the tori can then be determined by an appropriately defined effective potential reflecting the background Kerr geometry and the centrifugal effects. The ringed disks could be created in various regimes during the evolution of matter configurations around supermassive black holes. Therefore, both corotating and counterrotating rings have to be considered as being a constituent of the ringed disk. We provide constraints on the model parameters for the existence and stability of various ringed configurations and discuss occurrence of accretion onto the Kerr black hole and possible launching of jets from the ringed disk. We demonstrate that various ringed disks can be characterized by a maximum number of rings. We present also a perturbation analysis based on evolution of the oscillating components of the ringed disk. The dynamics of the unstable phases of the ringed disk evolution seems to be promising in relation to high-energy phenomena demonstrated in active galactic nuclei.
Equilibrium figures in geodesy and geophysics.
Moritz, H.
There is an enormous literature on geodetic equilibrium figures, but the various works have not always been interrelated, also for linguistic reasons (English, French, German, Italian, Russian). The author attempts to systematize the various approaches and to use the standard second-order theory for a study of the deviation of the actual earth and of the equipotential reference ellipsoid from an equilibrium figure.
Equilibrium theory of island biogeography: A review
Angela D. Yu; Simon A. Lei
2001-01-01
The topography, climatic pattern, location, and origin of islands generate unique patterns of species distribution. The equilibrium theory of island biogeography creates a general framework in which the study of taxon distribution and broad island trends may be conducted. Critical components of the equilibrium theory include the species-area relationship, island-...
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure
Do intertidal flats ever reach equilibrium?
Maan, D.C.; van Prooijen, B.C.; Wang, Z.B.; de Vriend, H.J.
2015-01-01
Various studies have identified a strong relation between the hydrodynamic forces and the equilibrium profile for intertidal flats. A thorough understanding of the interplay between the hydrodynamic forces and the morphology, however, concerns more than the equilibrium state alone. We study the
Vertical field and equilibrium calculation in ETE
International Nuclear Information System (INIS)
Montes, Antonio; Shibata, Carlos Shinya.
1996-01-01
The free-boundary MHD equilibrium code HEQ is used to study the plasma behaviour in the tokamak ETE, with optimized compensations coils and vertical field coils. The changes on the equilibrium parameters for different plasma current values are also investigated. (author). 5 refs., 4 figs., 2 tabs
Statistical thermodynamics of equilibrium polymers at interfaces
Gucht, van der J.; Besseling, N.A.M.
2002-01-01
The behavior of a solution of equilibrium polymers (or living polymers) at an interface is studied, using a Bethe-Guggenheim lattice model for molecules with orientation dependent interactions. The density profile of polymers and the chain length distribution are calculated. For equilibrium polymers
Groundwater flux estimation in streams: A thermal equilibrium approach
Zhou, Yan; Fox, Garey A.; Miller, Ron B.; Mollenhauer, Robert; Brewer, Shannon
2018-06-01
Stream and groundwater interactions play an essential role in regulating flow, temperature, and water quality for stream ecosystems. Temperature gradients have been used to quantify vertical water movement in the streambed since the 1960s, but advancements in thermal methods are still possible. Seepage runs are a method commonly used to quantify exchange rates through a series of streamflow measurements but can be labor and time intensive. The objective of this study was to develop and evaluate a thermal equilibrium method as a technique for quantifying groundwater flux using monitored stream water temperature at a single point and readily available hydrological and atmospheric data. Our primary assumption was that stream water temperature at the monitored point was at thermal equilibrium with the combination of all heat transfer processes, including mixing with groundwater. By expanding the monitored stream point into a hypothetical, horizontal one-dimensional thermal modeling domain, we were able to simulate the thermal equilibrium achieved with known atmospheric variables at the point and quantify unknown groundwater flux by calibrating the model to the resulting temperature signature. Stream water temperatures were monitored at single points at nine streams in the Ozark Highland ecoregion and five reaches of the Kiamichi River to estimate groundwater fluxes using the thermal equilibrium method. When validated by comparison with seepage runs performed at the same time and reach, estimates from the two methods agreed with each other with an R2 of 0.94, a root mean squared error (RMSE) of 0.08 (m/d) and a Nash-Sutcliffe efficiency (NSE) of 0.93. In conclusion, the thermal equilibrium method was a suitable technique for quantifying groundwater flux with minimal cost and simple field installation given that suitable atmospheric and hydrological data were readily available.
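The point-scale heat-balance idea behind the thermal equilibrium method can be sketched, under strong simplifying assumptions, as a steady-state balance between surface heat exchange and groundwater mixing, solved directly for the flux. The balance equation, the exchange coefficient `K`, and all temperatures below are illustrative assumptions, not the authors' one-dimensional thermal model.

```python
def groundwater_flux_mpd(T_obs, T_e, T_gw, K, rho=1000.0, c_p=4186.0):
    """Steady heat balance at the monitored point (per unit bed area):
       K*(T_e - T_obs) + rho*c_p*q*(T_gw - T_obs) = 0,
    solved for the groundwater flux q and returned in m/d.
    K is a bulk atmospheric exchange coefficient (W/m^2/K)."""
    q = K * (T_e - T_obs) / (rho * c_p * (T_obs - T_gw))  # m/s
    return q * 86400.0

# Hypothetical case: stream held 2 C below its atmospheric equilibrium
# temperature by an inflow of cooler groundwater
q_mpd = groundwater_flux_mpd(T_obs=20.0, T_e=22.0, T_gw=15.0, K=30.0)
```

The observed temperature depression below (or elevation above) the atmospheric equilibrium temperature is the "temperature signature" that the calibration exploits; here it yields a flux on the order of a few tenths of a meter per day.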
Ghirardi, Marco; Marchetti, Fabio; Pettinari, Claudio; Regis, Alberto; Roletto, Ezio
2015-01-01
A didactic sequence is proposed for the teaching of chemical equilibrium law. In this approach, we have avoided the kinetic derivation and the thermodynamic justification of the equilibrium constant. The equilibrium constant expression is established empirically by a trial-and-error approach. Additionally, students learn to use the criterion of…
On the definition of equilibrium and non-equilibrium states in dynamical systems
Akimoto, Takuma
2008-01-01
We propose a definition of equilibrium and non-equilibrium states in dynamical systems on the basis of the time average. We show numerically that there exists a non-equilibrium non-stationary state in the coupled modified Bernoulli map lattice.
Information-theoretic equilibrium and observable thermalization
Anzà, F.; Vedral, V.
2017-03-01
A crucial point in statistical mechanics is the definition of the notion of thermal equilibrium, which can be given as the state that maximises the von Neumann entropy, under the validity of some constraints. Arguing that such a notion can never be experimentally probed, in this paper we propose a new notion of thermal equilibrium, focused on observables rather than on the full state of the quantum system. We characterise such a notion of thermal equilibrium for an arbitrary observable via the maximisation of its Shannon entropy, and we bring to light the thermal properties that it heralds. The relation with Gibbs ensembles is studied and understood. We apply this notion of equilibrium to a closed quantum system and show that there is always a class of observables which exhibits thermal equilibrium properties, and we give a recipe to explicitly construct them. Finally, an intimate connection with the eigenstate thermalisation hypothesis is brought to light.
Disturbances in equilibrium function after major earthquake.
Honma, Motoyasu; Endo, Nobutaka; Osada, Yoshihisa; Kim, Yoshiharu; Kuriyama, Kenichi
2012-01-01
Major earthquakes are followed by a large number of aftershocks, and significant outbreaks of dizziness occur over a large area. However, it is unclear why major earthquakes cause dizziness. We conducted an intergroup trial on equilibrium dysfunction and the psychological states associated with it in individuals exposed to repetitive aftershocks versus those who were rarely exposed. Greater equilibrium dysfunction was observed in the aftershock-exposed group under conditions without visual compensation. Equilibrium dysfunction in the aftershock-exposed group appears to have arisen from disturbance of the inner ear, as well as from individual vulnerability to state anxiety enhanced by repetitive exposure to aftershocks. We indicate potential effects of autonomic stress on equilibrium function after a major earthquake. Our findings may contribute to the risk management of psychological and physical health after major earthquakes with aftershocks, and allow the development of a new empirical approach to disaster care after such events.
International Nuclear Information System (INIS)
Hu, Y.; Liu, Z.; Shi, X.; Wang, B.
2006-01-01
A brief introduction to the characteristic statistic algorithm (CSA), a new global optimization algorithm for the problem of PWR in-core fuel management optimization, is given in this paper. The CSA is modified by the adoption of a back-propagation neural network and fast local adjustment. The modified CSA is then applied to PWR equilibrium cycle reloading optimization, and the corresponding optimization code CSA-DYW is developed. CSA-DYW is used to optimize the 18-month equilibrium reloading cycle of Unit 1 of the Daya Bay nuclear plant. The results show that CSA-DYW has high efficiency and good global performance on PWR equilibrium cycle reloading optimization. (authors)
An alternative extragradient projection method for quasi-equilibrium problems.
Chen, Haibin; Wang, Yiju; Xu, Yi
2018-01-01
For the quasi-equilibrium problem where the players' costs and their strategies both depend on the rival's decisions, an alternative extragradient projection method for solving it is designed. Different from the classical extragradient projection method whose generated sequence has the contraction property with respect to the solution set, the newly designed method possesses an expansion property with respect to a given initial point. The global convergence of the method is established under the assumptions of pseudomonotonicity of the equilibrium function and of continuity of the underlying multi-valued mapping. Furthermore, we show that the generated sequence converges to the nearest point in the solution set to the initial point. Numerical experiments show the efficiency of the method.
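The classical extragradient projection scheme that the abstract's method modifies can be sketched for a simple variational-inequality formulation over a box: predict with one projected gradient step, then correct using the operator evaluated at the prediction. The operator `F`, step size, and feasible set below are toy assumptions, not the paper's quasi-equilibrium setting.

```python
def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return [min(max(v, lo), hi) for v in x]

def extragradient(F, x, tau=0.5, iters=200):
    """Classical extragradient: y = P(x - tau*F(x)); x = P(x - tau*F(y))."""
    for _ in range(iters):
        y = project_box([xi - tau * fi for xi, fi in zip(x, F(x))])
        x = project_box([xi - tau * fi for xi, fi in zip(x, F(y))])
    return x

# Monotone toy operator with its unique zero at (0.3, 0.7) inside the box
F = lambda x: [x[0] - 0.3, x[1] - 0.7]
sol = extragradient(F, [1.0, 0.0])
```

Under pseudomonotonicity of `F` the iterates contract toward the solution set; the expansion property with respect to the initial point, and the multi-valued constraint mapping of the quasi-equilibrium problem, are the paper's refinements and are not reproduced here.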
Convective equilibrium and mixing-length theory for stellarator reactors
International Nuclear Information System (INIS)
Ho, D.D.M.; Kulsrud, R.M.
1985-09-01
In high β stellarator and tokamak reactors, the plasma pressure gradient in some regions of the plasma may exceed the critical pressure gradient set by ballooning instabilities. In these regions, convective cells break out to enhance the transport. As a result, the pressure gradient can rise only slightly above the critical gradient and the plasma is in another state of equilibrium - ''convective equilibrium'' - in these regions. Although the convective transport cannot be calculated precisely, it is shown that the density and temperature profiles in the convective region can still be estimated. A simple mixing-length theory, similar to that used for convection in stellar interiors, is introduced in this paper to provide a qualitative description of the convective cells and to show that the convective transport is highly efficient. A numerical example for obtaining the density and temperature profiles in a stellarator reactor is given
Non-equilibrium plasma reactor for natural gas processing
International Nuclear Information System (INIS)
Shair, F.H.; Ravimohan, A.L.
1974-01-01
A non-equilibrium plasma reactor for processing natural gas into ethane and ethylene, comprising means of producing a non-equilibrium chemical plasma in which the selective conversion of the methane in natural gas to the desired products ethane and ethylene, at a predetermined ethane/ethylene ratio, can be intimately controlled and optimized at high electrical power efficiency. This is achieved by mixing with a recycled gas inert to the chemical process, such as argon, helium, or hydrogen; reducing the residence time of the methane in the chemical plasma; selecting the gas pressure in the chemical plasma from a wide range of pressures; and utilizing a pulsed electrical discharge to produce the chemical plasma. (author)
Czech Academy of Sciences Publication Activity Database
Mukhopadhyay, N. D.; Sampson, A. J.; Deniz, D.; Carlsson, G. A.; Williamson, J.; Malušek, Alexandr
2012-01-01
Roč. 70, č. 1 (2012), s. 315-323 ISSN 0969-8043 Institutional research plan: CEZ:AV0Z10480505 Keywords : Monte Carlo * correlated sampling * efficiency * uncertainty * bootstrap Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.179, year: 2012 http://www.sciencedirect.com/science/article/pii/S0969804311004775
Ginkel, G. van
Various water-soluble wavelength-shifting compounds were investigated to assess their suitability for improving counting efficiency when Čerenkov radiation from phosphorus-32 is measured in a liquid scintillation counter. Of these compounds esculin, β-methyl-umbelliferone and sodium
Thermodynamics of the multicomponent vapor-liquid equilibrium under capillary pressure difference
DEFF Research Database (Denmark)
Shapiro, Alexander; Stenby, Erling Halfdan
2001-01-01
We discuss the two-phase multicomponent equilibrium, provided that the phase pressures are different due to the action of capillary forces. We prove two general properties of such an equilibrium, which have previously been known for the single-component case but, to the best of our knowledge, not for multicomponent mixtures. The importance is emphasized of the space of the intensive variables P, T and μ_i, where the laws of capillary equilibrium have a simple geometrical interpretation. We formulate thermodynamic problems specific to such an equilibrium, and outline changes to be introduced to common algorithms of flash calculations in order to solve these problems. Sample calculations show large variation of the capillary properties of the mixture in the very neighborhood of the phase envelope and the restrictive role of the spinodal surface as a boundary for possible equilibrium states with different...
Isospin equilibrium and non-equilibrium in heavy-ion collisions at intermediate energies
International Nuclear Information System (INIS)
Chen Liewen; Ge Lingxiao; Zhang Xiaodong; Zhang Fengshou
1997-01-01
The equilibrium and non-equilibrium of the isospin degree of freedom are studied in terms of an isospin-dependent QMD model, which includes isospin-dependent symmetry energy, Coulomb energy, N-N cross sections and Pauli blocking. It is shown that there exists a transition from isospin equilibrium to non-equilibrium as the incident energy increases from below to above a threshold energy in central, asymmetric heavy-ion collisions. Meanwhile, it is found that this phenomenon results from the co-existence and competition of different reaction mechanisms; namely, the isospin degree of freedom reaches equilibrium if the incomplete fusion (ICF) component is dominant and does not reach equilibrium if the fragmentation component is dominant. Moreover, it is also found that the isospin-dependent N-N cross sections and symmetry energy are crucial for the equilibrium of the isospin degree of freedom in heavy-ion collisions around the Fermi energy. (author)
Local Nash equilibrium in social networks.
Zhang, Yichao; Aziz-Alaoui, M A; Bertelle, Cyrille; Guan, Jihong
2014-08-29
Nash equilibrium is widely present in various social disputes. As of now, in structured static populations, such as social networks, regular, and random graphs, the discussions on Nash equilibrium are quite limited. In a relatively stable static gaming network, a rational individual has to comprehensively consider all his/her opponents' strategies before they adopt a unified strategy. In this scenario, a new strategy equilibrium emerges in the system. We define this equilibrium as a local Nash equilibrium. In this paper, we present an explicit definition of the local Nash equilibrium for the two-strategy games in structured populations. Based on the definition, we investigate the condition that a system reaches the evolutionary stable state when the individuals play the Prisoner's dilemma and snow-drift game. The local Nash equilibrium provides a way to judge whether a gaming structured population reaches the evolutionary stable state on one hand. On the other hand, it can be used to predict whether cooperators can survive in a system long before the system reaches its evolutionary stable state for the Prisoner's dilemma game. Our work therefore provides a theoretical framework for understanding the evolutionary stable state in the gaming populations with static structures.
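The definitional check behind a local Nash equilibrium — no player can gain by a unilateral strategy switch, given the strategies of its network neighbors — can be sketched on a small graph. The payoff values and the triangle network below are the standard Prisoner's dilemma illustration, assumed for the example, not the paper's exact setup.

```python
# Prisoner's dilemma payoffs (reward, sucker, temptation, punishment),
# indexed by (my strategy, opponent's strategy)
R, S, T, P = 3.0, 0.0, 5.0, 1.0
payoff = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

def node_payoff(node, strategy, graph, strategies):
    """Total payoff of `node` playing `strategy` against all its neighbors."""
    return sum(payoff[(strategy, strategies[nb])] for nb in graph[node])

def is_local_nash(graph, strategies):
    """True if no player can improve by unilaterally switching strategy."""
    for node, s in strategies.items():
        other = 'C' if s == 'D' else 'D'
        if node_payoff(node, other, graph, strategies) > \
           node_payoff(node, s, graph, strategies):
            return False
    return True

# Triangle network: all-defect is a local Nash equilibrium in the PD,
# all-cooperate is not (any player gains by defecting)
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
all_d = is_local_nash(graph, {0: 'D', 1: 'D', 2: 'D'})
all_c = is_local_nash(graph, {0: 'C', 1: 'C', 2: 'C'})
```

The same check, applied before the dynamics have converged, is what lets the local Nash condition predict whether cooperators can survive long before the evolutionary stable state is reached.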
Roperch, Jean-Pierre; Benzekri, Karim; Mansour, Hicham; Incitti, Roberto
2015-01-01
Using quantitative methylation-specific PCR (QM-MSP) is a promising method for colorectal cancer (CRC) diagnosis from stool samples. Difficulty in eliminating PCR inhibitors of this body fluid has been extensively reported. Here
Directory of Open Access Journals (Sweden)
Romain Guignard
Full Text Available OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study has been carried out to assess the reliability of a monitoring system for smoking prevalence based on regular quota sampling surveys. DESIGN / OUTCOME MEASURES: In 2010, the current and daily tobacco smoking prevalences obtained in a quota survey of 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (33.9% and 27.5%, respectively, among 15-75 year-olds) than in the quota sample (30.2% and 25.3%, respectively). In both surveys, current and daily prevalences were lower among landline phone owners (31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was slightly related to smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.
Wu, Xiongwu; Brooks, Bernard R.
2015-01-01
Chemical and thermodynamic equilibrium of multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to directly study the equilibrium of multiple states. The method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state. Each subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transitions between states are implemented by the change of their molar fractions. Simulation of a VMMS system allows efficient calculation of the relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state transition sites, an implicit-site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is constant-pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide with 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results. For mouse epidermal growth factor with 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 ionizable groups, and the results agree qualitatively with NMR measurements. This example demonstrates that the VMMS method can be applied to systems with a large number of ionizable groups, and the computational cost scales linearly with the number of ionizable groups. For one of the most challenging systems in constant-pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is state-dependent water penetration that causes the large deviation in lysine 66's pKa. PMID:26506245
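The step from relative free energies to equilibrium molar fractions that the VMMS method relies on can be sketched with the Boltzmann relation; the value of kT and the ΔG values below are illustrative assumptions, not results from the paper.

```python
import math

def equilibrium_fractions(delta_G, kT=0.593):
    """Equilibrium molar fractions x_i ∝ exp(-ΔG_i / kT) over a set of states.
    delta_G: relative free energies of the states (kcal/mol);
    kT defaults to ~0.593 kcal/mol, i.e. room temperature."""
    weights = [math.exp(-g / kT) for g in delta_G]
    Z = sum(weights)
    return [w / Z for w in weights]

# Two-state protonation example with a hypothetical free-energy gap,
# e.g. ΔG = 2.303*kT*(pH - pKa) for an ionizable site
fracs = equilibrium_fractions([0.0, 1.36])
```

Because the fractions follow directly from the relative free energies, sampling all states in one virtual mixture and tracking their molar fractions is equivalent to computing those free-energy differences.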
Teaching Chemical Equilibrium with the Jigsaw Technique
Doymus, Kemal
2008-03-01
This study investigates the effect of cooperative learning (jigsaw) versus individual learning methods on students’ understanding of chemical equilibrium in a first-year general chemistry course. The study was carried out in two different classes in the department of primary science education during the 2005-2006 academic year. One of the classes was randomly assigned as the non-jigsaw group (control) and the other as the jigsaw group (cooperative). Students participating in the jigsaw group were divided into four “home groups”, since the topic of chemical equilibrium is divided into four subtopics (Modules A, B, C and D). Each of these home groups contained four students. The groups were as follows: (1) Home Group A (HGA), representing the equilibrium state and quantitative aspects of equilibrium (Module A); (2) Home Group B (HGB), representing the equilibrium constant and relationships involving equilibrium constants (Module B); (3) Home Group C (HGC), representing altering equilibrium conditions: Le Chatelier’s principle (Module C); and (4) Home Group D (HGD), representing calculations with equilibrium constants (Module D). The home groups then broke apart, like pieces of a jigsaw puzzle, and the students moved into jigsaw groups consisting of members from the other home groups who were assigned the same portion of the material. The jigsaw groups were then in charge of teaching their specific subtopic to the rest of the students in their learning group. The main data collection tool was a Chemical Equilibrium Achievement Test (CEAT), which was applied to both the jigsaw and non-jigsaw groups. The results indicated that the jigsaw group was more successful than the non-jigsaw group (individual learning method).
Equilibrium limit of thermal conduction and boundary scattering in nanostructures.
Haskins, Justin B; Kınacı, Alper; Sevik, Cem; Çağın, Tahir
2014-06-28
Determining the lattice thermal conductivity (κ) of nanostructures is especially challenging in that, aside from the phonon-phonon scattering present in large systems, the scattering of phonons from the system boundary greatly influences heat transport, particularly when system length (L) is less than the average phonon mean free path (MFP). One possible route to modeling κ in these systems is through molecular dynamics (MD) simulations, inherently including both phonon-phonon and phonon-boundary scattering effects in the classical limit. Here, we compare current MD methods for computing κ in nanostructures with both L ⩽ MFP and L ≫ MFP, referred to as mean free path constrained (cMFP) and unconstrained (uMFP), respectively. Using a (10,0) CNT (carbon nanotube) as a benchmark case, we find that while the uMFP limit of κ is well-defined through the use of equilibrium MD and the time-correlation formalism, the standard equilibrium procedure for κ is not appropriate for the treatment of the cMFP limit because of the large influence of boundary scattering. To address this issue, we define an appropriate equilibrium procedure for cMFP systems that, through comparison to high-fidelity non-equilibrium methods, is shown to be the low thermal gradient limit to non-equilibrium results. Further, as a means of predicting κ in systems having L ≫ MFP from cMFP results, we employ an extrapolation procedure based on the phenomenological, boundary scattering inclusive expression of Callaway [Phys. Rev. 113, 1046 (1959)]. Using κ from systems with L ⩽ 3 μm in the extrapolation, we find that the equilibrium uMFP κ of a (10,0) CNT can be predicted within 5%. The equilibrium procedure is then applied to a variety of carbon-based nanostructures, such as graphene flakes (GF), graphene nanoribbons (GNRs), CNTs, and icosahedral fullerenes, to determine the influence of size and environment (suspended versus supported) on κ. Concerning the GF and GNR systems, we find that
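The "time-correlation formalism" referred to above is the Green-Kubo relation: κ is proportional to the time integral of the heat-flux autocorrelation function sampled in equilibrium MD. A minimal sketch of that estimator (the 1D prefactor convention, reduced units, and the constant test signal are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def green_kubo_kappa(flux, dt, volume, temperature, kB=1.0):
    """Green-Kubo estimate (1D form): kappa = V/(kB*T^2) * integral of
    <J(0)J(t)> dt, with the autocorrelation averaged over time origins
    and integrated by the trapezoid rule over the sampled window."""
    flux = np.asarray(flux, dtype=float)
    n = flux.size
    # <J(0)J(t)>: full cross-correlation, positive lags, origin-count normalized
    acf = np.correlate(flux, flux, mode="full")[n - 1:] / np.arange(n, 0, -1)
    integral = dt * (acf[0] / 2.0 + acf[1:-1].sum() + acf[-1] / 2.0)
    return volume * integral / (kB * temperature ** 2)

# constant flux: acf is identically 1, so the integral is just the window length
kappa = green_kubo_kappa([1.0, 1.0, 1.0], dt=1.0, volume=1.0, temperature=1.0)
```

In practice the integration window must be truncated where the autocorrelation has decayed, which is exactly where the boundary-scattering (cMFP) complications discussed in the abstract enter.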
Nonideal plasmas as non-equilibrium media
International Nuclear Information System (INIS)
Morozov, I V; Norman, G E; Valuev, A A; Valuev, I A
2003-01-01
Various aspects of the collective behaviour of non-equilibrium nonideal plasmas are studied. The relaxation of kinetic energy to the equilibrium state is simulated by the molecular dynamics (MD) method for two-component, non-degenerate, strongly non-equilibrium plasmas. The initial non-exponential stage, its duration, and the subsequent exponential stage of the relaxation process are studied for a wide range of ion charge, nonideality parameter and ion mass. A simulation model of a nonideal plasma excited by an electron beam is proposed. An approach is developed to calculate the dynamic structure factor under non-stationary conditions. The instability growth rate (increment) is obtained from MD simulations.
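The exponential stage of such a relaxation is conventionally characterized by a single time constant τ. A sketch of extracting τ from a relaxation curve by a log-linear least-squares fit (the curve here is synthetic; the variable names and numbers are illustrative, not from the paper):

```python
import numpy as np

# synthetic exponential-stage signal dE(t) = dE0 * exp(-t/tau),
# i.e. the kinetic-energy deviation after the non-exponential stage
t = np.linspace(0.0, 5.0, 50)
tau_true = 1.25
dE = 3.0 * np.exp(-t / tau_true)

# log(dE) is linear in t with slope -1/tau; fit it by least squares
slope, intercept = np.polyfit(t, np.log(dE), 1)
tau_fit = -1.0 / slope
```

On noisy MD data the same fit would be restricted to the window where the decay is verifiably exponential.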
MHD equilibrium identification on ASDEX-Upgrade
International Nuclear Information System (INIS)
McCarthy, P.J.; Schneider, W.; Lakner, K.; Zehrfeld, H.P.; Buechl, K.; Gernhardt, J.; Gruber, O.; Kallenbach, A.; Lieder, G.; Wunderlich, R.
1992-01-01
A central activity accompanying the ASDEX-Upgrade experiment is the analysis of MHD equilibria. There are two different numerical methods available, both using magnetic measurements which reflect equilibrium states of the plasma. The first method proceeds via a function parameterization (FP) technique, which uses in-vessel magnetic measurements to calculate up to 66 equilibrium parameters. The second method applies an interpretative equilibrium code (DIVA) for a best fit to a different set of magnetic measurements. Cross-checks with the measured particle influxes from the inner heat shield and the divertor region and with visible camera images of the scrape-off layer are made. (author) 3 refs., 3 figs
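Function parameterization, in essence, regresses equilibrium parameters against magnetic measurements over a database of precomputed equilibria, so that new measurements can be mapped to parameters without solving an equilibrium code. A toy sketch with a purely linear map and synthetic data (the dimensions, the linearity, and all values are illustrative; the actual FP technique is considerably more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "database": 200 equilibria, 12 magnetic signals, 4 parameters,
# related by an (unknown to the fit) exact linear map
n_meas, n_par, n_train = 12, 4, 200
true_map = rng.normal(size=(n_par, n_meas))
meas = rng.normal(size=(n_train, n_meas))
params = meas @ true_map.T

# least-squares fit of the mapping params ~ meas @ W on the database
W, *_ = np.linalg.lstsq(meas, params, rcond=None)

# apply the fitted map to a new set of measurements
new_meas = rng.normal(size=(1, n_meas))
pred = new_meas @ W
truth = new_meas @ true_map.T
```

Because the synthetic relation is exactly linear and the database over-determines it, the recovered map reproduces the true parameters to floating-point precision.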
Numerical method for partial equilibrium flow
International Nuclear Information System (INIS)
Ramshaw, J.D.; Cloutman, L.D. (Los Alamos, New Mexico 87545)
1981-01-01
A numerical method is presented for chemically reactive fluid flow in which equilibrium and nonequilibrium reactions occur simultaneously. The equilibrium constraints on the species concentrations are established by a quadratic iterative procedure. If the equilibrium reactions are uncoupled and of second or lower order, the procedure converges in a single step. In general, convergence is most rapid when the reactions are weakly coupled. This can frequently be achieved by a judicious choice of the independent reactions. In typical transient calculations, satisfactory accuracy has been achieved with about five iterations per time step
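For an uncoupled reaction of second or lower order, the single-step convergence noted above corresponds to the equilibrium constraint reducing to a quadratic with a closed-form root. A sketch for A + B ⇌ C (the function name and the sample concentrations are illustrative, not from the paper):

```python
import math

def equilibrate_second_order(a, b, c, K):
    """Project concentrations (a, b, c) of A + B <=> C onto the constraint
    K = [C]/([A][B]) by solving K*(a-x)*(b-x) = c + x for the extent x:
    K*x^2 - (K*(a+b)+1)*x + (K*a*b - c) = 0, taking the physical root."""
    A = K
    B = -(K * (a + b) + 1.0)
    C = K * a * b - c
    disc = B * B - 4.0 * A * C
    x = (-B - math.sqrt(disc)) / (2.0 * A)   # smaller root keeps a-x, b-x > 0
    return a - x, b - x, c + x

# start far from equilibrium: [A]=[B]=1, [C]=0, K=4
a2, b2, c2 = equilibrate_second_order(1.0, 1.0, 0.0, 4.0)
```

For coupled reactions the same projection must be iterated, which is where the convergence behaviour discussed in the abstract comes in.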
The Conceptual Change Approach to Teaching Chemical Equilibrium
Canpolat, Nurtac; Pinarbasi, Tacettin; Bayrakceken, Samih; Geban, Omer
2006-01-01
This study investigates the effect of a conceptual change approach over traditional instruction on students' understanding of chemical equilibrium concepts (e.g. dynamic nature of equilibrium, definition of equilibrium constant, heterogeneous equilibrium, qualitative interpreting of equilibrium constant, changing the reaction conditions). This…
International Nuclear Information System (INIS)
Santos, Cecilia Martins
2003-01-01
In this work, the efficiency calibration curves of thin-window, low-background gas-flow proportional counters were determined for calibration standards with different energies and different absorber thicknesses. For gross alpha counting we used ²⁴¹Am and natural uranium standards, and for gross beta counting we used ⁹⁰Sr/⁹⁰Y and ¹³⁷Cs standards, in residue thicknesses ranging from 0 to approximately 18 mg/cm². These sample thicknesses were built up with a previously characterized salt solution prepared to simulate the chemical composition of the underground water of IPEN. The counting efficiency for alpha emitters ranged from 0.273 ± 0.038 for a weightless residue to only 0.015 ± 0.002 in a planchet containing 15 mg/cm² of residue for the ²⁴¹Am standard. For the natural uranium standard the efficiency ranged from 0.322 ± 0.030 for a weightless residue to 0.023 ± 0.003 in a planchet containing 14.5 mg/cm² of residue. The counting efficiency for beta emitters ranged from 0.430 ± 0.036 for a weightless residue to 0.247 ± 0.020 in a planchet containing 17 mg/cm² of residue for the ¹³⁷Cs standard. For the ⁹⁰Sr/⁹⁰Y standard the efficiency ranged from 0.489 ± 0.041 for a weightless residue to 0.323 ± 0.026 in a planchet containing 18 mg/cm² of residue. The results make evident the variation of counting efficiency with the energies of the alpha or beta emitters and with the thickness of the water-sample residue. Therefore, the calibration standard, the thickness and the chemical composition of the residue must always be considered in the determination of gross alpha and beta radioactivity in water samples. (author)
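If one assumes a single-exponential self-absorption law, the two quoted 137Cs beta endpoints already fix an interpolating efficiency curve. A sketch under that (strong, purely illustrative) assumption — the abstract itself reports only measured points, not a functional form:

```python
import math

# 137Cs gross-beta endpoints from the abstract:
# efficiency 0.430 at zero residue, 0.247 at 17 mg/cm^2
eps0, eps_t, thickness = 0.430, 0.247, 17.0

# effective mass-attenuation coefficient under eps(t) = eps0 * exp(-mu*t)
mu = math.log(eps0 / eps_t) / thickness   # units: cm^2/mg

def efficiency(t_mg_cm2):
    """Interpolated counting efficiency at residue thickness t (mg/cm^2)."""
    return eps0 * math.exp(-mu * t_mg_cm2)
```

A real calibration would fit all measured thicknesses (and would differ per emitter energy, as the abstract stresses); this sketch only shows the interpolation mechanics.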
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
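The bootstrap idea used above can be sketched in a few lines. This is a simplified stand-in — a percentile interval for the mean of repeated synthetic gain estimates, not the authors' shortest-interval algorithm applied to single-run data:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(gains, n_boot=2000, level=0.95):
    """Percentile bootstrap confidence interval for the mean efficiency
    gain: resample the observed gains with replacement, collect the
    resampled means, and read off the central (level) quantile range."""
    gains = np.asarray(gains, dtype=float)
    means = np.empty(n_boot)
    for i in range(n_boot):
        means[i] = rng.choice(gains, size=gains.size, replace=True).mean()
    lo, hi = np.quantile(means, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# synthetic efficiency-gain estimates from repeated runs (illustrative)
sample_gains = rng.normal(3.0, 0.5, size=40)
lo, hi = bootstrap_ci(sample_gains)
```

The heavy-tailed weight distributions described in the abstract are exactly the regime where such nonparametric intervals outperform the F-distribution assumption.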
Energy Technology Data Exchange (ETDEWEB)
Shizuma, Kiyoshi, E-mail: shizuma@hiroshima-u.ac.jp [Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan); Oba, Yurika; Takada, Momo [Graduate School of Integrated Arts and Sciences, Hiroshima University, Higashi-Hiroshima 739-8521 (Japan)
2016-09-15
A method for determining the γ-ray full-energy peak efficiency at positions close to three Ge detectors and at the well port of a well-type detector was developed for measuring environmental volume samples containing {sup 137}Cs, {sup 134}Cs and {sup 40}K. The efficiency was estimated by considering two correction factors: coincidence-summing and self-absorption corrections. The coincidence-summing correction for a cascade transition nuclide was estimated by an experimental method involving measuring a sample at the far and close positions of a detector. The derived coincidence-summing correction factors were compared with those of analytical and Monte Carlo simulation methods and good agreements were obtained. Differences in the matrix of the calibration source and the environmental sample resulted in an increase or decrease of the full-energy peak counts due to the self-absorption of γ-rays in the sample. The correction factor was derived as a function of the densities of several matrix materials. The present method was applied to the measurement of environmental samples and also low-level radioactivity measurements of water samples using the well-type detector.
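The far/close experimental approach amounts to comparing the observed close-geometry peak rate with the rate predicted from the summing-free far position via a geometry-only efficiency ratio. A sketch of that arithmetic (the function name and all numbers are illustrative, not from the paper):

```python
def summing_correction(rate_close, rate_far, geom_ratio):
    """Coincidence-summing correction factor k such that the true
    close-geometry full-energy peak rate is k * rate_close.

    rate_far:   peak rate at the far position, where summing is negligible
    geom_ratio: close/far efficiency ratio, measured with a summing-free
                single-line source (e.g. 137Cs)
    """
    expected_close = rate_far * geom_ratio   # rate if no summing occurred
    return expected_close / rate_close       # losses make this > 1

# 5% of close-geometry peak counts lost to summing in this illustration
k = summing_correction(rate_close=95.0, rate_far=10.0, geom_ratio=10.0)
```

Self-absorption enters as a separate multiplicative factor, which is why the abstract treats the two corrections independently.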