Equilibrium molecular thermodynamics from Kirkwood sampling.
Somani, Sandeep; Okamoto, Yuko; Ballard, Andrew J; Wales, David J
2015-05-21
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide.
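Since the first method accepts independently generated Kirkwood samples through a Metropolis-style test, its core loop resembles an independence (Metropolized) sampler. Below is a minimal 1-D sketch, with a fixed Gaussian proposal standing in for the Kirkwood distribution and a double-well Boltzmann target; all functions and numbers are illustrative, not taken from the paper.

```python
import math
import random

random.seed(0)

# Hypothetical 1-D stand-in: a fixed Gaussian q plays the role of the
# geometry-based Kirkwood distribution; the target is Boltzmann-like.
def U(x):                          # double-well potential energy
    return (x * x - 1.0) ** 2

def log_p(x, beta=1.0):            # unnormalised target density
    return -beta * U(x)

def log_q(x, mu=0.0, sigma=1.5):   # fixed proposal density (log, unnormalised)
    return -((x - mu) ** 2) / (2 * sigma ** 2)

def draw_q(mu=0.0, sigma=1.5):     # independent trial move
    return random.gauss(mu, sigma)

x = draw_q()
samples = []
for _ in range(20000):
    y = draw_q()
    # Metropolis-Hastings ratio for independent proposals:
    # p(y) q(x) / (p(x) q(y))
    log_acc = (log_p(y) - log_p(x)) + (log_q(x) - log_q(y))
    if random.random() < math.exp(min(0.0, log_acc)):
        x = y
    samples.append(x)

mean_x = sum(samples) / len(samples)   # symmetric wells, so near 0
```

The acceptance rate of such a sampler is governed by the overlap between the proposal and the target, mirroring the paper's remark that efficiency depends on the overlap of the Kirkwood distribution with the equilibrium distribution.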
Discrete Equilibrium Sampling with Arbitrary Nonequilibrium Processes
Hamze, Firas
2015-01-01
We present a novel framework for performing statistical sampling, expectation estimation, and partition function approximation using arbitrary heuristic stochastic processes defined over discrete state spaces. Using a highly parallel construction we call the sequential constraining process, we are able to simultaneously generate states with the heuristic process and accurately estimate their probabilities, even when they are far too small to be realistically inferred by direct counting. After showing that both theoretically correct importance sampling and Markov chain Monte Carlo are possible using the sequential constraining process, we integrate it into a methodology called state space sampling, extending the ideas of state space search from computer science to the sampling context. The methodology comprises a dynamic data structure that constructs a robust Bayesian model of the statistics generated by the heuristic process subject to an accuracy constraint, the posterior Kullback-Leibl...
Financial markets theory equilibrium, efficiency and information
Barucci, Emilio
2017-01-01
This work, now in a thoroughly revised second edition, presents the economic foundations of financial markets theory from a mathematically rigorous standpoint and offers a self-contained critical discussion based on empirical results. It is the only textbook on the subject to include more than two hundred exercises, with detailed solutions to selected exercises. Financial Markets Theory covers classical asset pricing theory in great detail, including utility theory, equilibrium theory, portfolio selection, mean-variance portfolio theory, CAPM, CCAPM, APT, and the Modigliani-Miller theorem. Starting from an analysis of the empirical evidence on the theory, the authors provide a discussion of the relevant literature, pointing out the main advances in classical asset pricing theory and the new approaches designed to address asset pricing puzzles and open problems (e.g., behavioral finance). Later chapters in the book contain more advanced material, including on the role of information in financial markets, non-c...
Power conversion efficiency of non-equilibrium light absorption
I. Santamaría-Holek
2017-04-01
We deduce a novel expression for the non-equilibrium photochemical potential and the power conversion efficiency of non-equilibrium light absorption by a thermostated material. Application of our results for the case of electron migration from valence to conduction bands in photovoltaic cells allows us to accurately interpolate experimental results for the maximal efficiencies of Ge-, Si-, GaAs-based cells and the like.
Equilibrium sampling for a thermodynamic assessment of contaminated sediments
Mayer, Philipp; Nørgaard Schmidt, Stine; Mäenpää, Kimmo
Hydrophobic organic contaminants (HOCs) reaching the aquatic environment are largely stored in sediments. The risk of contaminated sediments is challenging to assess since traditional exhaustive extraction methods yield total HOC concentrations, whereas freely dissolved concentrations (Cfree) ... valid equilibrium sampling (method-incorporated QA/QC). The measured equilibrium concentrations in silicone (CSil) can then be divided by silicone/water partition ratios to yield Cfree. CSil can also be compared to CSil from silicone equilibrated with biota in order to determine the equilibrium status of the biota relative to the sediment. Furthermore, concentrations in lipid at thermodynamic equilibrium with sediment (Clip⇌Sed) can be calculated via lipid/silicone partition ratios (CSil × KLip:Sil), which has been done in studies with limnic, river and marine sediments. The data can then be compared to lipid ... We will focus on the latest developments in equilibrium sampling concepts and methods. Further, we will explain how these approaches can provide a new basis for a thermodynamic assessment of polluted sediments.
Engine efficiency: The Curzon-Ahlborn engine and equilibrium thermodynamics
Bhattacharyya, Kamal
2014-01-01
The Carnot engine sets an upper limit to the efficiency of a practical heat engine. Many practical engines, however, are believed to behave closely like the Curzon-Ahlborn engine. The efficiency of this engine is commonly obtained by invoking the maximum power principle in a non-equilibrium framework. We outline here some plausible routes within the domain of classical thermodynamics to arrive at the same expression for the efficiency. Further, studies of the performance of quite a few practical engines lead us to a simpler approximate formula with better bounds, on the basis of the second law alone.
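For reference, the two efficiency expressions contrasted in this abstract are the Carnot limit and the Curzon-Ahlborn efficiency at maximum power; a quick numerical comparison (the temperatures are illustrative only):

```python
import math

def eta_carnot(t_cold, t_hot):
    # Carnot limit: eta = 1 - Tc/Th (absolute temperatures)
    return 1.0 - t_cold / t_hot

def eta_curzon_ahlborn(t_cold, t_hot):
    # Efficiency at maximum power: eta = 1 - sqrt(Tc/Th)
    return 1.0 - math.sqrt(t_cold / t_hot)

# Illustrative reservoir temperatures in kelvin
tc, th = 300.0, 600.0
print(round(eta_carnot(tc, th), 3))          # 0.5
print(round(eta_curzon_ahlborn(tc, th), 3))  # 0.293
```

The Curzon-Ahlborn value is always below the Carnot limit for the same reservoirs, consistent with it describing operation at maximum power rather than the reversible bound.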
Sustainable Efficiency of Far-from-equilibrium Systems
Michel Moreau
2012-06-01
The Carnot efficiency of usual thermal motors compares the work produced by the motor to the heat received from the hot source, neglecting the perturbation of the cold source: thus, even if it may be appropriate for industrial purposes, it is not pertinent in the scope of sustainable development and environmental care. In the framework of stochastic dynamics we propose a different definition of efficiency, which takes into account the entropy production in all the irreversible processes considered and allows for a fair estimation of the global costs of energy production from heat sources: thus, we may call it "sustainable efficiency". It can be defined for any number of sources and any kind of reservoir, and it may be extended to fields other than conventional thermodynamics, such as biology and, hopefully, economics. Both sustainable efficiency and Carnot efficiency reach their maximum value when the processes are reversible, but then power production vanishes. In practice, it is important to consider these efficiencies out of equilibrium, in the conditions of maximum power production. It can be proved that in these conditions the sustainable efficiency has a universal upper bound, and that the power loss due to irreversibility is at least equal to the power delivered to the mechanical, external system. However, it may be difficult to deduce the sustainable efficiency from experimental observations, whereas Carnot's efficiency is easily measurable and most generally used for practical purposes. It can be shown that the upper bound of sustainable efficiency implies a new upper bound on the Carnot efficiency at maximum power, which is higher than the so-called Curzon-Ahlborn bound of efficiency at maximum power.
Sampling the equilibrium: the j-walking algorithm revisited
Rimas, Zilvinas
2016-01-01
The j-walking Monte Carlo algorithm is revisited and updated to study the equilibrium properties of a system exhibiting broken ergodicity. The updated algorithm is tested on the Ising model and applied to the lattice-gas model for sorption in aerogel at low temperatures, when the dynamics of the system is critically slowed down. It is demonstrated that the updated j-walking simulations are able to produce equilibrium isotherms that are typically hidden by the hysteresis effect in standard single-flip simulations.
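The j-walking idea can be sketched in a few lines: a low-temperature walker occasionally proposes a configuration drawn from a pre-equilibrated high-temperature reservoir and accepts it with a Metropolis ratio involving both temperatures. A toy continuous double-well version follows (purely illustrative; the paper itself works with the Ising and lattice-gas models):

```python
import math
import random

random.seed(1)

def energy(x):
    return (x * x - 1.0) ** 2      # double well, barrier at x = 0

beta_low, beta_high = 8.0, 1.0

# Build the high-temperature reservoir with ordinary Metropolis sampling.
reservoir, x = [], 1.0
for step in range(50000):
    y = x + random.uniform(-0.5, 0.5)
    if random.random() < math.exp(min(0.0, -beta_high * (energy(y) - energy(x)))):
        x = y
    if step % 10 == 0:
        reservoir.append(x)

# Low-temperature walk: 10% j-walking jumps, 90% local moves.
x, visits_left, visits_right = 1.0, 0, 0
for _ in range(50000):
    if random.random() < 0.1:
        y = random.choice(reservoir)            # jump drawn from reservoir
        d_beta = beta_low - beta_high
        acc = math.exp(min(0.0, -d_beta * (energy(y) - energy(x))))
    else:
        y = x + random.uniform(-0.2, 0.2)       # local move
        acc = math.exp(min(0.0, -beta_low * (energy(y) - energy(x))))
    if random.random() < acc:
        x = y
    if x < 0:
        visits_left += 1
    else:
        visits_right += 1
# Both wells are visited, restoring ergodicity across the barrier.
```

Without the jump moves, a walker at beta_low = 8 would remain trapped in one well for the entire run, which is the broken-ergodicity problem the algorithm addresses.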
Equilibrium sampling of hydrophobic organic chemicals in sediments: challenges and new approaches
Schaefer, S.; Mayer, Philipp; Becker, B.
2015-01-01
Freely dissolved concentrations (Cfree) are considered to be the effective concentrations for diffusive uptake and partitioning, and they can be measured by equilibrium sampling. We have thus applied glass jars with multiple coating thicknesses for equilibrium sampling of HOCs in sediment samples from various sites in different German rivers. The coated glass jars were very convenient for routine monitoring campaigns since (1) equilibration times are minimized by the very thin coatings, (2) the equilibration is done in the laboratory and (3) equilibrium sampling is confirmed by equal analyte concentrations in various silicone coating thicknesses without tedious time-series measurements. However, for some sediment samples analyte concentrations decreased towards thicker silicone coatings, possibly caused by depletion of the sediment or equilibrium partitioning not being attained. In this study, we investigated the application of sediment depletion ...
Nikolaos Giannellis; Athanasios Papadopoulos
2006-01-01
This paper proposes an alternative way of testing FOREX efficiency for developing countries. The FOREX market is efficient if it fully reflects all available information. If this holds, the actual exchange rate will not deviate significantly from its equilibrium rate. Moreover, the spot rate should deviate from its equilibrium rate only by transitory components (i.e. it should follow a white noise process). This test is applied to three Central & Eastern European Countries – members of the ...
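The white-noise criterion above can be checked in its simplest form by inspecting sample autocorrelations of the deviations; under the efficiency hypothesis they should be statistically indistinguishable from zero. A sketch on synthetic data (purely illustrative, not the paper's series or test):

```python
import random

random.seed(4)

def lag1_autocorr(series):
    # sample lag-1 autocorrelation coefficient
    n = len(series)
    mean = sum(series) / n
    num = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n - 1))
    den = sum((s - mean) ** 2 for s in series)
    return num / den

# Synthetic deviations of the spot rate from its equilibrium value:
# white noise under the null hypothesis of FOREX efficiency.
deviations = [random.gauss(0.0, 1.0) for _ in range(2000)]
rho1 = lag1_autocorr(deviations)
# |rho1| should be small, on the order of 1/sqrt(n), here about 0.02
```

A persistent, significant autocorrelation in the actual deviations would instead indicate predictable departures from equilibrium, i.e. inefficiency.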
Towards Cost-efficient Sampling Methods
Peng, Luo; Chong, Wu
2014-01-01
Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the observation that a small fraction of vertices with high node degree can carry most of the structural information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method improves on the stratified random sampling method and selects the high-degree nodes with higher probability by classifying the nodes according to their degree distribution. The second sampling method improves the existing snowball sampling method so that it can sample the targeted nodes selectively in every sampling step. Besides, the two proposed sampling methods not only sample the nodes but also pick the edges directly connected to these nodes. In order to demonstrate the two methods' applicability and accuracy, we compare them with the existing sampling methods in...
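A minimal sketch of the degree-biased idea: nodes are drawn with probability proportional to their degree, and the edges incident to sampled nodes are kept. The toy graph and the sampling scheme below are illustrative stand-ins; the paper's stratified and snowball variants are more elaborate.

```python
import random
from collections import defaultdict

random.seed(2)

# Hypothetical toy graph; node 0 is a hub.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (3, 4), (5, 0), (5, 6)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
degree = {n: len(nb) for n, nb in adj.items()}

def degree_biased_sample(k):
    # Draw k distinct nodes with probability proportional to degree,
    # then keep every edge incident to a sampled node.
    pool = dict(degree)
    picked = []
    for _ in range(k):
        nodes = list(pool)
        weights = [pool[n] for n in nodes]
        picked.append(random.choices(nodes, weights=weights)[0])
        del pool[picked[-1]]
    kept = [(u, v) for u, v in edges if u in picked or v in picked]
    return picked, kept

# Over repeated draws the hub is selected most often.
counts = defaultdict(int)
for _ in range(1000):
    first = degree_biased_sample(1)[0][0]
    counts[first] += 1
```

Keeping incident edges along with the sampled nodes reflects the abstract's point that the methods sample edges as well as vertices.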
Club Efficiency and Lindahl Equilibrium with Semi-Public Goods
Ten Raa, T.; Gilles, R.P.
2000-01-01
Limit core allocations are the ones that remain in the core of a replicated economy. An equivalent notion for economies with public goods is Schweizer's club efficiency. We extend this notion to economies with goods that have a semi-public nature. The notion encompasses purely private as well as
Mäenpää, Kimmo; Leppänen, Matti T.; Reichenberg, Fredrik
2011-01-01
... with respect to equilibrium partitioning concentrations in lipids (Clipid,partitioning): (i) Solid phase microextraction in the headspace above the sample (HS-SPME) required optimization for its application to PCBs, and it was calibrated against external partitioning standards in olive oil. (ii) Equilibrium...
Göppel, Tobias; Palyulin, Vladimir V; Gerland, Ulrich
2016-07-27
An out-of-equilibrium physical environment can drive chemical reactions into thermodynamically unfavorable regimes. Under prebiotic conditions such a coupling between physical and chemical non-equilibria may have enabled the spontaneous emergence of primitive evolutionary processes. Here, we study the coupling efficiency within a theoretical model that is inspired by recent laboratory experiments, but focuses on generic effects arising whenever reactant and product molecules have different transport coefficients in a flow-through system. In our model, the physical non-equilibrium is represented by a drift-diffusion process, which is a valid coarse-grained description for the interplay between thermophoresis and convection, as well as for many other molecular transport processes. As a simple chemical reaction, we consider a reversible dimerization process, which is coupled to the transport process by different drift velocities for monomers and dimers. Within this minimal model, the coupling efficiency between the non-equilibrium transport process and the chemical reaction can be analyzed in all parameter regimes. The analysis shows that the efficiency depends strongly on the Damköhler number, a parameter that measures the relative timescales associated with the transport and reaction kinetics. Our model and results will be useful for a better understanding of the conditions for which non-equilibrium environments can provide a significant driving force for chemical reactions in a prebiotic setting.
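The Damköhler number mentioned above compares transport and reaction timescales. A minimal sketch under one common convention (first-order kinetics, advective transport); the numbers are illustrative and not taken from the paper:

```python
# Damköhler number: ratio of the transport timescale to the reaction
# timescale. Da >> 1 means the reaction equilibrates faster than
# molecules are carried through the system.
def damkohler(k_reaction, length, velocity):
    t_transport = length / velocity   # advective residence time [s]
    t_reaction = 1.0 / k_reaction     # first-order reaction time [s]
    return t_transport / t_reaction

# Illustrative values: k = 2 1/s, channel length 1 mm, drift 0.1 mm/s
da = damkohler(k_reaction=2.0, length=1.0e-3, velocity=1.0e-4)
# t_transport = 10 s, t_reaction = 0.5 s, so Da = 20
```

In the paper's setting, the coupling efficiency between the drift-diffusion transport and the dimerization reaction depends strongly on where this dimensionless ratio falls.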
Sampling efficiency of the Moore egg collector
Worthington, Thomas A.; Brewer, Shannon K.; Grabowski, Timothy B.; Mueller, Julia
2013-01-01
Quantitative studies focusing on the collection of semibuoyant fish eggs, which are associated with a pelagic broadcast-spawning reproductive strategy, are often conducted to evaluate reproductive success. Many of the fishes in this reproductive guild have suffered significant reductions in range and abundance. However, the efficiency of the sampling gear used to evaluate reproduction is often unknown and renders interpretation of the data from these studies difficult. Our objective was to assess the efficiency of a modified Moore egg collector (MEC) using field and laboratory trials. Gear efficiency was assessed by releasing a known quantity of gellan beads with a specific gravity similar to that of eggs from representatives of this reproductive guild (e.g., the Arkansas River Shiner Notropis girardi) into an outdoor flume and recording recaptures. We also used field trials to determine how discharge and release location influenced gear efficiency given current methodological approaches. The flume trials indicated that gear efficiency ranged between 0.0% and 9.5% (n = 57) in a simple 1.83-m-wide channel and was positively related to discharge. Efficiency in the field trials was lower, ranging between 0.0% and 3.6%, and was negatively related to bead release distance from the MEC and discharge. The flume trials indicated that the gellan beads were not distributed uniformly across the channel, although aggregation was reduced at higher discharges. This clustering of passively drifting particles should be considered when selecting placement sites for an MEC; further, the use of multiple devices may be warranted in channels with multiple areas of concentrated flow.
Belkadi, Abdelkrim; Yan, Wei; Moggia, Elsa;
2013-01-01
Compositional reservoir simulations are widely used to simulate reservoir processes with strong compositional effects, such as gas injection. The equation-of-state (EoS) based phase equilibrium calculation is a time-consuming part of this type of simulation. The phase equilibrium problem can be either decoupled from or coupled with the transport problem. In the former case, flash calculation is required, which consists of stability analysis and subsequent phase split calculation; in the latter case, no explicit phase split calculation is required, but efficient stability analysis and optimized ... architecture makes the implementation and evaluation of new ideas and concepts easy. Tests on several 2-D and 3-D gas injection examples indicate that with an efficient implementation of the thermodynamic package and the conventional stability analysis algorithm, the speed can be increased severalfold ...
Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation
Nilmeier, J. P.; Crooks, G. E.; Minh, D. D. L.; Chodera, J. D.
2011-10-24
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
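A minimal, self-contained sketch of the idea for a single thermodynamic state: the control parameter of a harmonic well is switched along a symmetric protocol with a Metropolis relaxation step after each increment, and the driven candidate is accepted using the accumulated protocol work rather than an instantaneous energy difference. This is a toy stand-in for illustration, not the authors' implementation.

```python
import math
import random

random.seed(3)

beta, n_switch = 2.0, 10

def U(x, lam):
    return (x - lam) ** 2            # harmonic well centred at lam

def metropolis_x(x, lam):
    # one local relaxation step at fixed lam (propagation)
    y = x + random.uniform(-0.5, 0.5)
    if random.random() < math.exp(min(0.0, -beta * (U(y, lam) - U(x, lam)))):
        return y
    return x

# Symmetric switching protocol: lam goes 0 -> 1 -> 0
path = [i / n_switch for i in range(1, n_switch + 1)]
path += [i / n_switch for i in range(n_switch - 1, -1, -1)]

x, samples = 0.0, []
for _ in range(5000):
    xc, work, lam = x, 0.0, 0.0
    for new_lam in path:
        work += U(xc, new_lam) - U(xc, lam)   # energy jump at the switch
        lam = new_lam
        xc = metropolis_x(xc, lam)            # perturbation, then propagation
    # work-based acceptance preserves the lam = 0 equilibrium
    if random.random() < math.exp(min(0.0, -beta * work)):
        x = xc
    samples.append(x)

mean = sum(samples) / len(samples)            # target mean is 0
var = sum((s - mean) ** 2 for s in samples) / len(samples)  # target var 1/(2*beta)
```

Because the protocol is symmetric and each propagation step obeys detailed balance at the instantaneous parameter value, the work-based acceptance leaves the target distribution exp(-beta x^2) invariant.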
Su, Ji; Yang, Lisha; Lu, Mi; Lin, Hongfei
2015-03-01
A highly efficient, reversible hydrogen storage-evolution process has been developed based on the ammonium bicarbonate/formate redox equilibrium over the same carbon-supported palladium nanocatalyst. This heterogeneously catalyzed hydrogen storage system is comparable to the counterpart homogeneous systems and has shown fast reaction kinetics of both the hydrogenation of ammonium bicarbonate and the dehydrogenation of ammonium formate under mild operating conditions. By adjusting temperature and pressure, the extent of hydrogen storage and evolution can be well controlled in the same catalytic system. Moreover, the hydrogen storage system based on aqueous-phase ammonium formate is advantageous owing to its high volumetric energy density.
Jahnke, Annika; Mayer, Philipp; Adolfsson-Erici, Margaretha
2011-01-01
lipids. In the present study, PDMS thin films were used for equilibrium sampling of polychlorinated biphenyls (PCBs) in intact tissue of two eels and one salmon. A classical exhaustive extraction technique to determine lipid-normalized PCB concentrations, which assigns the body burden of the chemical...... of the equilibrium sampling technique, while at the same time confirming that the fugacity capacity of these lipid-rich tissues for PCBs was dominated by the lipid fraction. Equilibrium sampling was also applied to homogenates of the same fish tissues. The PCB concentrations in the PDMS were 1.2 to 2.0 times higher......Equilibrium sampling of organic pollutants into the silicone polydimethylsiloxane (PDMS) has recently been applied in biological tissues including fish. Pollutant concentrations in PDMS can then be multiplied with lipid/PDMS distribution coefficients (DLipid,PDMS) to obtain concentrations in fish...
Muijs, B.; Jonker, M.T.O.
2012-01-01
Over the past couple of years, several analytical methods have been developed for assessing the bioavailability of environmental contaminants in sediments and soils. Comparison studies suggest that equilibrium passive sampling methods generally provide the better estimates of internal concentrations
SU Qiong; ZHENG Rui; CHEN Yong; CHENG Jian-Ping
2004-01-01
This paper reports the observed changes in the equilibrium factor between 226Ra and 222Rn with the sealing time of the samples. The samples include soil, raw coal, mineral water, cement, rock, etc. In particular, the concepts of "pre-equilibrium time" and "pre-equilibrium factor" are put forward, and methods of measurement and data processing are given that can be used for rapidly reporting the activity of 226Ra in samples with an unknown equilibrium factor. It is concluded that, using the methods given in the paper, a test report can be completed in 3-7 days, instead of one month, after receiving a sample whose activity is not lower than the LLD of the spectrometer.
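The "pre-equilibrium factor" idea can be sketched with textbook decay physics: after sealing, 222Rn grows toward secular equilibrium with 226Ra as 1 - exp(-lambda t), with a 222Rn half-life of about 3.82 days. The paper's exact definition may differ; the numbers below are illustrative.

```python
import math

half_life_d = 3.82                    # 222Rn half-life in days
lam = math.log(2) / half_life_d       # decay constant [1/day]

def pre_equilibrium_factor(t_days):
    # fraction of secular equilibrium reached t_days after sealing
    return 1.0 - math.exp(-lam * t_days)

print(round(pre_equilibrium_factor(3.0), 3))   # about 0.42 after 3 days
print(round(pre_equilibrium_factor(7.0), 3))   # about 0.72 after 7 days
```

This shows why a known correction factor makes 3-7 day reporting possible: the radon in-growth is substantial but well short of full equilibrium, so the measured activity must be divided by the factor above.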
Efficient experimental validation of photonic boson sampling
Spagnolo, N; Bentivegna, M; Brod, D J; Crespi, A; Flamini, F; Giacomini, S; Milani, G; Ramponi, R; Mataloni, P; Osellame, R; Galvao, E F; Sciarrino, F
2013-01-01
A boson sampling device is a specialised quantum computer that solves a problem which is strongly believed to be computationally hard for classical computers. Recently a number of small-scale implementations have been reported, all based on multi-photon interference in multimode interferometers. In the hard-to-simulate regime, even validating the device's functioning may pose a problem. In a recent criticism of boson sampling experiments, Gogolin et al. argued that the output would be effectively indistinguishable from the trivial, uniform distribution. Here we report new boson sampling experiments on larger photonic chips, and analyse the data using a scalable statistical test recently proposed by Aaronson and Arkhipov. We show the test successfully validates small experimental data samples against the hypothesis that they are uniformly distributed. We also show how to discriminate data arising from either indistinguishable or distinguishable photons. Our results pave the way towards demonstrating the quantu...
Perceptual learning increases orientation sampling efficiency
Moerel, D.; Ling, S.; Jehee, J.F.M.
2016-01-01
Visual orientation discrimination is known to improve with extensive training, but the mechanisms underlying this behavioral benefit remain poorly understood. Here, we examine the possibility that more reliable task performance could arise in part because observers learn to sample information from a
Cannon, Cody [Univ. of Idaho, Idaho Falls, ID (United States). Center for Advanced Studies; Wood, Thomas [Univ. of Idaho, Idaho Falls, ID (United States). Center for Advanced Studies; Neupane, Ghanashyam [Idaho National Lab. (INL), Idaho Falls, ID (United States). Center for Advanced Studies; McLing, Travis [Idaho National Lab. (INL), Idaho Falls, ID (United States). Center for Advanced Studies; Mattson, Earl [Idaho National Lab. (INL), Idaho Falls, ID (United States); Dobson, Patrick [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Conrad, Mark [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2014-10-01
The Eastern Snake River Plain (ESRP) is an area of high regional heat flux due to the movement of the North American Plate over the Yellowstone Hotspot beginning ca. 16 Ma. Temperature gradients of 45-60 °C/km (up to double the global average) have been calculated from deep wells that penetrate the upper aquifer system (Blackwell 1989). Despite the high geothermal potential, thermal signatures from hot springs and wells are effectively masked by the rapid flow of cold groundwater through the highly permeable basalts of the Eastern Snake River Plain aquifer (ESRPA) (up to 500+ m thick). This preliminary study is part of an effort to more accurately predict temperatures of the ESRP deep thermal reservoir while accounting for the effects of the prolific cold-water aquifer system above. The study combines traditional geothermometry, mixing models, and a multicomponent equilibrium geothermometry (MEG) tool to investigate the geothermal potential of the ESRP. In March 2014, a collaborative team including members of the University of Idaho, the Idaho National Laboratory, and the Lawrence Berkeley National Laboratory collected 14 thermal water samples from and adjacent to the Eastern Snake River Plain. The preliminary results of chemical analyses and geothermometry applied to these samples are presented herein.
Efficient cosmological parameter sampling using sparse grids
Frommert, Mona; Riller, Thomas; Reinecke, Martin; Bungartz, Hans-Joachim; Ensslin, Torsten
2010-01-01
We present a novel method to significantly speed up cosmological parameter sampling. The method relies on constructing an interpolation of the CMB-log-likelihood based on sparse grids, which is used as a shortcut for the likelihood-evaluation. We obtain excellent results over a large region in parameter space, comprising about 25 log-likelihoods around the peak, and we reproduce the one-dimensional projections of the likelihood almost perfectly. In speed and accuracy, our technique is competitive to existing approaches to accelerate parameter estimation based on polynomial interpolation or neural networks, while having some advantages over them. In our method, there is no danger of creating unphysical wiggles as it can be the case for polynomial fits of a high degree. Furthermore, we do not require a long training time as for neural networks, but the construction of the interpolation is determined by the time it takes to evaluate the likelihood at the sampling points, which can be parallelised to an arbitrary...
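The shortcut idea, reduced to one dimension with plain linear interpolation (the paper uses sparse grids in high dimensions; the likelihood function and numbers below are stand-ins):

```python
import bisect

# Toy surrogate: precompute the log-likelihood on a grid once, then
# interpolate instead of re-evaluating the expensive function.
def expensive_loglike(x):
    # stand-in for a costly CMB log-likelihood evaluation
    return -(x - 0.3) ** 2 / 0.02

grid = [i / 50 for i in range(51)]          # 0.00, 0.02, ..., 1.00
values = [expensive_loglike(g) for g in grid]

def surrogate(x):
    # piecewise-linear interpolation between precomputed grid values
    i = bisect.bisect_right(grid, x) - 1
    i = min(max(i, 0), len(grid) - 2)
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return (1 - t) * values[i] + t * values[i + 1]

err = abs(surrogate(0.317) - expensive_loglike(0.317))
# the surrogate tracks the true curve to within the grid resolution
```

In the paper the interpolant is built on sparse grids so that the number of precomputed points grows only mildly with dimension; the construction cost is dominated by the likelihood evaluations at those points, which parallelise trivially.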
Induced fit and equilibrium dynamics for high catalytic efficiency in ferredoxin-NADP(H) reductases.
Paladini, Darío H; Musumeci, Matías A; Carrillo, Néstor; Ceccarelli, Eduardo A
2009-06-23
Ferredoxin-NADP(H) reductase (FNR) is a FAD-containing protein that catalyzes the reversible transfer of electrons between NADP(H) and ferredoxin or flavodoxin. This enzyme participates in the redox-based metabolism of plastids, mitochondria, and bacteria. Plastidic plant-type FNRs are very efficient reductases in supporting photosynthesis. They have a strong preference for NADP(H) over NAD(H), consistent with the main physiological role of NADP(+) photoreduction. In contrast, FNRs from organisms with heterotrophic metabolisms or anoxygenic photosynthesis display turnover rates that are up to 100-fold lower than those of their plastidic and cyanobacterial counterparts. With the aim of elucidating the mechanisms by which plastidic enzymes achieve such high catalytic efficiencies and NADP(H) specificity, we investigated the manner in which the NADP(H) nicotinamide enters and properly binds to the catalytic site. Analyzing the interaction of different nucleotides, substrate analogues, and aromatic compounds with the wild type and the mutant Y308S-FNR from pea, we found that the interaction of the 2'-P-AMP moiety from NADP(+) induces a change that favors the interaction of the nicotinamide, thereby facilitating the catalytic process. Furthermore, the main role of the terminal tyrosine, Y308, is to destabilize the interaction of the nicotinamide with the enzyme, inducing product release and favoring discrimination of the nucleotide substrate. We determined that this function can be replaced by the addition of aromatic compounds that freely diffuse in solution and establish a dynamic equilibrium, reversing the effect of the mutation in the Y308S-FNR mutant.
Олександр Михайлович Скребцов
2015-10-01
The austenite to ferrite and pearlite transformation has not been studied enough for low-carbon peritectic steels. Experiments were carried out in an electric arc furnace. Samples of the liquid metal were taken during smelting: three samples during the melting, oxidation and reduction periods, as well as one sample from the bucket. An optical binocular microscope, Axio Imager A2m (produced by the German company Zeiss AG), was used to analyze the samples for the chemical composition of the elements and for the microstructure (ferrite and pearlite content). This makes it possible to determine the ferrite-to-pearlite ratio in the steel by means of the special program Thixomet Pro. The experimental percentage of ferrite was compared with the equilibrium percentage of ferrite calculated from the carbon content of the sample using the Fe-C phase diagram. It has been found that during charge melting the experimental ferrite content is 0.52-1.7 times the equilibrium ferrite content. During the recovery period the microstructural heterogeneity stabilizes and is equal to 0.91-0.93 of the equilibrium value. This ratio is in good agreement with the data available in the literature. The amount of rejected finished metal as a function of the temperature of the melt at the outlet of the furnace has also been determined. The amount of rejected steel is at a minimum if the steel is overheated 1.052-1.07 times above the liquidus point, which is equal to the temperature of equilibrium microheterogeneity of the molten metal.
Lang, Susann-Cathrin; Hursthouse, Andrew; Mayer, Philipp
2015-01-01
Solid Phase Microextraction (SPME) was applied to provide the first large scale dataset of freely dissolved concentrations for 9 polycyclic aromatic hydrocarbons (PAHs) in Baltic Sea sediment cores. Polydimethylsiloxane (PDMS) coated glass fibers were used for ex-situ equilibrium sampling followed...
Schäfer, Sabine; Antoni, Catherine; Möhlenkamp, Christel; Claus, Evelyn; Reifferscheid, Georg; Heininger, Peter; Mayer, Philipp
2015-11-01
Equilibrium sampling can be applied to measure freely dissolved concentrations (cfree) of hydrophobic organic chemicals (HOCs) that are considered effective concentrations for diffusive uptake and partitioning. It can also yield concentrations in lipids at thermodynamic equilibrium with the sediment (clip⇌sed) by multiplying concentrations in the equilibrium sampling polymer with lipid to polymer partition coefficients. We have applied silicone coated glass jars for equilibrium sampling of seven 'indicator' polychlorinated biphenyls (PCBs) in sediment samples from ten locations along the River Elbe to measure cfree of PCBs and their clip⇌sed. For three sites, we then related clip⇌sed to lipid-normalized PCB concentrations (cbio,lip) that were determined independently by the German Environmental Specimen Bank in common bream, a fish species living in close contact with the sediment: (1) In all cases, cbio,lip were below clip⇌sed, (2) there was proportionality between the two parameters with high R(2) values (0.92-1.00) and (3) the slopes of the linear regressions were very similar between the three stations (0.297; 0.327; 0.390). These results confirm the close link between PCB bioaccumulation and the thermodynamic potential of sediment-associated HOCs for partitioning into lipids. This novel approach gives clearer and more consistent results compared to conventional approaches that are based on total concentrations in sediment and biota-sediment accumulation factors. We propose to apply equilibrium sampling for determining bioavailability and bioaccumulation potential of HOCs, since this technique can provide a thermodynamic basis for the risk assessment and management of contaminated sediments.
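The conversion step described above (multiplying concentrations in the equilibrium sampling polymer by a lipid/polymer partition ratio) can be sketched as follows; the congener names, concentrations, and the partition ratio are illustrative placeholders, not values from the study:

```python
# Sketch of the conversion described above: concentrations measured in the
# silicone sampling polymer are multiplied by a lipid/polymer partition
# ratio to obtain concentrations in lipid at thermodynamic equilibrium with
# the sediment. All numbers below are illustrative assumptions.

def lipid_equilibrium_conc(c_polymer, k_lip_polymer):
    """c_lip<=>sed = c_polymer * K_lipid/polymer (same mass basis)."""
    return c_polymer * k_lip_polymer

# Hypothetical PCB congener levels in the silicone coating (ug/kg polymer)
c_polymer = {"PCB-28": 1.2, "PCB-101": 3.4, "PCB-153": 5.6}
k_lip = 30.0  # assumed lipid/silicone partition ratio, not a measured value

c_lip_sed = {pcb: lipid_equilibrium_conc(c, k_lip) for pcb, c in c_polymer.items()}
```

The same multiplicative structure applies per congener, which is why proportionality between c_lip⇌sed and lipid-normalized biota concentrations is a meaningful check.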
Jahnke, Annika; MacLeod, Matthew; Wickström, Håkan
2014-01-01
Equilibrium partitioning (EqP) theory is currently the most widely used approach for linking sediment pollution by persistent hydrophobic organic chemicals to bioaccumulation. Most applications of the EqP approach assume (I) a generic relationship between organic carbon-normalized chemical...... chemical concentrations in the silicone, and applying lipid/silicone partition ratios to yield concentrations in lipid at thermodynamic equilibrium with the sediment (CLip⇌Sed). Furthermore, we evaluated the validity of assumption II by comparing CLip⇌Sed of selected persistent, bioaccumulative and toxic...... pollutants (polychlorinated biphenyls (PCBs) and hexachlorobenzene (HCB)) to lipid-normalized concentrations for a range of biota from a Swedish background lake. PCBs in duck mussels, roach, eel, pikeperch, perch and pike were mostly below the equilibrium partitioning level relative to the sediment, i...
Emil Stavrev
2000-01-01
The author of this paper constructs a continuous time macro-econometric model of the Czech economy. The model is assembled as a system of twelve non-linear differential equations. The model is put into use to determine the nominal equilibrium exchange rate of the Czech koruna in a macro-economic framework. The paper also investigates the effectiveness of monetary and fiscal policies in the presence of a fixed exchange-rate regime and massive capital inflows. The search for an equilibrium poin...
Efficient estimation for ergodic diffusions sampled at high frequency
Sørensen, Michael
A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...
An efficient method for sampling the essential subspace of proteins
Amadei, A; Linssen, A.B M; de Groot, B.L.; van Aalten, D.M.F.; Berendsen, H.J.C.
1996-01-01
A method is presented for a more efficient sampling of the configurational space of proteins as compared to conventional sampling techniques such as molecular dynamics. The method is based on the large conformational changes in proteins revealed by the "essential dynamics" analysis. A form of cons
Małolepsza, Edyta; Kim, Jaegil; Keyes, Tom
2015-05-01
Metastable β ice holds small guest molecules in stable gas hydrates, so its solid-liquid equilibrium is of interest. However, aqueous crystal-liquid transitions are very difficult to simulate. A new molecular dynamics algorithm generates trajectories in a generalized NPT ensemble and equilibrates states of coexisting phases with a selectable enthalpy. With replicas spanning the range between β ice and liquid water, we find the statistical temperature from the enthalpy histograms and characterize the transition by the entropy, introducing a general computational procedure for first-order transitions.
Sampling and kriging spatial means: efficiency and conditions.
Wang, Jin-Feng; Li, Lian-Fa; Christakos, George
2009-01-01
Sampling and estimation of geographical attributes that vary across space (e.g., area temperature, urban pollution level, provincial cultivated land, regional population mortality and state agricultural production) are common yet important constituents of many real-world applications. Spatial attribute estimation and the associated accuracy depend on the available sampling design and statistical inference modelling. In the present work, our concern is areal attribute estimation, in which the spatial sampling and Kriging means are compared in terms of mean values, variances of mean values, comparative efficiencies and underlying conditions. Both the theoretical analysis and the empirical study show that the mean Kriging technique outperforms other commonly-used techniques. Estimation techniques that account for spatial correlation (dependence) are more efficient than those that do not, whereas the comparative efficiencies of the various methods change with surface features. The mean Kriging technique can be applied to other spatially distributed attributes, as well.
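The core idea of estimating a spatial mean while accounting for spatial correlation can be illustrated with a best-linear-unbiased (kriging-style) mean, whose weights are proportional to C⁻¹1. The covariance model and observation values below are invented for illustration, not taken from the study:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy 1D transect: two clustered observation points and one distant point,
# with an assumed exponential covariance model C(h) = exp(-h).
locs = [0.0, 0.5, 3.0]
C = [[math.exp(-abs(a - b)) for b in locs] for a in locs]

# Best linear unbiased estimate of the mean: weights w proportional to
# C^{-1} 1, normalized to sum to one. Correlated (clustered) points get
# downweighted relative to the simple average.
raw = solve(C, [1.0] * len(locs))
w = [r / sum(raw) for r in raw]

obs = [2.0, 2.2, 3.0]
gls_mean = sum(wi * zi for wi, zi in zip(w, obs))
```

Here the isolated point at x = 3.0 receives more weight than either clustered point, which is the sense in which correlation-aware estimators are more efficient than a naive average.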
Approximate determination of efficiency for activity measurements of cylindrical samples
Helbig, W. [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany); Bothe, M. [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany)
1997-03-01
Some calibration samples are necessary with the same geometrical parameters but of different materials, each containing a known activity A homogeneously distributed. Their densities are measured; their mass absorption coefficients may be unknown. These calibration samples are positioned in the counting geometry, for instance directly on the detector. The efficiency function ε(E) for each sample is obtained by measuring the gamma spectra and evaluating all usable gamma energy peaks. From these ε(E) the commonly valid ε_geom(E) is deduced. For this purpose the functions ε_μ(E) for these samples have to be established. (orig.)
Efficient maximal Poisson-disk sampling and remeshing on surfaces
Guo, Jianwei
2015-02-01
Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality and the memory consumption.
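The underlying 2D idea (conflict checking against a background grid, which the paper above generalizes to subdivided mesh surfaces) can be sketched with naive dart throwing; this sketch omits the void-detection step needed for true maximality, and the domain size and radius are arbitrary:

```python
import random, math

def poisson_disk_2d(width, height, r, n_tries=2000, seed=1):
    """Naive dart-throwing sketch of 2D Poisson-disk sampling: accept a
    candidate only if it is at least r away from every accepted sample.
    A background grid with cell size r/sqrt(2) holds at most one sample
    per cell, so conflict checks only need to visit nearby cells."""
    rng = random.Random(seed)
    cell = r / math.sqrt(2)
    grid = {}       # (i, j) cell index -> accepted sample in that cell
    samples = []
    for _ in range(n_tries):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        ci, cj = int(x / cell), int(y / cell)
        ok = True
        for di in (-2, -1, 0, 1, 2):        # +/-2 cells cover distance r
            for dj in (-2, -1, 0, 1, 2):
                s = grid.get((ci + di, cj + dj))
                if s and (s[0] - x) ** 2 + (s[1] - y) ** 2 < r * r:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            grid[(ci, cj)] = (x, y)
            samples.append((x, y))
    return samples

pts = poisson_disk_2d(1.0, 1.0, 0.1)
```

The grid makes each conflict check O(1); maximal sampling additionally tracks uncovered "void" regions and keeps inserting until none remain.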
Clustered nested sampling: efficient Bayesian inference for cosmology
Shaw, R; Hobson, M P
2007-01-01
Bayesian model selection provides the cosmologist with an exacting tool to distinguish between competing models based purely on the data, via the Bayesian evidence. Previous methods to calculate this quantity either lacked general applicability or were computationally demanding. However, nested sampling (Skilling 2004), which was recently applied successfully to cosmology by Mukherjee et al. 2006, overcomes both of these impediments. Their implementation restricts the parameter space sampled, and thus improves the efficiency, using a decreasing ellipsoidal bound in the n-dimensional parameter space centred on the maximum likelihood point. However, if the likelihood function contains any multi-modality, then the ellipse is prevented from constraining the sampling region efficiently. In this paper we introduce a method of clustered ellipsoidal nested sampling which can form multiple ellipses around each individual peak in the likelihood. In addition we have implemented a method for determining the expectation...
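The basic nested-sampling loop (without the ellipsoidal or clustered bounds that are the paper's contribution) can be sketched on a toy one-dimensional problem. The rejection step below stands in for the constrained sampling and is only viable for toy likelihoods; the final live-point contribution is also omitted:

```python
import math, random

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log1p(math.exp(min(a, b) - m))

def nested_sampling(loglike, prior_draw, n_live=50, n_iter=300, seed=0):
    """Minimal nested sampling (Skilling 2004): at each step the worst live
    point is removed, credited with a prior-mass shell of width
    X_{i-1} - X_i where X_i = exp(-i/n_live), and replaced by a prior draw
    with higher likelihood (here by brute-force rejection)."""
    rng = random.Random(seed)
    live = [prior_draw(rng) for _ in range(n_live)]
    logZ, logX = -math.inf, 0.0
    for _ in range(n_iter):
        worst = min(live, key=loglike)
        logL = loglike(worst)
        logw = logX + math.log1p(-math.exp(-1.0 / n_live))  # shell width
        logZ = logaddexp(logZ, logw + logL)
        while True:  # rejection step: draw from prior until L > L_worst
            cand = prior_draw(rng)
            if loglike(cand) > logL:
                live[live.index(worst)] = cand
                break
        logX -= 1.0 / n_live
    return logZ

# Toy problem: standard-normal likelihood, uniform prior on [-5, 5].
# Analytic evidence: sqrt(2*pi)/10, so log Z is about -1.38.
loglike = lambda x: -0.5 * x * x
prior_draw = lambda rng: rng.uniform(-5.0, 5.0)
log_evidence = nested_sampling(loglike, prior_draw)
```

The ellipsoidal (and clustered-ellipsoidal) machinery exists precisely to replace the rejection step, whose acceptance rate decays exponentially as the likelihood constraint tightens.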
Convolution kernel design and efficient algorithm for sampling density correction.
Johnson, Kenneth O; Pipe, James G
2009-02-01
Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, attempting to minimize the error in a fully reconstructed image. The resulting weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D.
Recovery efficiencies for Burkholderia thailandensis from various aerosol sampling media
Paul Dabisch
2012-06-01
Burkholderia thailandensis is used in the laboratory as a surrogate of the more virulent B. pseudomallei. Since inhalation is believed to be a natural route of infection for B. pseudomallei, many animal studies with B. pseudomallei and B. thailandensis utilize the inhalation route of exposure. The aim of the present study was to quantify the recovery efficiency of culturable B. thailandensis from several common aerosol sampling devices to ensure that collected microorganisms could be reliably recovered post-collection. The sampling devices tested included 25-mm gelatin filters, 25-mm stainless steel disks used in Mercer cascade impactors, and two types of glass impingers. The results demonstrate that while several processing methods tested resulted in significantly lower physical recovery efficiencies than other methods, it was possible to obtain culturable recovery efficiencies for B. thailandensis and physical recovery efficiencies for 1 μm fluorescent spheres of at least 0.95 from all of the sampling media tested, given an appropriate sample processing procedure. The results of the present study also demonstrated that the bubbling action of liquid media in all-glass impingers (AGIs) can result in physical loss of material from the collection medium, although additional studies are needed to verify the exact mechanisms involved. Overall, the results of this study demonstrate that the collection mechanism as well as the post-collection processing method can significantly affect the recovery from and retention of culturable microorganisms in sampling media, potentially affecting the calculated airborne concentration and any subsequent estimations of risk or dose derived from such data.
Efficient calculation of rate constants: Downhill versus uphill sampling
Klenin, Konstantin V.
2014-08-01
The classical transition state theory (TST), together with the notion of a transmission coefficient, provides a useful tool for the calculation of rate constants for rare events. However, in complex biomolecular reactions, such as protein folding, it is difficult to find a good reaction coordinate, so the transition state is ill-defined. In this case, other approaches are more popular, such as transition interface sampling (TIS) and forward flux sampling (FFS). Here, we show that the algorithms developed in the framework of TIS and FFS can be successfully applied, after a modification, to the calculation of the transmission coefficient. The new procedure (which we call "downhill sampling") is more efficient in comparison with the traditional TIS and FFS ("uphill sampling") even if the reaction coordinate is bad. We also propose a new computational scheme that combines the advantages of TST, TIS, and FFS.
Efficient infill sampling for unconstrained robust optimization problems
Rehman, Samee Ur; Langelaar, Matthijs
2016-08-01
A novel infill sampling criterion is proposed for efficient estimation of the global robust optimum of expensive computer simulation based problems. The algorithm is especially geared towards addressing problems that are affected by uncertainties in design variables and problem parameters. The method is based on constructing metamodels using Kriging and adaptively sampling the response surface via a principle of expected improvement adapted for robust optimization. Several numerical examples and an engineering case study are used to demonstrate the ability of the algorithm to estimate the global robust optimum using a limited number of expensive function evaluations.
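The expected-improvement criterion that the method above adapts for robust optimization has, in its standard (non-robust) form, a closed expression; a minimal sketch for minimization, with mu and sigma standing for the Kriging predictor's mean and standard deviation at a candidate point:

```python
import math

def expected_improvement(mu, sigma, y_best):
    """Standard expected improvement for minimization:
    EI = (y_best - mu) * Phi(z) + sigma * phi(z), with z = (y_best - mu)/sigma,
    where Phi and phi are the standard normal CDF and PDF."""
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)  # degenerate (noise-free, known) point
    z = (y_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (y_best - mu) * cdf + sigma * pdf
```

The criterion balances exploitation (low predicted mu) against exploration (high predictive sigma); the robust variant in the paper replaces y_best and the predictor with their worst-case counterparts over the uncertainty set.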
Efficiency of Monte Carlo sampling in chaotic systems.
Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G
2014-11-01
In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of finite-time Lyapunov exponents in a simple chaotic system and obtain analytically that the computational effort (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained with uniform sampling, and (ii) exhibits suboptimal polynomial scaling, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities to issue a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.
F-Discrepancy for Efficient Sampling in Approximate Dynamic Programming.
Cervellera, Cristiano; Maccio, Danilo
2016-07-01
In this paper, we address the problem of generating efficient state sample points for the solution of continuous-state finite-horizon Markovian decision problems through approximate dynamic programming. It is known that the selection of sampling points at which the value function is observed is a key factor when such a function is approximated by a model based on a finite number of evaluations. A standard approach consists in generating these points through a random or deterministic procedure, aiming at a balanced covering of the state space. Yet, this solution may not be efficient if the state trajectories are not uniformly distributed. Here, we propose to exploit F-discrepancy, a quantity that measures how closely a set of random points represents a probability distribution, and introduce an example of an algorithm based on this concept to automatically select point sets that are efficient with respect to the underlying Markovian process. An error analysis of the approximate solution is provided, showing how the proposed algorithm enables convergence under suitable regularity hypotheses. Then, simulation results are provided concerning an inventory forecasting test problem. The tests confirm in general the important role of F-discrepancy, and show how the proposed algorithm is able to yield better results than uniform sampling, using sets even 50 times smaller.
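In one dimension, F-discrepancy reduces to the sup-distance between the empirical CDF of a point set and the target CDF F (the Kolmogorov-Smirnov statistic); a small illustration with an assumed uniform target:

```python
def f_discrepancy(points, cdf):
    """One-dimensional F-discrepancy: sup over x of |F_n(x) - F(x)|,
    where F_n is the empirical CDF of the point set. The supremum is
    attained just before or at a sample point, so checking both one-sided
    gaps at each sorted sample suffices."""
    pts = sorted(points)
    n = len(pts)
    d = 0.0
    for i, x in enumerate(pts):
        fx = cdf(x)
        d = max(d, abs(fx - i / n), abs((i + 1) / n - fx))
    return d

# Uniform target on [0, 1]: equally spaced midpoints have low discrepancy,
# while a set clustered in [0, 0.45] represents the distribution poorly.
uniform_cdf = lambda x: min(max(x, 0.0), 1.0)
good = [(i + 0.5) / 10 for i in range(10)]
bad = [0.05 * i for i in range(10)]
```

A set with low F-discrepancy with respect to the state-trajectory distribution covers exactly the regions where the value function will actually be queried, which is the motivation above.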
Johal, Ramandeep S.; Rai, Renuka
2016-01-01
We show the validity of some results of finite-time thermodynamics, also within the quasi-static framework of classical thermodynamics. First, we consider the efficiency at maximum work (η_0) from a finite source and sink modelled as identical thermodynamic systems. The near-equilibrium regime is characterized by expanding the internal energy up to second order (i.e. up to linear response) in the difference of initial entropies of the source and the sink. It is shown that the efficiency is given by the universal expression 2η_C/(4 − η_C), where η_C is the Carnot efficiency. Then, different sizes of source and sink are treated, by combining different numbers of copies of the same thermodynamic system. The efficiency of this process is found to be η_0 = η_C/(2 − γη_C), where the parameter γ depends only on the relative size of the source and the sink. This implies that within linear response theory, η_0 is bounded as η_C/2 ≤ η_0 ≤ η_C/(2 − η_C), where the upper (lower) bound is obtained with a sink much larger (smaller) in size than the source. We also remark on the behavior of the efficiency beyond linear response.
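The quoted expressions are easy to check numerically; note that setting γ = 1/2 in η_0 = η_C/(2 − γη_C) reproduces the identical-systems result 2η_C/(4 − η_C). A small sketch (the value of η_C is arbitrary):

```python
def eta0(eta_c, gamma):
    """Efficiency at maximum work in the linear-response regime,
    eta0 = eta_C / (2 - gamma * eta_C); gamma encodes the relative sizes
    of source and sink (gamma -> 1: sink much larger than the source,
    gamma -> 0: sink much smaller)."""
    return eta_c / (2.0 - gamma * eta_c)

eta_c = 0.4                                # arbitrary Carnot efficiency
equal_sizes = 2.0 * eta_c / (4.0 - eta_c)  # identical source and sink
lower, upper = eta0(eta_c, 0.0), eta0(eta_c, 1.0)
```

Algebraically, η_C/(2 − η_C/2) = 2η_C/(4 − η_C), so the identical-size case sits exactly midway in γ between the two bounds.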
Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.
Anthony, T Renée; Sleeth, Darrah; Volckens, John
2016-01-01
In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulent model) and particle transport (laminar trajectories, 1-116 μm) were conducted using sampling flow rates of 10 L min(-1) in slow moving air (0.2 m s(-1)) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles through 100 μm, but the largest opening was found to increase the efficiency for the 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. The sampler efficiency was increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers.
Efficient computation of smoothing splines via adaptive basis sampling
Ma, Ping
2015-06-24
Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions, and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full-basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
Nezarat, Amin; Dastghaibifard, G H
2015-01-01
One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects maximum profitability and, on the other hand, users expect to have the best resources at their disposal given budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since this environment is economic in nature, economic methods can reduce response time and the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game-theoretic mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid from their objective function over several rounds and send it to the auctioneer, and the auctioneer chooses the winning player based on the proposed utility function. In the proposed method, the end point of the game is the Nash equilibrium, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer's utility function. To prove convexity of the response space, the Lagrange method is used, and the proposed model is simulated in CloudSim with the results compared with previous work. It is concluded that this method converges to a solution in a shorter time, produces the fewest service-level-agreement violations, and provides the most utility to the provider.
Ahn, Surl Hee; Grate, Jay W.; Darve, Eric F.
2017-08-21
Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microsecond or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with a high-dimensional space of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the "Concurrent Adaptive Sampling (CAS) algorithm," has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction-coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and a triazine polymer.
Hanley, Nick D. [Department of Economics, University of Stirling, Stirling FK9 4LA (United Kingdom); McGregor, Peter G.; Swales, J. Kim [Fraser of Allander Institute, CPPR and Department of Economics, University of Strathclyde, Sir William Duncan Building, 130 Rottenrow, Glasgow G4 0GE (United Kingdom); Turner, Karen [Fraser of Allander Institute and Department of Economics, University of Strathclyde, Sir William Duncan Building, 130 Rottenrow, Glasgow G4 0GE (United Kingdom)
2006-02-01
Sustainable development is a key objective of UK national and regional policies. Improvements in resource productivity have been suggested as both a measure of progress towards sustainable development and as a means of achieving sustainability. Making 'more with less' intuitively seems to be good for the environment, and this is the presumption of current UK policy. However, in a system-wide context, improvements in energy efficiency lower the cost of energy in efficiency units and may even stimulate the consumption and production of energy measured in physical units, and increase pollution. Simulations of a computable general equilibrium model of Scotland suggest that an across the board stimulus to energy efficiency there would actually stimulate energy production and consumption and lead to a deterioration in environmental indicators. The implication is that policies directed at stimulating energy efficiency are not, in themselves, sufficient to secure environmental improvements: this may require the use of complementary energy policies designed to moderate incentives to increased energy consumption. (author)
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear
2015-07-01
Sample size and computational uncertainty were varied in order to investigate sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was obtained by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
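The replicate-based convergence check described above can be mimicked on a toy model; here a simple analytic function stands in for the MCNPX transport calculation, and all numbers are illustrative:

```python
import random, statistics

def propagate(model, draw_input, n, seed=0):
    """Sampling-based uncertainty propagation: draw n uncertain inputs,
    run the model on each, and report the standard deviation of the outputs."""
    rng = random.Random(seed)
    outputs = [model(draw_input(rng)) for _ in range(n)]
    return statistics.stdev(outputs)

# Toy stand-in for the transport calculation (not MCNPX): the output responds
# almost linearly to a radius with mean 1.0 and 1-sigma uncertainty 0.02.
model = lambda r: 1.0 + 0.5 * r + 0.1 * r * r
draw_radius = lambda rng: rng.gauss(1.0, 0.02)

# Convergence check in the spirit of the abstract: repeat the propagation
# with independent replicates and require the estimates to agree closely.
estimates = [propagate(model, draw_radius, 200, seed=k) for k in range(10)]
spread = statistics.stdev(estimates)
```

When the spread across replicates exceeds the tolerance, the sample size n is increased; the abstract's finding is that, at fixed cost, increasing n pays off more than reducing the per-sample computational uncertainty.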
Efficient triangulation of Poisson-disk sampled point sets
Guo, Jianwei
2014-05-06
In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speed up the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches.
An efficient sampling technique for sums of bandpass functions
Lawton, W. M.
1982-01-01
A well known sampling theorem states that a bandlimited function can be completely determined by its values at a uniformly placed set of points whose density is at least twice the highest frequency component of the function (Nyquist rate). A less familiar but important sampling theorem states that a bandlimited narrowband function can be completely determined by its values at a properly chosen, nonuniformly placed set of points whose density is at least twice the passband width. This allows for efficient digital demodulation of narrowband signals, which are common in sonar, radar and radio interferometry, without the side effect of signal group delay from an analog demodulator. This theorem was extended by developing a technique which allows a finite sum of bandlimited narrowband functions to be determined by its values at a properly chosen, nonuniformly placed set of points whose density can be made arbitrarily close to the sum of the passband widths.
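For uniform sampling, the familiar form of the bandpass theorem gives explicit windows of admissible rates; the nonuniform scheme in the abstract relaxes these constraints further, but the uniform version is useful background and easy to sketch:

```python
import math

def valid_bandpass_rates(f_lo, f_hi):
    """Uniform-sampling form of the bandpass theorem: a real signal confined
    to [f_lo, f_hi] can be sampled without aliasing at any rate fs with
    2*f_hi/n <= fs <= 2*f_lo/(n-1) for some integer n up to f_hi/(f_hi-f_lo).
    Returns the list of (n, fs_min, fs_max) windows; n = 1 recovers the
    ordinary Nyquist condition fs >= 2*f_hi."""
    bw = f_hi - f_lo
    ranges = []
    for n in range(1, int(f_hi / bw) + 1):
        fs_min = 2.0 * f_hi / n
        fs_max = math.inf if n == 1 else 2.0 * f_lo / (n - 1)
        if fs_min <= fs_max:
            ranges.append((n, fs_min, fs_max))
    return ranges

# A 2 kHz-wide band centred at 9 kHz can be sampled far below 2*f_hi = 20 kHz:
# the lowest admissible uniform rate approaches twice the bandwidth (4 kHz).
ranges = valid_bandpass_rates(8e3, 10e3)
```

This is the sense in which the density of sample points "at least twice the passband width" suffices; the sum-of-bandpass extension in the abstract pushes the same idea to multiple bands.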
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
Coles, Henry C.; Qin, Yong; Price, Phillip N.
2014-11-01
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
Sampling efficiency of national, EU and global stratifications: exploring by using CL2000
Metzger, M.J.; Brus, D.J.; Ortega, M.
2012-01-01
Stratification, dividing the statistical population into less heterogeneous subgroups before sampling, can help improve sampling efficiency by improving representativeness and reducing sampling error. This report explores the added sampling efficiency that is achieved by using the European Environme
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
2017-08-01
Molecular dynamics simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules, but they are limited by the time scale barrier. That is, we may not obtain these properties efficiently because we need to run microseconds or longer simulations using femtosecond time steps. To overcome this time scale barrier, we can use the weighted ensemble (WE) method, a powerful enhanced sampling method that efficiently samples thermodynamic and kinetic properties. However, the WE method requires an appropriate partitioning of phase space into discrete macrostates, which can be problematic when we have a high-dimensional collective space or when little is known a priori about the molecular system. Hence, we developed a new WE-based method, called the "Concurrent Adaptive Sampling (CAS) algorithm," to tackle these issues. The CAS algorithm is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and adaptive macrostates to enhance the sampling in the high-dimensional space. This is especially useful for systems in which we do not know what the right reaction coordinates are, in which case we can use many collective variables to sample conformations and pathways. In addition, a clustering technique based on the committor function is used to accelerate sampling the slowest process in the molecular system. In this paper, we introduce the new method and show results from two-dimensional models and bio-molecules, specifically penta-alanine and a triazine trimer.
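The split/merge bookkeeping at the heart of weighted-ensemble methods such as CAS can be sketched in a few lines. The following is a toy illustration, not the CAS algorithm itself: a one-dimensional progress coordinate, fixed bin edges, and a uniform target walker count per bin are all assumptions, and `we_resample` is a hypothetical helper name.

```python
import random

def we_resample(walkers, bin_edges, target=4):
    """One weighted-ensemble style resampling step (toy sketch):
    regroup (position, weight) walkers into bins along a 1D progress
    coordinate, then split/merge so each occupied bin holds `target`
    walkers while the total statistical weight is conserved."""
    bins = {}
    for x, w in walkers:
        b = sum(1 for e in bin_edges if x >= e)  # bin index of walker
        bins.setdefault(b, []).append((x, w))
    out = []
    for members in bins.values():
        total = sum(w for _, w in members)
        xs = [x for x, _ in members]
        ws = [w for _, w in members]
        # resample positions proportionally to weight, then assign each
        # survivor an equal share of the bin's total weight
        chosen = random.choices(xs, weights=ws, k=target)
        out.extend((x, total / target) for x in chosen)
    return out
```

Because every bin ends up with the same walker count but weight is conserved, rarely visited bins keep many low-weight walkers alive, which is what lets WE-type methods sample slow transitions.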
The Efficiency of the On-line Samplings
Ileana Gabriela NICULESCU-ARON
2006-01-01
The rapid growth of technology in recent decades has led to the collateral development of many other sciences. One of the most important inventions was the Internet and web technologies, with a tremendous impact on society. Statistics, a social science itself in ongoing development, has only to gain from that. Lately, on-line sampling techniques have developed greatly. Every web site of a certain importance includes questionnaires in various forms. These vary from a single question to lengthy forms and are part of the daily life of those who access the World Wide Web. The main question is how reliable the results derived from these samplings are, as the central issue is representativeness. A nonrepresentative sample is a futile one. It may be more convenient to post a question on a web page and wait for answers from the page's visitors, but how representative are those answers of the target audience? The present paper aims to describe on-line sampling methodologies and to analyze their efficiency by presenting their advantages and drawbacks.
General space-efficient sampling algorithm for suboptimal alignment
CHEN; Yi; BAI; Yan-qin
2009-01-01
Suboptimal alignments always reveal additional interesting biological features and have been successfully used to informally estimate the significance of an optimal alignment. Moreover, traditional dynamic programming algorithms for sequence comparison require quadratic space, and hence are infeasible for long protein or DNA sequences. In this paper, a space-efficient sampling algorithm for computing suboptimal alignments is described. The algorithm uses a general gap model, where the cost associated with gaps is given by an affine score, and randomly selects an alignment according to the distribution of weights of all potential alignments. If x and y are two sequences with lengths n and m, respectively, then the space requirement of this algorithm is linear in the sum of n and m. Finally, an example illustrates the utility of the algorithm.
Pseudospectral Gaussian quantum dynamics: Efficient sampling of potential energy surfaces
Heaps, Charles W.; Mazziotti, David A.
2016-04-01
Trajectory-based Gaussian basis sets have been tremendously successful in describing high-dimensional quantum molecular dynamics. In this paper, we introduce a pseudospectral Gaussian-based method that achieves accurate quantum dynamics using efficient, real-space sampling of the time-dependent basis set. As in other Gaussian basis methods, we begin with a basis set expansion using time-dependent Gaussian basis functions guided by classical mechanics. Unlike other Gaussian methods but characteristic of the pseudospectral and collocation methods, the basis set is tested with N Dirac delta functions, where N is the number of basis functions, rather than using the basis functions as test functions. As a result, the integration for matrix elements is reduced to function evaluation. Pseudospectral Gaussian dynamics only requires O(N) potential energy calculations, in contrast to O(N²) evaluations in a variational calculation. The classical trajectories allow small basis sets to sample high-dimensional potentials. Applications are made to diatomic oscillations in a Morse potential and a generalized version of the Henon-Heiles potential in two, four, and six dimensions. Comparisons are drawn to full analytical evaluation of potential energy integrals (variational) and the bra-ket averaged Taylor (BAT) expansion, an O(N) approximation used in Gaussian-based dynamics. In all cases, the pseudospectral Gaussian method is competitive with full variational calculations that require a global, analytical, and integrable potential energy surface. Additionally, the BAT breaks down when quantum mechanical coherence is particularly strong (i.e., barrier reflection in the Morse oscillator). The ability to obtain variational accuracy using only the potential energy at discrete points makes the pseudospectral Gaussian method a promising avenue for on-the-fly dynamics, where electronic structure calculations become computationally significant.
Forward flux sampling-type schemes for simulating rare events: Efficiency analysis
Allen, R.J.; Frenkel, D.; Wolde, P.R. ten
2006-01-01
We analyze the efficiency of several simulation methods which we have recently proposed for calculating rate constants for rare events in stochastic dynamical systems in or out of equilibrium. We derive analytical expressions for the computational cost of using these methods and for the statistical
Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam
2009-08-21
An optimized variant of the nested Markov chain Monte Carlo [n(MC)(2)] method [J. Chem. Phys. 130, 164104 (2009)] is applied to fluid N(2). In this implementation of n(MC)(2), isothermal-isobaric (NPT) ensemble sampling on the basis of a pair potential (the "reference" system) is used to enhance the efficiency of sampling based on Perdew-Burke-Ernzerhof density functional theory with a 6-31G(*) basis set (PBE6-31G(*), the "full" system). A long sequence of Monte Carlo steps taken in the reference system is converted into a trial step taken in the full system; for a good choice of reference potential, these trial steps have a high probability of acceptance. Using decorrelated samples drawn from the reference distribution, the pressure and temperature of the full system are varied such that its distribution overlaps maximally with that of the reference system. Optimized pressures and temperatures then serve as input parameters for n(MC)(2) sampling of dense fluid N(2) over a wide range of thermodynamic conditions. The simulation results are combined to construct the Hugoniot of nitrogen fluid, yielding predictions in excellent agreement with experiment.
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the most simple and accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient or less accurate when very low probabilities are to be computed
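The accuracy/efficiency tension for low probabilities is easy to demonstrate: plain Monte Carlo needs an enormous sample to see a rare event at all, while an importance-sampling proposal centered on the rare region recovers it cheaply. A minimal sketch follows; the function name and the N(0,1) tail-probability example are illustrative, not taken from the paper.

```python
import math
import random

def tail_prob_is(threshold, shift, n=100_000, seed=1):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0,1),
    drawing from a shifted proposal N(shift, 1) so that the rare region
    is hit often; each hit is reweighted by the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1.0)
        if y > threshold:
            # likelihood ratio phi(y) / phi(y - shift) of two unit-variance normals
            total += math.exp(-0.5 * y * y + 0.5 * (y - shift) ** 2)
    return total / n
```

With `threshold = shift = 4`, roughly half the proposal draws land in the tail and the estimate converges to P(X > 4) ≈ 3.2e-5, whereas a plain Monte Carlo run with the same budget of 10⁵ draws would expect only about three tail hits.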
Kowalski, Piotr M
2011-01-01
The mass-dependent equilibrium stable isotope fractionation between different materials is an important geochemical process. Here we present an efficient method to compute the isotope fractionation between complex minerals and fluids at high pressure, P, and temperature, T, representative for the Earth's crust and mantle. The method is tested by computation of the equilibrium fractionation of lithium isotopes between aqueous fluids and various Li bearing minerals such as staurolite, spodumene and mica. We are able to correctly predict the direction of the isotope fractionation as observed in the experiments. On the quantitative level the computed fractionation factors agree within 1.0 permil with the experimental values indicating predictive power of ab initio methods. We show that with ab initio methods we are able to investigate the underlying mechanisms driving the equilibrium isotope fractionation process, such as coordination of the fractionating elements, their bond strengths to the neighboring atoms, c...
Mäenpää, Kimmo; Leppänen, Matti T.; Figueiredo, Kaisa;
2015-01-01
of hydrophobic organic chemicals in biota lipids. The authors' aim was to assess the equilibrium status of polychlorinated biphenyls (PCBs) in a contaminated lake ecosystem and along its discharge course using equilibrium sampling devices for measurements in sediment and water and by also analyzing biota...... in model lipids. Overall, the studied ecosystem appeared to be in disequilibrium for the studied phases: sediment, water, and biota. Chemical activities of PCBs were higher in sediment than in water, which implies that the sediment functioned as a partitioning source of PCBs and that net diffusion occurred...... from the sediment to the water column. Measured lipid-normalized PCB concentrations in biota were generally below equilibrium lipid concentrations relative to the sediment (CLip ⇌Sed ) or water (CLip ⇌W ), indicating that PCB levels in the organisms were below the maximum partitioning levels...
The efficiency of systematic sampling in stereology-reconsidered
Gundersen, Hans Jørgen Gottlieb; Jensen, Eva B. Vedel; Kieu, K
1999-01-01
In the present paper, we summarize and further develop recent research in the estimation of the variance of stereological estimators based on systematic sampling. In particular, it is emphasized that the relevant estimation procedure depends on the sampling density. The validity of the variance...... estimation is examined in a collection of data sets, obtained by systematic sampling. Practical recommendations are also provided in a separate section....
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses
Ismet DOGAN
2015-10-01
Full Text Available Objective: Choosing the most efficient statistical test is one of the essential problems of statistics. Asymptotic relative efficiency is a notion which enables to implement in large samples the quantitative comparison of two different tests used for testing of the same statistical hypothesis. The notion of the asymptotic efficiency of tests is more complicated than that of asymptotic efficiency of estimates. This paper discusses the effect of sample size on expected values and variances of non-parametric tests for independent two samples and determines the most effective test for different sample sizes using Fraser efficiency value. Material and Methods: Since calculating the power value in comparison of the tests is not practical most of the time, using the asymptotic relative efficiency value is favorable. Asymptotic relative efficiency is an indispensable technique for comparing and ordering statistical test in large samples. It is especially useful in nonparametric statistics where there exist numerous heuristic tests such as the linear rank tests. In this study, the sample size is determined as 2 ≤ n ≤ 50. Results: In both balanced and unbalanced cases, it is found that, as the sample size increases expected values and variances of all the tests discussed in this paper increase as well. Additionally, considering the Fraser efficiency, Mann-Whitney U test is found as the most efficient test among the non-parametric tests that are used in comparison of independent two samples regardless of their sizes. Conclusion: According to Fraser efficiency, Mann-Whitney U test is found as the most efficient test.
Efficiently estimating salmon escapement uncertainty using systematically sampled data
Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.
2007-01-01
Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
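The bias issue for nonreplicated systematic samples can be illustrated with two textbook variance estimators: the naive simple-random-sampling formula, and a successive-difference estimator that discounts smooth trends such as diurnal patterns. This is a generic sketch under assumed equal-probability sampling of n of N counting periods, not the five estimators compared in the paper; the function names are illustrative.

```python
def srs_variance(sample, N):
    """Variance of the estimated total if the systematic sample were
    treated as a simple random sample of n of N units (naive choice)."""
    n = len(sample)
    mean = sum(sample) / n
    s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return N * N * (1 - n / N) * s2 / n

def sd_variance(sample, N):
    """Successive-difference estimator: replaces s2 with half the mean
    squared difference of neighbours, discounting the smooth trends
    that a systematic sample already tracks."""
    n = len(sample)
    d2 = sum((sample[i + 1] - sample[i]) ** 2 for i in range(n - 1))
    return N * N * (1 - n / N) * d2 / (2 * n * (n - 1))
```

On counts with a strong trend (e.g. a seasonal run), the successive-difference estimator reports a much smaller, less upwardly biased variance than the naive formula, which is the kind of difference in bias the study quantifies.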
Adaptive cluster sampling: An efficient method for assessing inconspicuous species
Andrea M. Silletti; Joan Walker
2003-01-01
Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...
Efficient Unbiased Rendering using Enlightened Local Path Sampling
Kristensen, Anders Wang
The downside to using these algorithms is that they can be slow to converge. Due to the nature of Monte Carlo methods, the results are random variables subject to variance. This manifests itself as noise in the images, which can only be reduced by generating more samples. The reason these methods are slow...... is because of a lack of effective methods of importance sampling. Most global illumination algorithms are based on local path sampling, which is essentially a recipe for constructing random walks. Using this procedure paths are built based on information given explicitly as part of scene description......, such as the location of the light sources or cameras, or the reflection models at each point. In this work we explore new methods of importance sampling paths. Our idea is to analyze the scene before rendering and compute various statistics that we use to improve importance sampling. The first of these are adjoint...
Efficient estimation for ergodic diffusions sampled at high frequency
Sørensen, Michael
of estimators including most of the previously proposed estimators for diffusion processes, for instance GMM-estimators and the maximum likelihood estimator. Simple conditions are given that ensure rate optimality, where estimators of parameters in the diffusion coefficient converge faster than estimators...... of parameters in the drift coefficient, and for efficiency. The conditions turn out to be equal to those implying small Δ-optimality in the sense of Jacobsen and thus give an interpretation of this concept in terms of classical statistical concepts. Optimal martingale estimating functions in the sense...... of Godambe and Heyde are shown to give rate optimal and efficient estimators under weak conditions....
Efficient Sample Tracking With OpenLabFramework
List, Markus; Schmidt, Steffen; Trojnar, Jakub
2014-01-01
and genetically engineered cell lines. OpenLabFramework is a newly developed web-application for sample tracking, particularly laid out to fill this gap, but with an open architecture allowing it to be extended for other biological materials and functional data. Its sample tracking mechanism is fully customizable...... of samples created and need to be replaced with state-of-the-art laboratory information management systems. Such systems have been developed in large numbers, but they are often limited to specific research domains and types of data. One domain so far neglected is the management of libraries of vector clones...
Efficient sample tracking with OpenLabFramework.
List, Markus; Schmidt, Steffen; Trojnar, Jakub; Thomas, Jochen; Thomassen, Mads; Kruse, Torben A; Tan, Qihua; Baumbach, Jan; Mollenhauer, Jan
2014-03-04
The advance of new technologies in biomedical research has led to a dramatic growth in experimental throughput. Projects therefore steadily grow in size and involve a larger number of researchers. Spreadsheets traditionally used are thus no longer suitable for keeping track of the vast amounts of samples created and need to be replaced with state-of-the-art laboratory information management systems. Such systems have been developed in large numbers, but they are often limited to specific research domains and types of data. One domain so far neglected is the management of libraries of vector clones and genetically engineered cell lines. OpenLabFramework is a newly developed web-application for sample tracking, particularly laid out to fill this gap, but with an open architecture allowing it to be extended for other biological materials and functional data. Its sample tracking mechanism is fully customizable and aids productivity further through support for mobile devices and barcoded labels.
Ding, Weili; Lu, Ming
2007-01-01
Lacking guidance of general equilibrium (GE) theories in public economics and the corresponding proper mechanisms, China has not surprisingly witnessed an inequality in educational expenditures across regions as well as insufficiency of funds for education in poor areas. It is wrongly thought that what happens is due to the decentralized financing…
Efficient importance sampling in low dimensions using affine arithmetic
Everitt, Richard G.
2017-01-01
Despite the development of sophisticated techniques such as sequential Monte Carlo, importance sampling (IS) remains an important Monte Carlo method for low dimensional target distributions. This paper describes a new technique for constructing proposal distributions for IS, using affine arithmetic. This work builds on the Moore rejection sampler to which we provide a comparison.
Efficient sampling of Gaussian graphical models using conditional Bayes factors
Hinne, M.; Lenkoski, A.; Heskes, T.M.; Gerven, M.A.J. van
2014-01-01
Bayesian estimation of Gaussian graphical models has proven to be challenging because the conjugate prior distribution on the Gaussian precision matrix, the G-Wishart distribution, has a doubly intractable partition function. Recent developments provide a direct way to sample from the G-Wishart
Grassi, A., E-mail: agrassi@unict.it [Dipartimento di Scienze del Farmaco, v.le A. Doria 6, Università di Catania, 95125 Catania (Italy); Lombardo, G.M. [Dipartimento di Scienze del Farmaco, v.le A. Doria 6, Università di Catania, 95125 Catania (Italy); Pannuzzo, M., E-mail: martina.pannuzzo@gmail.com [Department of Computational Biology, University of Erlangen-Nuremberg, Staudtstrasse 5, 91058 Erlangen (Germany); Raudino, A., E-mail: araudino@dipchi.unict.it [Dipartimento di Scienze Chimiche, v.le A. Doria 6, Università di Catania, 95125 Catania (Italy)
2015-02-06
Highlights: • We analyzed the particles dynamics subject to an oscillating “trap”. • Numerical solution to FPE with various trap's oscillating frequencies and amplitudes. • Out-of-equilibrium calculations describe the evolution toward a “pseudo” equilibrium state. • At pseudo-equilibrium state trapped particles density depends on frequency and amplitude. - Abstract: We investigated the time evolution of a distribution of random diffusing particles around an oscillating non-ideal trap. The problem has been addressed by numerically solving a mono-dimensional Fokker–Planck equation (FPE) for a confined distribution of particles in the presence of an oscillating potential well (trap) of finite depth. Accurate numerical solutions of the FPE have been obtained and expressed as a function of the trap oscillation amplitudes and frequencies. Results show a marked influence of the oscillations both on the capture kinetics and trap efficiency in equilibrium conditions. All the calculated properties exhibit a saturation behavior both at high and low frequencies.
Efficiency of snake sampling methods in the Brazilian semiarid region.
Mesquita, Paula C M D; Passos, Daniel C; Cechin, Sonia Z
2013-09-01
The choice of sampling methods is a crucial step in every field survey in herpetology. In countries where time and financial support are limited, the choice of methods is critical. The methods used to sample snakes often lack objective criteria, and the traditional methods have apparently carried more weight when making the choice. Consequently, studies using non-standardized methods are frequently found in the literature. We compared four commonly used methods for sampling snake assemblages in a semiarid area in Brazil. We compared the efficacy of each method based on the cost-benefit regarding the number of individuals and species captured, time, and financial investment. We found that pitfall traps were the least effective method in all aspects evaluated, and they were not complementary to the other methods in terms of abundance of species and assemblage structure. We conclude that methods can only be considered complementary if they are standardized to the objectives of the study. The use of pitfall traps in short-term surveys of the snake fauna in areas with shrubby vegetation and stony soil is not recommended.
Leonard Charles Ferrington Jr
2014-12-01
Relative efficiencies of standard dip-net sampling (SDN) versus collections of surface-floating pupal exuviae (SFPE) were determined for detecting Chironomidae at catchment and site scales and at subfamily/tribe, genus and species levels, based on simultaneous, equal-effort sampling on a monthly basis for one year during a biodiversity assessment of Bear Run Nature Reserve. Results showed SFPE was more efficient than SDN at catchment scales for detecting both genera and species. At site scales, SDN sampling was more efficient for assessment of a first-order site. No consistent pattern, except for better efficiency of SFPE in detecting Orthocladiinae genera, was observed at genus level for two second-order sites. However, SFPE was consistently more efficient at detecting species of Orthocladiinae, Chironomini and Tanytarsini at the second-order sites. SFPE was more efficient at detecting both genera and species at two third-order sites. The differential efficiencies of the two methods are concluded to be related to stream order and size, substrate size, flow and water velocity, depth and habitat heterogeneity, and the differential ability to discriminate species among pupal exuviae specimens versus larval specimens. Although both approaches are considered necessary for comprehensive biodiversity assessments of Chironomidae, our results suggest that there is an optimal, but different, allocation of sampling effort for detecting Chironomidae across stream orders and at differing spatial and taxonomic scales. Article submitted 13 August 2014, accepted 31 October 2014, published 22 December 2014.
Onychomycosis: Sampling and diagnosis as an efficient part of hospital pharmacology
Ignjatović Vesna A.
2014-01-01
Introduction: Onychomycosis is a fungal infection of one or more nails. Causes of onychomycosis are dermatophytes, yeasts and non-dermatophyte molds, but the most common cause is Trichophyton rubrum (T. rubrum) from the group of dermatophyte fungi. Aims: Using sampling, to determine the most common clinical type of onychomycosis, the localization and involvement of the nail plate, and to monitor the efficacy of methods/tests in the diagnosis of nail onychomycosis. Material and methods: This paper is part of an academic phase IV study. The study included 30 patients with onychomycosis. Each sample was seeded on Sabouraud Dextrose Agar (SDA) and Diluted SDA (D-SDA) at 28°C and 37°C, as well as on the Dermatophyte Test Medium (DTM) at 28°C. Identification of isolated fungi to the level of genus/species was based on macroscopic and microscopic characteristics by KOH and Blancophor fluorescent dye. PCR was performed to detect the T. rubrum-specific and pan-dermatophyte multiplex PCR products. Informed consent was obtained from all patients. Results: The most common clinical form was distal lateral subungual onychomycosis (DLSO) of the pollex fingernails of the hands and feet, while the involvement of the nail plate was 1/2 - 1/3 in the majority of patients. Cultivation gave a positive result in 50% of cases and the most commonly isolated microorganism was T. rubrum. For negative cultures (50%) the PCR was carried out, which demonstrated high sensitivity; T. rubrum remained the most frequently detected. Conclusions: Using the methods of cultivation and PCR, onychomycosis was confirmed in 28 (93.3%) patients. Cultivation gave a negative result in 50% of cases, while the PCR was positive in 86.6%. Our research shows the highest incidence of T. rubrum (60%). The continuation of this study will analyze the choice and effectiveness of therapy.
Monique Florenzano
2008-09-01
General equilibrium is a central concept of economic theory. Unlike partial equilibrium analysis, which studies the equilibrium of a particular market under the "ceteris paribus" clause that revenues and prices in the other markets stay approximately unaffected, the ambition of a general equilibrium model is to analyze the simultaneous equilibrium in all markets of a competitive economy. The definition of the abstract model and some of its basic results and insights are presented. The important issues of uniqueness and local uniqueness of equilibrium are sketched; they are the condition for the predictive power of the theory and its ability to allow for comparative statics. Finally, we review the main extensions of the general equilibrium model. Besides the natural extensions to infinitely many commodities and to a continuum of agents, some examples show how economic theory can accommodate the main ideas in order to study contexts which were not thought of by the initial model
Waiting and weighting: Information sampling is a balance between efficiency and error-reduction.
Meier, Kimberly M; Blair, Mark R
2013-02-01
The current study investigates the relative extent to which information utility and planning efficiency guide information-sampling strategies in a classification task. Prior research has pointed to the importance of probability gain, the degree to which sampling a feature reduces the chance of error, in contexts where participants are restricted to one sample. We monitored participants as they sampled information in an unrestricted context and recorded whether they began their search with a high gain feature or an efficient feature that ultimately allowed for fewer samples per trial. Participants preferred to sample the more efficient feature first, especially when feature information had a higher access cost (Experiment 1). When access costs were all but eliminated using eye-tracking (Experiment 2), participants' fixations still emphasized efficiency over high probability gain, though probability gain was shown to influence access patterns.
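Probability gain, as used in this line of research, is the expected drop in classification error from observing a feature, relative to simply guessing the most likely class. A minimal sketch for a discrete feature follows; the function name and dictionary layout are illustrative assumptions.

```python
def probability_gain(p_class, p_feat_given_class):
    """Expected gain in classification accuracy from observing one
    discrete feature, relative to guessing the most likely class.
    p_class: {class: prior}; p_feat_given_class: {class: [p(f|class)]}."""
    n_vals = len(next(iter(p_feat_given_class.values())))
    base_acc = max(p_class.values())  # accuracy with nothing observed
    exp_acc = 0.0
    for f in range(n_vals):
        # best achievable joint mass p(c, f) for each feature value f
        exp_acc += max(p_class[c] * p_feat_given_class[c][f] for c in p_class)
    return exp_acc - base_acc
```

A perfectly diagnostic binary feature under equal priors yields a gain of 0.5 (error drops from 50% to 0%), while a feature whose distribution is identical across classes yields a gain of 0.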
Afidah Abdul Rahim
2016-10-01
The optimal activated carbon produced from Prosopis africana seed hulls (PASH-AC) was obtained using an impregnation ratio of 3.19, an activation temperature of 780 °C and an activation time of 63 min, with a surface area of 1095.56 m2/g and a monolayer adsorption capacity of 498.67 mg/g. The adsorption data were also modeled using five different linearized forms of the Langmuir equation as well as the Freundlich and Temkin adsorption isotherms. In comparing the validity of each isotherm model, chi-square (χ2) was incorporated with the correlation coefficient (R2) to justify the basis for selecting the best adsorption model. The order Langmuir-2 > Freundlich > Temkin best described the equilibrium adsorption data. The results revealed the pseudo-second-order model to be the most suitable for describing the kinetics data.
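A commonly used "Langmuir-2" linearization regresses 1/qe on 1/Ce, so the capacity qm and affinity KL follow from an ordinary least-squares line. The following is a generic sketch of that fit, not the authors' code; the function name, variable names, and synthetic data are assumptions.

```python
def langmuir2_fit(ce, qe):
    """Fit the Langmuir-2 linearization
        1/qe = (1/(KL*qm)) * (1/Ce) + 1/qm
    by ordinary least squares; returns (qm, KL)."""
    x = [1.0 / c for c in ce]   # 1/Ce
    y = [1.0 / q for q in qe]   # 1/qe
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
    intercept = ybar - slope * xbar
    qm = 1.0 / intercept        # intercept = 1/qm
    kl = intercept / slope      # slope = 1/(KL*qm)  =>  KL = intercept/slope
    return qm, kl
```

On noiseless data generated from the Langmuir equation qe = qm·KL·Ce/(1 + KL·Ce), the fit recovers qm and KL exactly; with real data, the different linearizations weight experimental error differently, which is why the paper compares five of them with χ2 and R2.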
Ghorbanzadeh, A M; Norouzi, S; Mohammadi, T [Department of Physics, Sharif University of Technology, PO Box 11365-9161, Tehran (Iran, Islamic Republic of)
2005-10-21
The efficient production of syngas from a CH₄ + CO₂ mixture in an atmospheric pulsed glow discharge, sustained by corona pre-ionization, has been investigated. The products were mainly syngas (CO, H₂) and hydrocarbons up to C₄, with acetylene having the highest selectivity. The energy efficiency was within 15-40% for different experimental conditions, a substantial improvement over most other types of non-equilibrium plasma. These values are comparable with the efficiencies obtained by gliding arc plasmas, but this plasma operates at near room temperature. Furthermore, it has been shown that the energy efficiency increases as the effective residence time decreases. The effects of the CH₄:CO₂ molar ratio, voltage, repetition rate, and gas flow rate on conversion, energy efficiency, and selectivities have also been investigated. The higher efficiency obtained in this kind of plasma is discussed and attributed to the short pulse regime and electric field uniformity.
Chito, Diana; Weng, Liping; Galceran, Josep; Companys, Encarnació; Puy, Jaume; van Riemsdijk, Willem H; van Leeuwen, Herman P
2012-04-01
The determination of the free Zn²⁺ ion concentration is key in the study of environmental systems like river water and soils, due to its impact on bioavailability and toxicity. AGNES (Absence of Gradients and Nernstian Equilibrium Stripping) and DMT (Donnan Membrane Technique) are emerging techniques suited to the determination of free heavy metal concentrations, especially in the case of Zn²⁺, given that there is no commercial ion selective electrode. In this work, both techniques have been applied to synthetic samples (containing Zn and NTA) and natural samples (Rhine river water and soils), showing good agreement. pH fluctuations in DMT and the N₂/CO₂ purging system used in AGNES did not considerably affect the measurements in Rhine river water and soil samples. Results of DMT in situ in Rhine river water are comparable to those of AGNES in the lab. The comparison performed in this work provides a cross-validation of both techniques.
On the efficiency of biased sampling of the multiple state path ensemble
Rogal, J.; Bolhuis, P.G.
2010-01-01
Developed for complex systems undergoing rare events involving many (meta)stable states, the multiple state transition path sampling aims to sample from an extended path ensemble including all possible trajectories between any pair of (meta)stable states. The key issue for an efficient sampling of t
An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model
Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)
2016-06-15
Reliability-based design optimization (RBDO) incurs a high computational cost owing to its reliability analysis. A surrogate model is therefore introduced to reduce the computational cost of RBDO. In surrogate-model-based RBDO, the accuracy of the reliability estimate depends on the accuracy of the surrogate model along the constraint boundaries. In earlier research, constraint boundary sampling (CBS) was proposed to approximate the boundaries of constraints accurately by locating sample points on them. However, because CBS places sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified with mathematical examples.
Zhao, Yang; Aarnink, A.J.A.; Wang, Wei; Fabri, T.; Groot Koerkamp, P.W.G.; de Jong, M.C.M.
2014-01-01
Introduction. The airborne transmission of infectious diseases in livestock production is increasingly receiving research attention. Reliable techniques of air sampling are crucial to underpin the findings of such studies. This study evaluated the physical and biological efficiencies and detection l
Ikebe, Jinzen; Sakuraba, Shun; Kono, Hidetoshi
2014-01-05
A novel, efficient sampling method for biomolecules is proposed. The partial multicanonical molecular dynamics (McMD) was recently developed as a method that improved generalized ensemble (GE) methods to focus sampling only on a part of a system (GEPS); however, it was not tested well. We found that partial McMD did not work well for polylysine decapeptide and gave significantly worse sampling efficiency than a conventional GE. Herein, we elucidate the fundamental reason for this and propose a novel GEPS, adaptive lambda square dynamics (ALSD), which can resolve the problem faced when using partial McMD. We demonstrate that ALSD greatly increases the sampling efficiency over a conventional GE. We believe that ALSD is an effective method and is applicable to the conformational sampling of larger and more complicated biomolecule systems. Copyright © 2013 Wiley Periodicals, Inc.
Efficiency comparisons of fish sampling gears for lentic ecosystem health assessments in Korea
Jeong-Ho Han
2016-12-01
The key objective of this study was to analyze the sampling efficiency of various fish sampling gears for a lentic ecosystem health assessment. Fish surveys for the lentic ecosystem health assessment model were conducted twice at each of 30 reservoirs during 2008-2012. During the study, fishes of 81 species comprising 53,792 individuals were sampled from the 30 reservoirs. A comparison of sampling gears showed that casting nets were the best sampling gear, with high species richness (69 species), whereas minnow traps were the worst, with low richness (16 species). Fish sampling efficiency, based on the number of individuals caught per unit effort, was best for fyke nets (28,028 individuals) and worst for minnow traps (352 individuals). When we compared trammel nets and kick nets with fyke nets and casting nets, the former were useful in terms of the number of fish individuals but not in terms of the number of fish species.
Designing efficient surveys: spatial arrangement of sample points for detection of invasive species
Ludek Berec; John M. Kean; Rebecca Epanchin-Niell; Andrew M. Liebhold; Robert G. Haight
2015-01-01
Effective surveillance is critical to managing biological invasions via early detection and eradication. The efficiency of surveillance systems may be affected by the spatial arrangement of sample locations. We investigate how the spatial arrangement of sample points, ranging from random to fixed grid arrangements, affects the probability of detecting a target...
Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient!
van Breukelen, Gerard J.P.; Candel, Math J.J.M.
2012-01-01
Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
C. F. D. Rocha
Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are most efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and subtropical areas. In this study we evaluated which methods are most appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six of the methods usually used for anuran sampling. For each method, we recorded the total time spent (in min), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (the time necessary for a researcher to capture an individual frog) in order to make the data obtained comparable. Of the methods analyzed, the complete species inventory (9.7 min/searcher/ind., MSI; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency in the restinga, whereas the plot sampling and pit-fall traps with drift-fence methods resulted in no frog captures. We conclude that there is a considerable difference in the efficiency of the methods used in the restinga environment and that the complete species inventory method is highly efficient for sampling frogs in the restinga studied and may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.
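The capture efficiency index above is simply searcher-minutes per captured individual. A minimal sketch; the raw effort figures below are hypothetical, chosen only so the index reproduces the reported 9.7 MSI for the species inventory:

```python
def capture_efficiency_index(total_minutes, n_searchers, n_individuals):
    """Minutes of searcher effort per captured frog (lower = more efficient)."""
    return total_minutes * n_searchers / n_individuals

# Hypothetical effort: two searchers, 111.6 min total, 23 individuals captured
msi_inventory = capture_efficiency_index(total_minutes=111.6,
                                         n_searchers=2,
                                         n_individuals=23)
print(round(msi_inventory, 1))  # 9.7
```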
Mehdinia, Ali; Akbari, Maryam; Baradaran Kayyal, Tohid; Azad, Mohammad
2015-02-01
In this work, magnetic di-thio functionalized mesoporous silica nanoparticles (DT-MCM-41) were prepared by grafting dithiocarbamate groups within the channels of magnetic mesoporous silica nanocomposites. The functionalized nanoparticles exhibited proper magnetic behavior and were easily separated from aqueous solution by applying an external magnetic field. The results indicated that the functionalized nanoparticles have potential for highly efficient removal of Hg²⁺ from environmental samples. The maximum adsorption capacity of the sorbent was 538.9 mg g⁻¹, and equilibrium adsorption was reached in about 10 min. The resulting adsorption capacity was higher than in similar works on mercury adsorption, which can be attributed to the presence of di-thio and amine active groups in the structure of the sorbent. The special properties of MCM-41, such as its large surface area and high porosity, also provided facile accessibility of the mercury ions to the ligand sites. Complete removal of mercury ions was attained with the dithiocarbamate groups over a wide range of mercury concentrations. Recovery studies were also carried out for river water, seawater, and wastewater samples, with recoveries above 97%.
Centrifugal Contactor Efficiency Measurements
Mincher, Bruce Jay [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tillotson, Richard Dean [Idaho National Lab. (INL), Idaho Falls, ID (United States); Grimes, Travis Shane [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-01-01
The contactor efficiency of a 2-cm acrylic centrifugal contactor, fabricated by ANL using 3D printer technology, was measured by comparing a contactor test run to 5-min batch contacts. The aqueous phase was ~3 ppm depleted uranium in 3 M HNO₃, and the organic phase was 1 M DAAP/dodecane. Sampling during the contactor run showed that equilibrium was achieved in less than 3 minutes. The contactor efficiency at equilibrium was 95% to 100%, depending on flowrate.
Monte Carlo efficiency improvement by multiple sampling of conditioned integration variables
Weitz, Sebastian; Blanco, Stéphane; Charon, Julien; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Farges, Olivier; Fournier, Richard; Gautrais, Jacques
2016-12-01
We present a technique that increases the efficiency of multidimensional Monte Carlo algorithms when sampling the first, unconditioned random variable consumes much more computational time than sampling the remaining, conditioned random variables, while its variability contributes only little to the total variance. This is particularly relevant for transport problems in complex and randomly distributed geometries. The proposed technique is based on a new Monte Carlo estimator in which the conditioned random variables are sampled more often than the unconditioned one. A significant contribution of the present Short Note is an automatic procedure for calculating the optimal number of samples of the conditioned random variables per sample of the unconditioned one. The technique is illustrated by a current research example in which it increases the efficiency by a factor of 100.
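The estimator described above can be sketched on a toy problem in which the outer variable is "expensive": each unconditioned sample x is reused for m conditioned samples y. The distributions and the target E[Y²] = 2 below are our own illustrative choices, not the paper's transport application:

```python
import random

def estimate(n_outer, m_inner, seed=0):
    """MC estimate of E[Y^2] where X ~ N(0,1) is 'expensive' to sample and
    Y | X ~ N(X,1) is cheap: each X is reused for m_inner conditioned draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        x = rng.gauss(0.0, 1.0)        # costly, unconditioned sample
        inner = 0.0
        for _ in range(m_inner):
            y = rng.gauss(x, 1.0)      # cheap, conditioned samples
            inner += y * y
        total += inner / m_inner
    return total / n_outer             # true value: E[X^2] + 1 = 2

print(round(estimate(2000, 50), 1))    # ≈ 2 (within Monte Carlo error)
```

The gain appears when drawing x dominates the cost: only n_outer expensive draws are needed, while the cheap inner draws suppress the conditional part of the variance. Choosing m_inner optimally is exactly the balance the Short Note automates.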
Efficient free energy calculations by combining two complementary tempering sampling methods.
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-14
Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the need to identify the correct RCs, or requires high dimensionality of the defined RCs, for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, sampling along the major RCs with high energy barriers is guided by TAMD, and sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems with processes involving hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency is improved at least fivefold, even in the presence of hidden energy barriers. (2) The canonical distribution is more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows further potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.
Standardized Method for Measuring Collection Efficiency from Wipe-sampling of Trace Explosives.
Verkouteren, Jennifer R; Lawrence, Jeffrey A; Staymates, Matthew E; Sisco, Edward
2017-04-10
One of the limiting steps to detecting traces of explosives at screening venues is effective collection of the sample. Wipe-sampling is the most common procedure for collecting traces of explosives, and standardized measurements of collection efficiency are needed to evaluate and optimize sampling protocols. The approach described here is designed to provide this measurement infrastructure, and controls most of the factors known to be relevant to wipe-sampling. Three critical factors (the applied force, travel distance, and travel speed) are controlled using an automated device. Test surfaces are chosen based on similarity to the screening environment, and the wipes can be made from any material considered for use in wipe-sampling. Particle samples of the explosive 1,3,5-trinitroperhydro-1,3,5-triazine (RDX) are applied in a fixed location on the surface using a dry-transfer technique. The particle samples, recently developed to simulate residues made after handling explosives, are produced by inkjet printing of RDX solutions onto polytetrafluoroethylene (PTFE) substrates. Collection efficiency is measured by extracting collected explosive from the wipe, and then related to critical sampling factors and the selection of wipe material and test surface. These measurements are meant to guide the development of sampling protocols at screening venues, where speed and throughput are primary considerations.
Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems
Xinyu Tang
2010-01-25
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).
Is aging raw cattle urine efficient for sampling Anopheles arabiensis Patton?
Mahande, A.M.; Mwang'onde, B.J.; Msangi, S.; Kimaro, E.; Mnyone, L.L.; Mazigo, H.D.; Mahande, M.J.; Kweka, E.J.
2010-01-01
Background: To ensure sustainable routine surveillance of mosquito vectors, simple, effective and ethically acceptable tools are required. As a part of that, we evaluated the efficiency of resting boxes baited with fresh and aging cattle urine for indoor and outdoor sampling of An. arabiensis in the
van der Burg, W.; van Willigenburg, T.
1998-01-01
The basic idea of reflective equilibrium, as a method for theory construction and decision making in ethics, is that we should bring together a broad variety of moral and non-moral beliefs and, through a process of critical scrutiny and mutual adjustment, combine these into one coherent belief syste
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field … cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas. The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling.
Optimal sampling efficiency in Monte Carlo simulation with an approximate potential.
Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam
2009-04-28
Building on the work of Iftimie et al. [J. Chem. Phys. 113, 4852 (2000)] and Gelb [J. Chem. Phys. 118, 7747 (2003)], Boltzmann sampling of an approximate potential (the "reference" system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the end points of the chain, the energy is evaluated at a more accurate level (the "full" system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory potentials are discussed.
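The composite-move scheme can be illustrated with a toy 1D sketch: an inner chain driven by a cheap reference potential, with a Metropolis correction against the full potential only at the end points. The potentials and parameters below are invented for illustration (unit temperature, no pressure term), not the paper's isothermal-isobaric setup:

```python
import math, random

def composite_mc(n_composite, chain_len, step=0.8, seed=1):
    """Nested Metropolis: an inner chain equilibrated on a cheap 'reference'
    potential proposes composite moves accepted against the 'full' potential."""
    U_full = lambda x: 0.5 * x * x          # accurate potential (standard normal)
    U_ref  = lambda x: 0.5 * x * x / 1.2    # cheap approximation (slightly wider well)
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_composite):
        y = x
        for _ in range(chain_len):          # inner chain evaluates U_ref only
            y_new = y + rng.uniform(-step, step)
            if rng.random() < math.exp(min(0.0, U_ref(y) - U_ref(y_new))):
                y = y_new
        # composite acceptance corrects for the reference/full mismatch
        log_acc = (U_full(x) - U_full(y)) - (U_ref(x) - U_ref(y))
        if rng.random() < math.exp(min(0.0, log_acc)):
            x = y
        samples.append(x)                   # one full-energy evaluation per composite move
    return samples

s = composite_mc(4000, 20)
var = sum(v * v for v in s) / len(s)
print(round(var, 1))  # near 1, the variance of the full Boltzmann target
```

Because the L-step inner kernel is reversible with respect to the reference distribution, the end-point correction above yields exactly the full-potential ensemble while the full energy is evaluated only once per composite move.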
Trapping efficiency of 1,3-dichloropropene isomers by XAD-4 sorbent tubes for air sampling
Gao, S.; Pflaum, T.; Qin, R.
2011-08-01
Emission monitoring is necessary to evaluate the impact of air pollutants such as soil fumigants on the environment. Quantifying fumigant emissions often involves the use of air sampling tubes filled with sorbents to trap fumigants. 1,3-dichloropropene (1,3-D) and chloropicrin (CP) are being increasingly used in combination in soil fumigation since the phase-out of methyl bromide. Charcoal and XAD-4 resins are used for trapping 1,3-D and CP, respectively. If one sampling tube can trap both chemicals, the number of samples, the amount of work, and associated costs can be significantly reduced. The objective of this research was to determine the efficiency of XAD-4 sorbent tubes for trapping cis- and trans-1,3-D isomers as a function of flow rate (100-1000 ml min⁻¹) and sampling time period (10-360 min). The results showed that XAD-4 could trap both 1,3-D isomers as efficiently as charcoal, but breakthrough occurred depending on the amount of sorbent material in a tube, fumigant amount, flow rate, and sampling time period. No significant breakthrough was observed from either small (120 mg) or large (600 mg) XAD-4 sorbent tubes over short sampling time periods (≤30 min) at any flow rate. Longer sampling periods at low flow rates (100 ml min⁻¹) resulted in ≥50% breakthrough from the small tubes during a 3 h sampling period, but no remarkable breakthrough from the large XAD-4 tubes for up to 6 h of sampling when 3.0 mg of 1,3-D isomers was tested. Field data agreed with the laboratory tests. At high flow rates (1000 ml min⁻¹), >40% breakthrough was observed from large XAD-4 sorbent tubes during 3 h trapping tests, suggesting that short sampling time intervals are necessary to avoid potential breakthrough of fumigants from the sampling tubes.
Efficient Low-Sensitivity Sampling of Multiband Signals with Bounded Components
Selva, J
2010-01-01
This paper presents an efficient method to sample multiband signals with bounded components, at a rate below the Nyquist limit, while keeping at the same time the numerical sensitivity at a low level. The method is based on band-limited windowing, followed by trigonometric approximation in consecutive time intervals. The key point is that the trigonometric approximation "inherits" the multiband property, that is, its coefficients are formed by bursts of non-zero elements corresponding to the multiband components. It is shown that this method can be well combined with the recently proposed synchronous multi-rate sampling (SMRS) scheme, given that the resulting linear system is sparse and formed by ones and zeroes. The proposed method allows one to trade sampling efficiency for noise sensitivity, and is specially well suited for bounded signals with unbounded energy like those in communications, navigation, audio systems, etc. Besides, it is also applicable to finite energy signals and periodic band-limited sig...
Favrot, Scott D.; Kwak, Thomas J.
2016-01-01
Potamodromy (i.e., migration entirely in freshwater) is a common life history strategy of North American lotic fishes, and efficient sampling methods for potamodromous fishes are needed to formulate conservation and management decisions. Many potamodromous fishes inhabit medium-sized rivers and are mobile during spawning migrations, which complicates sampling with conventional gears (e.g., nets and electrofishing). We compared the efficiency of a passive migration technique (resistance board weirs) and an active technique (prepositioned areal electrofishers [PAEs]) for sampling migrating potamodromous fishes in Valley River, a southern Appalachian Mountain river, from March through July 2006 and 2007. A total of 35 fish species from 10 families were collected, 32 species by PAE and 19 species by weir. Species richness and diversity were higher for PAE catch, and species dominance (i.e., the proportion of the assemblage composed of the three most abundant species) was higher for weir catch. PAE catch by number was considerably higher than weir catch, but biomass was lower for PAE catch. Weir catch decreased following the spawning migration, while PAEs continued to collect fish. Sampling bias associated with water velocity was detected for PAEs, but not weirs, and neither gear demonstrated depth bias in wadeable reaches. Mean fish mortality from PAEs was five times greater than that from weirs. Catch efficiency and composition comparisons indicated that weirs were effective at documenting migration chronology, sampling nocturnal migration, and yielding samples unbiased by water velocity or habitat, with low mortality. Prepositioned areal electrofishers are an appropriate sampling technique for seasonal fish occupancy objectives, while weirs are more suitable for quantitatively describing spawning migrations. Our comparative results may guide fisheries scientists in selecting an appropriate sampling gear and regime for research, monitoring
Wahab, M Farooq; Dasgupta, Purnendu K; Kadjo, Akinde F; Armstrong, Daniel W
2016-02-11
With increasingly efficient columns, eluite peaks are increasingly narrow. To take full advantage of this, the choice of the detector response time and the data acquisition rate, a.k.a. the detector sampling frequency, has become increasingly important. In this work, we revisit the concept of data sampling from the sampling theorem variously attributed to Whittaker, Nyquist, Kotelnikov, and Shannon. Focusing on time scales relevant to the current practice of high performance liquid chromatography (HPLC) and optical absorbance detection (the most commonly used method), Fourier transformation shows that even for very narrow simulated peaks the theoretical minimum sampling frequency is still relatively low. For fast chromatography on a state-of-the-art column (38,000 plates), we evaluate the responses produced by different present-generation instruments, each with their unique black-box digital filters. We show that the common wisdom of sampling 20 points per peak can be inadequate for high efficiency columns and that the choices of sampling frequency and response time do affect the peak shape. If the sampling frequency is too low or the response time is too large, the observed peaks will not remain as narrow as they really are; this is especially true for high efficiency and high speed separations. It is shown that both the sampling frequency and digital filtering affect the retention time, noise amplitude, peak shape, and width in a complex fashion. We show how a square-wave driven light emitting diode source can reveal the nature of the embedded filter. We discuss time uncertainties related to the choice of sampling frequency. Finally, we suggest steps to obtain optimum results from a given system.
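The broadening caused by a slow detector response can be illustrated with a toy simulation, using a first-order low-pass filter as a stand-in for an instrument's (unknown) embedded filter; the peak width, time constants, and grid below are all illustrative assumptions:

```python
import math

def peak_width_after_response(sigma, tau, dt=0.001):
    """Second-moment width (s) of a Gaussian peak (std dev sigma, s) after a
    first-order detector response with time constant tau (exponential filter)."""
    t = [i * dt for i in range(int(10 * sigma / dt))]
    center = 5 * sigma
    signal = [math.exp(-0.5 * ((ti - center) / sigma) ** 2) for ti in t]
    alpha = dt / (tau + dt)                 # discrete first-order low-pass
    y, filtered = 0.0, []
    for s in signal:
        y += alpha * (s - y)
        filtered.append(y)
    area = sum(filtered) * dt
    mean = sum(ti * fi for ti, fi in zip(t, filtered)) * dt / area
    var = sum((ti - mean) ** 2 * fi for ti, fi in zip(t, filtered)) * dt / area
    return math.sqrt(var)

w_fast = peak_width_after_response(sigma=0.5, tau=0.01)  # response << peak width
w_slow = peak_width_after_response(sigma=0.5, tau=0.5)   # response ~ peak width
print(w_fast < 0.52 < w_slow)  # True: a slow response visibly broadens the peak
```

For an exponential response the observed variance is roughly sigma² + tau², which is why response time must shrink along with peak width on high efficiency columns.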
Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize.
Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto
2014-01-01
Variance and performance of two sampling plans for aflatoxin quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using a sampling spear on kernels; and automatic, using a continuous flow sampler to collect milled maize. Total variance and the sampling, preparation, and analysis variances were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were compared for the distributions of aflatoxin quantifications in the eight maize lots. The probabilities of accepting or rejecting a lot at a given aflatoxin concentration were determined using the variance and the selected distribution model to build operating characteristic (OC) curves. Sampling and total variance were lower for the automatic plan. The OC curve of the automatic plan reduced both consumer and producer risks in comparison with the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize.
Efficient and exact sampling of simple graphs with given arbitrary degree sequence
Del Genio, Charo I; Toroczkai, Zoltan; Bassler, Kevin E
2010-01-01
Uniform sampling from graphical realizations of a given degree sequence is a fundamental component in simulation-based measurements of network observables, with applications ranging from epidemics, through social networks to Internet modeling. Existing graph sampling methods are either link-swap based (Markov-Chain Monte Carlo algorithms) or stub-matching based (the Configuration Model). Both types are ill-controlled, with typically unknown mixing times for link-swap methods and uncontrolled rejections for the Configuration Model. Here we propose an efficient, polynomial time algorithm that generates statistically independent graph samples with a given, arbitrary, degree sequence. The algorithm provides a weight associated with each sample, allowing the observable to be measured either uniformly over the graph ensemble, or, alternatively, with a desired distribution. Unlike other algorithms, this method always produces a sample, without back-tracking or rejections. Using a central limit theorem-based reasonin...
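Any such sampler must first verify that the prescribed degree sequence is graphical, i.e., realizable by a simple graph. A standard feasibility test, the Erdős–Gallai condition, is shown here as a sketch; it is only the entry check, not the paper's full weighted sampling algorithm:

```python
def is_graphical(degrees):
    """Erdős–Gallai test: can this degree sequence be realized by a simple graph?"""
    seq = sorted(degrees, reverse=True)
    if sum(seq) % 2:                # total degree must be even
        return False
    n = len(seq)
    for k in range(1, n + 1):
        lhs = sum(seq[:k])
        rhs = k * (k - 1) + sum(min(d, k) for d in seq[k:])
        if lhs > rhs:               # the k highest degrees demand too many edges
            return False
    return True

print(is_graphical([3, 3, 2, 2, 2]))  # True: realizable (e.g., the 'house' graph)
print(is_graphical([4, 4, 4, 1, 1]))  # False: even sum, but unrealizable
```

Rejection-free samplers of this kind maintain graphicality after every edge placement, so a test like this is applied repeatedly to residual degree sequences.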
Martinez Vega, Mabel Virginia; Wulfsohn, D.; Zamora, I.
2012-01-01
In situ assessment of fruit quality and yield can provide critical data for marketing and for logistical planning of the harvest, as well as for site-specific management. Our objective was to develop and validate efficient field sampling procedures for this purpose. We used the previously reported ‘fractionator’ tree sampling procedure and supporting handheld software (Gardi et al., 2007; Wulfsohn et al., 2012) to obtain representative samples of fruit from a 7.6-ha apple orchard (Malus ×domestica ‘Fuji Raku Raku’) in central Chile. The resulting sample consisted of 70 fruit on 56 branch segments distributed across 36 trees for yield estimation. A sub-sample of 56 fruit (one per branch segment) was removed, and individual fruit mass, firmness, and contents of malic acid, soluble solids, and starch were measured in the laboratory. The data were also used to obtain an imprecise, but unbiased, estimate…
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, it can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the…
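The Latin hypercube idea referenced above, stratified sampling of a lognormal attribute, can be sketched in a one-dimensional toy form. This is a sketch only; `mu`, `sigma` and the sample size are arbitrary choices, not values from the study:

```python
import math
import random
from statistics import NormalDist

def latin_hypercube_lognormal(n, mu, sigma, rng):
    """Draw n Latin hypercube samples from a lognormal distribution
    (log-mean mu, log-sd sigma): one uniform draw per probability
    stratum [i/n, (i+1)/n), mapped through the inverse normal CDF,
    so the full range of conductivity values is covered by design."""
    strata = list(range(n))
    rng.shuffle(strata)                 # randomize stratum ordering
    samples = []
    for i in strata:
        u = (i + rng.random()) / n      # uniform point inside stratum i
        samples.append(math.exp(NormalDist(mu, sigma).inv_cdf(u)))
    return samples

rng = random.Random(42)
vals = latin_hypercube_lognormal(100, 0.0, 1.0, rng)
```

Unlike simple random sampling, every probability stratum contributes exactly one realization, which is what makes a small LH ensemble "representative" in the abstract's sense.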
Elsheikh, Ahmed H.
2014-02-01
An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
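The two-stage sampling idea, screening proposals with a cheap surrogate before paying for the expensive likelihood, can be illustrated with a toy delayed-acceptance Metropolis sampler. This is not the authors' polynomial-chaos implementation; the "expensive" and surrogate densities below are stand-ins:

```python
import math
import random

def expensive_logpost(x):
    # Stand-in for a costly simulator-based likelihood: N(0, 1) log-density.
    return -0.5 * x * x

def surrogate_logpost(x):
    # Cheap approximation (deliberately imperfect variance).
    return -0.5 * x * x / 1.2

def two_stage_metropolis(n_steps, step, rng):
    """Delayed-acceptance Metropolis: a proposal must first pass an
    accept test under the cheap surrogate; only survivors trigger an
    expensive evaluation, and the second test corrects the surrogate
    bias so the chain still targets the expensive posterior."""
    x, n_expensive = 0.0, 0
    chain = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        # Stage 1: filter rejected samples with the surrogate.
        a1 = min(1.0, math.exp(surrogate_logpost(y) - surrogate_logpost(x)))
        if rng.random() < a1:
            # Stage 2: expensive evaluation with bias correction.
            n_expensive += 1
            a2 = min(1.0, math.exp(expensive_logpost(y) - expensive_logpost(x)
                                   + surrogate_logpost(x) - surrogate_logpost(y)))
            if rng.random() < a2:
                x = y
        chain.append(x)
    return chain, n_expensive

rng = random.Random(7)
chain, n_exp = two_stage_metropolis(2000, 1.0, rng)
```

The computational gain reported in the abstract comes from `n_exp` being well below the number of steps: proposals killed by the surrogate never touch the expensive model.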
Su, Wei-Chung; Tolchinsky, Alexander D; Chen, Bean T; Sigaev, Vladimir I; Cheng, Yung Sung
2012-09-01
The need to determine occupational exposure to bioaerosols has notably increased in the past decade, especially for microbiology-related workplaces and laboratories. Recently, two new cyclone-based personal bioaerosol samplers were developed by the National Institute for Occupational Safety and Health (NIOSH) in the USA and the Research Center for Toxicology and Hygienic Regulation of Biopreparations (RCT & HRB) in Russia to monitor bioaerosol exposure in the workplace. Here, a series of wind tunnel experiments were carried out to evaluate the physical sampling performance of these two samplers in moving air conditions, which could provide information for personal biological monitoring in a moving air environment. The experiments were conducted in a small wind tunnel facility using three wind speeds (0.5, 1.0 and 2.0 m s(-1)) and three sampling orientations (0°, 90°, and 180°) with respect to the wind direction. Monodispersed particles ranging from 0.5 to 10 μm were employed as the test aerosols. The evaluation of the physical sampling performance was focused on the aspiration efficiency and capture efficiency of the two samplers. The test results showed that the orientation-averaged aspiration efficiencies of the two samplers closely agreed with the American Conference of Governmental Industrial Hygienists (ACGIH) inhalable convention within the particle sizes used in the evaluation tests, and the effect of the wind speed on the aspiration efficiency was found negligible. The capture efficiencies of these two samplers ranged from 70% to 80%. These data offer important insight into the physical sampling characteristics of the two test samplers.
Ahmed, Salwa A.; Soliman, Ezzat M.
2013-11-01
Monitoring pollutants in water samples is a challenge to analysts. Accordingly, the removal of Naphthol blue black (NBB) and Eriochrome blue black R (EBBR) from aqueous solutions was investigated using magnetic chelated silica particles. Magnetic solids are widely used in detection and analytical systems because of the performance advantages they offer compared to similar solids that lack magnetic properties. In this context, a fast, simple and clean method for modification of magnetic particles (Fe3O4) with silica gel was developed using a microwave technique to produce a silica-gel-coated magnetic particle (SG-MP) sorbent. The magnetic sorbent was characterized by FT-IR, X-ray diffraction (XRD), and scanning electron microscopy (SEM). The effects of pH, time, weight of sorbent and initial concentration of dye were evaluated. The results showed that SG-MPs exhibit high extraction percentages of the studied dyes (100% for NBB and 98.75% for EBBR) from aqueous solutions. The Freundlich isotherm (r2 = 0.973 and 0.962) and Langmuir isotherm (r2 = 0.993 and 0.988, for NBB and EBBR, respectively) were used to describe the adsorption equilibrium. Adsorption kinetic experiments were also carried out, and the data were well fitted by a pseudo-second-order equation (r2 = 1.0 for NBB and 0.999 for EBBR). The prepared sorbent, with rapid adsorption rate and separation convenience, was applied for removal of NBB and EBBR pollutants from natural water samples with good precision (RSD% = 0.05-0.3%).
Yang Zhao
2014-09-01
Introduction. The airborne transmission of infectious diseases in livestock production is increasingly receiving research attention. Reliable techniques of air sampling are crucial to underpin the findings of such studies. This study evaluated the physical and biological efficiencies and detection limits of four samplers (Andersen 6-stage impactor, all-glass impinger "AGI-30", OMNI-3000 and MD8 with gelatin filter) for collecting aerosols of infectious bursal disease virus (IBDV). Materials and Method. IBDV aerosols mixed with a physical tracer (uranine) were generated in an isolator, and then collected by the bioaerosol samplers. The samplers' physical and biological efficiencies were derived based on the tracer concentration and the virus/tracer ratio, respectively. Detection limits for the samplers were estimated with the obtained efficiency data. Results. Physical efficiencies of the AGI-30 (96%) and the MD8 (100%) were significantly higher than that of the OMNI-3000 (60%). Biological efficiency of the OMNI-3000 (23%) was significantly lower than 100% (P < 0.01), indicating inactivation of airborne virus during sampling. The AGI-30, the Andersen impactor and the MD8 did not significantly inactivate virus during sampling. The 2-min detection limits of the samplers on airborne IBDV were 4.1 log10 50% egg infective dose (EID50) m⁻³ for the Andersen impactor, 3.3 log10 EID50 m⁻³ for the AGI-30, 2.5 log10 EID50 m⁻³ for the OMNI-3000, and 2.9 log10 EID50 m⁻³ for the MD8. The mean half-life of IBDV aerosolized at 20 °C and 70% relative humidity was 11.9 min. Conclusion. Efficiencies of different samplers vary. Despite its relatively low sampling efficiency, the OMNI-3000 is suitable for use in environments with low viral concentrations because its high flow rate gives a low detection limit. With the 4 samplers investigated, negative air…
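The efficiency definitions used in tracer-based sampler evaluations of this kind reduce to simple ratios; a hedged sketch follows (the numbers are hypothetical, merely in the spirit of the reported OMNI-3000 figures):

```python
def physical_efficiency(tracer_collected, tracer_reference):
    """Fraction of the physical tracer (e.g. uranine) recovered by the
    sampler relative to a reference measure of what was airborne."""
    return tracer_collected / tracer_reference

def biological_efficiency(ratio_in_sample, ratio_in_aerosol):
    """Survival of the biological agent: the virus/tracer ratio in the
    collected sample relative to that ratio in the generated aerosol.
    A value well below 1 indicates inactivation during sampling."""
    return ratio_in_sample / ratio_in_aerosol

# Hypothetical inputs: 60 of 100 tracer units collected, and the
# virus/tracer ratio dropping from 1.0 in the aerosol to 0.23 on capture.
pe = physical_efficiency(60.0, 100.0)
be = biological_efficiency(0.23, 1.0)
```

Separating the two ratios is what lets a study distinguish particles that were never captured (physical loss) from captured virus that was killed by the sampling stress (biological loss).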
Chau, Nancy H.
2009-01-01
This paper presents a capability-augmented model of on-the-job search, in which sweatshop conditions stifle the capability of the working poor to search for a job while on the job. The augmented setting unveils a sweatshop equilibrium in an otherwise archetypal Burdett-Mortensen economy, and reconciles a number of oft-noted yet perplexing features of sweatshop economies. We demonstrate existence of multiple rational expectations equilibria, graduation pathways out of sweatshops in complete abs…
Choi, Jinhyeok; Kim, Hyeonjin
2016-12-01
To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions are then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of the undersampled MRI. Copyright © 2016 Elsevier Inc. All rights reserved.
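The described pipeline, blending an energy map with a modeled PDF, drawing several candidate sampling functions, and keeping the maximum-energy one, can be sketched in a one-dimensional toy form. All names, weights and the weighted-sampling trick below are assumptions for illustration, not the paper's implementation:

```python
import random

def adaptive_pdf(e_map, modeled_pdf, weight=0.5):
    """Blend a normalised image-energy estimate (E-map) with a modeled
    PDF, then renormalise. weight is an assumed mixing parameter."""
    se, sp = sum(e_map), sum(modeled_pdf)
    mix = [weight * e / se + (1 - weight) * p / sp
           for e, p in zip(e_map, modeled_pdf)]
    s = sum(mix)
    return [m / s for m in mix]

def best_sampling_mask(pdf, e_map, budget, n_candidates, rng):
    """Draw candidate masks by weighted sampling without replacement
    (exponential sort-key trick) and keep the one capturing the most
    energy, mirroring the 'maximum energy' selection step."""
    idx = list(range(len(pdf)))
    best, best_energy = None, -1.0
    for _ in range(n_candidates):
        keys = sorted(idx, key=lambda i: rng.expovariate(1.0) / pdf[i])
        mask = set(keys[:budget])
        energy = sum(e_map[i] for i in mask)
        if energy > best_energy:
            best, best_energy = mask, energy
    return best, best_energy

rng = random.Random(0)
e_map = [1.0 / (abs(16 - k) + 1) for k in range(32)]  # toy centre-heavy k-space energy
model = [1.0] * 32                                    # flat modeled PDF
pdf = adaptive_pdf(e_map, model)
mask, energy = best_sampling_mask(pdf, e_map, budget=8, n_candidates=20, rng=rng)
```

The point of generating several candidates from the same adaptive PDF and keeping the best is that it costs almost nothing computationally, which is why the selection can run online between prescan and main scan.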
SymPix: A spherical grid for efficient sampling of rotationally invariant operators
Seljebotn, Dag Sverre
2015-01-01
We present SymPix, a special-purpose spherical grid optimized for efficient sampling of rotationally invariant linear operators. This grid is conceptually similar to the Gauss-Legendre (GL) grid, aligning sample points with iso-latitude rings located on Legendre polynomial zeros. Unlike the GL grid, however, the number of grid points per ring varies as a function of latitude, avoiding expensive over-sampling near the poles and ensuring nearly equal sky area per grid point. The ratio between the number of grid points in two neighbouring rings is required to be a low-order rational number (3, 2, 1, 4/3, 5/4 or 6/5) to maintain a high degree of symmetries. Our main motivation for this grid is to solve linear systems using multi-grid methods, and to construct efficient preconditioners through pixel-space sampling of the linear operator in question. The GL grid is not suitable for these purposes due to its massive over-sampling near the poles, leading to nearly degenerate linear systems, while HEALPix, another com...
In Vitro Efficient Expansion of Tumor Cells Deriving from Different Types of Human Tumor Samples
Ilaria Turin
2014-03-01
Obtaining human tumor cell lines from fresh tumors is essential to advance our understanding of antitumor immune surveillance mechanisms and to develop new ex vivo strategies to generate an efficient anti-tumor response. The present study delineates a simple and rapid method for efficiently establishing primary cultures starting from tumor samples of different types, while maintaining the immuno-histochemical characteristics of the original tumor. We compared two different strategies to disaggregate tumor specimens. After short or long term in vitro expansion, cells analyzed for the presence of malignant cells demonstrated their neoplastic origin. Considering that tumor cells may be isolated in a closed system with high efficiency, we propose this methodology for the ex vivo expansion of tumor cells to be used to evaluate suitable new drugs or to generate tumor-specific cytotoxic T lymphocytes or vaccines.
False-Negative Rate and Recovery Efficiency Performance of a Validated Sponge Wipe Sampling Method
Krauter, Paula A.; Piepel, Greg F.; Boucher, Raymond; Tezak, Matt; Amidan, Brett G.; Einfeld, Wayne
2012-01-01
Recovery of spores from environmental surfaces varies due to sampling and analysis methods, spore size and characteristics, surface materials, and environmental conditions. Tests were performed to evaluate a new, validated sponge wipe method using Bacillus atrophaeus spores. Testing evaluated the effects of spore concentration and surface material on recovery efficiency (RE), false-negative rate (FNR), limit of detection (LOD), and their uncertainties. Ceramic tile and stainless steel had the...
Plahuta, Maja; Tišler, Tatjana; Toman, Mihael Jožef; Pintar, Albin
2014-03-01
Bisphenol A (BPA) is a well-known endocrine disruptor with oestrogen-like activity that elicits adverse effects in humans and wildlife. For this reason it is necessary to set up an efficient removal of BPA from wastewaters, before they are discharged into surface waters. The aim of this study was to compare the efficiency of BPA removal from aqueous samples with photolytic, photocatalytic, and UV/H₂O₂ oxidation. BPA solutions were illuminated with different bulbs (halogen; 17 W UV, 254 nm; and 150 W UV, 365 nm) with or without the TiO₂ P-25 catalyst or H₂O₂ (to accelerate degradation). Acute toxicity and oestrogenic activity of treated samples were determined using luminescent bacteria (Vibrio fischeri), water fleas (Daphnia magna), zebrafish embryos (Danio rerio), and the Yeast Estrogen Screen (YES) assay with genetically modified yeast Saccharomyces cerevisiae. The results confirmed that BPA is toxic and oestrogenically active. Chemical analysis showed a reduction of BPA levels after photolytic treatment and 100 % conversion of BPA by photocatalytic and UV/H₂O₂ oxidation. The toxicity and oestrogenic activity of BPA were largely reduced in photolytically treated samples. Photocatalytic oxidation, however, either did not reduce BPA toxic and oestrogenic effects or even increased them in comparison with the baseline, untreated BPA solution. Our findings suggest that chemical analysis is not sufficient to determine the efficiency of advanced oxidation processes in removing pollutants from water and needs to be complemented with biological tests.
Efficiently Sampling Multiplicative Attribute Graphs Using a Ball-Dropping Process
Yun, Hyokun
2012-01-01
We introduce a novel and efficient sampling algorithm for the Multiplicative Attribute Graph Model (MAGM - Kim and Leskovec (2010)). Our algorithm is \emph{strictly} more efficient than the algorithm proposed by Yun and Vishwanathan (2012), in the sense that our method extends the \emph{best} time complexity guarantee of their algorithm to a larger fraction of parameter space. Both in theory and in empirical evaluation on sparse graphs, our new algorithm outperforms the previous one. To design our algorithm, we first define a stochastic \emph{ball-dropping process} (BDP). Although a special case of this process was introduced as an efficient approximate sampling algorithm for the Kronecker Product Graph Model (KPGM - Leskovec et al. (2010)), neither \emph{why} such an approximation works nor \emph{what} is the actual distribution this process is sampling from has been addressed so far to the best of our knowledge. Our rigorous treatment of the BDP enables us to clarify the rationale behind a BDP approximatio…
Trejo, Salvador; Toscano-Flores, José J; Matute, Esmeralda; Ramírez-Dueñas, María de Lourdes
2015-01-01
The aim of this study was to obtain the genotype and gene frequency from parents of children with attention-deficit/hyperactivity disorder (ADHD) and then assess the Hardy–Weinberg equilibrium of genotype frequency of the variable number tandem repeat (VNTR) III exon of the dopamine receptor D4 (DRD4) gene. The genotypes of the III exon of 48 bp VNTR repeats of the DRD4 gene were determined by polymerase chain reaction in a sample of 30 parents of ADHD cases. In the 60 chromosomes analyzed, the following frequencies of DRD4 gene polymorphisms were observed: six chromosomes (c) with two repeat alleles (r) (10%); 1c with 3r (1.5%); 36c with 4r (60%); 1c with 5r (1.5%); and 16c with 7r (27%). The genotypic distribution of the 30 parents was two parents (p) with 2r/2r (6.67%); 1p with 2r/4r (3.33%); 1p with 2r/5r (3.33%); 1p with 3r/4r (3.33%); 15p with 4r/4r (50%); 4p with 4r/7r (13.33%); and 6p with 7r/7r (20%). A Hardy–Weinberg disequilibrium (χ2=13.03, P<0.01) was found due to an over-representation of the 7r/7r genotype. These results suggest that the 7r polymorphism of the DRD4 gene is associated with the ADHD condition in a Mexican population. PMID:26082657
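The Hardy–Weinberg goodness-of-fit test used in studies of this kind can be sketched for the simplest biallelic case; the counts below are toy values, not the study's five-allele DRD4 data:

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square statistic (1 df for two alleles) comparing observed
    genotype counts with Hardy-Weinberg expectations derived from the
    observed allele frequencies."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
    q = 1.0 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Toy counts: 50 AA, 30 AB, 20 BB -> p = 0.65; an excess of homozygotes
# (as with the study's 7r/7r over-representation) inflates chi-square.
chi2 = hwe_chi_square(50, 30, 20)
```

With multiple alleles, as in the DRD4 data, the same comparison runs over all genotype classes, with correspondingly more degrees of freedom.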
Determination of the efficiency of a detector in gamma spectrometry of large-volume samples
Tertyshnik, E G
2012-01-01
An experimental-calculational method is proposed to determine the full-energy-peak efficiency (FEPE) of detectors ε(E) for measurements of large-volume samples. Water is used as a standard absorber for which the linear attenuation coefficient for photons μ0(E) is well known. The value μ(E) in the sample material (the sample matrix) is determined experimentally by means of the spectrometer. Formulas are given for calculating the ratio ε(E)/ε0(E), where ε0(E) is the FEPE of the detector for photons arising in the container filled with water (found by adding reference radioactive solutions to the container). To prove the validity of the method, ethanol (density 0.8 g/cm³) and water solutions of salts (densities 1.2 and 1.5 g/cm³) were used to simulate samples with different attenuation coefficients. The standard deviation between experimental and calculated efficiencies was about 5%.
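One common way to relate ε(E) to ε0(E) is through a self-absorption (mean transmission) factor under Beer–Lambert attenuation; the slab formula below is an illustrative assumption, not necessarily the paper's exact geometry, and the attenuation coefficients are hypothetical:

```python
import math

def self_absorption_factor(mu, thickness):
    """Mean photon transmission through a uniform slab with linear
    attenuation coefficient mu (1/cm) and thickness t (cm):
    (1 - exp(-mu*t)) / (mu*t)."""
    x = mu * thickness
    return (1.0 - math.exp(-x)) / x

def efficiency_ratio(mu_sample, mu_water, thickness):
    """Estimate eps(E)/eps0(E) as the ratio of sample to water
    self-absorption factors (illustrative slab geometry only)."""
    return (self_absorption_factor(mu_sample, thickness)
            / self_absorption_factor(mu_water, thickness))

# A denser matrix attenuates more than water, so the ratio falls below 1.
r = efficiency_ratio(mu_sample=0.12, mu_water=0.086, thickness=5.0)
```

The practical appeal of the method is visible here: only the ratio of attenuation factors matters, so calibrating against water sidesteps a full efficiency calibration for every sample matrix.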
Efficiency of Airborne Sample Analysis Platform (ASAP) bioaerosol sampler for pathogen detection.
Sharma, Anurag; Clark, Elizabeth; McGlothlin, James D; Mittal, Suresh K
2015-01-01
The threat of bioterrorism and pandemics has highlighted the urgency for rapid and reliable bioaerosol detection in different environments. Safeguarding against such threats requires continuous sampling of the ambient air for pathogen detection. In this study we investigated the efficacy of the Airborne Sample Analysis Platform (ASAP) 2800 bioaerosol sampler to collect representative samples of air and identify specific viruses suspended as bioaerosols. To test this concept, we aerosolized an innocuous replication-defective bovine adenovirus serotype 3 (BAdV3) in a controlled laboratory environment. The ASAP efficiently trapped the surrogate virus at 5 × 10³ plaque-forming units (p.f.u.) [2 × 10⁵ genome copy equivalent] concentrations or more, resulting in the successful detection of the virus using quantitative PCR. These results support the further development of ASAP for bioaerosol pathogen detection.
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Sample-efficient Strategies for Learning in the Presence of Noise
Cesa-Bianchi, N.; Dichterman, E.; Fischer, Paul
1999-01-01
In this paper, we prove various results about PAC learning in the presence of malicious noise. Our main interest is the sample size behavior of learning algorithms. We prove the first nontrivial sample complexity lower bound in this model by showing that order of ε/Δ² + d/Δ (up to logarithmic factors) examples are necessary for PAC learning any target class of {0,1}-valued functions of VC dimension d, where ε is the desired accuracy and η = ε/(1 + ε) − Δ the malicious noise rate (it is well known that any nontrivial target class cannot be PAC learned with accuracy ε and malicious noise rate η ≥ ε/(1 + ε), irrespective of sample complexity). We also show that this result cannot be significantly improved in general by presenting efficient learning algorithms for the class of all subsets of d elements and the class of unions of at most d…
An efficient approach for Mars Sample Return using emerging commercial capabilities
Gonzales, Andrew A.; Stoker, Carol R.
2016-06-01
Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science (Squyres, 2011 [1]). This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation of our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a desirable simpler and lower cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed with either direct techniques or by using parametric mass estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: A SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth orbit samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle; a fueled Earth Return Vehicle, support equipment, and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using supersonic retropropulsion. After collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the…
Evaluation of an automated protocol for efficient and reliable DNA extraction of dietary samples.
Wallinger, Corinna; Staudacher, Karin; Sint, Daniela; Thalinger, Bettina; Oehm, Johannes; Juen, Anita; Traugott, Michael
2017-08-01
Molecular techniques have become an important tool to empirically assess feeding interactions. The increased usage of next-generation sequencing approaches has stressed the need of fast DNA extraction that does not compromise DNA quality. Dietary samples here pose a particular challenge, as these demand high-quality DNA extraction procedures for obtaining the minute quantities of short-fragmented food DNA. Automatic high-throughput procedures significantly decrease time and costs and allow for standardization of extracting total DNA. However, these approaches have not yet been evaluated for dietary samples. We tested the efficiency of an automatic DNA extraction platform and a traditional CTAB protocol, employing a variety of dietary samples including invertebrate whole-body extracts as well as invertebrate and vertebrate gut content samples and feces. Extraction efficacy was quantified using the proportions of successful PCR amplifications of both total and prey DNA, and cost was estimated in terms of time and material expense. For extraction of total DNA, the automated platform performed better for both invertebrate and vertebrate samples. This was also true for prey detection in vertebrate samples. For the dietary analysis in invertebrates, there is still room for improvement when using the high-throughput system for optimal DNA yields. Overall, the automated DNA extraction system turned out as a promising alternative to labor-intensive, low-throughput manual extraction methods such as CTAB. It is opening up the opportunity for an extensive use of this cost-efficient and innovative methodology at low contamination risk also in trophic ecology.
Efficient and exact sampling of simple graphs with given arbitrary degree sequence.
Charo I Del Genio
Uniform sampling from graphical realizations of a given degree sequence is a fundamental component in simulation-based measurements of network observables, with applications ranging from epidemics, through social networks, to Internet modeling. Existing graph sampling methods are either link-swap based (Markov-chain Monte Carlo algorithms) or stub-matching based (the Configuration Model). Both types are ill-controlled, with typically unknown mixing times for link-swap methods and uncontrolled rejections for the Configuration Model. Here we propose an efficient, polynomial-time algorithm that generates statistically independent graph samples with a given, arbitrary, degree sequence. The algorithm provides a weight associated with each sample, allowing the observable to be measured either uniformly over the graph ensemble, or, alternatively, with a desired distribution. Unlike other algorithms, this method always produces a sample, without back-tracking or rejections. Using a central limit theorem-based reasoning, we argue that for large N, and for degree sequences admitting many realizations, the sample weights are expected to have a lognormal distribution. As examples, we apply our algorithm to generate networks with degree sequences drawn from power-law distributions and from binomial distributions.
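Rejection-free construction methods of this kind rely on graphicality tests to guarantee that a partial sample can always be completed into a simple graph; the classical Erdős–Gallai check can be sketched as follows (a building block only, not the authors' full weighted sampler):

```python
def is_graphical(degrees):
    """Erdos-Gallai test: True iff the degree sequence can be realised
    by a simple undirected graph."""
    seq = sorted(degrees, reverse=True)
    n = len(seq)
    # Degrees must fit in a simple graph and the degree sum must be even.
    if any(d < 0 or d >= n for d in seq) or sum(seq) % 2 == 1:
        return False
    # Erdos-Gallai inequalities for every prefix of the sorted sequence.
    for k in range(1, n + 1):
        lhs = sum(seq[:k])
        rhs = k * (k - 1) + sum(min(d, k) for d in seq[k:])
        if lhs > rhs:
            return False
    return True
```

Running such a test before each edge placement is what lets a sequential sampler avoid the dead ends that force back-tracking or rejection in naive stub matching.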
Full-scale biofilter reduction efficiencies assessed using portable 24-hour sampling units.
Akdeniz, Neslihan; Janni, Kevin A
2012-02-01
Portable 24-hr sampling units were used to collect air samples from eight biofilters on four animal feeding operations. The biofilters were located on a dairy, a swine nursery, and two swine finishing farms. Biofilter media characteristics (age, porosity, density, particle size, water absorption capacity, pressure drop) and ammonia (NH3), hydrogen sulfide (H2S), sulfur dioxide (SO2), methane (CH4), and nitrous oxide (N2O) reduction efficiencies of the biofilters were assessed. The deep bed biofilters at the dairy farm, which were in use for a few months, had the most porous media and lowest unit pressure drops. The average media porosity and density were 75% and 180 kg/m3, respectively. Reduction efficiencies of H2S and NH3 (biofilter 1: 64% NH3, 76% H2S; biofilter 2: 53% NH3, 85% H2S) were close to those reported for pilot-scale biofilters. No N2O production was measured at the dairy farm. The highest H2S, SO2, NH3, and CH4 reduction efficiencies were measured from a flat-bed biofilter at the swine nursery farm. However, the highest N2O generation (29.2%) was also measured from this biofilter. This flat-bed biofilter media was dense and had the lowest porosity. A garden sprinkler was used to add water to this biofilter, which may have filled media pores and caused N2O production under anaerobic conditions. Concentrations of H2S and NH3 were determined using the portable 24-hr sampling units and compared to ones measured with a semicontinuous gas sampling system at one farm. Flat-bed biofilters at the swine finishing farms also produced low amounts of N2O. The N2O production rate of the newer media (2 years old) with higher porosity was lower than that of older media (3 years old) (P = 0.042).
Estill, Cheryl Fairfield; Baron, Paul A; Beard, Jeremy K; Hein, Misty J; Larsen, Lloyd D; Rose, Laura; Schaefer, Frank W; Noble-Wang, Judith; Hodges, Lisa; Lindquist, H D Alan; Deye, Gregory J; Arduino, Matthew J
2009-07-01
After the 2001 anthrax incidents, surface sampling techniques for biological agents were found to be inadequately validated, especially at low surface loadings. We aerosolized Bacillus anthracis Sterne spores within a chamber to achieve very low surface loading (ca. 3, 30, and 200 CFU per 100 cm(2)). Steel and carpet coupons seeded in the chamber were sampled with swab (103 cm(2)) or wipe or vacuum (929 cm(2)) surface sampling methods and analyzed at three laboratories. Agar settle plates (60 cm(2)) were the reference for determining recovery efficiency (RE). The minimum estimated surface concentrations to achieve a 95% response rate based on probit regression were 190, 15, and 44 CFU/100 cm(2) for sampling steel surfaces and 40, 9.2, and 28 CFU/100 cm(2) for sampling carpet surfaces with swab, wipe, and vacuum methods, respectively; however, these results should be cautiously interpreted because of high observed variability. Mean REs at the highest surface loading were 5.0%, 18%, and 3.7% on steel and 12%, 23%, and 4.7% on carpet for the swab, wipe, and vacuum methods, respectively. Precision (coefficient of variation) was poor at the lower surface concentrations but improved with increasing surface concentration. The best precision was obtained with wipe samples on carpet, achieving 38% at the highest surface concentration. The wipe sampling method detected B. anthracis at lower estimated surface concentrations and had higher RE and better precision than the other methods. These results may guide investigators to more meaningfully conduct environmental sampling, quantify contamination levels, and conduct risk assessment for humans.
A cold finger cooling system for the efficient graphitisation of microgram-sized carbon samples
Yang, Bin; Smith, A. M.; Hua, Quan
2013-01-01
At ANSTO, we use the Bosch reaction to convert sample CO2 to graphite for production of our radiocarbon AMS targets. Key to the efficient graphitisation of ultra-small samples are the type of iron catalyst used and the effective trapping of water vapour during the reaction. Here we report a simple liquid nitrogen cooling system that enables us to rapidly adjust the temperature of the cold finger in our laser-heated microfurnace. This has led to an improvement in the graphitisation of microgram-sized carbon samples. This simple system uses modest amounts of liquid nitrogen (typically <200 mL/h during graphitisation) and is compact and reliable. We have used it to produce over 120 AMS targets containing between 5 and 20 μg of carbon, with conversion efficiencies for 5 μg targets ranging from 80% to 100%. In addition, this cooling system has been adapted for use with our conventional graphitisation reactors and has also improved their performance.
Liu, Fei; Zhang, Xi; Jia, Yan
2015-01-01
In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
Efficient Sampling of Band-limited Signals from Sine Wave Crossings
Selva, J
2010-01-01
This paper presents an efficient method for reconstructing a band-limited signal in the discrete domain from its crossings with a sine wave. The method makes it possible to design A/D converters that only deliver the crossing timings, which are then used to interpolate the input signal at arbitrary instants. Potentially, it may allow for reductions in power consumption and complexity, as well as for an increase in the achievable sampling bandwidth. The reconstruction in the discrete domain is based on a recently-proposed modification of the Lagrange interpolator, which is readily implementable with linear complexity and efficiently, given that it re-uses known schemes for variable fractional-delay (VFD) filters. As a spin-off, the method allows one to perform spectral analysis from sine wave crossings with the complexity of the FFT. Finally, the results in the paper are validated in a numerical example.
MO Cardoso
2008-06-01
The threat to public health represented by Salmonella is at least partially a consequence of its ecology in poultry hosts. Good manufacturing practices in the processing plant can reduce the contamination of poultry products, and critical control point principles are essential throughout the production chain. One procedure adopted at critical control points to prevent and reduce Salmonella on farms and in poultry products is the use of disinfectants. This study aimed at evaluating disinfectant efficiency against Salmonella enteritidis samples isolated from broiler carcasses in Rio Grande do Sul State between 1995 and 1996. The tested disinfectants were phenol 1:256, quaternary ammonium 1:2500, glutaraldehyde 1:200, and iodine 1:500, with contact times of 5, 10, 15, and 20 in an in vitro test. Phenolic compounds showed the best results, iodine and glutaraldehyde showed intermediate results, and quaternary ammonium presented efficiency at all contact times evaluated in the in vitro test.
EsPRESSo: Efficient Privacy-Preserving Evaluation of Sample Set Similarity
Blundo, Carlo; Gasti, Paolo
2011-01-01
This paper presents the first practical construction for privacy-preserving evaluation of sample set similarity, based on the well-known Jaccard index measure. In this problem, two mutually distrustful entities determine how similar their sets are, without disclosing their content to each other. We propose two efficient protocols: the first securely computes the Jaccard index of two sets; the second approximates it using MinHash techniques, at a significantly lower cost and with the same privacy guarantees. This building block is attractive in many relevant applications, including document similarity, biometric authentication, multimedia file retrieval, and genetic tests. We demonstrate, both analytically and experimentally, that our constructions -- while not bound to any specific application -- are appreciably more efficient than prior specialized techniques.
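The MinHash approximation underlying the second protocol can be illustrated without the secure-computation layer (which is the paper's actual contribution). A minimal sketch, using Python's built-in tuple hashing as a stand-in for a proper min-wise independent hash family:

```python
def minhash_signature(items, seeds):
    # one seeded hash function per signature slot; keep the minimum value
    # (hash((s, x)) is a heuristic stand-in for a min-wise family)
    return [min(hash((s, x)) for x in items) for s in seeds]

def jaccard_estimate(sig_a, sig_b):
    # the probability that two minima collide equals the Jaccard index,
    # so the fraction of matching slots estimates it
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

seeds = range(200)
a, b = set(range(80)), set(range(40, 120))   # |a ∩ b| / |a ∪ b| = 1/3
est = jaccard_estimate(minhash_signature(a, seeds),
                       minhash_signature(b, seeds))
```

Each party could compute its signature locally; the secure protocol in the paper then compares signatures without revealing them.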
de Oliveira, Mário J
2017-01-01
This textbook provides an exposition of equilibrium thermodynamics and its applications to several areas of physics with particular attention to phase transitions and critical phenomena. The applications include several areas of condensed matter physics and include also a chapter on thermochemistry. Phase transitions and critical phenomena are treated according to the modern development of the field, based on the ideas of universality and on the Widom scaling theory. For each topic, a mean-field or Landau theory is presented to describe qualitatively the phase transitions. These theories include the van der Waals theory of the liquid-vapor transition, the Hildebrand-Heitler theory of regular mixtures, the Griffiths-Landau theory for multicritical points in multicomponent systems, the Bragg-Williams theory of order-disorder in alloys, the Weiss theory of ferromagnetism, the Néel theory of antiferromagnetism, the Devonshire theory for ferroelectrics and Landau-de Gennes theory of liquid crystals. This new edit...
Oliveira, Mário J
2013-01-01
This textbook provides an exposition of equilibrium thermodynamics and its applications to several areas of physics with particular attention to phase transitions and critical phenomena. The applications include several areas of condensed matter physics and include also a chapter on thermochemistry. Phase transitions and critical phenomena are treated according to the modern development of the field, based on the ideas of universality and on the Widom scaling theory. For each topic, a mean-field or Landau theory is presented to describe qualitatively the phase transitions. These theories include the van der Waals theory of the liquid-vapor transition, the Hildebrand-Heitler theory of regular mixtures, the Griffiths-Landau theory for multicritical points in multicomponent systems, the Bragg-Williams theory of order-disorder in alloys, the Weiss theory of ferromagnetism, the Néel theory of antiferromagnetism, the Devonshire theory for ferroelectrics and Landau-de Gennes theory of liquid crystals. This textbo...
Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval
Jakobsen, Nina Munkholt; Sørensen, Michael
Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale estimating functions under which estimators are consistent, rate optimal, and efficient under high frequency (in-fill) asymptotics. The asymptotic distributions of the estimators are shown to be normal variance-mixtures, where the mixing distribution generally depends on the full sample path of the diffusion...
Efficient isolation of multiphoton processes and detection of collective states in dilute samples
Bruder, Lukas; Stienkemeier, Frank
2015-01-01
A novel technique to sensitively and selectively isolate multiple-quantum coherences in a femtosecond pump-probe setup is presented. By detecting incoherent observables and applying lock-in amplification, even weak signals from highly dilute samples can be acquired. Applying this method, efficient isolation of one- and two-photon transitions in a rubidium-doped helium droplet beam experiment is demonstrated, and collective resonances up to fourth order are observed in a potassium vapor for the first time. Our approach provides new perspectives for coherent experiments in the deep UV and novel multidimensional spectroscopy schemes, in particular when selective detection of particles in dilute gas-phase targets is possible.
Sivakumar, Mani; Sakthivel, Mani; Chen, Shen-Ming
2017-03-15
Well-defined CoS nanorods (NR) were synthesized using a simple hydrothermal method, and were tested as an electrode material for electro-oxidation of vanillin. The NR material was characterized with regard to morphology, crystallinity, and electro-activity by use of appropriate analytical techniques. The resulting CoS NR@Nafion modified glassy carbon electrode (GCE) exhibited efficient electro-oxidation of vanillin with a considerable linear range of current-vs-concentration (0.5-56μM vanillin) and a detection limit of 0.07μM. Also, food samples containing vanillin were studied to test suitability for commercial applications.
Chito, D.; Weng, L.P.; Galceran, J.; Companys, E.; Puy, J.; Riemsdijk, van W.H.; Leeuwen, van H.P.
2012-01-01
The determination of free Zn2+ ion concentration is a key in the study of environmental systems like river water and soils, due to its impact on bioavailability and toxicity. AGNES (Absence of Gradients and Nernstian Equilibrium Stripping) and DMT (Donnan Membrane Technique) are emerging techniques
Yousefian, V.; Weinberg, M.H.; Haimes, R.
1980-02-01
The NASA CEC Code was the starting point for PACKAGE, whose function is to evaluate the composition of a multiphase combustion product mixture under the following chemical conditions: (1) total equilibrium with pure condensed species; (2) total equilibrium with ideal liquid solution; (3) partial equilibrium/partial finite rate chemistry; and (4) fully finite rate chemistry. The last three conditions were developed to treat the evolution of complex mixtures such as coal combustion products. The thermodynamic variable pairs considered are pressure (P) and enthalpy, P and entropy, or P and temperature. Minimization of Gibbs free energy is used. This report gives detailed discussions of formulation and input/output information used in the code. Sample problems are given. The code development, description, and current programming constraints are discussed. (DLC)
Wiebke, Jonas; Pahl, Elke; Schwerdtfeger, Peter
2012-07-01
A simple and efficient internal-coordinate importance sampling protocol for the Monte Carlo computation of (up to fourth-order) virial coefficients B̄(n) of atomic systems is proposed. The key feature is a multivariate sampling distribution that mimics the product structure of the dominating pairwise-additive parts of the B̄(n). This scheme is shown to be competitive over routine numerical methods and, as a proof of principle, applied to neon: The second, third, and fourth virial coefficients of neon as well as equation-of-state data are computed from ab initio two- and three-body potentials; four-body contributions are found to be insignificant. Kirkwood-Wigner quantum corrections to first order are found to be crucial to the observed agreement with recent ab initio and experimental reference data sets but are likely inadequate at very low temperatures.
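The idea of absorbing the dominant structure of the integrand into the sampling density can be shown on the simplest case: a one-dimensional Monte Carlo estimate of the second virial coefficient of hard spheres, sampling the pair separation from a density proportional to r² so the geometric factor cancels. This is a generic textbook analogue, not the authors' multivariate protocol; the function name and truncation radius are this sketch's own.

```python
import math
import random

def b2_hard_sphere_mc(sigma, r_max, n, rng):
    """Estimate B2 = -2*pi * Int (exp(-u(r)/kT) - 1) r^2 dr for hard
    spheres, drawing r from p(r) = 3 r^2 / r_max^3 so that the r^2
    factor is absorbed into the sampling density."""
    total = 0.0
    for _ in range(n):
        r = r_max * rng.random() ** (1.0 / 3.0)   # inverse-CDF draw from p(r)
        mayer = -1.0 if r < sigma else 0.0        # exp(-beta*u(r)) - 1
        total += mayer
    return -2.0 * math.pi * (r_max ** 3 / 3.0) * (total / n)

rng = random.Random(1)
b2 = b2_hard_sphere_mc(1.0, 2.0, 20000, rng)   # exact value: 2*pi/3
```

The same cancellation trick, applied pairwise in many dimensions, is what makes the product-structured sampling distribution of the paper efficient.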
Efficient, uninformative sampling of limb darkening coefficients for two-parameter laws
Kipping, David M
2013-01-01
Stellar limb darkening affects a wide range of astronomical measurements and is frequently modeled with a parametric model using polynomials in the cosine of the angle between the line of sight and the emergent intensity. Two-parameter laws are particularly popular for cases where one wishes to fit freely for the limb darkening coefficients (i.e. an uninformative prior) due to the compact prior volume and the fact that more complex models rarely obtain unique solutions with present data. In such cases, we show that the two limb darkening coefficients are constrained by three physical boundary conditions, describing a triangular region in the two-dimensional parameter space. We show that uniformly distributed samples may be drawn from this region with optimal efficiency by a technique developed in computer graphics: triangular sampling. Alternatively, one can make draws from a uniform, bivariate Dirichlet distribution. We provide simple expressions for these parametrizations for both techniques ap...
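A minimal sketch of the unit-square reparametrization for the quadratic law: two uniform deviates (q1, q2) are mapped onto coefficients (u1, u2) that land uniformly inside the physical triangle. The exact transform shown is the standard one associated with this approach, stated here as an assumption rather than quoted from the paper.

```python
import math
import random

def sample_quadratic_ld(rng):
    """Map (q1, q2), uniform on the unit square, onto the physical
    triangle of quadratic limb-darkening coefficients (u1, u2)."""
    q1, q2 = rng.random(), rng.random()
    u1 = 2.0 * math.sqrt(q1) * q2
    u2 = math.sqrt(q1) * (1.0 - 2.0 * q2)
    return u1, u2

rng = random.Random(3)
samples = [sample_quadratic_ld(rng) for _ in range(1000)]
```

Every draw satisfies the three boundary conditions (u1 ≥ 0, u1 + u2 ≤ 1, u1 + 2·u2 ≥ 0) by construction, so no proposals are wasted outside the physical region.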
Pedro Saa
2015-04-01
Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The former integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol), and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only
GUIDE TO CALCULATING TRANSPORT EFFICIENCY OF AEROSOLS IN OCCUPATIONAL AIR SAMPLING SYSTEMS
Hogue, M.; Hadlock, D.; Thompson, M.; Farfan, E.
2013-11-12
This report will present hand calculations for transport efficiency based on aspiration efficiency and particle deposition losses. Because the hand calculations become long and tedious, especially for lognormal distributions of aerosols, an R script (R 2011) will be provided for each element examined. Calculations are provided for the most common elements in a remote air sampling system, including a thin-walled probe in ambient air, straight tubing, bends and a sample housing. One popular alternative approach would be to put such calculations in a spreadsheet, a thorough version of which is shared by Paul Baron via the Aerocalc spreadsheet (Baron 2012). To provide greater transparency and to avoid common spreadsheet vulnerabilities to errors (Burns 2012), this report uses R. The particle size is based on the concept of activity median aerodynamic diameter (AMAD). The AMAD is a particle size in an aerosol where fifty percent of the activity in the aerosol is associated with particles of aerodynamic diameter greater than the AMAD. This concept allows for the simplification of transport efficiency calculations where all particles are treated as spheres with the density of water (1g cm-3). In reality, particle densities depend on the actual material involved. Particle geometries can be very complicated. Dynamic shape factors are provided by Hinds (Hinds 1999). Some example factors are: 1.00 for a sphere, 1.08 for a cube, 1.68 for a long cylinder (10 times as long as it is wide), 1.05 to 1.11 for bituminous coal, 1.57 for sand and 1.88 for talc. Revision 1 is made to correct an error in the original version of this report. The particle distributions are based on activity weighting of particles rather than based on the number of particles of each size. Therefore, the mass correction made in the original version is removed from the text and the calculations. Results affected by the change are updated.
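To give a flavor of the hand calculations, here is a sketch of one standard element of such a system: the Belyaev-Levin correlation for the aspiration efficiency of a thin-walled probe. The report itself provides R scripts; this sketch is in Python, and the coefficient values are the common textbook ones, which should be checked against the report before use.

```python
def aspiration_efficiency(stk, r):
    """Belyaev-Levin correlation for a thin-walled sampling probe.
    stk: particle Stokes number at the probe inlet.
    r:   U0/U, ratio of free-stream to sampling velocity
         (r = 1 is isokinetic sampling, giving efficiency 1)."""
    return 1.0 + (r - 1.0) * (1.0 - 1.0 / (1.0 + (2.0 + 0.617 / r) * stk))

iso = aspiration_efficiency(0.5, 1.0)   # isokinetic: no inertial bias
sub = aspiration_efficiency(0.5, 2.0)   # sub-isokinetic: oversampling, > 1
```

The overall transport efficiency of a sampling line is then the product of such factors (aspiration, tube deposition, bend losses), integrated over the activity-weighted particle size distribution.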
Morera-Gómez, Yasser; Cartas-Aguila, Héctor A; Alonso-Hernández, Carlos M; Bernal-Castillo, Jose L; Guillén-Arruebarrena, Aniel
2015-03-01
Monte Carlo efficiency transfer was used to determine the full-energy peak efficiency of a coaxial n-type HPGe detector. The efficiency calibration curves for three certified reference materials were determined by efficiency transfer using a (152)Eu reference source. The efficiency values obtained after efficiency transfer were used to calculate the activity concentrations of the radionuclides detected in the three materials, which were measured in a low-background gamma spectrometry system. Reported and calculated activity concentrations show good agreement, with mean deviations of 5%, which is satisfactory for environmental sample measurements.
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (≤ 5 min on GPU). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity.
Multiple replica repulsion technique for efficient conformational sampling of biological systems.
Malevanets, Anatoly; Wodak, Shoshana J
2011-08-17
Here, we propose a technique for sampling complex molecular systems with many degrees of freedom. The technique, termed "multiple replica repulsion" (MRR), does not suffer from poor scaling with the number of degrees of freedom associated with common replica exchange procedures and does not require sampling at high temperatures. The algorithm involves creation of multiple copies (replicas) of the system, which interact with one another through a repulsive potential that can be applied to the system as a whole or to portions of it. The proposed scheme prevents oversampling of the most populated states and provides accurate descriptions of conformational perturbations typically associated with sampling ground-state energy wells. The performance of MRR is illustrated for three systems of increasing complexity. A two-dimensional toy potential surface is used to probe the sampling efficiency as a function of key parameters of the procedure. MRR simulations of the Met-enkephalin pentapeptide, and the 76-residue protein ubiquitin, performed in presence of explicit water molecules and totaling 32 ns each, investigate the ability of MRR to characterize the conformational landscape of the peptide, and the protein native basin, respectively. Results obtained for the enkephalin peptide reflect more closely the extensive conformational flexibility of this peptide than previously reported simulations. Those obtained for ubiquitin show that conformational ensembles sampled by MRR largely encompass structural fluctuations relevant to biological recognition, which occur on the microsecond timescale, or are observed in crystal structures of ubiquitin complexes with other proteins. MRR thus emerges as a very promising simple and versatile technique for modeling the structural plasticity of complex biological systems.
False-Negative Rate and Recovery Efficiency Performance of a Validated Sponge Wipe Sampling Method
Krauter, Paula; Piepel, Gregory F.; Boucher, Raymond; Tezak, Matthew S.; Amidan, Brett G.; Einfeld, Wayne
2012-02-01
Recovery of spores from environmental surfaces varies due to sampling and analysis methods, spore size and characteristics, surface materials, and environmental conditions. Tests were performed to evaluate a new, validated sponge wipe method using Bacillus atrophaeus spores. Testing evaluated the effects of spore concentration and surface material on recovery efficiency (RE), false-negative rate (FNR), limit of detection (LOD), and their uncertainties. Ceramic tile and stainless steel had the highest mean RE values (48.9 and 48.1%, respectively). Faux leather, vinyl tile, and painted wood had mean RE values of 30.3, 25.6, and 25.5%, respectively, while plastic had the lowest mean RE (9.8%). Results show roughly linear dependences of RE and FNR on surface roughness, with smoother surfaces resulting in higher mean REs and lower FNRs. REs were not influenced by the low spore concentrations tested (3.10 × 10^-3 to 1.86 CFU/cm^2). Stainless steel had the lowest mean FNR (0.123), and plastic had the highest mean FNR (0.479). The LOD90 (≥1 CFU detected 90% of the time) varied with surface material, from 0.015 CFU/cm^2 on stainless steel up to 0.039 on plastic. It may be possible to improve sampling results by considering surface roughness in selecting sampling locations and interpreting spore recovery data. Further, FNR values (calculated as a function of concentration and surface material) can be used presampling to calculate the numbers of samples for statistical sampling plans with desired performance and postsampling to calculate the confidence in characterization and clearance decisions.
Ice nucleation efficiency of natural dust samples in the immersion mode
Kaufmann, Lukas; Marcolli, Claudia; Hofer, Julian; Pinti, Valeria; Hoyle, Christopher R.; Peter, Thomas
2016-09-01
tested suspension concentrations, and a microcline mineral showed bulk freezing temperatures even above 270 K. This makes microcline (KAlSi3O8) an exceptionally good ice-nucleating mineral, superior to all other analysed K-feldspars, (Na, Ca)-feldspars, and the clay minerals. In summary, the mineralogical composition can explain the observed freezing behaviour of 5 of the investigated 12 natural dust samples, and partly for 6 samples, leaving the freezing efficiency of only 1 sample not easily explained in terms of its mineral reference components. While this suggests that mineralogical composition is a major determinant of ice-nucleating ability, in practice, most natural samples consist of a mixture of minerals, and this mixture seems to lead to remarkably similar ice nucleation abilities, regardless of their exact composition, so that global models, in a first approximation, may represent mineral dust as a single species with respect to ice nucleation activity. However, more sophisticated representations of ice nucleation by mineral dusts should rely on the mineralogical composition based on a source scheme of dust emissions.
Irreducibility and efficiency of ESIP to sample marker genotypes in large pedigrees with loops
Schelling Matthias
2002-09-01
Markov chain Monte Carlo (MCMC) methods have been proposed to overcome computational problems in linkage and segregation analyses. This approach involves sampling genotypes at the marker and trait loci. Among MCMC methods, scalar-Gibbs is the easiest to implement, and it is used in genetics. However, the Markov chain that corresponds to scalar-Gibbs may not be irreducible when the marker locus has more than two alleles, and even when the chain is irreducible, mixing has been observed to be slow. Joint sampling of genotypes has been proposed as a strategy to overcome these problems. An algorithm that combines the Elston-Stewart algorithm and iterative peeling (the ESIP sampler) to sample genotypes jointly from the entire pedigree is used in this study. Here, it is shown that the ESIP sampler yields an irreducible Markov chain, regardless of the number of alleles at a locus. Further, results obtained by the ESIP sampler are compared with other methods in the literature. Of the methods that are guaranteed to be irreducible, ESIP was the most efficient.
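The scalar-Gibbs updates that the abstract contrasts with joint sampling can be illustrated on a toy two-variable discrete target. This is a generic sketch of single-site Gibbs sampling, not the ESIP algorithm; the irreducibility caveat discussed above is exactly about targets where such component-wise moves can get trapped.

```python
import random

def scalar_gibbs(p_joint, states, n_sweeps, rng):
    """Scalar (single-site) Gibbs sampler over a discrete joint p(x, y):
    each sweep resamples x from p(x | y), then y from p(y | x)."""
    x, y = rng.choice(states), rng.choice(states)
    out = []
    for _ in range(n_sweeps):
        wx = [p_joint(s, y) for s in states]   # unnormalized p(x | y)
        x = rng.choices(states, weights=wx)[0]
        wy = [p_joint(x, s) for s in states]   # unnormalized p(y | x)
        y = rng.choices(states, weights=wy)[0]
        out.append((x, y))
    return out

# toy positively correlated target on {0, 1}^2
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
rng = random.Random(11)
chain = scalar_gibbs(lambda a, b: p[(a, b)], [0, 1], 5000, rng)
```

For this target every conditional has full support, so the chain is irreducible; with multiallelic marker genotypes the analogous conditionals can assign zero probability to states needed to reach other configurations, which is the failure mode the ESIP sampler avoids.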
Curvelet-based sampling for accurate and efficient multimodal image registration
Safran, M. N.; Freiman, M.; Werman, M.; Joskowicz, L.
2009-02-01
We present a new non-uniform adaptive sampling method for the estimation of mutual information in multi-modal image registration. The method uses the Fast Discrete Curvelet Transform to identify regions along anatomical curves on which the mutual information is computed. Its main advantages over other non-uniform sampling schemes are that it captures the most informative regions, that it is invariant to feature shapes, orientations, and sizes, that it is efficient, and that it yields accurate results. Extensive evaluation on registrations of 20 validated clinical brain CT images to proton density (PD), T1-, and T2-weighted MRI images from the public RIRE database shows the effectiveness of our method. Rigid registration accuracy measured at 10 clinical targets and compared to ground truth measurements yields a mean target registration error of 0.68 mm (std = 0.4 mm) for CT-PD and 0.82 mm (std = 0.43 mm) for CT-T2. This is 0.3 mm (1 mm) more accurate in the average (worst) case than five existing sampling methods. Our method has the lowest registration errors recorded to date for the registration of CT-PD and CT-T2 images on the RIRE website when compared to methods that were tested on at least three patient datasets.
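The mutual-information objective that the sampled regions feed can be written down with a plain joint-histogram estimator over paired intensity samples. This is a generic sketch of the registration criterion, not the paper's curvelet-driven sampler; intensities are assumed pre-scaled to [0, 1).

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=8):
    """Histogram estimate of I(X; Y) in nats from paired samples in [0, 1)."""
    bx = [min(int(x * bins), bins - 1) for x in xs]
    by = [min(int(y * bins), bins - 1) for y in ys]
    n = len(bx)
    pxy = Counter(zip(bx, by))                 # joint histogram
    px, py = Counter(bx), Counter(by)          # marginal histograms
    return sum((c / n) * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

xs = [i / 100 for i in range(100)]
ys = [((i * 37) % 100) / 100 for i in range(100)]   # scrambled pairing
dependent = mutual_information(xs, xs)   # perfectly aligned "images"
scrambled = mutual_information(xs, ys)
```

A registration optimizer seeks the transform maximizing this quantity; non-uniform sampling changes only which pixel pairs (xs, ys) enter the histograms, which is where the curvelet-selected regions come in.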
Kitao, Akio; Harada, Ryuhei; Nishihara, Yasutaka; Tran, Duy Phuoc
2016-12-01
Parallel Cascade Selection Molecular Dynamics (PaCS-MD) was proposed as an efficient conformational sampling method to investigate conformational transition pathways of proteins. In PaCS-MD, cycles of (i) selection of initial structures for multiple independent MD simulations and (ii) conformational sampling by independent MD simulations are repeated until the sampling converges. The selection is conducted so that the protein conformation gradually approaches a target. The selection of snapshots is key to enhancing conformational changes by increasing the probability of rare event occurrence. Since the procedure of PaCS-MD is simple, no modification of MD programs is required; the selection of initial structures and the restart of the next cycle in the MD simulations can be handled with relatively simple scripts with straightforward implementation. Trajectories generated by PaCS-MD were further analyzed by the Markov state model (MSM), which enables calculation of the free energy landscape. The combination of PaCS-MD and MSM is reported in this work.
Efficient adaptive designs with mid-course sample size adjustment in clinical trials
Bartroff, Jay
2011-01-01
Adaptive designs have been proposed for clinical trials in which the nuisance parameters or alternative of interest are unknown or likely to be misspecified before the trial. Whereas most previous works on adaptive designs and mid-course sample size re-estimation have focused on two-stage or group sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. Not only does this approach maintain the prescribed type I error probability, but it also provides a simple but asymptotically efficient sequential test whose finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to the optimal sequential design, determined by dynamic programming, in the simplified normal mean case with known variance and prespecified alternative, and superior to the existing two-stage designs and also to adaptive group sequential designs when the al...
Testing the sampling efficiency of a nuclear power station stack monitor
Stroem, L.H. [Instrumentinvest, Nykoeping (Sweden)]
1997-08-01
The test method comprises the injection of known amounts of monodisperse particles into the stack air stream at a suitable point upstream of the sampling installation. To find a suitable injection point, the gas flow was mapped by means of a tracer gas released at various points in the stack base. The resulting concentration distributions at the stack sampler level were observed by means of an array of gas detectors. An injection point that produced a symmetrical distribution over the stack area and low concentrations at the stack walls was selected for the particle tests. Monodisperse particles of 6, 10, and 19 μm aerodynamic diameter, tagged with dysprosium, were dispersed at the selected injection point. Particle concentration at the sampler level was measured. The losses to the stack walls were found to be less than 10%. The particle concentrations at the four sampler inlets were calculated from the observed gas distribution. The amount calculated to be aspirated into the sampler piping was compared with the quantity collected by the ordinary filter of the sampling train, to obtain the sampling line transmission efficiency. 1 ref., 2 figs.
Demaison, Jean; Császár, Attila G.
2012-09-01
Based on a sample of 38 molecules, 47 accurate equilibrium CO bond lengths have been collected and analyzed. These ultimate experimental (reEX), semiexperimental (reSE), and Born-Oppenheimer (reBO) equilibrium structures are compared to reBO estimates from two lower-level techniques of electronic structure theory, MP2(FC)/cc-pVQZ and B3LYP/6-311+G(3df,2pd). A linear relationship is found between the best equilibrium bond lengths and their MP2 or B3LYP estimates. These (and similar) linear relationships permit estimation of the CO bond length with an accuracy of 0.002 Å over the full range of 1.10-1.43 Å, corresponding to single, double, and triple CO bonds, for a large number of molecules. The variation of the CO bond length is qualitatively explained using the Atoms in Molecules method. In particular, a clear correlation is found between the CO bond length and the electron density at the bond critical point, and it appears that the CO bond is at the same time covalent and ionic. Conditions which permit the computation of an accurate ab initio Born-Oppenheimer equilibrium structure are discussed. In particular, the core-core and core-valence correlation is investigated and is shown to roughly increase with the bond length.
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of
AN ECONOMIC RELIABILITY EFFICIENT GROUP ACCEPTANCE SAMPLING PLANS FOR FAMILY PARETO DISTRIBUTIONS
Muhammad Ismail
2013-12-01
Full Text Available The present research article deals with an economic reliability efficient group acceptance sampling plan for time-truncated tests based on the total number of failures, assuming that the lifetime of a product follows the family of Pareto distributions. The plan is proposed for the case where multiple products can be observed simultaneously as a group in a tester. The minimum termination time required for a given group size and acceptance number is determined such that the producer and consumer risks are satisfied for a specified quality level, while the number of groups and the number of testers are pre-assumed. Comparison studies are made between the proposed plan and the existing plan on the basis of minimum termination time. Two real examples are also discussed.
AREA EFFICIENT FRACTIONAL SAMPLE RATE CONVERSION ARCHITECTURE FOR SOFTWARE DEFINED RADIOS
Latha Sahukar
2014-09-01
Modern software defined radios (SDRs) use complex signal processing algorithms to realize efficient wireless communication schemes. Several such algorithms require a specific symbol-to-sample ratio to be maintained. In this context the fractional rate converter (FRC) becomes a crucial block in the receiver part of an SDR. The paper presents an area-optimized dynamic FRC block for low-power SDR applications. The limitations of the conventional cascaded interpolator and decimator architecture for the FRC are also presented. An extension of the sinc-function-interpolation-based architecture toward greater area optimization, with run-time configuration through a time register, is presented. The area and speed analyses are carried out with Xilinx FPGA synthesis tools. Only 15% area occupancy, with a maximum clock speed of 133 MHz, is reported on a Spartan-6 LX45 Field Programmable Gate Array (FPGA).
Bruder, Lukas; Binz, Marcel; Stienkemeier, Frank
2015-11-01
A phase modulation technique to sensitively and selectively isolate multiple-quantum coherences in a femtosecond pump-probe setup is presented. By detecting incoherent observables and incorporating lock-in amplification, even weak signals of highly dilute samples can be acquired. Applying this method, efficient isolation of one- and two-photon quantum beats in a rubidium-doped helium droplet beam experiment is demonstrated and collective resonances are observed in a potassium vapor for the first time up to fourth order. Our approach provides promising perspectives for coherent time-resolved experiments in the deep UV and multidimensional spectroscopy schemes, in particular when mass-selective detection of particles in dilute gas-phase targets is possible.
Li, Roger W; Brown, Brian; Edwards, Marion H; Ngo, Charlie V; Chat, Sandy W; Levi, Dennis M
2012-01-01
Vernier acuity, a form of visual hyperacuity, is amongst the most precise forms of spatial vision. Under optimal conditions Vernier thresholds are much finer than the inter-photoreceptor distance. Achievement of such high precision is based substantially on cortical computations, most likely in the primary visual cortex. Using stimuli with added positional noise, we show that Vernier processing is reduced with advancing age across a wide range of noise levels. Using an ideal observer model, we are able to characterize the mechanisms underlying age-related loss, and show that the reduction in Vernier acuity can be mainly attributed to the reduction in efficiency of sampling, with no significant change in the level of internal position noise, or spatial distortion, in the visual system.
An efficient self-optimized sampling method for rare events in nonequilibrium systems
JIANG HuiJun; PU MingFeng; HOU ZhongHuai
2014-01-01
Rare events such as nucleation processes are of ubiquitous importance in real systems. The most popular method for nonequilibrium systems, forward flux sampling (FFS), samples rare events by using interfaces to partition the whole transition process into a sequence of steps along an order parameter connecting the initial and final states. FFS usually suffers from two main difficulties: low computational efficiency due to bad interface locations, and inapplicability when the system is trapped in unknown intermediate metastable states. In the present work, we propose an approach to overcome these difficulties by self-adaptively locating the interfaces on the fly in an optimized manner. Contrary to conventional FFS, which sets the interfaces at equal distances along the order parameter, our approach determines the interfaces with equal transition probability, which is shown to satisfy the optimization condition. This is done by first running long local trajectories starting from the current interface λ_i to obtain the conditional probability distribution P_c(λ > λ_i | λ_i), and then determining λ_{i+1} by setting P_c(λ_{i+1} | λ_i) equal to a given value p_0. With these optimized interfaces, FFS can be run in a much more efficient way. In addition, our approach can conveniently find intermediate metastable states by monitoring special long trajectories that neither end at the initial state nor reach the next interface; the number of such trajectories increases sharply from zero if metastable states are encountered. We apply our approach to a two-state model system and a two-dimensional lattice-gas Ising model. Our approach is shown to be much more efficient than the conventional FFS method without losing accuracy, and it also reproduces the two-step nucleation scenario of the Ising model with easy identification of the intermediate metastable state.
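The equal-transition-probability placement rule described above can be illustrated on a toy system. The sketch below is not the authors' code: it uses a 1D overdamped Langevin walker in the double well U(x) = (x² - 1)², takes x itself as the order parameter, and places each next interface at the (1 - p0) quantile of trajectory maxima so that a fraction p0 of trial runs launched from the current interface would reach it. The value of p0, the temperature, and the trajectory length are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial_maxima(lam_i, n_trials=400, n_steps=400, dt=1e-3, kT=0.5):
    """Launch n_trials overdamped Langevin trajectories from interface lam_i
    in the double well U(x) = (x^2 - 1)^2 and return, for each trajectory,
    the maximum order-parameter value (here simply x) reached."""
    x = np.full(n_trials, lam_i)
    x_max = x.copy()
    for _ in range(n_steps):
        force = -4.0 * x * (x * x - 1.0)            # -dU/dx
        x = x + force * dt + np.sqrt(2.0 * dt * kT) * rng.normal(size=n_trials)
        np.maximum(x_max, x, out=x_max)             # running maximum per trial
    return x_max

def next_interface(lam_i, p0=0.3):
    """Equal-transition-probability placement: the next interface is the
    (1 - p0) quantile of the trajectory maxima, so a fraction p0 of the
    trials launched from lam_i would have reached it."""
    return np.quantile(trial_maxima(lam_i), 1.0 - p0)

# Build the interface ladder from the initial basin (x = -1) toward x = +1.
interfaces = [-0.9]
for _ in range(50):                                 # safety cap on iterations
    lam = next_interface(interfaces[-1])
    if lam >= 0.9:                                  # reached the final state
        interfaces.append(0.9)
        break
    interfaces.append(lam)
print(interfaces)
```

Because the maxima always exceed the launch point, each placed interface lies strictly above the previous one, and the spacing automatically tightens where forward progress is hard and widens where it is easy.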
Rached, Nadhir B.
2015-11-13
The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of the Equal Gain Combining (EGC) and Maximum Ratio Combining (MRC) receivers. In this case, the problem turns out to be that of computing the Cumulative Distribution Function (CDF) of a sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods, as naive MC simulations would require high computational complexity. Along this line, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies to arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Selected simulation results are finally provided to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.
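To illustrate why importance sampling is needed for such left-tail CDF estimates, the sketch below estimates P(X₁ + … + Xₙ < γ) for i.i.d. Log-normal variates. It uses a plain mean-shift change of measure, not the hazard rate twisting proposed in the paper; the shift value, threshold, and sample counts are illustrative assumptions.

```python
import math
import random

random.seed(7)

def lognorm_logpdf_ratio(y, mu_p):
    """log f(y)/g(y) for f = LogNormal(0, 1) and g = LogNormal(mu_p, 1);
    the normalising constants and the 1/y factors cancel."""
    ly = math.log(y)
    return 0.5 * ((ly - mu_p) ** 2 - ly ** 2)

def outage_prob_is(gamma, n_dim=2, n_samples=200_000):
    """Estimate P(X_1 + ... + X_n < gamma) for i.i.d. LogNormal(0, 1) X_i by
    importance sampling: shift the log-mean so that proposal samples
    concentrate near gamma / n, where the rare event actually happens."""
    mu_p = math.log(gamma / n_dim)          # proposal log-mean (mean shift)
    total = 0.0
    for _ in range(n_samples):
        ys = [random.lognormvariate(mu_p, 1.0) for _ in range(n_dim)]
        if sum(ys) < gamma:                 # rare event now hit frequently
            log_w = sum(lognorm_logpdf_ratio(y, mu_p) for y in ys)
            total += math.exp(log_w)        # likelihood-ratio weight
    return total / n_samples

est = outage_prob_is(0.1)
print(est)   # a very small probability, out of reach of naive MC at this size
```

A naive MC run of the same size would typically score zero hits for this threshold, while the shifted proposal hits the event on a large fraction of draws and corrects for the bias through the likelihood-ratio weights.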
Development of high-efficiency passive counters (HEPC) for the verification of large LEU samples
Peerani, P. [European Commission, DG-JRC, IPSC, Ispra (Italy)], E-mail: paolo.peerani@jrc.it; Canadell, V.; Garijo, J.; Jackson, K. [European Commission, DG-TREN/I, Nuclear Inspections (Luxembourg); Jaime, R.; Looman, M.; Ravazzani, A. [European Commission, DG-JRC, IPSC, Ispra (Italy); Schwalbach, P. [European Commission, DG-TREN/I, Nuclear Inspections (Luxembourg); Swinhoe, M. [Los Alamos National Laboratory, Los Alamos, NM (United States)
2009-04-01
A paper describing the conceptual idea of using passive neutron assay for the verification of large-size uranium samples in fuel fabrication plants was first presented at the 2001 ESARDA conference. The advantages of this technique as a replacement for active interrogation using the PHOto-Neutron Interrogation Device (PHONID) were evident, provided that a suitable detector with higher efficiency than those commercially available could be realised. The previous paper also included a feasibility study based on experimental data. To implement this technique, a high-efficiency passive counter (HEPC) has been designed by the JRC, Ispra. JRC has also built a first smaller-scale prototype. This paper describes the tests made in the PERLA laboratory and reports the performance of the prototype. In parallel, the design of the large HEPC has been finalised for Euratom safeguards. Two units for the fuel fabrication plants in Dessel (B) and Juzbado (E) have been produced by a commercial manufacturer under JRC specifications. The two detectors were installed at the two sites in summer 2004, after an extensive test campaign in PERLA. They have been in use since then, and some feedback on the experience gained is reported at the end of this paper.
Mohiuddin Ahmed
2015-07-01
There is significant interest in the data mining and network management communities in efficiently analysing huge amounts of network traffic, given the volume generated even in small networks. Summarization is a primary data mining task for generating a concise yet informative summary of given data, and creating a summary from network traffic data is a research challenge. Existing clustering-based summarization techniques lack the ability to create a summary suitable for further data mining tasks such as anomaly detection, and they require the summary size as an external input. Additionally, for complex and high-dimensional network traffic datasets, there is often no single clustering solution that explains the structure of the given data. In this paper, we investigate the use of multiview clustering to create a meaningful summary using original data instances from network traffic data in an efficient manner. We develop a mathematically sound approach to selecting the summary size using a sampling technique. We compare our proposed approach with regular clustering-based summarization incorporating the summary-size calculation method, and with a random approach. We validate our proposed approach using a benchmark network traffic dataset and state-of-the-art summary evaluation metrics.
David Simoncini
Fragment assembly is a powerful method of protein structure prediction that builds protein models from a pool of candidate fragments taken from known structures. Stochastic sampling is subsequently used to refine the models. The structures are first represented as coarse-grained models and then as all-atom models for computational efficiency. Many models have to be generated independently due to the stochastic nature of the sampling methods used to search for the global minimum in a complex energy landscape. In this paper we present EdaFold(AA), a fragment-based approach which shares information between the generated models and steers the search towards native-like regions. A distribution over fragments is estimated from a pool of low-energy all-atom models. This iteratively refined distribution is used to guide the selection of fragments during the building of models for subsequent rounds of structure prediction. The use of an estimation of distribution algorithm enables EdaFold(AA) to reach lower energy levels and to generate a higher percentage of near-native models. EdaFold(AA) uses an all-atom energy function and produces models with atomic resolution. We observed an improvement in energy-driven blind selection of models on a benchmark of EdaFold(AA) in comparison with the Rosetta AbInitioRelax protocol.
Mostafa Bentahir
Separating CBRN mixed samples that contain both chemical and biological warfare agents (CB mixed samples) in liquid and solid matrices remains a very challenging issue. Parameters were set up to assess the performance of a simple filtration-based method, first optimized on separate C- and B-agents and then assessed on a model CB mixed sample. In this model, MS2 bacteriophage, Autographa californica nuclear polyhedrosis baculovirus (AcNPV), Bacillus atrophaeus, and Bacillus subtilis spores were used as biological agent simulants, whereas ethyl methylphosphonic acid (EMPA) and pinacolyl methylphosphonic acid (PMPA) were used as surrogates for the VX and soman (GD) nerve agents, respectively. Nanoseparation centrifugal devices with various pore-size cut-offs (30 kDa up to 0.45 µm) and three RNA extraction methods (Invisorb, EZ1, and Nuclisens) were compared. RNA (MS2) and DNA (AcNPV) quantification was carried out by means of specific and sensitive quantitative real-time PCRs (qPCR). Liquid chromatography coupled to time-of-flight mass spectrometry (LC/TOF-MS) was used for quantifying EMPA and PMPA. Culture methods and qPCR demonstrated that membranes with a 30 kDa cut-off retain more than 99.99% of the biological agents (MS2, AcNPV, Bacillus atrophaeus, and Bacillus subtilis spores) tested separately. A rapid and reliable separation of CB mixed sample models (MS2/PEG-400 and MS2/EMPA/PMPA) contained in simple liquid or complex matrices such as sand and soil was also successfully achieved on a 30 kDa filter, with more than 99.99% retention of MS2 on the filter membrane and up to 99% recovery of PEG-400, EMPA, and PMPA in the filtrate. The whole separation process turnaround time (TAT) was less than 10 minutes. The filtration method appears to be rapid, versatile, and extremely efficient. The separation method developed in this work therefore constitutes a useful model for further evaluating and comparing additional separation alternative procedures for a safe handling and
Carlowitz, Christian; Girg, Thomas; Ghaleb, Hatem; Du, Xuan-Quang
2017-08-01
For ultra-high speed communication systems at high center frequencies above 100 GHz, we propose a disruptive change in system architecture to address major issues regarding amplifier chains with a large number of amplifier stages. They cause a high noise figure and high power consumption when operating close to the frequency limits of the underlying semiconductor technologies. Instead of scaling a classic homodyne transceiver system, we employ repeated amplification in single-stage amplifiers through positive feedback as well as synthesizer-free self-mixing demodulation at the receiver to simplify the system architecture notably. Since the amplitude and phase information for the emerging oscillation is defined by the input signal and the oscillator is only turned on for a very short time, it can be left unstabilized and thus come without a PLL. As soon as gain is no longer the most prominent issue, relaxed requirements for all the other major components allow reconsidering their implementation concepts to achieve further improvements compared to classic systems. This paper provides the first comprehensive overview of all major design aspects that need to be addressed upon realizing a SPARS-based transceiver. At system level, we show how to achieve high data rates and a noise performance comparable to classic systems, backed by scaled demonstrator experiments. Regarding the transmitter, design considerations for efficient quadrature modulation are discussed. For the frontend components that replace PA and LNA amplifier chains, implementation techniques for regenerative sampling circuits based on super-regenerative oscillators are presented. Finally, an analog-to-digital converter with outstanding performance and complete interfaces both to the analog baseband as well as to the digital side completes the set of building blocks for efficient ultra-high speed communication.
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
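The systematic random sampling strategy described above (one random start, then equidistant sites) takes only a few lines to sketch. The extents, step sizes, and grid layout below are illustrative assumptions, not the reported software:

```python
import random

def systematic_sites(extent, step, rng=random):
    """Systematic random sampling in 1D: one random start inside the first
    interval, then sites at fixed equidistant intervals across the extent."""
    start = rng.uniform(0.0, step)
    return [start + k * step for k in range(int((extent - start) // step) + 1)]

def systematic_grid(width, height, dx, dy, rng=random):
    """2D version, e.g. for stepping a stage across a section of interest:
    the random offset is drawn once per axis, then the grid is regular."""
    xs = systematic_sites(width, dx, rng)
    ys = systematic_sites(height, dy, rng)
    return [(x, y) for y in ys for x in xs]

# Hypothetical 1000 x 800 um region sampled every 150 um in each direction.
sites = systematic_grid(1000.0, 800.0, 150.0, 150.0)
print(len(sites), sites[:3])
```

Every point of the structure has the same probability of being sampled, yet the fixed spacing gives much lower variance than placing the same number of sites independently at random.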
Yang, Mingjun; Yang, Lijiang; Gao, Yiqin; Hu, Hao
2014-07-28
Umbrella sampling is an efficient method for calculating free energy changes of a system along well-defined reaction coordinates. However, when there are multiple parallel channels along the reaction coordinate, or hidden barriers in directions perpendicular to it, it is difficult for conventional umbrella sampling to reach convergent sampling within limited simulation time. Here, we propose an approach that combines umbrella sampling with the integrated tempering sampling method. Umbrella sampling is applied to the chemically more relevant degrees of freedom that possess significant barriers. Integrated tempering sampling is used to facilitate the sampling of other degrees of freedom which may possess statistically non-negligible barriers. The combined method is applied to two model systems, butane and ACE-NME molecules, and shows significantly improved sampling efficiency compared to standalone conventional umbrella sampling or integrated tempering sampling. Further analyses suggest that the enhanced performance of the new method comes from the complementary advantages of umbrella sampling, with its well-defined reaction coordinate, and of integrated tempering sampling in the orthogonal space. The combined approach could therefore be useful in the simulation of biomolecular processes, which often involve sampling of complex, rugged energy landscapes.
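A minimal sketch of the umbrella-sampling half of such a scheme, on a 1D double well standing in for a dihedral coordinate. The potential, force constant, window centers, and temperature are illustrative assumptions; the integrated tempering part and the WHAM/MBAR reweighting step are omitted:

```python
import math
import random

random.seed(1)

def U(x):
    """Double-well potential with a barrier at x = 0."""
    return (x * x - 1.0) ** 2

def sample_window(x0, k_umb, center, n_steps=20_000, beta=4.0, dx=0.2):
    """Metropolis sampling of U(x) + (k_umb/2)(x - center)^2 in one window."""
    x, samples = x0, []
    for _ in range(n_steps):
        y = x + random.uniform(-dx, dx)
        dE = (U(y) - U(x)) + 0.5 * k_umb * ((y - center) ** 2
                                            - (x - center) ** 2)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            x = y
        samples.append(x)
    return samples

# Overlapping windows along the reaction coordinate; each harmonic restraint
# holds the walker near its center, so together the windows cover the barrier
# at x = 0 that unbiased sampling at beta = 4 would cross only rarely.
centers = [-1.0, -0.5, 0.0, 0.5, 1.0]
windows = [sample_window(c, k_umb=10.0, center=c) for c in centers]
# The biased histograms from neighboring windows overlap; WHAM or MBAR would
# normally combine them into the unbiased free-energy profile F(x).
print([round(sum(w) / len(w), 2) for w in windows])
```

The point of the restraints is visible in the output: each window's samples stay near its center, so the union of windows spans the full coordinate range, including the barrier-top region.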
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards, to determine the most practical methods for sampling the three most prominent species: Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggested that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods: sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean are walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications when efficiency is not paramount.
Alyssa M Anderson
2012-10-01
Collections of Chironomidae surface-floating pupal exuviae (SFPE) provide an effective means of assessing water quality in streams. Although not widely used in the United States, the technique is not new and has been shown to be more cost-efficient than traditional dip-net sampling techniques in organically enriched streams in an urban landscape. The intent of this research was to document the efficiency of sorting SFPE samples relative to dip-net samples in trout streams whose catchments vary in amount of urbanization and impervious surface. Samples of both SFPE and dip-nets were collected from 17 sample sites located on 12 trout streams in Duluth, MN, USA. We quantified the time needed to sort subsamples of 100 macroinvertebrates from dip-net samples, and of fewer or more than 100 chironomid exuviae from SFPE samples. For larger SFPE samples, the time required to subsample up to 300 exuviae was also recorded. The average time to sort subsamples of 100 specimens was 22.5 minutes for SFPE samples, compared to 32.7 minutes for 100 macroinvertebrates in dip-net samples. The average time to sort up to 300 exuviae was 37.7 minutes. These results indicate that sorting SFPE samples is more time-efficient than traditional dip-net techniques in trout streams with varying catchment characteristics. doi: 10.5324/fn.v31i0.1380. Published online: 17 October 2012.
Holmes, Adam; Umrigar, Cyrus
2016-01-01
We introduce a new selected configuration interaction plus perturbation theory algorithm that is based on a deterministic analog of our recent efficient heat-bath sampling algorithm. This Heat-bath Configuration Interaction (HCI) algorithm makes use of two parameters that control the tradeoff between speed and accuracy: one controls the selection of determinants to add to a variational wavefunction, and the other controls the selection of determinants used to compute the perturbative correction to the variational energy. We show that HCI provides an accurate treatment of both static and dynamic correlation by computing the potential energy curve of the multireference carbon dimer in the cc-pVDZ basis. We then demonstrate the speed and accuracy of HCI by recovering the full configuration interaction energy of both the carbon dimer in the cc-pVTZ basis and the strongly correlated chromium dimer in the Ahlrichs VDZ basis, correlating all electrons, to an accuracy of better than 1 mHa, in just a few min...
Tourism Equilibrium Price Trends
Mohammad Mohebi
2012-01-01
Problem statement: A review of tourism history shows that tourism as an industry was virtually unknown in Malaysia until the late 1960s. Since then, it has developed and grown into a major industry, making an important contribution to the country's economy. By allocating substantial funds to the promotion of tourism and the provision of the necessary infrastructure, the government has played an important role in the impressive progress of the Malaysian tourism industry. One of the important factors which can attract tourists to Malaysia is the tourism price. Has the price of tourism decreased? To answer this question, it is necessary to obtain the equilibrium prices as well as their yearly trend for Malaysia during the sample period, as this will be useful for analysis of the infrastructure situation of the tourism industry in this country. The purpose of the study is to identify equilibrium tourism price trends in the Malaysian tourism market. Approach: We use hotel rooms as representative of the tourism market. Quarterly data from 1995-2009 are used and a dynamic simultaneous-equations model is employed. Results: Based on the results, during the period from 1995 to 2000 the growth rate of the equilibrium price was greater than that of the consumer price index and the producer price index. Conclusion: In the Malaysian tourism market, new infrastructure during this period had not been developed to keep pace with tourist arrivals.
Efficient Multilevel and Multi-index Sampling Methods in Stochastic Differential Equations
Haji-Ali, Abdul Lateef
2016-05-22
Most problems in engineering and natural sciences involve parametric equations in which the parameters are not known exactly due to measurement errors, lack of measurement data, or even intrinsic variability. In such problems, one objective is to compute point or aggregate values, called “quantities of interest”. A rapidly growing research area that tries to tackle this problem is Uncertainty Quantification (UQ). As the name suggests, UQ aims to accurately quantify the uncertainty in quantities of interest. To that end, the approach followed in this thesis is to describe the parameters using probabilistic measures and then to employ probability theory to approximate the probabilistic information of the quantities of interest. In this approach, the parametric equations must be accurately solved for multiple values of the parameters to explore the dependence of the quantities of interest on these parameters, using various so-called “sampling methods”. In almost all cases, the parametric equations cannot be solved exactly and suitable numerical discretization methods are required. The high computational complexity of these numerical methods coupled with the fact that the parametric equations must be solved for multiple values of the parameters make UQ problems computationally intensive, particularly when the dimensionality of the underlying problem and/or the parameter space is high. This thesis is concerned with optimizing existing sampling methods and developing new ones. Starting with the Multilevel Monte Carlo (MLMC) estimator, we first prove its normality using the Lindeberg-Feller CLT theorem. We then design the Continuation Multilevel Monte Carlo (CMLMC) algorithm that efficiently approximates the parameters required to run MLMC. We also optimize the hierarchies of one-dimensional discretization parameters that are used in MLMC and analyze the tolerance splitting parameter between the statistical error and the bias constraints. An important contribution
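A minimal sketch of the plain MLMC estimator that this line of work builds on, here for E[S_T] of a geometric Brownian motion discretized by Euler-Maruyama, with fine and coarse paths on each level driven by the same Brownian increments. The model, level count, and per-level sample sizes are illustrative assumptions, not the optimized hierarchies of the thesis:

```python
import math
import random

random.seed(3)

def mlmc_level(level, n_samples, S0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """One MLMC level for E[S_T] under GBM with Euler-Maruyama steps.
    Returns the sample mean of P_fine - P_coarse, where both paths share
    the same Brownian increments; at level 0 the coarse term is absent."""
    n_fine = 2 ** level
    h_f = T / n_fine
    acc = 0.0
    for _ in range(n_samples):
        sf = sc = S0
        dws = []
        for step in range(n_fine):
            dw = random.gauss(0.0, math.sqrt(h_f))
            sf += mu * sf * h_f + sigma * sf * dw      # fine path
            dws.append(dw)
            if level > 0 and step % 2 == 1:            # coarse path: 2x step
                dwc = dws[-2] + dws[-1]                # summed increments
                sc += mu * sc * (2 * h_f) + sigma * sc * dwc
        acc += sf - (sc if level > 0 else 0.0)
    return acc / n_samples

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]: most samples on
# the cheap coarse level, progressively fewer on the expensive fine levels.
levels = [0, 1, 2, 3, 4]
samples = [200_000, 50_000, 12_000, 3_000, 800]
estimate = sum(mlmc_level(l, n) for l, n in zip(levels, samples))
print(estimate)   # exact value: S0 * exp(mu * T) = 1.0513...
```

Because the variance of the level differences decays with refinement, the decreasing sample counts keep the total cost far below that of running all samples at the finest discretization for the same accuracy.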
Boni, Sara Macente; Oyafuso, Luiza Keiko; Soler, Rita de Cassia; Lindoso, José Angelo Lauletta
2017-06-01
Traditional diagnostic methods used to detect American Tegumentary Leishmaniasis, such as histopathology of biopsy samples, culture techniques, and direct search for parasites, have low sensitivity and require invasive collection procedures. This study evaluates the efficiency of a noninvasive sampling method (swab) combined with the Polymerase Chain Reaction (PCR) for diagnosing American Tegumentary Leishmaniasis, using skin and mucous samples from 25 patients who had tested positive for leishmaniasis. The performance of the tests on swab samples was consistent with the PCR results on biopsy samples. The findings also show that the PCR-kDNA test is more efficient than the PCR-HSP70 and qPCR tests (sensitivities of 92.3%, 40.7%, and 41%, respectively). Given the high sensitivity of the tests and the fact that swab sampling affords greater patient comfort and safety, this method is a promising alternative to conventional biopsy-based methods for the molecular diagnosis of leishmaniasis.
Bruggeman, M; Verheyen, L; Vidmar, T; Liu, B
2016-03-01
We present a numerical fitting method for transmission data that outputs an equivalent sample composition. This output is used as input to a generalised efficiency transfer model based on the EFFTRAN software integrated in a LIMS. The procedural concept allows choosing between efficiency transfer with a predefined sample composition or with an experimentally determined composition based on a transmission measurement. The method can be used for simultaneous quantification of low-energy gamma emitters like (210)Pb, (241)Am, (234)Th in typical environmental samples.
Zhao, Y.; Aarnink, A.J.A.; Doornenbal, P.; Huynh, T.T.T.; Groot Koerkamp, P.W.G.; Landman, W.J.M.; Jong, de M.C.M.
2011-01-01
Using uranine as a physical tracer, this study assessed the sampling efficiencies of four bioaerosol samplers (Andersen 6-stage impactor, all-glass impinger “AGI-30”, OMNI-3000, and Airport MD8 with gelatin filter) for collecting Gram-positive bacteria (Enterococcus faecalis), Gram-negative bacteria
Improving the sampling efficiency of the Grand Canonical Simulated Quenching approach
Perez, Danny [Los Alamos National Laboratory; Vernon, Louis J. [Los Alamos National Laboratory
2012-04-04
Most common atomistic simulation techniques, like molecular dynamics or Metropolis Monte Carlo, operate under a constant interatomic Hamiltonian with a fixed number of atoms. Internal (atom positions or velocities) or external (simulation cell size or geometry) variables are then evolved dynamically or stochastically to yield sampling in different ensembles, such as microcanonical (NVE), canonical (NVT), isothermal-isobaric (NPT), etc. Averages are then taken to compute relevant physical properties. At least two limitations of these standard approaches can seriously hamper their application to many important systems: (1) they do not allow for the exchange of particles with a reservoir, and (2) the sampling efficiency is insufficient to allow converged results to be obtained because of the very long intrinsic timescales associated with these quantities. To fix ideas, one might want to identify low (free) energy configurations of grain boundaries (GB). In reality, grain boundaries are in contact with the grains, which act as reservoirs of defects (e.g., vacancies and interstitials). Since the GB can exchange particles with its environment, the most stable configuration provably cannot be found by sampling from NVE or NVT ensembles alone: one needs to allow the number of atoms in the sample to fluctuate. The first limitation can be circumvented by working in the grand canonical ensemble (μVT) or its derivatives (such as the semi-grand-canonical ensemble useful for the study of substitutional alloys). Monte Carlo methods have been the first to adapt to this kind of system where the number of atoms is allowed to fluctuate. Many of these methods are based on the Widom insertion method [Widom63], where the chemical potential of a given chemical species can be inferred from the potential energy changes upon random insertion of a new particle within the simulation cell. Other techniques, such as the Gibbs ensemble Monte Carlo [Panagiotopoulos87] where exchanges of particles are
Efficiency of event-based sampling according to error energy criterion.
Miskowicz, Marek
2010-01-01
The paper belongs to the line of studies that assess the effectiveness of a particular event-based sampling scheme against conventional periodic sampling as a reference. In the present study, event-based sampling according to a constant energy of the sampling error is analyzed. This criterion is suitable for applications where the energy of the sampling error should be bounded (e.g., in building automation, or in greenhouse climate monitoring and control). Compared to the integral sampling criteria, the error energy criterion gives more weight to extreme sampling error values. The proposed sampling principle extends the range of event-based sampling schemes and makes the choice of a particular sampling criterion more flexible to application requirements. In the paper, it is proved analytically that the proposed event-based sampling criterion is more effective than periodic sampling by a factor defined by the ratio of the maximum to the mean of the cube root of the squared signal time-derivative in the analyzed time interval. Furthermore, it is shown that sampling according to the energy criterion is less effective than the send-on-delta scheme but more effective than sampling according to the integral criterion. On the other hand, it is indicated that the higher effectiveness of sampling according to the selected event-based criterion is obtained at the cost of an increased total sampling error, defined as the sum of the errors for all the samples taken.
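The two event-based schemes compared above can be sketched in discrete time; the test signal, time step, threshold, and energy budget below are illustrative assumptions, not values from the paper.

```python
import math

def energy_triggered_samples(signal, dt, energy_budget):
    """Error-energy criterion: take a new sample whenever the energy of
    the zero-order-hold error, the integral of (x(t) - x_last)^2 dt,
    reaches the budget (discrete-time approximation of the integral)."""
    samples, energy, last = [0], 0.0, signal[0]
    for i in range(1, len(signal)):
        energy += (signal[i] - last) ** 2 * dt
        if energy >= energy_budget:
            samples.append(i)
            last, energy = signal[i], 0.0
    return samples

def send_on_delta_samples(signal, delta):
    """Reference send-on-delta scheme: sample when |x - x_last| >= delta."""
    samples, last = [0], signal[0]
    for i in range(1, len(signal)):
        if abs(signal[i] - last) >= delta:
            samples.append(i)
            last = signal[i]
    return samples

sig = [math.sin(0.01 * i) for i in range(1000)]    # illustrative test signal
ev = energy_triggered_samples(sig, dt=0.01, energy_budget=1e-4)
sod = send_on_delta_samples(sig, delta=0.1)
```

Both schemes sample densely where the signal moves fast and sparsely where it is flat, which is the source of their advantage over periodic sampling.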
Hrubý Jan
2012-04-01
Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both concerning the physical concepts and the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require much computation time. For this reason, modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
Luca Corazzini
2015-01-01
by error and noise in behavior. Results change when we consider a more general QRE specification with cross-subject heterogeneity in concerns for (group) efficiency. In this case, we find that the majority of the subjects make contributions that are compatible with the hypothesis of a preference for (group) efficiency. A likelihood-ratio test confirms the superiority of the more general specification of the QRE model over alternative specifications.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Hoogerheide, L.F.; Opschoor, A.; van Dijk, Nico M.
2012-01-01
This discussion paper was published in the Journal of Econometrics (2012). Vol. 171(2), 101-120. A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of...
Ji, Yan-ling; Chen, Xiao-wei; Zhang, Zhu-bao; Li, Jing; Xie, Tian-yao
2014-10-01
Based on an efficient sample clean-up and a field-amplified sample injection online preconcentration technique in capillary electrophoresis with contactless conductivity detection, a new analytical method for the sensitive determination of melamine in milk samples was established. In order to remove the complex matrix interference, which caused serious problems during field-amplified sample injection, liquid-liquid extraction was utilized. Liquid-liquid extraction provided excellent sample clean-up efficiency when ethyl acetate was used as the organic extractant and the pH of the sample solution was adjusted to 9.5. Both inorganic salts and biological macromolecules were effectively removed by liquid-liquid extraction. The sample clean-up procedure, capillary electrophoresis separation parameters, and field-amplified sample injection conditions are discussed in detail. The capillary electrophoresis separation was achieved within 5 min under the following conditions: an uncoated fused-silica capillary, 12 mM HAc + 10 mM NaAc (pH = 4.6) as the running buffer, a separation voltage of +13 kV, and electrokinetic injection at +12 kV × 10 s. Preliminary validation of the method with spiked melamine provided recoveries >90%, with limits of detection and quantification of 0.015 and 0.050 mg/kg, respectively. The relative standard deviations of intra- and inter-day measurements were below 6%. This newly developed method is sensitive and cost-effective, and therefore suitable for screening of melamine contamination in milk products.
An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals
Verhelst, Norman D.
2008-01-01
Uniform sampling of binary matrices with fixed margins is known as a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
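The slow-converging MCMC baseline this abstract refers to is typically the "checkerboard swap" chain, which preserves all row and column sums by flipping 2×2 alternating submatrices. A minimal sketch of that naive chain (an illustration of the setting, not the paper's improved algorithm):

```python
import random

def checkerboard_step(m, rng):
    """One margin-preserving MCMC move: pick two rows and two columns;
    if the 2x2 submatrix is a checkerboard ([[1,0],[0,1]] or
    [[0,1],[1,0]]), flip all four entries. Row and column sums are
    unchanged by construction."""
    r1, r2 = rng.sample(range(len(m)), 2)
    c1, c2 = rng.sample(range(len(m[0])), 2)
    a, b = m[r1][c1], m[r1][c2]
    c, d = m[r2][c1], m[r2][c2]
    if a == d and b == c and a != b:
        m[r1][c1], m[r1][c2] = b, a
        m[r2][c1], m[r2][c2] = d, c

rng = random.Random(1)
mat = [[1, 1, 0],
       [1, 0, 1],
       [0, 1, 1]]
row_sums = [sum(r) for r in mat]
col_sums = [sum(c) for c in zip(*mat)]
for _ in range(1000):
    checkerboard_step(mat, rng)
```

Every reachable state has the same margins as the start matrix; the difficulty the abstract points to is that many proposed moves are rejected and mixing is slow, hence the need for long burn-in.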
Martinez, M.; Wulfsohn, Dvora-Laio; Zamora, I.
2012-01-01
'fractionator' tree sampling procedure and supporting handheld software (Gardi et al., 2007; Wulfsohn et al., 2012) to obtain representative samples of fruit from a 7.6-ha apple orchard (Malus ×domestica 'Fuji Raku Raku') in central Chile. The resulting sample consisted of 70 fruit on 56 branch segments...
An energy-efficient adaptive sampling scheme for wireless sensor networks
Masoum, Alireza; Meratnia, Nirvana; Havinga, Paul J.M.
2013-01-01
Wireless sensor networks are new monitoring platforms. To cope with their resource constraints, in terms of energy and bandwidth, spatial and temporal correlation in sensor data can be exploited to find an optimal sampling strategy to reduce number of sampling nodes and/or sampling frequencies while
Barrera Manuel
2017-03-01
A methodology to determine the full-energy peak efficiency (FEPE) for precise gamma spectrometry measurements of environmental samples with a high-purity germanium (HPGe) detector is introduced, valid when this efficiency depends on the energy of the radiation E, the height of the cylindrical sample H, and its density ρ. The methodology consists of an initial calibration as a function of E and H and the application of a self-attenuation factor, depending on the density of the sample ρ, in order to correct for the different attenuation of the generic sample relative to the measured standard. The obtained efficiency can be used in the whole range of interest studied, E = 120–2000 keV, H = 1–5 cm, and ρ = 0.8–1.7 g/cm3, with an uncertainty below 5%. The efficiency has been checked by the measurement of standards, resulting in good agreement between experimental and expected activities. The described methodology can be extended to similar situations where samples show geometric and compaction differences.
Lautenbach, Ebbing; Santana, Evelyn; Lee, Abby; Tolomeo, Pam; Black, Nicole; Babson, Andrew; Perencevich, Eli N; Harris, Anthony D; Smith, Catherine A; Maslow, Joel
2008-04-01
We assessed the rate of recovery of fluoroquinolone-resistant and fluoroquinolone-susceptible Escherichia coli isolates from culture of frozen perirectal swab samples compared with the results for culture of the same specimen before freezing. Recovery rates for these 2 classes of E. coli were 91% and 83%, respectively. The majority of distinct strains recovered from the initial sample were also recovered from the frozen sample. The strains that were not recovered were typically present only in low numbers in the initial sample. These findings emphasize the utility of frozen surveillance samples.
Computing Equilibrium Free Energies Using Non-Equilibrium Molecular Dynamics
Christoph Dellago
2013-12-01
As shown by Jarzynski, free energy differences between equilibrium states can be expressed in terms of the statistics of work carried out on a system during non-equilibrium transformations. This exact result, as well as the related Crooks fluctuation theorem, provides the basis for the computation of free energy differences from fast-switching molecular dynamics simulations, in which an external parameter is changed at a finite rate, driving the system away from equilibrium. In this article, we first briefly review the Jarzynski identity and the Crooks fluctuation theorem and then survey various algorithms building on these relations. We pay particular attention to the statistical efficiency of these methods and discuss practical issues arising in their implementation and the analysis of the results.
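The Jarzynski identity above, ΔF = −kT ln⟨exp(−W/kT)⟩, reduces in practice to an exponential average over repeated work measurements. A minimal estimator sketch; the Gaussian work values are a synthetic stand-in for fast-switching simulation output, not real data.

```python
import math
import random

def jarzynski_free_energy(work_values, kT=1.0):
    """Jarzynski estimator: ΔF = -kT ln <exp(-W/kT)>, computed with a
    log-sum-exp shift for numerical stability."""
    n = len(work_values)
    m = min(work_values)
    s = sum(math.exp(-(w - m) / kT) for w in work_values)
    return m - kT * math.log(s / n)

# Synthetic Gaussian work distribution (an assumption for
# illustration): for W ~ N(mu, sigma^2) the identity gives
# ΔF = mu - sigma^2 / (2 kT) in the large-sample limit.
random.seed(2)
mu, sigma, kT = 2.0, 0.5, 1.0
work = [random.gauss(mu, sigma) for _ in range(200000)]
dF = jarzynski_free_energy(work, kT)
```

The statistical-efficiency issue the abstract discusses shows up here directly: the exponential average is dominated by rare low-work trajectories, so the estimator's variance grows rapidly with the width of the work distribution.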
Non-equilibrium modelling of distillation
Wesselingh, JA; Darton, R
1997-01-01
There are nasty conceptual problems in the classical way of describing distillation columns via equilibrium stages and efficiencies or HETPs. We can nowadays avoid these problems by simulating the behaviour of a complete column in one go using a non-equilibrium model. Such a model has phase
I.P. van Staveren (Irene)
2009-01-01
The dominant economic theory, neoclassical economics, employs a single economic evaluative criterion: efficiency. Moreover, it assigns this criterion a very specific meaning. Other – heterodox – schools of thought in economics tend to use more open concepts of efficiency, related to comm
Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.
2015-12-01
Improved knowledge about fundamental physical processes, advances in computing power, and a focus on integrated modeling have resulted in complex environmental and water resources models. However, the high dimensionality of these models adds to overall uncertainty and poses issues when evaluating them for sensitivity, parameter identification, and optimization through rigorous computer experiments. The parameter screening method of elementary effects (EE) offers a blend of useful properties inherited from inexpensive one-at-a-time methods and expensive global techniques. Since its development, EE has undergone improvements, largely on the sampling side, with over seven sampling strategies developed during the last decade. These strategies can broadly be classified into trajectory-based and polytope-based schemes. Trajectory-based strategies are more widely used, conceptually simple, and generally use the principle of spreading the sample points in the input hyperspace as widely as possible through oversampling. Because of this, their implementation has been found to be impractically time-consuming for high-dimensional cases (when the number of input factors exceeds, say, 50). Here, we enhanced Sampling for Uniformity (SU) (Khare et al., 2015), a trajectory-based EE sampling scheme founded on the dual principles of spread and uniformity. The new scheme, enhanced SU (eSU), is the same as SU except for the manner in which intermediate trajectory points are formed. It was tested for sample uniformity, spread, sampling time, and screening efficiency. Experiments were repeated with combinations of the number of trajectories and the oversampling size. Preliminary results indicate that eSU is superior to SU by some margin with respect to all four criteria. Interestingly, in the case of eSU, the oversampling size had no impact on any of the evaluation criteria except a linear increase in sampling time. Pending further investigation, this has opened a new avenue to substantially bring down the
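The trajectory-based sampling the abstract builds on can be sketched with plain Morris trajectories (the SU/eSU construction itself is not given here, so this is a generic illustration): a random base point on a grid, then one step per factor, and a finite-difference elementary effect for each step.

```python
import random

def morris_trajectory(k, levels, rng):
    """One Morris trajectory: a random base point on a k-dimensional
    grid, then k steps, each perturbing a different factor by delta."""
    delta = levels / (2.0 * (levels - 1))
    grid = [i / (levels - 1) for i in range(levels)]
    base = [rng.choice([g for g in grid if g + delta <= 1.0]) for _ in range(k)]
    order = rng.sample(range(k), k)          # order in which factors move
    points = [base[:]]
    for factor in order:
        nxt = points[-1][:]
        nxt[factor] += delta
        points.append(nxt)
    return points, order, delta

def elementary_effects(f, points, order, delta):
    """Finite-difference elementary effect of each perturbed factor."""
    return {factor: (f(points[i + 1]) - f(points[i])) / delta
            for i, factor in enumerate(order)}

rng = random.Random(3)
f = lambda x: 3.0 * x[0] + 2.0 * x[1] + 0.0 * x[2]   # linear test function
points, order, delta = morris_trajectory(k=3, levels=4, rng=rng)
ee = elementary_effects(f, points, order, delta)
```

For the linear test function the elementary effects recover the coefficients (3, 2, 0) exactly; screening statistics are then formed by averaging |EE| over many trajectories, which is where the spread and uniformity of the trajectory set matters.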
Efficient and scalable serial extraction of DNA and RNA from frozen tissue samples.
Mathot, Lucy; Lindman, Monica; Sjöblom, Tobias
2011-01-07
Advances in cancer genomics have created a demand for scalable sample processing. We here present a process for serial extraction of nucleic acids from the same frozen tissue sample based on magnetic silica particles. The process is automation friendly with high recoveries of pure DNA and RNA suitable for analysis.
Ion exchange equilibrium constants
Marcus, Y
2013-01-01
Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
Deckers, Sylvie M; Sindic, Marianne; Anceau, Christine; Brostaux, Yves; Detry, Jean G
2010-11-01
Agar contact microbiological sampling techniques, based on a transfer of the microorganisms present on a surface to a culture medium, are widely used to assess and control surface cleanliness and to evaluate microbial contamination levels. The effectiveness of these techniques depends on many environmental parameters that influence the strength of attachment of the bacteria to the surface. In the present study, stainless steel and high density polyethylene surfaces were inoculated with known concentrations of Staphylococcus epidermidis. Following an experimental design, the surfaces were sampled with different types of replicate organism direct agar contact plates and Petrifilm; results indicated that recovery rates were influenced by the presence of egg white albumin or Tween 80 in the inoculum solutions or by the introduction of surfactants into the contact agar of the microbiological sampling techniques. The techniques yielded significantly different results, depending on sampling conditions, underlining the need for a standardization of laboratory experiments to allow relevant comparisons of such techniques.
Ortiz, Fernando E.; Kelmelis, Eric J.; Arce, Gonzalo R.
2007-04-01
According to the Shannon-Nyquist theory, the number of samples required to reconstruct a signal is proportional to its bandwidth. Recently, it has been shown that acceptable reconstructions are possible from a reduced number of random samples, a process known as compressive sampling. Taking advantage of this realization has a radical impact on power consumption and communication bandwidth, crucial in applications based on small/mobile/unattended platforms such as UAVs and distributed sensor networks. Although the benefits of these compression techniques are self-evident, the reconstruction process requires the solution of nonlinear signal processing algorithms, which limits applicability in portable and real-time systems. In particular, (1) the power consumption associated with the difficult computations offsets the power savings afforded by compressive sampling, and (2) limited computational power prevents these algorithms from keeping pace with the data-capturing sensors, resulting in undesirable data loss. FPGA-based computers offer low power consumption and high computational capacity, providing a solution to both problems simultaneously. In this paper, we present an architecture that implements the algorithms central to compressive sampling in an FPGA environment. We start by studying the computational profile of the convex optimization algorithms used in compressive sampling. Then we present the design of a pixel pipeline suitable for FPGA implementation, able to execute these algorithms.
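The convex-optimization reconstruction step can be illustrated with ISTA, one of the simplest solvers for the underlying l1-regularized least-squares problem. This is a generic sketch, not the paper's algorithm or FPGA pipeline; problem sizes, λ, and the iteration count are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam, n_iter):
    """Iterative shrinkage-thresholding (ISTA) for
    min 0.5*||Ax - y||^2 + lam*||x||_1, a basic instance of the convex
    programs used in compressive-sampling reconstruction."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L                          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# Illustrative sizes: a length-60 signal with 3 nonzeros recovered
# from 30 random measurements.
rng = np.random.default_rng(4)
n, m, k = 60, 30, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 3.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y, lam=0.01, n_iter=3000)
```

Each iteration is one matrix-vector product pair plus an elementwise threshold, which is exactly the kind of regular, pipelineable arithmetic that maps well onto an FPGA.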
丁维莉; 陆铭
2007-01-01
Lacking the guidance of general equilibrium (GE) theories in public economics and the corresponding proper mechanisms, China has unsurprisingly witnessed inequality in educational expenditures across regions as well as insufficient funds for education in poor areas. It is wrongly thought that this is due to the decentralized financing system of basic education. This essay attempts to demonstrate that such a decentralized system is capable of encouraging local governments to improve the quality and efficiency of basic education. This is possible if the central government is involved in designing specific countervailing policies to reduce the negative impact of unequal access to education and of the sorting phenomenon on human capital accumulation for low-income families. This has particular significance for growth in a country that has a massive labor-intensive sector.
Computationally efficient algorithm for Gaussian Process regression in case of structured samples
Belyaev, M.; Burnaev, E.; Kapushev, Y.
2016-04-01
Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, factorial design of experiments with missing points). In such cases the size of the data set can be very large, so one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by using the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. In this work we also introduce a regularization procedure that takes into account the anisotropy of the data set and avoids degeneracy of the regression model.
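For a two-factor grid the tensor-structure speedup reduces to Kronecker algebra: the kernel matrix factors as K1 ⊗ K2, so the linear solve only needs eigendecompositions of the small factors. A sketch under the simplifying assumption of a full grid with no missing points (the paper also handles the general case):

```python
import numpy as np

def kron_gp_solve(K1, K2, y, noise):
    """Solve (K1 ⊗ K2 + noise*I) alpha = y without forming the full
    kernel matrix, via eigendecompositions of the factors.
    Cost is O(n1^3 + n2^3) instead of O((n1*n2)^3)."""
    d1, V1 = np.linalg.eigh(K1)
    d2, V2 = np.linalg.eigh(K2)
    Y = y.reshape(len(d1), len(d2))
    T = V1.T @ Y @ V2                      # rotate into the joint eigenbasis
    T = T / (np.outer(d1, d2) + noise)     # divide by Kronecker eigenvalues
    return (V1 @ T @ V2.T).ravel()         # rotate back

# Toy 5 x 7 factorial grid with a squared-exponential kernel
# (lengthscale and noise level are illustrative assumptions).
rng = np.random.default_rng(0)
x1, x2 = np.linspace(0, 1, 5), np.linspace(0, 1, 7)
se = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.3 ** 2)
K1, K2 = se(x1, x1), se(x2, x2)
y = rng.standard_normal(35)
alpha = kron_gp_solve(K1, K2, y, noise=1e-2)
```

The result matches a direct solve against the explicitly assembled 35 × 35 matrix, but only 5 × 5 and 7 × 7 factors are ever decomposed; this is the mechanism that keeps both the computational and memory cost low.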
Global structure search for molecules on surfaces: Efficient sampling with curvilinear coordinates.
Krautgasser, Konstantin; Panosetti, Chiara; Palagin, Dennis; Reuter, Karsten; Maurer, Reinhard J
2016-08-28
Efficient structure search is a major challenge in computational materials science. We present a modification of the basin hopping global geometry optimization approach that uses a curvilinear coordinate system to describe global trial moves. This approach has recently been shown to be efficient in structure determination of clusters [C. Panosetti et al., Nano Lett. 15, 8044-8048 (2015)] and is here extended for its application to covalent, complex molecules and large adsorbates on surfaces. The employed automatically constructed delocalized internal coordinates are similar to molecular vibrations, which enhances the generation of chemically meaningful trial structures. By introducing flexible constraints and local translation and rotation of independent geometrical subunits, we enable the use of this method for molecules adsorbed on surfaces and interfaces. For two test systems, trans-β-ionylideneacetic acid adsorbed on a Au(111) surface and methane adsorbed on a Ag(111) surface, we obtain superior performance of the method compared to standard optimization moves based on Cartesian coordinates.
Power-efficient high-speed parallel-sampling adcs for broadband multi-carrier systems
Lin, Yu; Doris, Kostas; van Roermund, Arthur H M
2015-01-01
This book addresses the challenges of designing high performance analog-to-digital converters (ADCs) based on the “smart data converters” concept, which implies context awareness, on-chip intelligence and adaptation. Readers will learn to exploit various information either a-priori or a-posteriori (obtained from devices, signals, applications or the ambient situations, etc.) for circuit and architecture optimization during the design phase or adaptation during operation, to enhance data converters performance, flexibility, robustness and power-efficiency. The authors focus on exploiting the a-priori knowledge of the system/application to develop enhancement techniques for ADCs, with particular emphasis on improving the power efficiency of high-speed and high-resolution ADCs for broadband multi-carrier systems.
F Brindani
2013-02-01
Norovirus is the most prevalent causative agent of foodborne diseases. However, the detection of this virus in foods other than shellfish is often time-consuming and unsuccessful. The objective of this study is to compare PEG and ultrafiltration techniques for viral concentration in bivalve molluscs. An experiment with Coxsackie B5 virus and feline Calicivirus strain F was conducted to determine the concentration efficiency for each virus. The ultrafiltration technique proved the most suitable.
Snyder-Mackler, Noah; Majoros, William H; Yuan, Michael L; Shaver, Amanda O; Gordon, Jacob B; Kopp, Gisela H; Schlebusch, Stephen A; Wall, Jeffrey D; Alberts, Susan C; Mukherjee, Sayan; Zhou, Xiang; Tung, Jenny
2016-06-01
Research on the genetics of natural populations was revolutionized in the 1990s by methods for genotyping noninvasively collected samples. However, these methods have remained largely unchanged for the past 20 years and lag far behind the genomics era. To close this gap, here we report an optimized laboratory protocol for genome-wide capture of endogenous DNA from noninvasively collected samples, coupled with a novel computational approach to reconstruct pedigree links from the resulting low-coverage data. We validated both methods using fecal samples from 62 wild baboons, including 48 from an independently constructed extended pedigree. We enriched fecal-derived DNA samples up to 40-fold for endogenous baboon DNA and reconstructed near-perfect pedigree relationships even with extremely low-coverage sequencing. We anticipate that these methods will be broadly applicable to the many research systems for which only noninvasive samples are available. The lab protocol and software ("WHODAD") are freely available at www.tung-lab.org/protocols-and-software.html and www.xzlab.org/software.html, respectively. Copyright © 2016 by the Genetics Society of America.
Macchia, Marco; Bertini, Simone; Mori, Claudio; Orlando, Caterina; Papi, Chiara; Placanica, Giorgio
2004-03-01
In this paper, an HPLC method is proposed for a routine, rapid and simple analysis of heroin samples confiscated from the illicit market, based on a new type of packing for HPLC columns (monolithic silica). Acetonitrile and pH 3.5 phosphate buffer solution were used under both isocratic and gradient conditions. Under our analytical conditions, all the components of a typical mixture of an illicit heroin sample proved to be fully separated into well-resolved peaks in 7 min. Analytical linearity and accuracy of the method were also studied for all analytes using tetracaine hydrochloride as the internal standard.
Sample-efficient Strategies for Learning in the Presence of Noise
Cesa-Bianchi, N.; Dichterman, E.; Fischer, Paul
1999-01-01
to logarithmic factors) examples are necessary for PAC learning any target class of {0,1}-valued functions of VC dimension d, where ε is the desired accuracy and η = ε/(1 + ε) − Δ the malicious noise rate (it is well known that any nontrivial target class cannot be PAC learned...... intervals on the real line. This is especially interesting as we can also show that the popular minimum disagreement strategy needs samples of size dε/Δ², hence is not optimal with respect to sample size. We then discuss the use of randomized hypotheses. For these the bound ε/(1 + ε...
Caffò, Alessandro O; Lopez, Antonella; Spano, Giuseppina; Saracino, Giuseppe; Stasolla, Fabrizio; Ciriello, Giuseppe; Grattagliano, Ignazio; Lancioni, Giulio E; Bosco, Andrea
2016-12-01
Models of cognitive reserve in aging suggest that an individual's life experience (education, working activity, and leisure) can exert a neuroprotective effect against cognitive decline and may represent an important contribution to successful aging. The objective of the present study is to investigate the role of cognitive reserve, pre-morbid intelligence, age, and education level in predicting cognitive efficiency in a sample of healthy aged individuals and individuals with probable mild cognitive impairment. Two hundred and eight aging participants recruited from the provincial region of Bari (Apulia, Italy) took part in the study. A battery of standardized tests was administered to measure cognitive reserve, pre-morbid intelligence, and cognitive efficiency. Protocols for 10 participants were excluded because they did not meet inclusion criteria, and statistical analyses were conducted on data from the remaining 198 participants. A path analysis was used to test the following model: age, education level, and intelligence directly influence cognitive reserve and cognitive efficiency; cognitive reserve mediates the influence of age, education level, and intelligence on cognitive efficiency. Cognitive reserve fully mediates the relationship of pre-morbid intelligence and education level with cognitive efficiency, while age maintains a direct effect on cognitive efficiency. Cognitive reserve appears to exert a protective effect against cognitive decline in normal and pathological populations, thus masking, at least in the early phases of neurodegeneration, the decline of memory, orientation, attention, language, and reasoning skills. The assessment of cognitive reserve may represent a useful supplement in neuropsychological screening protocols for cognitive decline.
An efficient, robust, and inexpensive grinding device for herbal samples like Cinchona bark
Hansen, Steen Honoré; Holmfred, Else Skovgaard; Cornett, Claus
2015-01-01
An effective, robust, and inexpensive device for grinding herb samples such as bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to provide various particle sizes, to be easy to clean, and to have a minimum of d...
Vårdal, Linda; Gjelstad, Astrid; Huang, Chuixiu
2017-01-01
AIM: For the first time, extracts obtained from human plasma samples by electromembrane extraction (EME) were investigated comprehensively with particular respect to phospholipids using ultra-high-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). The purpose was to invest...
Nicola, Victor F.; Zaburnenko, Tatiana S.
2006-01-01
In this paper we propose a state-dependent importance sampling heuristic to estimate the probability of population overflow in feed-forward networks. This heuristic attempts to approximate the “optimal” state-dependent change of measure without the need for difficult analysis or costly optimization i
Kim, Stephan D.; Luo, Jiajun; Buchholz, D. Bruce; Chang, R. P. H.; Grayson, M.
2016-09-01
A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.
Geostatistics for Mapping Leaf Area Index over a Cropland Landscape: Efficiency Sampling Assessment
Javier Garcia-Haro
2010-11-01
This paper evaluates the performance of spatial methods to estimate leaf area index (LAI) fields from ground-based measurements at high spatial resolution over a cropland landscape. Three geostatistical variants of the kriging technique are used: ordinary kriging (OK), collocated cokriging (CKC), and kriging with an external drift (KED). The study focused on the influence of the spatial sampling protocol, auxiliary information, and spatial resolution on the estimates. The main advantage of these models lies in the possibility of considering the spatial dependence of the data and, in the case of KED and CKC, the auxiliary information for each location used for prediction purposes. A high-resolution NDVI image computed from SPOT TOA reflectance data is used as an auxiliary variable in LAI predictions. The CKC and KED predictions have proven the relevance of the auxiliary information for reproducing the spatial pattern at local scales, with the KED model proving to be the best estimator when a non-stationary trend is observed. Advantages and limitations of the methods for LAI field predictions under two systematic and two stratified spatial samplings are discussed for high (20 m), medium (300 m) and coarse (1 km) spatial scales. The KED has exhibited the best observed local accuracy for all the spatial samplings, while the OK model provides comparable results when a well-stratified sampling scheme is considered by land cover.
Rare-event simulation for tandem queues: A simple and efficient importance sampling scheme
Miretskiy, D.; Scheinhardt, W.; Mandjes, M.
2009-01-01
This paper focuses on estimating the rare event of overflow in the downstream queue of a tandem Jackson queue, relying on importance sampling. It is known that in this setting ‘traditional’ state-independent schemes perform poorly. More sophisticated state-dependent schemes yield asymptotic efficien
Large prospective cohorts originally assembled to study environmental risk factors are increasingly exploited to study gene-environment interactions. Given the cost of genetic studies in large numbers of subjects, being able to select a sub-sample for genotyping that contains most of the information...
Genova, Alessandro; Pavanello, Michele
2015-12-01
In order to approximately satisfy the Bloch theorem, simulations of complex materials involving periodic systems are made n_k times more complex by the need to sample the first Brillouin zone at n_k points. By combining ideas from Kohn-Sham density-functional theory (DFT) and orbital-free DFT, for which no sampling is needed due to the absence of waves, subsystem DFT offers an interesting middle ground capable of sizable theoretical speedups against Kohn-Sham DFT. By splitting the supersystem into interacting subsystems, and mapping their quantum problem onto separate auxiliary Kohn-Sham systems, subsystem DFT allows an optimal topical sampling of the Brillouin zone. We elucidate this concept with two proof-of-principle simulations: a water bilayer on Pt[1 1 1]; and a complex system relevant to catalysis, a thiophene molecule physisorbed on a molybdenum sulfide monolayer deposited on top of an α-alumina support. For the latter system, a speedup of 300% is achieved against the subsystem DFT reference by using an optimized Brillouin zone sampling (600% against KS-DFT).
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe
2017-04-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We 'sampled' the fluxes using a fixed interval (1-32 days) or a rule-based (decision tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimation increased along with the fixed interval length. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as that of the fixed interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha-1 yr-1 in estimating cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce
CASP10-BCL::Fold efficiently samples topologies of large proteins.
Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens
2015-03-01
During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs), omitting the more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation for CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, to identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in the clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed, as β-strands were not properly aligned to form hydrogen bonds, removing otherwise accurate models from the pool. Further, BCL::Fold frequently samples non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.
Improved importance sampling technique for efficient simulation of digital communication systems
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evolutions of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evolutions are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communications systems.
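The translation idea behind the IIS scheme described above can be illustrated with a minimal sketch: shifting the mean of the sampling density toward the rare region and reweighting by the likelihood ratio. This is an illustrative Gaussian-tail toy problem, not the authors' simulation setup; all function names and parameter values here are ours.

```python
import math
import random


def tail_prob_is(threshold=5.0, shift=5.0, n=100_000, seed=1):
    """Estimate P(N > threshold) for N ~ Normal(0, 1) by importance
    sampling from a mean-translated proposal Normal(shift, 1); each
    hit is weighted by the likelihood ratio phi(x) / phi(x - shift)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)          # sample the translated density
        if x > threshold:                  # rare under p, common under q
            # log-weight: -x^2/2 + (x - shift)^2/2
            total += math.exp((x - shift) ** 2 / 2.0 - x * x / 2.0)
    return total / n


est = tail_prob_is()
exact = 0.5 * math.erfc(5.0 / math.sqrt(2.0))  # closed-form Gaussian tail
```

A plain Monte Carlo run of the same size would typically see zero events above the threshold (the true probability is about 3e-7), while the translated sampler concentrates effort in the rare region and recovers the probability to within a few percent.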
Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search
Guez, Arthur; Dayan, Peter
2012-01-01
Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty. In this setting, a Bayes-optimal policy captures the ideal trade-off between exploration and exploitation. Unfortunately, finding Bayes-optimal policies is notoriously taxing due to the enormous search space in the augmented belief-state MDP. In this paper we exploit recent advances in sample-based planning, based on Monte-Carlo tree search, to introduce a tractable method for approximate Bayes-optimal planning. Unlike prior work in this area, we avoid expensive applications of Bayes rule within the search tree, by lazily sampling models from the current beliefs. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems.
Asymptotic Conservativeness and Efficiency of Kruskal-Wallis Test for K Dependent Samples.
1980-05-01
DeGroot, Morris H., Feder, Paul I. and Goel, Prem K. (1971), "Matchmaking", Ann. Math. Statist. 42, 578-593. Esary, J. D. and...270). However, if the sample is broken (c.f. DeGroot, Feder, and Goel (1971)), all the test procedures for testing H0 are no longer valid. It is
Wright, Louise B; Walsh, Tiffany R
2013-04-07
Harnessing the properties of biomolecules, such as peptides, adsorbed on inorganic surfaces is of interest to many cross-disciplinary areas of science, ranging from biomineralisation to nanomedicine. Key to advancing research in this area is determination of the peptide conformation(s) in its adsorbed state, at the aqueous interface. Molecular simulation is one such approach for accomplishing this goal. In this respect, use of temperature-based replica-exchange molecular dynamics (T-REMD) can yield enhanced sampling of the interfacial conformations, but does so at great computational expense, chiefly because of the need to include an explicit representation of water at the interface. Here, we investigate a number of more economical variations on REMD, chiefly those based on Replica Exchange with Solvent Tempering (REST), using the aqueous quartz-binding peptide S1-(100) α-quartz interfacial system as a benchmark. We also incorporate additional implementation details specifically targeted at improving sampling of biomolecules at interfaces. We find the REST-based variants yield configurational sampling of the peptide-surface system comparable with T-REMD, at a fraction of the computational time and resource. Our findings also deliver novel insights into the binding behaviour of the S1 peptide at the quartz (100) surface that are consistent with available experimental data.
Glauber Renato Stürmer
2014-06-01
Full Text Available An experiment was conducted to compare the collecting capacity of three types of beating cloth in sampling for soybeans caterpillars and stink bugs in different row spacing and cultivars. The experiment was conducted in a completely randomized design with six replications in a 2x3x3 factorial, using two cultivars (BMX Potencia RR and Fundacep 53 RR, three row spacing (0.4, 0.5 and 0.6 m and three types of beating cloth (beating cloth, wide beating cloth and vertical beat sheet. To determinate the collecting capacity of caterpillars, samples were taken at V9, V11 and R1 stages and for stink bugs, the samplings were performed at R5.3 and R5.5 stages. Results showed no interaction between the factors, indicating independence among them. Caterpillars had no preference between cultivars, however, stink bugs had a higher population density on Fundacep 53 RR. In the three evaluation dates, the density of larvae was higher when soybean was sown with reduced spacing. For stink bugs, higher infestation was observed on soybean sown with 0.4 m row spacing in the first assessment (R5.3, difference not observed in the evaluation at R5.5. The wide beating cloth and vertical beating cloth showed greater collecting capacity for caterpillars and bugs over beating cloth.
Ghasemi, Ensieh; Sillanpää, Mika
2015-01-01
A novel type of magnetic nanosorbent, hydroxyapatite-coated Fe2O3 nanoparticles was synthesized and used for the adsorption and removal of nitrite and nitrate ions from environmental samples. The properties of synthesized magnetic nanoparticles were characterized by scanning electron microscopy, Fourier transform infrared spectroscopy, and X-ray powder diffraction. After the adsorption process, the separation of γ-Fe2O3@hydroxyapatite nanoparticles from the aqueous solution was simply achieved by applying an external magnetic field. The effects of different variables on the adsorption efficiency were studied simultaneously using an experimental design. The variables of interest were amount of magnetic hydroxyapatite nanoparticles, sample volume, pH, stirring rate, adsorption time, and temperature. The experimental parameters were optimized using a Box-Behnken design and response surface methodology after a Plackett-Burman screening design. Under the optimum conditions, the adsorption efficiencies of magnetic hydroxyapatite nanoparticles adsorbents toward NO3(-) and NO2(-) ions (100 mg/L) were in the range of 93-101%. The results revealed that the magnetic hydroxyapatite nanoparticles adsorbent could be used as a simple, efficient, and cost-effective material for the removal of nitrate and nitrite ions from environmental water and soil samples.
ON VECTOR NETWORK EQUILIBRIUM PROBLEMS
Guangya CHEN
2005-01-01
In this paper we define a concept of weak equilibrium for vector network equilibrium problems. We obtain sufficient conditions for weak equilibrium points and establish relations between vector network equilibrium problems and vector variational inequalities.
Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling
Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus
2012-01-01
Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed form description of the prior...... for applying the sequential Gibbs sampler and illustrate how it works. Through two case studies, we demonstrate the application of the method to a linear image restoration problem and to a non-linear cross-borehole inversion problem. We demonstrate how prior information can reduce the complexity of an inverse...... also reduce the computation time for the inversion dramatically. The method works for any statistical model for which sequential simulation can be used to generate realizations. This applies to most algorithms developed in the geostatistical community.
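As background for the Gibbs sampling idea this method builds on, a minimal textbook sketch follows: each coordinate is redrawn in turn from its exact conditional distribution given the others. The bivariate Gaussian target and all parameter values are illustrative only; this is not the sequential Gibbs method or any geostatistical model from the paper.

```python
import math
import random


def gibbs_bivariate_normal(rho=0.8, n=20_000, burn=1_000, seed=7):
    """Gibbs sampler for a zero-mean bivariate normal with correlation
    rho: each coordinate is redrawn from its exact conditional given
    the other, x | y ~ Normal(rho * y, 1 - rho^2)."""
    rng = random.Random(seed)
    x = y = 0.0
    s = math.sqrt(1.0 - rho * rho)   # conditional standard deviation
    draws = []
    for i in range(n):
        x = rng.gauss(rho * y, s)    # update x given current y
        y = rng.gauss(rho * x, s)    # update y given new x
        if i >= burn:                # discard burn-in
            draws.append((x, y))
    return draws


samples = gibbs_bivariate_normal()
mean_x = sum(x for x, _ in samples) / len(samples)
corr = sum(x * y for x, y in samples) / len(samples)
```

After burn-in, the empirical mean of x is near 0 and the empirical E[xy] is near the target correlation of 0.8, up to Monte Carlo error.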
Brignole, Esteban Alberto
2013-01-01
Traditionally, the teaching of phase equilibria emphasizes the relationships between the thermodynamic variables of each phase in equilibrium rather than its engineering applications. This book changes the focus from the use of thermodynamics relationships to compute phase equilibria to the design and control of the phase conditions that a process needs. Phase Equilibrium Engineering presents a systematic study and application of phase equilibrium tools to the development of chemical processes. The thermodynamic modeling of mixtures for process development, synthesis, simulation, design and
Hartwig, Carla Andrade; Pereira, Rodrigo Mendes; Novo, Diogo La Rosa; Oliveira, Dirce Taina Teixeira; Mesko, Marcia Foster
2017-11-01
Responding to the need for green and efficient methods to determine catalyst residues with suitable precision and accuracy in samples with high fat content, the present work evaluates a microwave-assisted ultraviolet digestion (MW-UV) system for margarines and subsequent determination of Ni, Pd and Pt using inductively coupled plasma mass spectrometry (ICP-MS). It was possible to digest up to 500 mg of margarine using only 10 mL of 4 mol L(-1) HNO3 with a digestion efficiency higher than 98%. This allowed the determination of catalyst residues using ICP-MS, free of interferences. For this purpose, the following experimental parameters were evaluated: concentration of digestion solution, sample mass and microwave irradiation program. The residual carbon content was used as a parameter to evaluate the efficiency of digestion and to select the most suitable experimental conditions. The accuracy evaluation was performed by recovery tests using a standard solution and certified reference material, and recoveries ranging from 94% to 99% were obtained for all analytes. The limits of detection for Ni, Pd and Pt using the proposed method were 35.6, 0.264 and 0.302 ng g(-1), respectively. When compared to microwave-assisted digestion (MW-AD) in closed vessels using concentrated HNO3 (used as a reference method for sample digestion), the proposed MW-UV could be considered an excellent alternative for the digestion of margarine, as this method requires only a diluted nitric acid solution for efficient digestion. In addition, MW-UV provides appropriate solutions for further ICP-MS determination with suitable precision (relative standard deviation < 7%) and accuracy for all evaluated analytes. The proposed method was applied to margarines from different brands produced in Brazil, and the concentration of catalyst residues was in agreement with the current legislation or recommendations. Copyright © 2017 Elsevier B.V. All rights reserved.
Mohammad Osama
2014-06-01
Full Text Available Pleurotus ostreatus, a white rot fungus, is capable of bioremediating a wide range of organic contaminants including Polycyclic Aromatic Hydrocarbons (PAHs. Ergosterol is produced by living fungal biomass and used as a measure of fungal biomass. The first part of this work deals with the extraction and quantification of PAHs from contaminated sediments by Lipid Extraction Method (LEM. The second part consists of the development of a novel extraction method (Ergosterol Extraction Method (EEM, quantification and bioremediation. The novelty of this method is the simultaneously extraction and quantification of two different types of compounds, sterol (ergosterol and PAHs and is more efficient than LEM. EEM has been successful in extracting ergosterol from the fungus grown on barley in the concentrations of 17.5-39.94 µg g-1 ergosterol and the PAHs are much more quantified in numbers and amounts as compared to LEM. In addition, cholesterol usually found in animals, has also been detected in the fungus, P. ostreatus at easily detectable levels.
Rivero, M Jordana; Alomar, Daniel; Valderrama, Ximena; Le Cozler, Yannick; Velásquez, Alejandro; Haines, Deborah
2016-08-01
The objective of this study was to compare the prediction efficiency of IgG concentration in bovine colostrum by NIRS, using liquid and dried (Dry-Extract Spectroscopy for Infrared Reflectance, DESIR) samples in transflectance and reflectance modes, respectively. Colostrum samples (157), obtained from 2 commercial Holstein dairy farms, were collected within the first hour after calving and kept at -20 °C until analysis. After thawing and homogenisation, a subsample of 500 mg of liquid colostrum was placed in an aluminium mirror transflectance cell (0·1 mm path length), in duplicate, to collect the spectrum. A glass fiber filter disc was infused with another subsample of 500 mg of colostrum, in duplicate, and dried in a forced-air oven at 60 °C for 20 min. The samples were placed in cells for dry samples to collect the spectra. The spectra in the VIS-NIR region (400-2500 nm) were obtained with a NIRSystems 6500 monochromator. Mathematical treatments, scatter correction treatments and numbers of cross-validation groups were tested to obtain prediction equations for both techniques. Reference analysis for IgG content was performed by radial immunodiffusion. The DESIR technique showed a higher variation in the spectral regions associated with water absorption bands, compared with liquid samples. The best equation for the transflectance method (liquid samples) obtained a higher coefficient of determination for calibration (0·95 vs. 0·94) and cross validation (0·94 vs. 0·91), and a lower error of cross validation (9·03 vs. 11·5), than the best equation for the reflectance method (DESIR samples). Finally, both methods showed excellent capacity for quantitative analysis, with residual predictive deviations above 3. It is concluded that, regarding accuracy of prediction and time for obtaining results of IgG from bovine colostrum, NIRS analysis of liquid samples (transflectance) is recommended over dried samples (DESIR technique by
Efficient calculation of SAMPL4 hydration free energies using OMEGA, SZYBKI, QUACPAC, and Zap TK.
Ellingson, Benjamin A; Geballe, Matthew T; Wlodek, Stanislaw; Bayly, Christopher I; Skillman, A Geoffrey; Nicholls, Anthony
2014-03-01
Several submissions for the SAMPL4 hydration free energy set were calculated using OpenEye tools, including many that were among the top performing submissions. All of our best submissions used AM1BCC charges and Poisson-Boltzmann solvation. Three submissions used a single conformer for calculating the hydration free energy and all performed very well with mean unsigned errors ranging from 0.94 to 1.08 kcal/mol. These calculations were very fast, only requiring 0.5-2.0 s per molecule. We observed that our two single-conformer methodologies have different types of failure cases and that these differences could be exploited for determining when the methods are likely to have substantial errors.
Cognitive efficiency on a match to sample task decreases at the onset of puberty in children.
McGivern, Robert F; Andersen, Julie; Byrd, Desiree; Mutter, Kandis L; Reilly, Judy
2002-10-01
Electrocortical evidence indicates that a wave of synaptic proliferation occurs in the frontal lobes around the age of puberty onset. To study its potential influence on cognition, we examined 246 children (10-17 years) and 49 young adults (18-22 years) using a match-to-sample type of task, measuring reaction times for decisions about emotionally related information. Based upon the instruction set, subjects made a yes/no decision about the emotion expressed in a face, a word, or a face/word combination presented tachistoscopically for 100 ms. The faces were images of a single individual with a happy, angry, sad or neutral expression. The words were 'happy,' 'angry,' 'sad,' or 'neutral.' In the combined stimulus condition, subjects were asked to decide if the face and word matched for the same emotion. Results showed that, compared to the previous year, reaction times were significantly slower for making a correct decision at 11 and 12 years of age in girls and boys, the approximate ages of puberty onset. The peripubertal rise in reaction time declined slowly over the following 2-3 years and stabilized by 15 years of age. Analyses of the performance of 15-17 year olds revealed significantly longer reaction times in females to process both faces and words compared to males. However, this sex difference in late puberty appeared to be transient, since it was not present in 18-22 year olds. Given the match-to-sample nature of the task employed, the puberty-related increases in reaction time may reflect a relative inefficiency in frontal circuitry prior to the pruning of excess synaptic contacts.
Non-Equilibrium Properties from Equilibrium Free Energy Calculations
Pohorille, Andrew; Wilson, Michael A.
2012-01-01
Calculating free energy in computer simulations is of central importance in the statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity that characterizes the equilibrium state, but also because it often provides an efficient route to access dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it might be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested in the example of the electrodiffusion equation. Conductance of model ion channels has been calculated directly by counting the number of ion crossing events observed during long molecular dynamics simulations and has been compared with the conductance obtained from solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, demonstrating that the assumptions underlying the diffusion equation are fulfilled. Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments.
Boulyga, Sergei F; Heumann, Klaus G
2006-01-01
A method based on inductively coupled plasma mass spectrometry (ICP-MS) was developed which allows the measurement of (236)U at concentrations down to 3 x 10(-14) g g(-1) and extremely low (236)U/(238)U isotope ratios in soil samples of 10(-7). By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5,000 counts fg(-1) uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions UH(+)/U(+) down to a level of 10(-6). An abundance sensitivity of 3 x 10(-7) was observed for (236)U/(238)U isotope ratio measurements at mass resolution 4000. The detection limit for (236)U and the lowest detectable (236)U/(238)U isotope ratio were improved by more than two orders of magnitude compared with the corresponding values for alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the (236)U/(238)U isotope ratio is a much more sensitive and accurate marker of environmental contamination by spent uranium than the (235)U/(238)U isotope ratio. The ICP-MS technique allowed for the first time the detection of irradiated uranium in soil samples even at distances of more than 200 km to the north of the Chernobyl NPP (Mogilev region). The concentration of (236)U in the upper 0-10 cm soil layers varied from 2 x 10(-9) g g(-1) within radioactive spots close to the Chernobyl NPP to 3 x 10(-13) g g(-1) at a sampling site located more than 200 km from Chernobyl.
Efficient Sample Delay Calculation for 2-D and 3-D Ultrasound Imaging.
Ibrahim, Aya; Hager, Pascal A; Bartolini, Andrea; Angiolini, Federico; Arditi, Marcel; Thiran, Jean-Philippe; Benini, Luca; De Micheli, Giovanni
2017-08-01
Ultrasound imaging is a reference medical diagnostic technique, thanks to its blend of versatility, effectiveness, and moderate cost. The core computation of all ultrasound imaging methods is based on simple formulae, except for those required to calculate acoustic propagation delays with high precision and throughput. Unfortunately, advanced three-dimensional (3-D) systems require the calculation or storage of billions of such delay values per frame, which is a challenge. In 2-D systems, this requirement can be four orders of magnitude lower, but efficient computation is still crucial in view of low-power implementations that can be battery-operated, enabling usage in numerous additional scenarios. In this paper, we explore two smart designs of the delay generation function. To quantify their hardware cost, we implement them on FPGA and study their footprint and performance. We evaluate how these architectures scale to different ultrasound applications, from a low-power 2-D system to a next-generation 3-D machine. When using numerical approximations, we demonstrate the ability to generate delay values with sufficient throughput to support 10 000-channel 3-D imaging at up to 30 fps while using 63% of a Virtex 7 FPGA, requiring 24 MB of external memory accessed at about 32 GB/s bandwidth. Alternatively, with similar FPGA occupation, we show an exact calculation method that reaches 24 fps on 1225-channel 3-D imaging and does not require external memory at all. Both designs can be scaled to use a negligible amount of resources for 2-D imaging in low-power applications and for ultrafast 2-D imaging at hundreds of frames per second.
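The delay values in question come from a simple two-way path-length formula; the challenge the abstract describes is evaluating it billions of times per frame. A minimal sketch of the exact per-channel calculation, with an illustrative 2-D geometry (array pitch, focus position, and sampling rate are assumed here, not taken from the paper):

```python
import math

# Minimal sketch of receive-delay computation for delay-and-sum beamforming.
# Geometry and numbers are illustrative, not the paper's architectures.
SPEED_OF_SOUND = 1540.0  # m/s, typical for soft tissue

def rx_delay_samples(elem_x, focus_x, focus_z, fs):
    """Two-way path: transmit from the array centre (0, 0) to the focus,
    then back to the receiving element; returned as a sample count."""
    tx_path = math.hypot(focus_x, focus_z)
    rx_path = math.hypot(focus_x - elem_x, focus_z)
    return (tx_path + rx_path) / SPEED_OF_SOUND * fs

# 64-element aperture, 0.3 mm pitch, focus at (0, 30 mm), 40 MHz sampling
pitch = 0.3e-3
delays = [rx_delay_samples((i - 31.5) * pitch, 0.0, 30e-3, 40e6)
          for i in range(64)]
print(round(min(delays), 1), round(max(delays), 1))
```

Scaling this exact evaluation to thousands of channels and millions of focal points per 3-D frame is what motivates the approximate and hardware-oriented designs explored in the paper.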
Efendiev, Y.
2009-11-01
The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
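The two-stage idea, cheap coarse-scale screening followed by a fine-scale correction that keeps the target posterior exact, can be sketched in a toy 1-D setting. Both log-densities below are invented stand-ins for the coarse and resolved flow models:

```python
import math, random

# Toy sketch of a two-stage (delayed-acceptance) MCMC step: a cheap
# coarse-scale surrogate screens proposals before the expensive
# fine-scale model is evaluated. Both 1-D log-densities are invented.
def log_fine(x):   return -0.5 * x * x             # "resolved" model
def log_coarse(x): return -0.5 * (1.05 * x) ** 2   # biased cheap surrogate

def two_stage_step(x, step=1.0):
    y = x + random.uniform(-step, step)
    # Stage 1: Metropolis test with the coarse model only.
    if math.log(random.random()) >= log_coarse(y) - log_coarse(x):
        return x, 0                         # screened out: no fine solve
    # Stage 2: correction so the fine-scale posterior stays exact.
    log_a = (log_fine(y) - log_fine(x)) + (log_coarse(x) - log_coarse(y))
    return (y if math.log(random.random()) < log_a else x), 1

random.seed(0)
x, fine_solves, total, acc = 0.0, 0, 20000, 0.0
for _ in range(total):
    x, used_fine = two_stage_step(x)
    fine_solves += used_fine
    acc += x
print(round(acc / total, 2), fine_solves)  # mean near 0; fine solves < total
```

The saving is that rejected proposals in stage 1 never trigger a fine-scale evaluation, which is exactly the benefit claimed for expensive subsurface simulations.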
Ucar, Murat; Guryildirim, Melike; Tokgoz, Nil; Kilic, Koray; Borcek, Alp; Oner, Yusuf; Akkan, Koray; Tali, Turgut
2014-01-01
...) high-sampling-efficiency technique (sampling perfection with application optimized contrast using different flip angle evolutions [SPACE]) and T2-weighted (T2W) two-dimensional (2D) turbo spin echo (TSE...
Computationally efficient algorithm for high sampling-frequency operation of active noise control
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long, which increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency domain block ANC algorithms have been proposed in the past. These full-block frequency domain ANC algorithms suffer from several disadvantages, such as large block delay, quantization error due to the computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed, in which the long filters in ANC are divided into a number of equal partitions that are suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is reduced further by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination, yielding the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. A computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations of both proposed partitioned block ANC algorithms demonstrate their accuracy compared to the time domain FXLMS algorithm.
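The complexity argument can be illustrated with rough multiply counts per output sample. The formulas below are generic overlap-save estimates, not the paper's exact FPBFXLMS or RFPBFXLMS expressions:

```python
import math

# Back-of-the-envelope multiply counts per output sample (illustrative
# formulas only, not the paper's exact complexity expressions).
def fxlms_time_domain(L):
    """Filtering plus weight update for an L-tap time-domain filter."""
    return 2 * L

def partitioned_block_fft(L, B):
    """Overlap-save style: a few length-2B FFTs amortised over a block of
    B samples, with the L-tap filter split into ceil(L/B) partitions."""
    n_part = math.ceil(L / B)
    fft_cost = 2 * B * math.log2(2 * B)        # one length-2B FFT
    per_block = 5 * fft_cost + 8 * B * n_part  # FFTs + per-partition MACs
    return per_block / B

for L in (1024, 4096, 16384):
    print(L, fxlms_time_domain(L), round(partitioned_block_fft(L, 256), 1))
```

Even with these crude counts, the frequency-domain partitioned scheme scales far better than the time-domain filter as the filter order grows, while keeping the block delay at one partition length rather than the full filter length.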
Budka, Marcin; Gabrys, Bogdan
2013-01-01
Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers.
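For intuition only, here is a much simpler density-aware split in the same spirit: points are dealt into folds in order along a 1-D density proxy, so each fold spans the whole distribution. This is not the paper's correntropy-based DPS procedure, merely an illustration of "representative subsets":

```python
import random, statistics

# Illustrative only: a naive density-aware split that deals points into
# folds in sorted order, so every fold covers the data's distribution.
# This is NOT the correntropy-based DPS algorithm of the paper.
def representative_folds(data, k):
    ordered = sorted(data)          # proxy: order along the value axis
    folds = [[] for _ in range(k)]
    for i, x in enumerate(ordered):
        folds[i % k].append(x)      # round-robin deal
    return folds

random.seed(3)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
folds = representative_folds(data, 4)
print([round(statistics.mean(f), 2) for f in folds])  # fold means agree
```

Because every fold mirrors the full distribution, a single train/test split gives a low-variance error estimate, which is the property DPS exploits to avoid repeated cross-validation runs.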
Equilibrium and Sudden Events in Chemical Evolution
Weinberg, David H; Freudenburg, Jenna
2016-01-01
We present new analytic solutions for one-zone (fully mixed) chemical evolution models and explore their implications. In contrast to existing analytic models, we incorporate a realistic delay time distribution for Type Ia supernovae (SNIa) and can therefore track the separate evolution of $\\alpha$-elements produced by core collapse supernovae (CCSNe) and iron peak elements synthesized in both CCSNe and SNIa. In generic cases, $\\alpha$ and iron abundances evolve to an equilibrium at which element production is balanced by metal consumption and gas dilution, instead of continuing to increase over time. The equilibrium absolute abundances depend principally on supernova yields and the outflow mass loading parameter $\\eta$, while the equilibrium abundance ratio [$\\alpha$/Fe] depends mainly on yields and secondarily on star formation history. A stellar population can be metal-poor either because it has not yet evolved to equilibrium or because high outflow efficiency makes the equilibrium abundance itself low. Sy...
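Schematically, the equilibrium arises when metal production balances metal lock-up in stars, outflow, and dilution. For a constant star-formation rate the mass balance yields an equilibrium abundance of the following simplified form (notation assumed here, not reproduced from the paper):

```latex
% One-zone metal mass balance (schematic; symbols are assumptions):
% production - lock-up - outflow, with recycled fraction r
\dot{Z} \;\propto\; m_{\mathrm{cc}} - Z\,(1 + \eta - r)
% Setting \dot{Z} = 0 gives the CCSN-element equilibrium abundance
Z_{\mathrm{eq}} \;=\; \frac{m_{\mathrm{cc}}}{1 + \eta - r}
```

Here \(m_{\mathrm{cc}}\) is the IMF-averaged core-collapse yield, \(\eta\) the outflow mass-loading parameter, and \(r\) the recycled fraction, consistent with the abstract's statement that equilibrium abundances depend principally on yields and \(\eta\).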
Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn; Zhu, Weiliang, E-mail: wlzhu@mail.shcnc.ac.cn [ACS Key Laboratory of Receptor Research, Drug Discovery and Design Center, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, 555 Zuchongzhi Road, Shanghai 201203 (China)]; Shi, Jiye, E-mail: Jiye.Shi@ucb.com [UCB Pharma, 216 Bath Road, Slough SL1 4EN (United Kingdom)]
2015-03-28
The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge demand on computational resources, particularly when an explicit solvent model is used. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we used this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. Compared to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but nevertheless gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. The hybrid REMD can therefore greatly increase computational efficiency and thus extend the application of REMD simulation to larger protein systems.
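Both the standard and hybrid variants rest on the same Metropolis swap criterion between neighbouring temperatures. A minimal sketch with invented energies and temperatures (the criterion is standard; the numbers are not from the study):

```python
import math, random

# Standard temperature replica-exchange acceptance test (Metropolis
# criterion). Energies and temperatures below are invented examples.
K_B = 0.0019872041  # Boltzmann constant, kcal/(mol*K)

def swap_accepted(E_i, E_j, T_i, T_j):
    """Accept a swap between replicas i and j with probability
    min(1, exp(-(beta_i - beta_j) * (E_j - E_i)))."""
    delta = (1.0 / (K_B * T_i) - 1.0 / (K_B * T_j)) * (E_j - E_i)
    return delta <= 0 or random.random() < math.exp(-delta)

random.seed(1)
# Replica at 300 K with E = -120 kcal/mol vs replica at 320 K with E = -118
n_trials = 10000
accepts = sum(swap_accepted(-120.0, -118.0, 300.0, 320.0)
              for _ in range(n_trials))
print(accepts / n_trials)  # roughly exp(-delta) ~ 0.81 for these numbers
```

Reducing the replica count, as the hybrid solvent method does, widens the temperature gaps; the criterion above shows why that only works if the energy distributions of neighbouring replicas still overlap.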
Thermodynamics and fluctuations far from equilibrium
Ross, John
2008-01-01
This book deals with the formulation of the thermodynamics of chemical and other systems far from equilibrium, including connections to fluctuations. It contains applications to non-equilibrium stationary states and approaches to such states, systems with multiple stationary states, stability and equi-stability conditions, reaction diffusion systems, transport properties, and electrochemical systems. The theoretical treatment is complemented by experimental results to substantiate the formulation. Dissipation and efficiency are analyzed in autonomous and externally forced reactions, including several biochemical systems.
Aberer, Andre J; Stamatakis, Alexandros; Ronquist, Fredrik
2016-01-01
Sampling tree space is the most challenging aspect of Bayesian phylogenetic inference. The sheer number of alternative topologies is problematic by itself. In addition, the complex dependency between branch lengths and topology increases the difficulty of moving efficiently among topologies. Current tree proposals are fast but sample new trees using primitive transformations or re-mappings of old branch lengths. This reduces acceptance rates and presumably slows down convergence and mixing. Here, we explore branch proposals that do not rely on old branch lengths but instead are based on approximations of the conditional posterior. Using a diverse set of empirical data sets, we show that most conditional branch posteriors can be accurately approximated via a [Formula: see text] distribution. We empirically determine the relationship between the logarithmic conditional posterior density, its derivatives, and the characteristics of the branch posterior. We use these relationships to derive an independence sampler for proposing branches with an acceptance ratio of ~90% on most data sets. This proposal samples branches between 2× and 3× more efficiently than traditional proposals with respect to the effective sample size per unit of runtime. We also compare the performance of standard topology proposals with hybrid proposals that use the new independence sampler to update those branches that are most affected by the topological change. Our results show that hybrid proposals can sometimes noticeably decrease the number of generations necessary for topological convergence. Inconsistent performance gains indicate that branch updates are not the limiting factor in improving topological convergence for the currently employed set of proposals. However, our independence sampler might be essential for the construction of novel tree proposals that apply more radical topology changes.
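The core of such an independence sampler is to propose from a fitted distribution and correct with the usual Metropolis-Hastings ratio. A toy sketch with an invented branch-length posterior and a deliberately slightly mismatched Gamma proposal, so acceptance is high but below one; this is not the paper's posterior-fitting procedure:

```python
import math, random

# Sketch of an independence sampler: propose branch lengths from a Gamma
# approximation to the conditional posterior, accept via the MH ratio.
# The target and the Gamma parameters are invented for illustration.
def log_target(b):                 # made-up branch-length log-posterior
    return 2.0 * math.log(b) - 5.0 * b if b > 0 else float("-inf")

SHAPE, RATE = 2.5, 4.5             # slightly mismatched Gamma proposal

def log_gamma_pdf(b):
    return (SHAPE * math.log(RATE) - math.lgamma(SHAPE)
            + (SHAPE - 1.0) * math.log(b) - RATE * b)

def independence_step(b):
    y = random.gammavariate(SHAPE, 1.0 / RATE)
    log_a = ((log_target(y) - log_target(b))
             + (log_gamma_pdf(b) - log_gamma_pdf(y)))
    return y if math.log(random.random()) < log_a else b

random.seed(2)
b, accepted, steps = 0.5, 0, 5000
for _ in range(steps):
    new_b = independence_step(b)
    accepted += new_b != b
    b = new_b
rate = accepted / steps
print(round(rate, 2))  # high acceptance: proposal nearly matches the target
```

The better the fitted proposal matches the conditional posterior, the closer this acceptance rate climbs toward one, which is how the reported ~90% acceptance translates into large effective-sample-size gains.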
Chowdhury, Muhammed Alamgir Zaman; Jahan, Iffat; Karim, Nurul; Alam, Mohammad Khorshed; Rahman, Mohammad Abdur; Moniruzzaman, Mohammed; Gan, Siew Hua; Fakhruddin, Abu Naieum Muhammad
2014-01-01
In the present study, the residual pesticide levels were determined in eggplants (Solanum melongena) (n = 16), purchased from four different markets in Dhaka, Bangladesh. The carbamate and organophosphorus pesticide residual levels were determined by high performance liquid chromatography (HPLC), and the efficiency of gamma radiation on pesticide removal in three different types of vegetables was also studied. Many (50%) of the samples contained pesticides, and three samples had residual levels above the maximum residue levels determined by the World Health Organisation. Three carbamates (carbaryl, carbofuran, and pirimicarb) and six organophosphates (phenthoate, diazinon, parathion, dimethoate, phosphamidon, and pirimiphos-methyl) were detected in eggplant samples; the highest carbofuran level detected was 1.86 mg/kg, while phenthoate was detected at 0.311 mg/kg. Gamma radiation decreased pesticide levels proportionately with increasing radiation doses. Diazinon, chlorpyrifos, and phosphamidon were reduced by 40–48%, 35–43%, and 30–45%, respectively, when a radiation strength of 0.5 kGy was utilized. However, when the radiation dose was increased to 1.0 kGy, the levels of the pesticides were reduced to 85–90%, 80–91%, and 90–95%, respectively. In summary, our study revealed that pesticide residues are present at high amounts in vegetable samples and that gamma radiation at 1.0 kGy can remove 80–95% of some pesticides. PMID:24711991
Berty, J.M.; Krishnan, C.; Elliott, J.R. Jr. (Berty Reaction Engineers, Ltd. (USA))
1990-10-01
Methanol is synthesised catalytically from H{sub 2}, CO and CO{sub 2}. Equilibrium considerations dictated the use of high pressures until the advent of copper-based catalysts. But equilibrium problems still exist; single pass conversions of CO and H{sub 2} are low, typically 30-40%. A solvent methanol process (SMP) is proposed to overcome existing problems. A high-boiling inert solvent is introduced with the synthesis gas. The solvent selectively absorbs CH{sub 3}OH, thus shifting the equilibrium towards the product. The strongest solvent identified and tested is tetraethyleneglycol dimethyl ether (tetraglyme). 24 refs., 4 figs., 2 tabs.
Chemical Principles Revisited: Chemical Equilibrium.
Mickey, Charles D.
1980-01-01
Describes: (1) Law of Mass Action; (2) equilibrium constant and ideal behavior; (3) general form of the equilibrium constant; (4) forward and reverse reactions; (5) factors influencing equilibrium; (6) Le Chatelier's principle; (7) effects of temperature, changing concentration, and pressure on equilibrium; and (8) catalysts and equilibrium. (JN)
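Items (3) through (6) in the list above condense into a few lines of arithmetic: the reaction quotient Q has the same general form as the equilibrium constant, and comparing Q with K predicts the direction of shift (Le Chatelier's principle). The stoichiometry and numbers below are a hypothetical worked example:

```python
# Worked illustration of the general equilibrium-constant form and
# Le Chatelier's principle for N2 + 3 H2 <=> 2 NH3 (numbers invented).
def reaction_quotient(conc, stoich):
    """Q = product of [species]**nu; nu > 0 for products, < 0 for reactants."""
    q = 1.0
    for species, nu in stoich.items():
        q *= conc[species] ** nu
    return q

STOICH = {"NH3": 2, "N2": -1, "H2": -3}
K_EQ = 0.5  # hypothetical equilibrium constant at some fixed temperature

conc = {"N2": 1.0, "H2": 1.0, "NH3": 0.2}
Q = reaction_quotient(conc, STOICH)
# Q < K: reaction proceeds forward; Q > K: it proceeds in reverse.
print(Q, "forward" if Q < K_EQ else "reverse")
```

Raising a reactant concentration lowers Q and pushes the system forward, which is Le Chatelier's principle stated quantitatively.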
Thermodynamics "beyond" local equilibrium
Vilar, Jose; Rubi, Miguel
2002-03-01
Nonequilibrium thermodynamics has shown its applicability in a wide variety of different situations pertaining to fields such as physics, chemistry, biology, and engineering. As successful as it is, however, its current formulation considers only systems close to equilibrium, those satisfying the so-called local equilibrium hypothesis. Here we show that diffusion processes that occur far away from equilibrium can be viewed as at local equilibrium in a space that includes all the relevant variables in addition to the spatial coordinate. In this way, nonequilibrium thermodynamics can be used and the difficulties and ambiguities associated with the lack of a thermodynamic description disappear. We analyze explicitly the inertial effects in diffusion and outline how the main ideas can be applied to other situations. [J.M.G. Vilar and J.M. Rubi, Proc. Natl. Acad. Sci. USA 98, 11081-11084 (2001)].
Katalin Martinás
2007-02-01
A microeconomic, agent-based framework for dynamic economics is formulated in a materialist approach, and an axiomatic foundation of a non-equilibrium microeconomics is outlined. Economic activity is modelled as transformation and transport of commodities (materials) owned by the agents. The rate of transformation (production intensity) and the rate of transport (trade) are defined by the agents. Economic decision rules are derived from observed economic behaviour. The non-linear equations are solved numerically for a model economy. Numerical solutions for simple model economies suggest that some of the results of general equilibrium economics are consequences only of the equilibrium hypothesis. We show that perfect competition of selfish agents does not guarantee the stability of economic equilibrium; cooperativity is needed, too.
Response reactions: equilibrium coupling.
Hoffmann, Eufrozina A; Nagypal, Istvan
2006-06-01
It is pointed out and illustrated in the present paper that if a homogeneous multiple equilibrium system containing k components and q species is composed of the reactants actually taken and their reactions contain only k + 1 species, then we have a unique representation with (q - k) stoichiometrically independent reactions (SIRs). We define these as coupling reactions. All the other possible combinations with k + 1 species are the coupled reactions that are in equilibrium when the (q - k) SIRs are in equilibrium. The response of the equilibrium state for perturbation is determined by the coupling and coupled equilibria. Depending on the circumstances and the actual thermodynamic data, the effect of coupled equilibria may overtake the effect of the coupling ones, leading to phenomena that are in apparent contradiction with Le Chatelier's principle.
Nascimento, Mariana A; Magri, Maria E; Schissi, Camila D; Barardi, Célia Rm
2015-02-22
In Brazil, ordinance no. 2,914/2011 of the Ministry of Health requires the absence of total coliforms and Escherichia coli (E. coli) in treated water. However it is essential that water treatment is effective against all pathogens. Disinfection in Water Treatment Plants (WTP) is commonly performed with chlorine. The recombinant adenovirus (rAdV), which expresses green fluorescent protein (GFP) when cultivated in HEK 293A cells, was chosen as a model to evaluate the efficiency of chlorine for human adenovirus (HAdV) inactivation in filtered water samples from two WTPs: Lagoa do Peri (pH 6.9) and Morro dos Quadros (pH 6.5). Buffered demand free (BDF) water (pH 6.9 and 8.0) was used as control. The samples were previously submitted to physicochemical characterization, and bacteriological analysis. Two free chlorine concentrations and two temperatures were assayed for all samples (0.2 mg/L, 0.5 mg/L, and 15°C, and 20°C). Fluorescence microscopy (FM) was used to check viral infectivity in vitro and qPCR as a molecular method to determine viral genome copies. Real treated water samples from the WTP (at the output of WTP and the distribution network) were also evaluated for total coliforms, E. coli and HAdV. The time required to inactivate 4log₁₀ of rAdV was less than 1 min, when analyzed by FM, except for BDF pH 8.0 (up to 2.5 min for 4log₁₀). The pH had a significant influence on the efficiency of disinfection. The qPCR assay was not able to provide information regarding rAdV inactivation. The data were modeled (Chick-Watson), and the observed Ct values were comparable with the values reported in the literature and smaller than the values recommended by the EPA. In the treated water samples, HAdV was detected in the distribution network of the WTP Morro dos Quadros (2.75 × 10(3) PFU/L). The Chick-Watson model proved to have adjusted well to the experimental conditions used, and it was possible to prove that the adenoviruses were rapidly inactivated in the
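The Chick-Watson model referenced above is a one-line kinetic law relating the log-reduction to the disinfectant concentration and contact time. A minimal sketch with an invented rate constant, not the study's fitted value:

```python
# Chick-Watson disinfection kinetics: log10(N/N0) = -k * C**n * t,
# where C is the free-chlorine residual and t the contact time.
# The rate constant below is invented for illustration only.
def log10_survival(k, C, t, n=1.0):
    """Predicted log10(N/N0) after contact time t (min) at residual C (mg/L)."""
    return -k * (C ** n) * t

def ct_for_reduction(k, log10_red, C, n=1.0):
    """Contact time (min) needed for a given log10 reduction at residual C."""
    return log10_red / (k * C ** n)

k = 8.0  # L/(mg*min), hypothetical first-order rate constant
t_4log = ct_for_reduction(k, 4.0, 0.5)
print(t_4log)  # minutes for a 4-log10 reduction at 0.5 mg/L
```

Fitting k (and the dilution exponent n) to measured survival data is what allows the observed Ct values to be compared with literature and EPA recommendations, as done in the study.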
Heredia, Norma; Solís-Soto, Luisa; Venegas, Fabiola; Bartz, Faith E; de Aceituno, Anna Fabiszewski; Jaykus, Lee-Ann; Leon, Juan S; García, Santos
2015-03-01
Several methods have been described to prepare fresh produce samples for microbiological analysis, each with its own advantages and disadvantages. The aim of this study was to compare the performance of a novel combined rinse and membrane filtration method with two alternative sample preparation methods for the quantification of indicator microorganisms from fresh produce. Decontaminated cantaloupe melons and jalapeño peppers were surface inoculated with a cocktail containing 10(6) CFU/ml Escherichia coli, Salmonella Typhimurium, and Enterococcus faecalis. Samples were processed using a rinse and filtration method, homogenization by stomacher, or a sponge-rubbing method, followed by quantification of bacterial load using culture methods. Recovery efficiencies of the three methods were compared. On inoculated cantaloupes, the rinse and filtration method had higher recovery of coliforms (0.95 log CFU/ml higher recovery, P = 0.0193) than the sponge-rubbing method. Similarly, on inoculated jalapeños, the rinse and filtration method had higher recovery of coliforms (0.84 log CFU/ml higher, P = 0.0130) and E. coli (1.46 log CFU/ml higher). The rinse and filtration method also outperformed the homogenization method for all three indicators (0.79 to 1.71 log CFU/ml higher, P values ranging from 0.0075 to 0.0002). The precision of the three methods was also compared. The precision of the rinse and filtration method was similar to that of the other methods for recovery of two of three indicators from cantaloupe (E. coli P = 0.7685, E. faecalis P = 0.1545) and was better for recovery of two of three indicators from jalapeño (coliforms P = 0.0026, E. coli P = 0.0243). Overall, the rinse and filtration method performed equivalently to, and sometimes better than, either of the compared methods. It may also have logistical advantages when processing large numbers of samples, improving sampling efficiency and facilitating microbial detection.
Vorng, Jean-Luc; Kotowska, Anna M; Passarelli, Melissa K; West, Andrew; Marshall, Peter S; Havelund, Rasmus; Seah, Martin P; Dollery, Colin T; Rakowska, Paulina D; Gilmore, Ian S
2016-11-15
There is an increasing need in the pharmaceutical industry to reduce drug failure at late stage and thus reduce the cost of developing a new medicine. Since most drug targets are intracellular, this requires a better understanding of the drug disposition within a cell. Secondary ion mass spectrometry has been identified as a potentially important technique to do this, as it is label-free and allows imaging in 3D with subcellular resolution and recent studies have shown promise for amiodarone. An important analytical parameter is sensitivity, and we measure this in a bovine liver homogenate reference sample for 20 drugs representing important class types relevant to the pharmaceutical industry. We also measure the sensitivity for pure drug and show, for the first time, that the secondary ion mass spectrometry (SIMS) positive ionization efficiency for small molecules is a simple power-law relationship to the log P value. This discovery will be important for advancing the understanding of the SIMS ionization process in small molecules that has, until now, been elusive. This simple relationship is found to hold true for drug doped in the bovine liver homogenate reference sample, except for fluticasone, nicardipine, and sorafenib which suffer from severe matrix suppression. This relationship provides a simple semiempirical method to determine drug sensitivity for positive secondary ions. Furthermore, we show, on chosen models, how the use of different solvents during sample preparation can affect the ionization of analytes.
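A power-law relationship of the kind reported is conveniently fitted by linear regression in log-log space. A self-contained sketch on synthetic, noise-free data (the coefficients are invented, not the measured SIMS ionization efficiencies):

```python
import math

# Sketch: fitting a power law  Y = c * x**m  (here, secondary-ion yield
# vs a drug's log P) by ordinary least squares in log-log space.
# The data are synthetic, generated from c = 3, m = 2 with no noise.
def fit_power_law(x, y):
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    sx, sy = sum(lx), sum(ly)
    sxx = sum(v * v for v in lx)
    sxy = sum(a * b for a, b in zip(lx, ly))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = exponent
    c = math.exp((sy - m * sx) / n)                # intercept = prefactor
    return c, m

logp = [0.5, 1.0, 2.0, 3.0, 4.5]
yields = [3.0 * v ** 2 for v in logp]
c, m = fit_power_law(logp, yields)
print(round(c, 3), round(m, 3))  # recovers c = 3.0, m = 2.0
```

With real measurements, points falling well below the fitted line (as for fluticasone, nicardipine, and sorafenib in the matrix) flag matrix suppression rather than a failure of the power law.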
Equilibrium statistical mechanics
Mayer, J E
1968-01-01
The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanical is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight t
Yang Hsin-Chou
2012-07-01
accuracies and a reduced number of selected markers in AIM panels. Conclusions: Integrative analysis of SNP and GE markers provides high-accuracy and/or cost-effective classification results for assigning samples from closely related or distantly related ancestral lineages to their original ancestral populations. The user-friendly BIASLESS (Biomarkers Identification and Samples Subdivision) software was developed as an efficient tool for selecting key SNP and/or GE markers and then building models for sample subdivision. BIASLESS was programmed in R and R-GUI and is available online at http://www.stat.sinica.edu.tw/hsinchou/genetics/prediction/BIASLESS.htm.
Zhao, Y.; Aarnink, A.J.A.; Doornenbal, P.; Huynh, T.T.T.; Groot Koerkamp, P.W.G.; Jong, de M.C.M.; Landman, W.J.M.
2011-01-01
By sampling aerosolized microorganisms, the efficiency of a bioaerosol sampler can be calculated from its ability both to collect microorganisms and to preserve their culturability during the sampling process. However, culturability losses in the non-sampling processes should not be counted.
Shahla Elhami
2014-07-01
A fast and efficient method has been developed for the removal of Alizarin Yellow dye using modified nanoclay. Montmorillonite (MMT) was modified in a facile, one-step procedure with diethylenetriamine (DETA) and used as an adsorbent. The effects of the pH of the dye solution, adsorbent dose, adsorption time, and initial dye concentration on the adsorption of Alizarin Yellow onto the composite were investigated. The DETA-MMT had a high uptake capacity at room temperature and could remove about 85% of the Alizarin Yellow dye with 6 g/L of adsorbent in only 2 min. Langmuir and Freundlich isotherms were employed to study the adsorption of Alizarin Yellow dye onto DETA-MMT. The method was applied to the removal of Alizarin Yellow from different tap water, river water, and industrial wastewater samples.
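The two isotherms used in the study have standard closed forms; the parameter values below are invented for illustration, not the fitted DETA-MMT values:

```python
# Standard adsorption isotherm forms used to model dye uptake.
# Parameter values are invented, not those fitted in the study.
def langmuir(Ce, qmax, KL):
    """q_e = qmax * KL * Ce / (1 + KL * Ce): monolayer saturation model.
    Ce: equilibrium concentration (mg/L); qmax: capacity (mg/g)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """q_e = KF * Ce**(1/n): empirical multilayer/heterogeneous model."""
    return KF * Ce ** (1.0 / n)

for Ce in (1.0, 5.0, 20.0):
    print(Ce,
          round(langmuir(Ce, 50.0, 0.3), 2),
          round(freundlich(Ce, 12.0, 2.0), 2))
```

Fitting both forms to the measured uptake data and comparing residuals is how studies like this decide which isotherm better describes the adsorbent.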
Jafari, Mohammad T; Saraji, Mohammad; Mossaddegh, Mehdi
2016-09-30
Two well-known microextraction methods, dispersive liquid-liquid microextraction (DLLME) and solid-phase microextraction (SPME), were combined into a promising method. The method, named DLLME-SPME, was performed based on the total vaporization technique. For the DLLME step, 1,1,2,2-tetrachloroethane and acetonitrile were used as the extraction and disperser solvents, respectively. Halloysite nanotubes-titanium dioxide was used as the fiber coating in the SPME step. The method was applied to the extraction of diazinon and parathion (as the test compounds) from environmental water samples and fruit juices, with gas chromatography-corona discharge ion mobility spectrometry as the determination apparatus. The parameters affecting extraction efficiency (desorption temperature and time, extraction temperature and time, and the volume of the extraction solvent in the DLLME step) were optimized. The intra-day relative standard deviations (RSDs) were 4-7% and 6-8% for diazinon and parathion, respectively; the inter-day RSDs were 7-9% and 8-10%. The limits of quantification and detection were 0.015 and 0.005 μg L(-1) for diazinon, and 0.020 and 0.007 μg L(-1) for parathion. Good linearity (r(2) > 0.993) was obtained over the ranges 0.015-3.000 and 0.020-3.000 μg L(-1) for diazinon and parathion, respectively. High enrichment factors of 3150 and 2965 were obtained for diazinon and parathion, respectively. The method showed high sensitivity, with good recoveries (between 87 and 99%) for the extraction of the target analytes from real samples. Overall, the results revealed that the developed DLLME-SPME method had better extraction efficiency than DLLME or SPME alone.
Local equilibrium in bird flocks
Mora, Thierry; Walczak, Aleksandra M.; Del Castello, Lorenzo; Ginelli, Francesco; Melillo, Stefania; Parisi, Leonardo; Viale, Massimiliano; Cavagna, Andrea; Giardina, Irene
2016-12-01
The correlated motion of flocks is an example of global order emerging from local interactions. An essential difference with respect to analogous ferromagnetic systems is that flocks are active: animals move relative to each other, dynamically rearranging their interaction network. This non-equilibrium characteristic has been studied theoretically, but its impact on actual animal groups remains to be fully explored experimentally. Here, we introduce a novel dynamical inference technique, based on the principle of maximum entropy, which accommodates network rearrangements and overcomes the problem of slow experimental sampling rates. We use this method to infer the strength and range of alignment forces from data of starling flocks. We find that local bird alignment occurs on a much faster timescale than neighbour rearrangement. Accordingly, equilibrium inference, which assumes a fixed interaction network, gives results consistent with dynamical inference. We conclude that bird orientations are in a state of local quasi-equilibrium over the interaction length scale, providing firm ground for the applicability of statistical physics in certain active systems.
Li, Angsheng; Zhang, Xiaohui; Pan, Yicheng; Peng, Pan
2014-12-01
It appears to be a universal phenomenon of networks that attacks on a small number of nodes by an adversary player, Alice, may generate a global cascading failure of the network. It has been shown (Li et al., 2013) that classic scale-free networks (Barabási and Albert, 1999; Barabási, 2009) are insecure against attacks on as few as O(log n) nodes. This poses a natural and fundamental question: can we introduce a second player, Bob, to prevent Alice from causing global cascading failure of the network? We propose a game on networks. We say that a network has an equilibrium game if the second player, Bob, has a strategy to balance the cascading influence of attacks by the adversary player, Alice. It is shown that networks of the preferential attachment model (Barabási and Albert, 1999) fail to have equilibrium games; that random graphs of the Erdös-Rényi model (Erdös and Rényi, 1959, 1960) do have them, with randomness as the underlying mechanism; and that homophyly networks (Li et al., 2013) have equilibrium games, with homophyly and preferential attachment as the underlying mechanisms. We found that some real networks have equilibrium games, but most do not. We anticipate that our results lead to an interesting new direction of network theory: equilibrium games in networks.
McParland, S; Berry, D P
2016-05-01
Knowledge of animal-level and herd-level energy intake, energy balance, and feed efficiency affects day-to-day herd management strategies; information on these traits at an individual animal level is also useful in animal breeding programs. A paucity of data on feed intake in particular (especially at the individual cow level) hinders the inclusion of such attributes in herd management decision-support tools and breeding programs. Dairy producers have access to an individual cow milk sample at least once daily during lactation, and consequently any low-cost phenotyping strategy should consider exploiting measurable properties in this biological sample, reflecting the physiological status and performance of the cow. Infrared spectroscopy is the study of the interaction of an electromagnetic wave with matter, and it is used globally to predict milk quality parameters on routinely acquired individual cow milk samples and bulk tank samples. Thus, exploiting infrared spectroscopy in next-generation phenotyping will ensure potentially rapid application globally with a negligible additional implementation cost, as the infrastructure already exists. Fourier-transform infrared spectroscopy (FTIRS) analysis is already used to predict milk fat and protein concentrations, the ratio of which has been proposed as an indicator of energy balance. Milk FTIRS is also able to predict the concentration of various fatty acids in milk, the composition of which is known to change when body tissue is mobilized; that is, when the cow is in negative energy balance. Energy balance is mathematically very similar to residual energy intake (REI), a suggested measure of feed efficiency. Therefore, the prediction of energy intake, energy balance, and feed efficiency (i.e., REI) from milk FTIRS seems logical. In fact, the accuracy of predicting (i.e., correlation between predicted and actual values; root mean square error in parentheses) energy intake, energy balance, and REI from milk FTIRS in
Eberl, Gérard
2016-08-01
The classical model of immunity posits that the immune system reacts to pathogens and injury and restores homeostasis. Indeed, a century of research has uncovered the means and mechanisms by which the immune system recognizes danger and regulates its own activity. However, this classical model does not fully explain complex phenomena, such as tolerance, allergy, the increased prevalence of inflammatory pathologies in industrialized nations and immunity to multiple infections. In this Essay, I propose a model of immunity that is based on equilibrium, in which the healthy immune system is always active and in a state of dynamic equilibrium between antagonistic types of response. This equilibrium is regulated both by the internal milieu and by the microbial environment. As a result, alteration of the internal milieu or microbial environment leads to immune disequilibrium, which determines tolerance, protective immunity and inflammatory pathology.
Neuman, M.W.
1982-01-01
The conundrum of blood undersaturation with respect to bone mineralization and its supersaturation with respect to bone's homeostatic function has acquired a new equation. On the supply side, Ca²⁺ is pumped in across bone cells to provide the needed Ca²⁺ × Pᵢ for brushite precipitation. On the demand side, blood is in equilibrium with bone fluid, which is in equilibrium with a mineral more soluble than apatite. The function of potassium in this equation is yet to be found.
An analytical model of crater count equilibrium
Hirabayashi, Masatoshi; Minton, David A.; Fassett, Caleb I.
2017-06-01
Crater count equilibrium occurs when new craters form at the same rate that old craters are erased, such that the total number of observable impacts remains constant. Despite substantial efforts to understand this process, there remain many unsolved problems. Here, we propose an analytical model that describes how a heavily cratered surface reaches a state of crater count equilibrium. The proposed model formulates three physical processes contributing to crater count equilibrium: cookie-cutting (simple, geometric overlap), ejecta-blanketing, and sandblasting (diffusive erosion). These three processes are modeled using a degradation parameter that describes the efficiency for a new crater to erase old craters. The flexibility of our newly developed model allows us to represent the processes that underlie crater count equilibrium problems. The results show that when the slope of the production function is steeper than that of the equilibrium state, the power law of the equilibrium slope is independent of that of the production function slope. We apply our model to the cratering conditions in the Sinus Medii region and at the Apollo 15 landing site on the Moon and demonstrate that a consistent degradation parameterization can successfully be determined based on the empirical results of these regions. Further developments of this model will enable us to better understand the surface evolution of airless bodies due to impact bombardment.
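Of the three processes, cookie-cutting is simple enough to illustrate with a toy Monte Carlo. The sketch below is not the authors' analytical model: the unit domain, the single fixed crater radius, and the center-within-one-diameter erasure rule are all illustrative assumptions. It shows the defining feature of crater count equilibrium, namely that the observable count saturates once new craters erase old ones as fast as they form.

```python
import random

def simulate_cookie_cutting(n_impacts=20000, radius=0.02, seed=1):
    """Toy cookie-cutting model: equal-size craters land uniformly on a
    unit square, and each new crater erases any older crater whose center
    lies within one crater diameter of the new center."""
    random.seed(seed)
    craters = []          # (x, y) centers of still-observable craters
    counts = []           # observable count after each impact
    for _ in range(n_impacts):
        x, y = random.random(), random.random()
        # geometric overlap ("cookie-cutting") erases older craters
        craters = [(cx, cy) for (cx, cy) in craters
                   if (cx - x) ** 2 + (cy - y) ** 2 > (2 * radius) ** 2]
        craters.append((x, y))
        counts.append(len(craters))
    return counts

counts = simulate_cookie_cutting()
# the count stops growing once erasure balances production
```

For this erasure rule the equilibrium density is set by the balance of production and erasure rates, roughly one crater per erasure area, 1/(π(2r)²) ≈ 200 craters here, regardless of how many further impacts accumulate.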
Riesgo, Ana; Andrade, Sónia C S; Sharma, Prashant P; Novo, Marta; Pérez-Porro, Alicia R; Vahtera, Varpu; González, Vanessa L; Kawauchi, Gisele Y; Giribet, Gonzalo
2012-11-29
fragments) and protein families for ten newly sequenced non-model organisms, some of commercial importance (i.e., Octopus vulgaris). These comprehensive sets of genes can be readily used for phylogenetic analysis, gene expression profiling, developmental analysis, and can also be a powerful resource for gene discovery. The characterization of the transcriptomes of such a diverse array of animal species permitted the comparison of sequencing depth, functional annotation, and efficiency of genomic sampling using the same pipelines, which proved to be similar for all considered species. In addition, the datasets revealed their potential as a resource for paralogue detection, a recurrent concern in various aspects of biological inquiry, including phylogenetics, molecular evolution, development, and cellular biochemistry.
Edmonds, Jason M
2009-10-01
The recovery operations following the 2001 attacks with Bacillus anthracis spores were complicated due to the unprecedented need for large-area surface sampling and decontamination protocols. Since this event, multiple reports have been published describing recovery efficiencies of several surface sampling materials. These materials include fibrous swabs of various compositions, cloth wipes, vacuum socks, and adhesive tapes. These materials have reported recovery efficiencies ranging from approximately 20% to 90% due to the many variations in their respective studies including sampling material, composition of surface sampled, concentration of contaminant, and even the method of deposition and sample processing. Additionally, the term recovery efficiency is crudely defined and could be better constructed to incorporate variations in contaminated surface composition and end user needs. While significant efforts in devising protocols for large-area surface sampling have been undertaken in the years since the anthrax attacks, there is still a general lack of consensus in optimal sampling materials and the methodology in which they are evaluated. Fortunately, sampling efforts are continuing to be supported, and the knowledge gaps in our procedures, methodology, and general understanding of sampling mechanisms are being investigated which will leave us better prepared for the future.
Problems in equilibrium theory
Aliprantis, Charalambos D
1996-01-01
In studying General Equilibrium Theory the student must master first the theory and then apply it to solve problems. At the graduate level there is no book devoted exclusively to teaching problem solving. This book teaches for the first time the basic methods of proof and problem solving in General Equilibrium Theory. The problems cover the entire spectrum of difficulty; some are routine, some require a good grasp of the material involved, and some are exceptionally challenging. The book presents complete solutions to two hundred problems. In searching for the basic required techniques, the student will find a wealth of new material incorporated into the solutions. The student is challenged to produce solutions which are different from the ones presented in the book.
Bounded Computational Capacity Equilibrium
Hernandez, Penelope
2010-01-01
We study repeated games played by players with bounded computational power, where, in contrast to Abreu and Rubinstein (1988), the memory is costly. We prove a folk theorem: the limit set of equilibrium payoffs in mixed strategies, as the cost of memory goes to 0, includes the set of feasible and individually rational payoffs. This result stands in sharp contrast to Abreu and Rubinstein (1988), who proved that when memory is free, the set of equilibrium payoffs in repeated games played by players with bounded computational power is a strict subset of the set of feasible and individually rational payoffs. Our result emphasizes the role of memory cost and of mixing when players have bounded computational power.
Claassen, Shantelle; du Toit, Elloise; Kaba, Mamadou; Moodley, Clinton; Zar, Heather J; Nicol, Mark P
2013-08-01
Differences in the composition of the gut microbiota have been associated with a range of diseases using culture-independent methods. Reliable extraction of nucleic acid is a key step in identifying the composition of the faecal microbiota. Five widely used commercial deoxyribonucleic acid (DNA) extraction kits (QIAsymphony® Virus/Bacteria Midi Kit (kit QS), ZR Fecal DNA MiniPrep™ (kit Z), QIAamp® DNA Stool Mini Kit (kit QA), Ultraclean® Fecal DNA Isolation Kit (kit U) and PowerSoil® DNA Isolation Kit (kit P)) were evaluated using human faecal samples. Yield, purity and integrity of total genomic DNA were compared spectrophotometrically and using gel electrophoresis. Three bacteria commonly found in human faeces were quantified using real-time polymerase chain reaction (qPCR), and total bacterial diversity was studied using denaturing gradient gel electrophoresis (DGGE) as well as terminal restriction fragment length polymorphism (T-RFLP). The measurements of DNA yield and purity exhibited variations between the five kits tested in this study. Automated kit QS exhibited the best quality and highest quantity of DNA. All kits were shown to be reproducible, with CV values ≤ 0.46 for DNA extraction. qPCR results showed that all kits were uniformly efficient for extracting DNA from the selected target bacteria. DGGE and T-RFLP produced the highest diversity scores for DNA extracted using kit Z (H'=2.30 and 1.27) and kit QS (H'=2.16 and 0.94), which also extracted the highest DNA yields compared to the other kits assessed.
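The diversity scores H' reported for DGGE and T-RFLP are Shannon indices, computed from the relative abundances of bands or fragments. A minimal sketch of the calculation; the band intensities below are hypothetical, not data from the study:

```python
import math

def shannon_diversity(abundances):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over relative abundances."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

# hypothetical band intensity profiles for two extraction kits
even_profile = [12, 9, 7, 7, 5, 4, 3, 3, 2, 2]    # many bands, fairly even
skewed_profile = [40, 30, 20, 10]                  # few bands, uneven
assert shannon_diversity(even_profile) > shannon_diversity(skewed_profile)
```

A richer, more even community profile yields a higher H', which is why the kits recovering the most diverse DNA score highest.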
General Search Market Equilibrium
Albrecht, James W.; Axell, Bo
1982-01-01
In this paper we extend models of “search market equilibrium” to incorporate general equilibrium considerations. The model we treat is one with a single product market and a single labor market. Imperfectly informed individuals follow optimal strategies in searching for a suitably low price and high wage. For any distribution of price and wage offers across firms these optimal strategies generate product demand and labor supply schedules. Firms then choose prices and wages to maximize expecte...
Equilibrium statistical mechanics
Jackson, E Atlee
2000-01-01
Ideal as an elementary introduction to equilibrium statistical mechanics, this volume covers both classical and quantum methodology for open and closed systems. Introductory chapters familiarize readers with probability and microscopic models of systems, while additional chapters describe the general derivation of the fundamental statistical mechanics relationships. The final chapter contains 16 sections, each dealing with a different application, ordered according to complexity, from classical through degenerate quantum statistical mechanics. Key features include an elementary introduction t
Bollerslev, Tim; Sizova, Natalia; Tauchen, George
Stock market volatility clusters in time, carries a risk premium, is fractionally integrated, and exhibits asymmetric leverage effects relative to returns. This paper develops a first internally consistent equilibrium-based explanation for these longstanding empirical facts. The model is cast…, and the dynamic cross-correlations of the volatility measures with the returns calculated from actual high-frequency intra-day data on the S&P 500 aggregate market and VIX volatility indexes…
Efficient estimation of rare-event kinetics
Trendelkamp-Schroer, Benjamin
2014-01-01
The efficient calculation of rare-event kinetics in complex dynamical systems, such as the rate and pathways of ligand dissociation from a protein, is a generally unsolved problem. Markov state models can systematically integrate ensembles of short simulations and thus effectively parallelize the computational effort, but the rare events of interest still need to be spontaneously sampled in the data. Enhanced sampling approaches, such as parallel tempering or umbrella sampling, can accelerate the computation of equilibrium expectations massively - but sacrifice the ability to compute dynamical expectations. In this work we establish a principle to combine knowledge of the equilibrium distribution with kinetics from fast "downhill" relaxation trajectories using reversible Markov models. This approach is general as it does not invoke any specific dynamical model, and can provide accurate estimates of the rare event kinetics. Large gains in sampling efficiency can be achieved whenever one direction of the proces...
Ashok Sahai
2016-02-01
This paper addresses the issue of finding the most efficient estimator of the normal population mean when the population coefficient of variation (C.V.) is rather large though unknown, using a small sample (sample size ≤ 30). The paper proposes an efficient iterative estimation algorithm exploiting the sample C.V. for efficient normal mean estimation. The MSEs of the estimators under this strategy have very intricate algebraic expressions depending on the unknown values of the population parameters, and hence are not amenable to an analytical study determining the extent of gain in their relative efficiencies with respect to the usual unbiased estimator (the sample mean, 'UUE'). Nevertheless, we examine these relative efficiencies of our estimators with respect to the usual unbiased estimator by means of an illustrative simulated empirical study. MATLAB 7.7.0.471 (R2008b) is used in programming this illustrative simulated empirical numerical study. DOI: 10.15181/csat.v4i1.1091
Non-equilibrium quantum heat machines
Alicki, Robert; Gelbwaser-Klimovsky, David
2015-11-01
Standard heat machines (engine, heat pump, refrigerator) are composed of a system (working fluid) coupled to at least two equilibrium baths at different temperatures and periodically driven by an external device (piston or rotor) sometimes called the work reservoir. The aim of this paper is to go beyond this scheme by considering environments which are stationary but cannot be decomposed into a few baths at thermal equilibrium. Such situations are important, for example in solar cells, chemical machines in biology, various realizations of laser cooling or nanoscopic machines driven by laser radiation. We classify non-equilibrium baths depending on their thermodynamic behavior and show that the efficiency of heat machines powered by them is limited by the generalized Carnot bound.
Module description of TOKAMAK equilibrium code MEUDAS
Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
2002-01-01
The analysis of an axisymmetric MHD equilibrium serves as a foundation of TOKAMAK research, including device design, theoretical studies, and the analysis of experimental results. For this reason, JAERI has developed an efficient MHD analysis code since the start of its TOKAMAK research. The free-boundary equilibrium code MEUDAS, which uses both the Double-Cyclic-Reduction (DCR) method and a Green's function, can specify the pressure and current distributions arbitrarily, and has been applied to the analysis of a broad range of physical subjects as a fast, high-precision code. The MHD convergence calculation technique in MEUDAS has also been built into various newly developed codes. This report explains in detail each module in MEUDAS for performing the convergence calculation in solving the MHD equilibrium.
Noncompact Equilibrium Points and Applications
Zahra Al-Rumaih
2012-01-01
We prove an equilibrium existence result for vector functions defined on a noncompact domain and give some applications to optimization and Nash equilibria in noncooperative games.
Aerospace Applications of Non-Equilibrium Plasma
Blankson, Isaiah M.
2016-01-01
Nonequilibrium plasma/non-thermal plasma/cold plasmas are being used in a wide range of new applications in aeronautics, active flow control, heat transfer reduction, plasma-assisted ignition and combustion, noise suppression, and power generation. Industrial applications may be found in pollution control, materials surface treatment, and water purification. In order for these plasma processes to become practical, efficient means of ionization are necessary. A primary challenge for these applications is to create a desired non-equilibrium plasma in air by preventing the discharge from transitioning into an arc. Of particular interest is the impact on simulations and experimental data with and without detailed consideration of non-equilibrium effects, and the consequences of neglecting non-equilibrium. This presentation will provide an assessment of the presence and influence of non-equilibrium phenomena for various aerospace needs and applications. Specific examples to be considered will include the forward energy deposition of laser-induced non-equilibrium plasmoids for sonic boom mitigation, weakly ionized flows obtained from pulsed nanosecond discharges for an annular Hall type MHD generator duct for turbojet energy bypass, and fundamental mechanisms affecting the design and operation of novel plasma-assisted reactive systems in dielectric liquids (water purification, in-pipe modification of fuels, etc.).
Sabadini, Edvaldo; Silva, Marcelo Alves da [Universidade Estadual de Campinas (UNICAMP), SP (Brazil); Ziglio, Claudio Marcos; Carvalho, Carlos Henrique Monteiro de; Rocha, Nelson de Oliveira [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas (CENPES)
2008-07-01
In this work the efficiency of five commercial additives which produce drag reduction in petroleum was determined and compared. The studies were carried out in a rheometer using samples of petroleum from Bacia de Campos diluted in 50% of toluene. For such purpose the rheometer acts as a 'torquemeter', in which the magnitude of the drag reduction promoted by the additive is directly proportional to the difference in torque applied to maintain the sample in a specific flow rate. The obtained results have shown excellent capability of the additives to promote drag reduction (up to 20%) and small difference of efficiency among the additives was detectable. (author)
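The torque-based measurement lends itself to a one-line calculation: drag reduction is the relative drop in the torque required to hold the sample at the same flow rate once the additive is present. A minimal sketch with hypothetical torque values (not measurements from the study):

```python
def drag_reduction_percent(torque_baseline, torque_with_additive):
    """Drag reduction (%) as the relative drop in the torque needed to
    maintain a fixed flow rate after the additive is dissolved."""
    return 100.0 * (torque_baseline - torque_with_additive) / torque_baseline

# hypothetical torques (arbitrary units) at the same shear rate
assert drag_reduction_percent(5.0, 4.0) == 20.0   # a 20% drag reduction
```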
Partition Equilibrium Always Exists in Resource Selection Games
Anshelevich, Elliot; Caskurlu, Bugra; Hate, Ameya
We consider the existence of Partition Equilibrium in Resource Selection Games. Super-strong equilibrium, where no subset of players has an incentive to change their strategies collectively, does not always exist in such games. We show, however, that partition equilibrium (introduced in [4] to model coalitions arising in a social context) always exists in general resource selection games, as well as how to compute it efficiently. In a partition equilibrium, the set of players has a fixed partition into coalitions, and the only deviations considered are by coalitions that are sets in this partition. Our algorithm to compute a partition equilibrium in any resource selection game (i.e., load balancing game) settles the open question from [4] about existence of partition equilibrium in general resource selection games. Moreover, we show how to always find a partition equilibrium which is also a Nash equilibrium. This implies that in resource selection games, we do not need to sacrifice the stability of individual players when forming solutions stable against coalitional deviations. In addition, while super-strong equilibrium may not exist in resource selection games, we show that its existence can be decided efficiently, and how to find one if it exists.
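Resource selection (load balancing) games of this kind are potential games, so even plain best-response dynamics reaches a pure Nash equilibrium; the paper's contribution concerns coalitional (partition) stability, which the toy sketch below does not implement. A minimal best-response sketch for identical machines and unit-weight players, with all parameters illustrative:

```python
def best_response_equilibrium(n_players, n_machines):
    """Best-response dynamics in a resource selection (load balancing) game:
    each player's cost is the load of its chosen machine. Since this is a
    potential game, the dynamics terminate at a pure Nash equilibrium."""
    choice = [0] * n_players           # everyone starts on machine 0
    load = [0] * n_machines
    load[0] = n_players
    changed = True
    while changed:
        changed = False
        for p in range(n_players):
            cur = choice[p]
            # cost of staying is load[cur]; cost of moving to m is load[m] + 1
            best = min(range(n_machines),
                       key=lambda m: load[m] + (0 if m == cur else 1))
            if load[best] + 1 < load[cur]:
                load[cur] -= 1
                load[best] += 1
                choice[p] = best
                changed = True
    return load

loads = best_response_equilibrium(10, 3)
# at equilibrium no player can lower its cost by switching machines,
# so the loads are balanced to within one player
```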
Jensen Jørgen
2002-07-01
Background: Chlamydia pneumoniae infection has been detected by serological methods, but PCR is gaining more interest. A number of different PCR assays have been developed, and some are used in combination with serology for diagnosis. Real-time PCR could be an attractive new PCR method; therefore it must be evaluated and compared to conventional PCR methods. Results: We compared the performance of a newly developed real-time PCR with a conventional PCR method for detection of C. pneumoniae. The PCR methods were tested on reference samples containing C. pneumoniae DNA and on 136 nasopharyngeal samples from patients with a chronic cough. We found the same detection limit for the two methods, and clinical performance was equal for the real-time PCR and the conventional PCR method, although only three samples tested positive. To investigate whether the low prevalence of C. pneumoniae among patients with a chronic cough was caused by suboptimal PCR efficiency in the samples, PCR efficiency was determined based on the real-time PCR. Seventeen of twenty randomly selected clinical samples had a PCR efficiency similar to that of samples containing pure genomic C. pneumoniae DNA. Conclusions: These results indicate that the performance of real-time PCR is comparable to that of conventional PCR, but this needs to be confirmed further. Real-time PCR can be used to investigate PCR efficiency, which gives a rough estimate of how well the real-time PCR assay works in a specific sample type. Suboptimal PCR efficiency is not a likely explanation for the low positivity rate of C. pneumoniae in patients with a chronic cough.
Extended Mixed Vector Equilibrium Problems
Mijanur Rahaman
2014-01-01
We study extended mixed vector equilibrium problems, namely, the extended weak mixed vector equilibrium problem and the extended strong mixed vector equilibrium problem, in Hausdorff topological vector spaces. Using a generalized KKM-Fan theorem (Ben-El-Mechaiekh et al., 2005), some existence results for both problems are proved on noncompact domains.
Non-equilibrium thermodynamics
De Groot, Sybren Ruurds
1984-01-01
The study of thermodynamics is especially timely today, as its concepts are being applied to problems in biology, biochemistry, electrochemistry, and engineering. This book treats irreversible processes and phenomena - non-equilibrium thermodynamics.S. R. de Groot and P. Mazur, Professors of Theoretical Physics, present a comprehensive and insightful survey of the foundations of the field, providing the only complete discussion of the fluctuating linear theory of irreversible thermodynamics. The application covers a wide range of topics: the theory of diffusion and heat conduction, fluid dyn
Optimal resource allocation in General Cournot-competitive equilibrium
Sommerfelt Ervik, Inger; Soegaard, Christian
2013-01-01
Conventional economic theory stipulates that output in Cournot competition is too low relative to that which is attained in perfect competition. We revisit this result in a General Cournot-competitive Equilibrium model with two industries that differ only in terms of productivity. We show that in general equilibrium, the more efficient industry produces too little and the less efficient industry produces too much compared to an optimal scenario with perfect competition.
Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea
2014-03-15
To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method.
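One classical instance of cost-aware selection probabilities is a Neyman-type allocation, which samples each stratum of W in proportion to the conditional standard deviation of Y divided by the square root of the stratum's measurement cost. The sketch below illustrates that generic idea only; it is not necessarily the exact optimality formula derived in the paper, and all inputs are hypothetical:

```python
import math

def neyman_selection_probs(sd_given_w, costs, budget):
    """Neyman-type phase-two selection probabilities: sample stratum k of W
    with probability proportional to sd(Y | W = k) / sqrt(cost_k), scaled so
    that the expected cost of sampling one subject per stratum meets the
    budget; probabilities are capped at 1."""
    raw = [s / math.sqrt(c) for s, c in zip(sd_given_w, costs)]
    scale = budget / sum(r * c for r, c in zip(raw, costs))
    return [min(1.0, scale * r) for r in raw]

# strata where Y is more variable given W are sampled more heavily
probs = neyman_selection_probs([1.0, 4.0], [1.0, 1.0], budget=2.0)
```

This mirrors the abstract's observation that greater variability in the cost-standardized conditional variance of Y given W is what drives the efficiency gain over simple random sampling.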
Maia, Alex S C; Nascimento, Sheila T; Nascimento, Carolina C N; Gebremedhin, Kifle G
2016-05-01
The effects of air temperature and relative humidity on the thermal equilibrium of goats in a tropical region were evaluated. Nine non-pregnant Anglo Nubian nanny goats were used in the study. An indirect calorimeter was designed and developed to measure oxygen consumption, carbon dioxide production, methane production and water vapour pressure of the air exhaled from goats. Physiological parameters (rectal temperature, skin temperature, hair-coat temperature, expired air temperature, and respiratory rate and volume) as well as environmental parameters (air temperature, relative humidity and mean radiant temperature) were measured. The results show that respiratory rate, ventilation volume and latent heat loss did not change significantly for air temperatures between 22 and 26°C. In this temperature range, metabolic heat was lost mainly by convection and long-wave radiation. For temperatures greater than 30°C, the goats maintained thermal equilibrium mainly by evaporative heat loss. At the higher air temperatures, the respiratory and ventilation rates as well as body temperatures were significantly elevated. It can be concluded that for Anglo Nubian goats, the upper limit of air temperature for comfort is around 26°C when the goats are protected from direct solar radiation.
Zhao, Man; Zhang, Cong; Zhang, Ying; Guo, Xianzhi; Yan, Husheng; Zhang, Huiqi
2014-02-28
A facile and highly efficient approach to obtain narrowly dispersed hydrophilic and magnetic molecularly imprinted polymer microspheres with molecular recognition ability in a real biological sample as good as what they show in the organic solvent-based media is described for the first time.
Nanostructured energy devices equilibrium concepts and kinetics
Bisquert, Juan
2014-01-01
Due to the pressing needs of society, low cost materials for energy devices have experienced an outstanding development in recent times. In this highly multidisciplinary area, chemistry, material science, physics, and electrochemistry meet to develop new materials and devices that perform required energy conversion and storage processes with high efficiency, adequate capabilities for required applications, and low production cost. Nanostructured Energy Devices: Equilibrium Concepts and Kinetics introduces the main physicochemical principles that govern the operation of energy devices. It inclu
Salime Jafari
2012-10-01
Background and Aim: Due to the limitation of standardized tests for Persian speakers with language disorders, spontaneous language sample collection is an important part of the language assessment protocol. Therefore, selecting a language sampling method that provides information on linguistic competence in a short time is important. In this study, we compared language samples elicited with picture description and storytelling methods in order to determine the effectiveness of the two methods. Methods: In this study, 30 first-grade elementary school girls were selected with simple sampling. To investigate the picture description method, we used two illustrated stories with four pictures. Language samples were collected through storytelling by telling a famous children's story. To determine the effectiveness of these two methods, the two indices of duration of sampling and mean length of utterance (MLU) were compared. Results: There was no significant difference between MLU in the description and storytelling methods (p > 0.05). However, duration of sampling was shorter in the picture description method than in the storytelling method (p < 0.05). Conclusion: The findings show that the two methods of picture description and storytelling have the same potential in language sampling. Since the picture description method can provide language samples of the same complexity in a shorter time than storytelling, it can be used as a beneficial method for clinical purposes.
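MLU, the complexity index compared above, is simply the average utterance length over a language sample. A minimal sketch using word counts as a rough stand-in for morpheme counts; the utterances are invented examples, not study data:

```python
def mean_length_of_utterance(utterances):
    """MLU: average utterance length over a language sample, counted here
    in words as a rough stand-in for morphemes."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

# invented example utterances from the two elicitation methods
picture_sample = ["the girl is running", "she fell down", "a dog chased her"]
story_sample = ["once there was a girl", "she went to the forest", "a wolf came"]

# samples of similar complexity yield similar MLU values
assert abs(mean_length_of_utterance(picture_sample)
           - mean_length_of_utterance(story_sample)) < 1.0
```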
Daigle, Courtney L; Siegford, Janice M
2014-03-01
Continuous observation is the most accurate way to determine animals' actual time budget and can provide a 'gold standard' representation of resource use, behavior frequency, and duration. Continuous observation is useful for capturing behaviors that are of short duration or occur infrequently. However, collecting continuous data is labor intensive and time consuming, making multiple individual or long-term data collection difficult. Six non-cage laying hens were video recorded for 15 h and behavioral data collected every 2 s were compared with data collected using scan sampling intervals of 5, 10, 15, 30, and 60 min and subsamples of 2 second observations performed for 10 min every 30 min, 15 min every 1 h, 30 min every 1.5 h, and 15 min every 2 h. Three statistical approaches were used to provide a comprehensive analysis to examine the quality of the data obtained via different sampling methods. General linear mixed models identified how the time budget from the sampling techniques differed from continuous observation. Correlation analysis identified how strongly results from the sampling techniques were associated with those from continuous observation. Regression analysis identified how well the results from the sampling techniques were associated with those from continuous observation, changes in magnitude, and whether a sampling technique had bias. Static behaviors were well represented with scan and time sampling techniques, while dynamic behaviors were best represented with time sampling techniques. Methods for identifying an appropriate sampling strategy based upon the type of behavior of interest are outlined and results for non-caged laying hens are presented.
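The comparison described above can be illustrated in miniature: simulate a 15 h record at 2 s resolution and compare the continuous ("gold standard") time budget with instantaneous scans at increasing intervals. The bout-length distribution and behavior labels below are invented for the sketch:

```python
import random

random.seed(1)

# Simulated 15 h record at 2-s resolution (27000 points): alternating bouts
# of one behavior of interest (1) and everything else (0). Bout lengths in
# 2-s intervals are invented for the sketch.
record = []
state = 1
while len(record) < 27000:
    record.extend([state] * random.randint(50, 900))
    state = 1 - state
record = record[:27000]

continuous = sum(record) / len(record)   # "gold standard" time budget

def scan_estimate(rec, interval_min):
    """Time budget estimated from one instantaneous scan every interval_min minutes."""
    step = interval_min * 30             # 30 two-second points per minute
    scans = rec[::step]
    return sum(scans) / len(scans)

print("continuous:", round(continuous, 3))
for m in (5, 15, 60):
    print(f"{m}-min scans:", round(scan_estimate(record, m), 3))
```

For a long-bout (static) behavior like this one, even coarse scans track the continuous budget reasonably well; short or infrequent behaviors would be missed, matching the conclusion above.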
Statistical physics "Beyond equilibrium"
Ecke, Robert E [Los Alamos National Laboratory]
2009-01-01
The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.
US Fish and Wildlife Service, Department of the Interior — Winter waterfowl surveys have been conducted across much of the United States since 1935. Aerial surveys conducted using stratified random sampling have the...
Pontikos, Nikolas; Smyth, Deborah J; Schuilenburg, Helen; Howson, Joanna M M; Walker, Neil M; Burren, Oliver S; Guo, Hui; Onengut-Gumuscu, Suna; Chen, Wei-Min; Concannon, Patrick; Rich, Stephen S; Jayaraman, Jyothi; Jiang, Wei; Traherne, James A; Trowsdale, John; Todd, John A; Wallace, Chris
2014-01-01
.... Quantitative Polymerase Chain Reaction (qPCR) assays address this issue. However, their cost is prohibitive at the sample sizes required for detecting effects typically observed in complex genetic diseases...
Chipeta, Michael G.; McCann, Robert S.; Phiri, Kamija S.; van Vugt, Michèle; Takken, Willem; Diggle, Peter; Terlouw, Anja D.
2017-01-01
Introduction: In the context of malaria elimination, interventions will need to target high-burden areas to further reduce transmission. Current tools to monitor and report disease burden lack the capacity to continuously detect the fine-scale spatial and temporal variations of disease distribution exhibited by malaria. These tools use random sampling techniques that are inefficient for capturing underlying heterogeneity, while health facility data in resource-limited settings are inaccurate. Continuous community surveys of malaria burden provide real-time results of local spatio-temporal variation. Adaptive geostatistical design (AGD) improves prediction of the outcome of interest compared to current random sampling techniques. We present findings of continuous malaria prevalence surveys using an adaptive sampling design. Methods: We conducted repeated cross-sectional surveys guided by an adaptive sampling design to monitor the prevalence of malaria parasitaemia and anaemia in children below five years old in the communities living around Majete Wildlife Reserve in Chikwawa district, Southern Malawi. AGD sampling uses previously collected data to sample new locations of high prediction variance or where prediction exceeds a set threshold. We fitted a geostatistical model to predict malaria prevalence in the area. Findings: We conducted five rounds of sampling and tested 876 children aged 6–59 months from 1377 households over a 12-month period. Malaria prevalence prediction maps showed spatial heterogeneity and the presence of hotspots where predicted malaria prevalence was above 30%; predictors of malaria included age, socio-economic status and ownership of insecticide-treated mosquito nets. Conclusions: Continuous malaria prevalence surveys using adaptive sampling increased malaria prevalence prediction accuracy. Results from the surveys were readily available after data collection. The tool can assist local managers to target malaria control interventions in areas with the...
Dynamical Non-Equilibrium Molecular Dynamics
Giovanni Ciccotti
2013-12-01
In this review, we discuss the Dynamical approach to Non-Equilibrium Molecular Dynamics (D-NEMD), which extends stationary NEMD to time-dependent situations, be they responses or relaxations. Based on the original Onsager regression hypothesis, implemented in the nineteen-seventies by Ciccotti, Jacucci and MacDonald, the approach permits one to separate the problem of dynamical evolution from the problem of sampling the initial condition. D-NEMD provides the theoretical framework to compute time-dependent macroscopic dynamical behaviors by averaging over a large sample of non-equilibrium trajectories starting from an ensemble of initial conditions generated from a suitable (equilibrium or non-equilibrium) distribution at time zero. We also discuss how to generate a large class of initial distributions. The same approach applies also to the calculation of the rate constants of activated processes. The range of problems treatable by this method is illustrated by discussing applications to a few key hydrodynamic processes: the "classical" flow under shear, the formation of convective cells and the relaxation of an interface between two immiscible liquids.
General equilibrium without utility functions
Balasko, Yves; Tvede, Mich
2010-01-01
How far can we go in weakening the assumptions of the general equilibrium model? Existence of equilibrium, structural stability and finiteness of equilibria of regular economies, genericity of regular economies and an index formula for the equilibria of regular economies have been known not to re... (1) ... and the diffeomorphism of the equilibrium manifold with a Euclidean space; (2) the diffeomorphism of the set of no-trade equilibria with a Euclidean space; (3) the openness and genericity of the set of regular equilibria as a subset of the equilibrium manifold; (4) for small trade vectors, the uniqueness, regularity...
Equilibrium models and variational inequalities
Konnov, Igor
2007-01-01
The concept of equilibrium plays a central role in various applied sciences, such as physics (especially mechanics), economics, engineering, transportation, sociology, chemistry, biology and other fields. If one can formulate the equilibrium problem in the form of a mathematical model, solutions of the corresponding problem can be used for forecasting the future behavior of very complex systems and also for correcting the current state of the system under control. This book presents a unifying look at different equilibrium concepts in economics, including several models from related sciences.- Presents a unifying look at different equilibrium concepts and the present state of investigations in this field- Describes static and dynamic input-output models, Walras, Cassel-Wald, spatial price, auction market, oligopolistic equilibrium models, transportation and migration equilibrium models- Covers the basics of theory and solution methods both for the complementarity and variational inequality probl...
On the Local Equilibrium Principle
Hessling, H
2001-01-01
A physical system should be in a local equilibrium if it cannot be distinguished from a global equilibrium by "infinitesimally localized measurements". This seems to be a natural characterization of local equilibrium; however, the problem is to give a precise meaning to the qualitative phrase "infinitesimally localized measurements". A solution is suggested in the form of a Local Equilibrium Condition (LEC), which can be applied to non-interacting quanta. The Unruh temperature of massless quanta is derived by applying LEC to an arbitrary point inside the Rindler wedge. Massless quanta outside a hot sphere are analyzed. According to LEC, a stationary spherically symmetric local equilibrium only exists if the temperature is globally constant. Using LEC, a non-trivial stationary local equilibrium is found for rotating massless quanta between two concentric cylinders of different temperatures. This shows that quanta may behave like a fluid with a Bénard instability.
Jokinen, Cassandra C; Koot, Jacqueline M; Carrillo, Catherine D; Gannon, Victor P J; Jardine, Claire M; Mutschall, Steven K; Topp, Edward; Taboada, Eduardo N
2012-12-01
Improved isolation techniques from environmental water and animal samples are vital to understanding Campylobacter epidemiology. In this study, the efficiency of selective enrichment in Bolton Broth (BB) followed by plating on charcoal cefoperazone deoxycholate agar (CCDA) (conventional method) was compared with an approach combining BB enrichment and passive filtration (membrane method) adapted from a method previously developed for testing of broiler meat, in the isolation of thermophilic campylobacters from surface water and animal fecal samples. The conventional method led to recoveries of Campylobacter from 36.7% of the water samples and 78.0% of the fecal samples and similar numbers, 38.3% and 76.0%, respectively, were obtained with the membrane method. To investigate the genetic diversity of Campylobacter jejuni and Campylobacter coli obtained by these two methods, isolates were analyzed using Comparative Genomic Fingerprinting, a high-resolution subtyping technique. The conventional and membrane methods yielded similar numbers of Campylobacter subtypes from water (25 and 28, respectively) and fecal (15 and 17, respectively) samples. Although there was no significant difference in recovery rates between the conventional and membrane methods, a significant improvement in isolation efficiency was obtained by using the membrane method, with a false-positive rate of 1.6% compared with 30.7% obtained using the conventional method. In conclusion, although the two methods are comparable in sensitivity, the membrane method had higher specificity, making it a cost-effective procedure for the enhanced isolation of C. jejuni and C. coli from water and animal fecal samples.
Parham, H; Zargar, B; Shiralipour, R
2012-02-29
Mercury at even the lowest levels of concentration is dangerous for human health due to its bioaccumulation in the body and its toxicity. This investigation shows the effective removal of mercury(II) ions from contaminated surface waters by magnetic iron oxide nanoparticles modified with 2-mercaptobenzothiazole (M-MIONPs) as an efficient adsorbent. The proposed method is fast, simple, cheap, effective and safe for the treatment of mercury-polluted waters. Preparation of the adsorbent is easy and the removal time is short. Non-modified magnetic iron oxide nanoparticles (MIONPs) can adsorb up to 43.47% of 50 ng mL(-1) of Hg(II) ions from polluted water, but the modified magnetic iron oxide nanoparticles (M-MIONPs) improved the efficiency up to 98.6% for the same concentration. The required time for complete removal of mercury ions was 4 min. Variation of pH and high electrolyte concentration (NaCl) of the solution do not have a considerable effect on the mercury removal efficiency. The loading capacity of the adsorbent for Hg ions was found to be 590 μg g(-1).
On the Efficient Generation of α-κ-μ and α-η-μ White Samples with Applications
Rausley Adriano Amaral de Souza
2015-01-01
This paper is concerned with a simple and highly efficient random sequence generator for uncorrelated α-κ-μ and α-η-μ variates. The algorithm may yield an efficiency of almost 100%, and this high efficiency can be reached for all special cases such as α-μ, κ-μ, η-μ, Nakagami-m, Nakagami-q, Weibull, Hoyt, Rayleigh, Rice, Exponential, and the One-Sided Gaussian. This generator is implemented via the rejection technique and allows for arbitrary fading parameters. The goodness-of-fit is measured using the Kolmogorov-Smirnov and Anderson-Darling tests. The maximum likelihood parameter estimation for the κ-μ distribution is proposed and verified against true values of the parameters chosen in the generator. We also provide two important applications for the random sequence generator, the first one dealing with the performance assessment of a digital communication system over the α-κ-μ and α-η-μ fading channels and the second one dealing with the performance assessment of the spectrum sensing with energy detection over special cases of these channels. Theoretical and simulation results are compared, validating again the accuracy of the generators.
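The rejection technique on which such a generator is built can be sketched for one of the listed special cases, Nakagami-m. The parameter values, truncated support, and uniform envelope below are illustrative choices for the sketch, not the paper's algorithm:

```python
import math, random

def nakagami_pdf(x, m=2.0, omega=1.0):
    """PDF of the Nakagami-m distribution."""
    return (2.0 * m**m / (math.gamma(m) * omega**m)) * x**(2*m - 1) * math.exp(-m * x**2 / omega)

def nakagami_rejection(n, m=2.0, omega=1.0, xmax=3.0):
    """Draw n variates by rejection from a uniform envelope on [0, xmax].
    (The tail mass beyond xmax is negligible for these parameters.)"""
    # Envelope height: max of the pdf on a fine grid, plus a safety margin.
    height = 1.01 * max(nakagami_pdf(i * xmax / 10000, m, omega) for i in range(10001))
    out = []
    while len(out) < n:
        x = random.uniform(0.0, xmax)
        if random.uniform(0.0, height) <= nakagami_pdf(x, m, omega):
            out.append(x)
    return out

random.seed(0)
samples = nakagami_rejection(50000)
mean = sum(samples) / len(samples)
# Theoretical mean: Gamma(m + 1/2) / Gamma(m) * sqrt(omega / m) ≈ 0.940 for m=2, omega=1
print(round(mean, 2))
```

A uniform envelope is the simplest (not the most efficient) choice; a tighter proposal distribution raises acceptance toward the near-100% efficiency reported above.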
A general method to study equilibrium partitioning of macromolecules
The distribution of macromolecules between a confined microscopic solution and a macroscopic bulk solution plays an important role in understanding separation processes such as Size Exclusion Chromatography (SEC). In this study, we have developed an efficient computational algorithm for obtaining the equilibrium partition coefficient (pore-to-bulk concentration ratio) and the concentration profile inside the confining geometry. The algorithm involves two steps. First, certain characteristic structure properties of the studied macromolecule are obtained by sampling its configuration space, and second those data are used for the computation of the partition coefficient and concentration profile for any confinement size. Our algorithm is versatile to the model and type of the macromolecule studied, and is capable of handling three types of confining geometries (slit, rectangular channel and rectangular box)...
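The two-step structure of such an algorithm can be sketched for the simplest case: a freely jointed chain partitioning into a hard-wall slit. Chain length, sample size, slit widths, and the rigid-at-insertion simplification below are all hypothetical choices for the sketch:

```python
import random

def chain_spans(n_chains=2000, n_bonds=20, seed=42):
    """Step 1: sample freely jointed chains (unit bonds) once; record the
    z-extent (span) of each configuration as the characteristic property."""
    rng = random.Random(seed)
    spans = []
    for _ in range(n_chains):
        z, zmin, zmax = 0.0, 0.0, 0.0
        for _ in range(n_bonds):
            # For a uniformly oriented unit bond, the z-component is uniform on [-1, 1].
            z += rng.uniform(-1.0, 1.0)
            zmin, zmax = min(zmin, z), max(zmax, z)
        spans.append(zmax - zmin)
    return spans

def partition_coefficient(spans, H):
    """Step 2: reuse the sampled spans to get K(H) for any slit width H.
    A configuration with z-span s fits at a fraction max(0, H - s)/H of
    insertion positions (chain treated as rigid at insertion, dilute limit)."""
    return sum(max(0.0, H - s) for s in spans) / (len(spans) * H)

spans = chain_spans()
for H in (2.0, 5.0, 10.0, 20.0):
    print(H, round(partition_coefficient(spans, H), 3))
```

The expensive configuration sampling happens once; K(H) for every confinement size is then a cheap post-processing pass, which is the point of the two-step design.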
Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes
2012-03-01
Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.
Phillips, Rob
2015-03-01
It has been said that the cell is the test tube of the twenty-first century. If so, the theoretical tools needed to quantitatively and predictively describe what goes on in such test tubes lag sorely behind the stunning experimental advances in biology seen in the decades since the molecular biology revolution began. Perhaps surprisingly, one of the theoretical tools that has been used with great success on problems ranging from how cells communicate with their environment and each other to the nature of the organization of proteins and lipids within the cell membrane is statistical mechanics. A knee-jerk reaction to the use of statistical mechanics in the description of cellular processes is that living organisms are so far from equilibrium that one has no business even thinking about it. But such reactions are probably too hasty given that there are many regimes in which, because of a separation of timescales, for example, such an approach can be a useful first step. In this article, we explore the power of statistical mechanical thinking in the biological setting, with special emphasis on cell signaling and regulation. We show how such models are used to make predictions and describe some recent experiments designed to test them. We also consider the limits of such models based on the relative timescales of the processes of interest.
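The flavor of the statistical mechanical models alluded to above is captured by the simplest case: equilibrium occupancy of a receptor by a ligand, derived from a two-state partition function. The dissociation constant used below is a hypothetical value:

```python
def p_bound(c, Kd):
    """Equilibrium receptor occupancy from a two-state partition function
    Z = 1 + c/Kd (unbound, bound): p = (c/Kd) / (1 + c/Kd)."""
    return (c / Kd) / (1.0 + c / Kd)

# Hypothetical dissociation constant Kd = 1 (same units as c):
for c in (0.1, 1.0, 10.0):
    print(c, round(p_bound(c, 1.0), 3))   # 0.091, 0.5, 0.909
```

Models of this Boltzmann-weighted form underlie quantitative predictions in cell signaling and gene regulation, of the kind the article describes testing experimentally.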
Jovanović Filip P.
2016-01-01
This paper analyses the applicability of well-known risk management methodologies to energy efficiency projects in industry. The possibilities of applying the selected risk management methodology are demonstrated within the project of the plants for injecting pulverized coal into blast furnaces nos. 1 and 2, implemented by the company US STEEL SERBIA d.o.o. in Smederevo. The aim of the project was to increase energy efficiency through the reduction of the quantity of coke, whose production requires large amounts of energy, the reduction of harmful exhaust emissions and increased productivity of the blast furnaces through the reduction of production costs. The project was complex and had high costs, so it was necessary to predict risk events and plan responses to identified risks at an early stage of implementation, in the course of the project design, in order to minimise losses and implement the project in accordance with the defined time and cost limitations. [Projekat Ministarstva nauke Republike Srbije, br. 179081: Researching contemporary tendencies of strategic management using specialized management disciplines in function of competitiveness of Serbian economy]
Application of non-equilibrium plasmas in medicine
Mojsilović S.
2012-01-01
We review the potential of plasma medical applications, the connections to nanotechnologies and the results obtained by our group. A special issue in plasma medicine is the development of plasma sources that would achieve non-equilibrium at atmospheric pressure in an atmospheric gas mixture with no or only marginal heating of the gas, and with desired properties and mechanisms that may be controlled. Our studies have shown that control of radicals or chemically active products of the discharge, such as ROS (reactive oxygen species) and/or NO, may be used to control the growth of seeds. At the same time, a specially designed plasma needle and other sources were shown to be efficient in sterilizing not only colonies of bacteria but also planktonic samples (microorganisms protected by water or biofilms). Finally, we have shown that plasma may induce differentiation of stem cells. Non-equilibrium plasmas may also be used in the detection of different specific markers in medicine. For example, proton transfer mass spectroscopy may be employed to detect volatile organic compounds without their dissociation, and thus as a technique for instantaneous measurement of the presence of markers for numerous diseases. [Projekat Ministarstva nauke Republike Srbije, br. ON171037 i br. III41011]
Fundamental functions in equilibrium thermodynamics
Horst, H.J. ter
1987-01-01
In the standard presentations of the principles of Gibbsian equilibrium thermodynamics one can find several gaps in the logic. For a subject that is as widely used as equilibrium thermodynamics, it is of interest to clear up such questions of mathematical rigor. In this paper it is shown that using
Rapid-Equilibrium Enzyme Kinetics
Alberty, Robert A.
2008-01-01
Rapid-equilibrium rate equations for enzyme-catalyzed reactions are especially useful because if experimental data can be fit by these simpler rate equations, the Michaelis constants can be interpreted as equilibrium constants. However, for some reactions it is necessary to use the more complicated steady-state rate equations. Thermodynamics is…
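The simplest rapid-equilibrium rate law takes the familiar form v = Vmax·S/(Ks + S), where, under the rapid-equilibrium assumption described above, Ks can be read as a true substrate dissociation (equilibrium) constant. A sketch with hypothetical parameter values:

```python
def rapid_equilibrium_rate(s, v_max, Ks):
    """Rapid-equilibrium rate law v = Vmax*S/(Ks + S); under the
    rapid-equilibrium assumption, Ks is an equilibrium (substrate
    dissociation) constant rather than a steady-state composite."""
    return v_max * s / (Ks + s)

# Hypothetical parameters: Vmax = 10, Ks = 2 (arbitrary units).
# The rate is half-maximal exactly at S = Ks.
for s in (0.5, 2.0, 20.0):
    print(s, round(rapid_equilibrium_rate(s, 10.0, 2.0), 2))
```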
Scolnick, Jonathan A; Dimon, Michelle; Wang, I-Ching; Huelga, Stephanie C; Amorese, Douglas A
2015-01-01
Fusion genes are known to be key drivers of tumor growth in several types of cancer. Traditionally, detecting fusion genes has been a difficult task based on fluorescent in situ hybridization to detect chromosomal abnormalities. More recently, RNA sequencing has enabled an increased pace of fusion gene identification. However, RNA-Seq is inefficient for the identification of fusion genes due to the high number of sequencing reads needed to detect the small number of fusion transcripts present in cells of interest. Here we describe a method, Single Primer Enrichment Technology (SPET), for targeted RNA sequencing that is customizable to any target genes, is simple to use, and efficiently detects gene fusions. Using SPET to target 5701 exons of 401 known cancer fusion genes for sequencing, we were able to identify known and previously unreported gene fusions from both fresh-frozen and formalin-fixed paraffin-embedded (FFPE) tissue RNA in both normal tissue and cancer cells.
Herrmann, Karl-Heinz; Krämer, Martin; Reichenbach, Jürgen R
2016-01-01
This work's aim was to minimize the acquisition time of a radial 3D ultra-short echo-time (UTE) sequence and to provide fully automated, gradient delay compensated, and therefore artifact free, reconstruction. The radial 3D UTE sequence (echo time 60 μs) was implemented as single echo acquisition with center-out readouts and improved time efficient spoiling on a clinical 3T scanner without hardware modifications. To assess the sequence parameter dependent gradient delays each acquisition contained a quick calibration scan and utilized the phase of the readouts to detect the actual k-space center. This calibration scan does not require any user interaction. To evaluate the robustness of this automatic delay estimation phantom experiments were performed and 19 in vivo imaging data of the head, tibial cortical bone, feet and lung were acquired from 6 volunteers. As clinical application of this fast 3D UTE acquisition single breath-hold lung imaging is demonstrated. The proposed sequence allowed very short repetition times (TR~1ms), thus reducing total acquisition time. The proposed, fully automated k-phase based gradient delay calibration resulted in accurate delay estimations (difference to manually determined optimal delay -0.13 ± 0.45 μs) and allowed unsupervised reconstruction of high quality images for both phantom and in vivo data. The employed fast spoiling scheme efficiently suppressed artifacts caused by incorrectly refocused echoes. The sequence proved to be quite insensitive to motion, flow and susceptibility artifacts and provides oversampling protection against aliasing foldovers in all directions. Due to the short TR, acquisition times are attractive for a wide range of clinical applications. For short T2* mapping this sequence provides free choice of the second TE, usually within less scan time as a comparable dual echo UTE sequence.
Equilibrium and Sudden Events in Chemical Evolution
Weinberg, David H.; Andrews, Brett H.; Freudenburg, Jenna
2017-03-01
We present new analytic solutions for one-zone (fully mixed) chemical evolution models that incorporate a realistic delay time distribution for Type Ia supernovae (SNe Ia) and can therefore track the separate evolution of α-elements produced by core collapse supernovae (CCSNe) and iron peak elements synthesized in both CCSNe and SNe Ia. Our solutions allow constant, exponential, or linear-exponential (t·e^(-t/τ_sfh)) star formation histories, or combinations thereof. In generic cases, α and iron abundances evolve to an equilibrium at which element production is balanced by metal consumption and gas dilution, instead of continuing to increase over time. The equilibrium absolute abundances depend principally on supernova yields and the outflow mass loading parameter η, while the equilibrium abundance ratio [α/Fe] depends mainly on yields and secondarily on star formation history. A stellar population can be metal-poor either because it has not yet evolved to equilibrium or because high outflow efficiency makes the equilibrium abundance itself low. Systems with ongoing gas accretion develop metallicity distribution functions (MDFs) that are sharply peaked, while "gas starved" systems with rapidly declining star formation, such as the conventional "closed box" model, have broadly peaked MDFs. A burst of star formation that consumes a significant fraction of a system's available gas and retains its metals can temporarily boost [α/Fe] by 0.1–0.3 dex, a possible origin for rare, α-enhanced stars with intermediate age and/or high metallicity. Other sudden transitions in system properties can produce surprising behavior, including backward evolution of a stellar population from high to low metallicity.
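The approach to equilibrium described above can be illustrated with a deliberately simplified one-zone model (constant gas mass, pristine inflow, instantaneous recycling; all parameter values hypothetical), in which the abundance relaxes to Z_eq = y/(1 + η − r):

```python
def evolve_Z(y=0.015, eta=2.0, r=0.4, tau=2.0, t_end=12.0, dt=0.001):
    """Euler integration of dZ/dt = (1/tau) * (y - Z*(1 + eta - r)) for a
    constant-gas-mass one-zone model with pristine inflow: y is the net
    yield, eta the outflow mass loading, r the recycling fraction, and
    tau the gas consumption timescale (all hypothetical values)."""
    Z, t = 0.0, 0.0
    while t < t_end:
        Z += dt * (y - Z * (1.0 + eta - r)) / tau
        t += dt
    return Z

Z_eq = 0.015 / (1.0 + 2.0 - 0.4)   # analytic equilibrium y/(1 + eta - r)
print(round(evolve_Z(), 5), round(Z_eq, 5))
```

The integrated abundance converges to the analytic equilibrium, and raising η lowers Z_eq, mirroring the dependence on the outflow mass loading parameter stated in the abstract.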
Non-equilibrium phase transitions
Henkel, Malte; Lübeck, Sven
2009-01-01
This book describes two main classes of non-equilibrium phase-transitions: (a) static and dynamics of transitions into an absorbing state, and (b) dynamical scaling in far-from-equilibrium relaxation behaviour and ageing. The first volume begins with an introductory chapter which recalls the main concepts of phase-transitions, set for the convenience of the reader in an equilibrium context. The extension to non-equilibrium systems is made by using directed percolation as the main paradigm of absorbing phase transitions and in view of the richness of the known results an entire chapter is devoted to it, including a discussion of recent experimental results. Scaling theories and a large set of both numerical and analytical methods for the study of non-equilibrium phase transitions are thoroughly discussed. The techniques used for directed percolation are then extended to other universality classes and many important results on model parameters are provided for easy reference.
A Multiperiod Equilibrium Pricing Model
Minsuk Kwak
2014-01-01
We propose an equilibrium pricing model in a dynamic multiperiod stochastic framework with uncertain income. The market contains one tradable risky asset (stock/commodity), one nontradable underlying (temperature), and a contingent claim (weather derivative) written on the tradable risky asset and the nontradable underlying. The contingent claim is priced in equilibrium by the optimal strategies of a representative agent and the market clearing condition. The risk preferences are of exponential type with a stochastic coefficient of risk aversion. Both subgame perfect and naive strategies are considered and the corresponding equilibrium prices are derived. From the numerical results we examine how the equilibrium prices vary in response to changes in model parameters and highlight the importance of our equilibrium pricing principle.
Equilibrium with arbitrary market structure
Grodal, Birgit; Vind, Karl
2005-01-01
Fifty years ago Arrow [1] introduced contingent commodities and Debreu [4] observed that this reinterpretation of a commodity was enough to apply the existing general equilibrium theory to uncertainty and time. This interpretation of general equilibrium theory is the Arrow-Debreu model. The complete market predicted by this theory is clearly unrealistic, and Radner [10] formulated and proved existence of equilibrium in a multiperiod model with incomplete markets. In this paper the Radner result is extended. Radner assumed a specific structure of markets, independence of preferences...
Equilibrium time correlation functions in open systems
Zhu, Jinglong; Agarwal, Animesh; Site, Luigi Delle
2014-01-01
We study equilibrium time correlation functions for liquid water at room temperature employing the Molecular Dynamics (MD) adaptive resolution method AdResS in its Grand Canonical formulation (GC-AdResS). This study introduces two technical innovations: the employment of a local thermostat that acts only in the reservoir and the consequent construction of an "ideal" Grand Canonical reservoir of particles and energy. As a consequence the artificial action of a thermostat in the calculation of equilibrium time correlation functions of standard NVT simulations is efficiently removed. The success of the technical innovation provides the basis for formulating a profound conceptual problem, that is the extension of Liouville theorem to open systems (Grand Canonical ensemble), a question, so far, treated neither in MD nor (in general) in statistical physics.
İrvem, Arzu; Özdil, Kamil; Çalışkan, Zuhal; Yücel, Muhterem
2016-01-01
Background: E. histolytica is among the common causes of acute gastroenteritis. The pathogenic species E. histolytica and the nonpathogenic species E. dispar cannot be morphologically differentiated, although correct identification of these protozoans is important for treatment and public health. In many laboratories, the screening of leukocytes, erythrocytes, amoebic cysts, trophozoites and parasite eggs is performed using Native-Lugol’s iodine for pre-diagnosis. Aims: In this study, we aimed to investigate the frequency of E. histolytica in stool samples collected from 788 patients residing in the Anatolian region of İstanbul who presented with gastrointestinal complaints. We used the information obtained to evaluate the effectiveness of microscopic examinations when used in combination with the E. histolytica adhesin antigen test. Study Design: Retrospective cross-sectional study Methods: Preparations of stool samples stained with Native-Lugol’s iodine were evaluated using the E. histolytica adhesin test and examined using standard light microscopy at ×40 magnification. Pearson’s Chi-square and Fisher’s exact tests were used for statistical analysis. Logistic regression analysis was used for multivariate analysis. Results: Of 788 samples, 38 (4.8%) were positive for E. histolytica adhesin antigens. When evaluated together with the presence of erythrocytes, leukocytes, cysts, and trophozoites, respectively, using logistic regression analysis, leukocyte positivity was significantly higher. Leukocyte positivity increased the odds of adhesin test-positivity 2.530-fold (95% CI=1.01–6.330). Adhesin test-positivity was significant (p=0.047). Conclusion: In line with these findings, the consistency between the presence of cysts and erythrocytes and adhesin test-positivity was found to be highly significant, but that of higher levels of leukocytes was found to be discordant. It was concluded that leukocytes and trophozoites were easily misjudged
Horney, Jennifer; Zotti, Marianne E; Williams, Amy; Hsia, Jason
2012-01-01
Women of reproductive age, in particular women who are pregnant or fewer than 6 months postpartum, are uniquely vulnerable to the effects of natural disasters, which may create stressors for caregivers, limit access to prenatal/postpartum care, or interrupt contraception. Traditional approaches (e.g., newborn records, community surveys) to survey women of reproductive age about unmet needs may not be practical after disasters. Finding pregnant or postpartum women is especially challenging because fewer than 5% of women of reproductive age are pregnant or postpartum at any time. From 2009 to 2011, we conducted three pilots of a sampling strategy that aimed to increase the proportion of pregnant and postpartum women of reproductive age who were included in postdisaster reproductive health assessments in Johnston County, North Carolina, after tornadoes, Cobb/Douglas Counties, Georgia, after flooding, and Bertie County, North Carolina, after hurricane-related flooding. Using this method, the percentage of pregnant and postpartum women interviewed in each pilot increased from 0.06% to 21%, 8% to 19%, and 9% to 17%, respectively. Two-stage cluster sampling with referral can be used to increase the proportion of pregnant and postpartum women included in a postdisaster assessment. This strategy may be a promising way to assess unmet needs of pregnant and postpartum women in disaster-affected communities. Published by Elsevier Inc.
Di, Sheng; Berrocal, Eduardo; Cappello, Franck
2015-01-01
The silent data corruption (SDC) problem is attracting increasing attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with the bit positions in the range [20,30], without any degradation on detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in the detection sensitivity.
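The core idea of prediction-based SDC detection can be sketched as follows. This is a minimal illustration, assuming a simple linear extrapolation predictor and an illustrative threshold; it is not the authors' feedback-control model or even-sampling scheme:

```python
def detect_sdc(series, threshold):
    """Flag indices where the observed value deviates from a linear
    one-step-ahead extrapolation by more than `threshold`.
    Simplified sketch: the predictor keeps using the raw (possibly
    corrupted) history, so a single spike can flag several indices."""
    flagged = []
    for i in range(2, len(series)):
        predicted = 2 * series[i - 1] - series[i - 2]  # linear extrapolation
        if abs(series[i] - predicted) > threshold:
            flagged.append(i)
    return flagged

# Smooth simulation output with one bit-flip-like spike injected at index 6.
data = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 7.5, 0.7, 0.8]
errors = detect_sdc(data, threshold=1.0)
```

A production detector would replace a flagged value with its prediction before continuing, so that one corrupted point does not contaminate subsequent predictions.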
T.J. Akingbade
2014-09-01
This research work compares the one-stage sampling technique (Simple Random Sampling) and the two-stage sampling technique for estimating the population total of Nigeria using the 2006 census results. A sample of twenty (20) states was selected out of a population of thirty-six (36) states at the Primary Sampling Unit (PSU), and one-third of each state selected at the PSU was sampled at the Secondary Sampling Unit (SSU) and analyzed. The result shows that, with the same sample size at the PSU, the one-stage sampling technique (Simple Random Sampling) is more efficient than the two-stage sampling technique and is hence recommended.
Equilibrium Statistics: Monte Carlo Methods
Kröger, Martin
Monte Carlo methods use random numbers, or ‘random’ sequences, to sample from a known shape of a distribution, or to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo ‘moves’, required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
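Sampling from a known shape of a distribution, without knowing its normalization, is the textbook use case for the Metropolis algorithm. A minimal generic sketch (target distribution and step size are illustrative, not taken from the book):

```python
import math
import random

def metropolis_sample(log_p, x0, step=0.5, n_steps=10000, seed=1):
    """Sample from a 1-D distribution known only up to a constant,
    using the Metropolis algorithm with symmetric Gaussian proposals."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(x_new)/p(x)).
        if math.log(rng.random()) < log_p(x_new) - log_p(x):
            x = x_new
        samples.append(x)  # rejected moves repeat the current state
    return samples

# Target: standard normal; only log p(x) up to a constant is needed.
samples = metropolis_sample(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because only the ratio p(x_new)/p(x) enters the acceptance test, the partition function never has to be computed, which is precisely what makes such methods practical for equilibrium statistics.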
Farkas, Zsuzsa; Trevisani, Marcello; Horváth, Zsuzsanna; Serraino, Andrea; Szabó, István J; Kerekes, Kata; Szeitzné-Szabó, Mária; Ambrus, Arpád
2014-01-01
Aflatoxin M₁ (AFM1) contamination in 21,969 milk samples taken in Italy during 2005-08 and 2010 provided the basis for designing an early warning self-control plan. Additionally, 4148 AFM1 data points from the mycotoxin crisis (2003-04) represented the worst case. No parametric function provided a good fit for the skewed and scattered AFM1 concentrations. The acceptable reference values, reflecting the combined uncertainty of AFM1 measured in consignments consisting of milk from one to six farms, ranged from 40 to 16.7 ng kg(-1), respectively. Asymmetric control charts with these reference values, with 40 and 50 ng kg(-1) warning and action limits, are recommended to assess immediately the distribution of AFM1 concentration in incoming consignments. The moving window method, presented as a worked example including 5 days with five samples/day, enabled verification of compliance of production with the legal limit in 98% of the consignments at a 94% probability level. The sampling plan developed assumes consecutive analyses of samples taken from individual farms, which makes early detection of contamination possible and also immediate corrective actions if the AFM1 concentration in a consignment exceeds the reference value. In the latter case different control plans with increased sampling frequency should be applied depending on the level and frequency of contamination. Since aflatoxin B₁ increases in feed at about the same time, a coordinated sampling programme performed by the milk processing plants operating in a confined geographic area is more effective and economical than individual ones. The applicability of the sample size calculation based on the binomial theorem and the fast response rate resulting from the recommended sampling plan were verified by taking 1000-10,000 random samples with replacement from the experimental databases representing the normal, moderately and highly contaminated periods. The efficiency of the control plan could be
C. Fountoukis
2007-09-01
This study presents ISORROPIA II, a thermodynamic equilibrium model for the K+–Ca2+–Mg2+–NH4+–Na+–SO42−–NO3−–Cl−–H2O aerosol system. A comprehensive evaluation of its performance is conducted against water uptake measurements for laboratory aerosol and predictions of the SCAPE2 thermodynamic module over a wide range of atmospherically relevant conditions. The two models agree well, to within 13% for aerosol water content and total PM mass, 16% for aerosol nitrate and 6% for aerosol chloride and ammonium. Largest discrepancies were found under conditions of low RH, primarily from differences in the treatment of water uptake and solid state composition. In terms of computational speed, ISORROPIA II was more than an order of magnitude faster than SCAPE2, with robust and rapid convergence under all conditions. The addition of crustal species does not slow down the thermodynamic calculations (compared to the older ISORROPIA code) because of optimizations in the activity coefficient calculation algorithm. Based on its computational rigor and performance, ISORROPIA II appears to be a highly attractive alternative for use in large scale air quality and atmospheric transport models.
Equilibrium relationships for non-equilibrium chemical dependencies
Yablonsky, Gregory S.; Constales, Denis; Marin, Guy B.
2010-01-01
In contrast to common opinion, it is shown that equilibrium constants determine the time-dependent behavior of particular ratios of concentrations for any system of reversible first-order reactions. Indeed, some special ratios actually coincide with the equilibrium constant at any moment in time. This is established for batch reactors, and similar relations hold for steady-state plug-flow reactors, replacing astronomic time by residence time. Such relationships can be termed time invariants o...
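One known relation of this kind, for the simplest reversible reaction A ⇌ B, is that the ratio of B produced starting from pure A to A produced starting from pure B (two "dual" batch experiments) equals the equilibrium constant at every moment in time. A numerical sketch with illustrative rate constants:

```python
import math

# Reversible first-order reaction A <=> B with illustrative rate constants.
kf, kr = 2.0, 0.5          # forward and backward rate constants (assumed)
K_eq = kf / kr             # equilibrium constant = 4.0

def concentrations(a0, b0, t):
    """Closed-form batch-reactor solution for A <=> B."""
    s = kf + kr
    a_eq = kr * (a0 + b0) / s
    a = a_eq + (a0 - a_eq) * math.exp(-s * t)
    return a, (a0 + b0) - a

# Dual experiments: one starting from pure A, one from pure B.
ratios = []
for t in (0.1, 0.5, 1.0, 3.0):
    _, b_from_a = concentrations(1.0, 0.0, t)   # B produced starting from pure A
    a_from_b, _ = concentrations(0.0, 1.0, t)   # A produced starting from pure B
    ratios.append(b_from_a / a_from_b)

# The ratio equals K_eq at every sampled time, far from equilibrium included.
```

Both transients share the same exponential factor (1 − e^(−(kf+kr)t)), so it cancels in the ratio, leaving kf/kr exactly, which is the sense in which an equilibrium constant governs the non-equilibrium time dependence.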
The Dynamical Equilibrium of Galaxy Clusters
Carlberg, R. G.; Yee, H. K. C.; Ellingson, E.; Morris, S. L.; Abraham, R.; Gravel, P.; Pritchet, C. J.; Smecker-Hane, T.; Hartwick, F. D. A.; Hesser, J. E.; Hutchings, J. B.; Oke, J. B.
1997-02-01
If a galaxy cluster is effectively in dynamical equilibrium, then all galaxy populations within the cluster must have distributions in velocity and position that individually reflect the same underlying mass distribution, although the derived virial masses can be quite different. Specifically, within the Canadian Network for Observational Cosmology cluster sample, the virial radius of the red galaxy population is, on the average, a factor of 2.05 +/- 0.34 smaller than that of the blue population. The red galaxies also have a smaller rms velocity dispersion, a factor of 1.31 +/- 0.13 within our sample. Consequently, the virial mass calculated from the blue galaxies is 3.5 +/- 1.3 times larger than from the red galaxies. However, applying the Jeans equation of stellar hydrodynamic equilibrium to the red and blue subsamples separately gives statistically identical cluster mass profiles. This is strong evidence that these clusters are effectively equilibrium systems and therefore demonstrates empirically that the masses in the virialized region are reliably estimated using dynamical techniques.
Hao, Ming; Wang, Yanli; Bryant, Stephen H
2014-01-02
Imbalanced datasets are commonly generated by high-throughput screening (HTS). For a given dataset, without taking into account the imbalanced nature, most classification methods tend to produce high predictive accuracy for the majority class, but significantly poor performance for the minority class. In this work, an efficient algorithm, GLMBoost, coupled with the Synthetic Minority Over-sampling TEchnique (SMOTE) is developed and utilized to overcome the problem for several imbalanced datasets from PubChem BioAssay. By applying the proposed combinatorial method, those data of rare samples (active compounds), for which usually poor results are generated, can be detected apparently with high balanced accuracy (Gmean). As a comparison with GLMBoost, Random Forest (RF) combined with SMOTE is also adopted to classify the same datasets. Our results show that the former (GLMBoost+SMOTE) not only exhibits higher performance as measured by the percentage of correct classification for the rare samples (Sensitivity) and Gmean, but also demonstrates greater computational efficiency than the latter (RF+SMOTE). Therefore, we hope that the proposed combinatorial algorithm based on GLMBoost and SMOTE could be extensively used to tackle the imbalanced classification problem. Published by Elsevier B.V.
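The SMOTE step itself is easy to sketch: each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority-class neighbours. A minimal from-scratch illustration on toy 2-D descriptors (not the authors' GLMBoost pipeline; real use would go through a library such as imbalanced-learn):

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: synthesize points by interpolating each
    chosen minority sample toward one of its k nearest minority neighbours."""
    rng = random.Random(seed)

    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class (excluding x)
        neigh = sorted((p for p in minority if p is not x),
                       key=lambda p: dist2(x, p))[:k]
        nn = rng.choice(neigh)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic

# Ten "active compounds" in a toy 2-D descriptor space (illustrative data).
minority = [(float(i), float(i % 3)) for i in range(10)]
new_points = smote(minority, n_new=20)
```

Because synthetic points lie on segments between existing minority points, they enlarge the minority class without simply duplicating samples, which is what lets boosting or random-forest classifiers see a balanced training set.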
Aliakbarpour, H; Rawi, Che Salmah Md
2010-06-01
Thrips cause considerable economic loss to mango, Mangifera indica L., in Penang, Malaysia. Three nondestructive sampling techniques--shaking mango panicles over a moist plastic tray, washing the panicles with ethanol, and immobilization of thrips by using CO2--were evaluated for their precision to determine the most effective technique to capture mango flower thrips (Thysanoptera: Thripidae) in an orchard located at Balik Pulau, Penang, Malaysia, during two flowering seasons from December 2008 to February 2009 and from August to September 2009. The efficiency of each of the three sampling techniques was compared with absolute population counts on whole panicles as a reference. Diurnal flight activity of thrips species was assessed using yellow sticky traps. All three sampling methods and sticky traps were used at two hourly intervals from 0800 to 1800 hours to get insight into diurnal periodicity of thrips abundance in the orchard. Based on pooled data for the two seasons, the CO2 method was the most efficient procedure, extracting 80.7% of adults and 74.5% of larvae. The CO2 method had the lowest relative variation and was the most accurate procedure compared with the absolute method, as shown by regression analysis. All collection techniques showed that the numbers of all thrips species in mango panicles increased after 0800 hours, reaching a peak between 1200 and 1400 hours. Adult thrips captured on the sticky traps were the most abundant between 0800-1000 and 1400-1600 hours. According to the results of this study, the CO2 method is recommended for sampling of thrips in the field. It is a nondestructive sampling procedure that neither damages flowers nor diminishes fruit production. Management of thrips populations in mango orchards with insecticides would be more effectively carried out during their peak population abundance on the flower panicles at midday to 1400 hours.
[The Carnot efficiency and plant photosystems].
Jennings, R C; Santabarbara, S; Belgio, E; Zucchelli, G
2014-01-01
The concept that the Carnot efficiency places an upper limit of 0.60-0.75 on the thermodynamic efficiency of photosynthetic primary photochemistry is examined using the PSI-LHCI preparation. The maximal quantum efficiency was determined to be approximately 0.99, which yielded a thermodynamic efficiency of 0.96, a value far above that predicted on the basis of the Carnot efficiency. The commonly presented reasoning leading to the Carnot efficiency idea was therefore critically examined. It is concluded that the crucial assumption that the pigment system, under illumination, is in equilibrium with the incident light field, at a black body temperature of Tr, is erroneous, as the temperature of the excited state pigments was experimentally shown to be that of the sample solvent (thermal bath), 280 K in this case. It is concluded that the classical reasoning used to describe the thermodynamics of heat systems is not applicable to "photonic" systems such as plant photosystems.
On Generalized Vector Equilibrium Problems
An-hua Wan; Jun-yi Fu; Wei-hua Mao
2006-01-01
A new generalized vector equilibrium problem involving set-valued mappings and the proper quasi-concavity of set-valued mappings in topological vector spaces are introduced; its existence theorems and the convexity of the solution sets are established.
Equilibrium and Orientation in Cephalopods.
Budelmann, Bernd-Ulrich
1980-01-01
Describes the structure of the equilibrium receptor system in cephalopods, comparing it to the vertebrate counterpart--the vestibular system. Relates the evolution of this complex system to the competition of cephalopods with fishes. (CS)
Equilibrium Electro-osmotic Instability
Rubinstein, Isaak
2014-01-01
Since its prediction fifteen years ago, electro-osmotic instability has been attributed to non-equilibrium electro-osmosis related to the extended space charge which develops at the limiting current in the course of concentration polarization at a charge-selective interface. This attribution had a double basis. Firstly, it has been recognized that equilibrium electro-osmosis cannot yield instability for a perfectly charge-selective solid. Secondly, it has been shown that non-equilibrium electro-osmosis can. First theoretical studies in which electro-osmotic instability was predicted and analyzed employed the assumption of perfect charge-selectivity for the sake of simplicity and so did the subsequent numerical studies of various time-dependent and nonlinear features of electro-osmotic instability. In this letter, we show that relaxing the assumption of perfect charge-selectivity (tantamount to fixing the electrochemical potential in the solid) allows for equilibrium electro-osmotic instability. Moreover, we s...
Gabriel J. Turbay
2011-03-01
The strategic equilibrium of an N-person cooperative game with transferable utility is a system composed of a cover collection of subsets of N and a set of extended imputations attainable through such an equilibrium cover. The system describes a state of coalitional bargaining stability where every player has a bargaining alternative against any other player to support his corresponding equilibrium claim. Any coalition in the stable system may form and divide the characteristic value function of the coalition as prescribed by the equilibrium payoffs. If syndicates are allowed to form, a formed coalition may become a syndicate, using the equilibrium payoffs as disagreement values in bargaining for a part of the complementary coalition's incremental value to the grand coalition when formed. The emergent well-known constant-sum derived game in partition function form is described in terms of parameters that result from incumbent binding agreements. The strategic equilibrium corresponding to the derived game gives an equal value claim to all players. This surprising result is alternatively explained in terms of strategic-equilibrium-based possible outcomes by a sequence of bargaining stages: when the binding agreements are in the right sequential order, von Neumann and Morgenstern (vN-M) non-discriminatory solutions emerge. In these solutions a preferred branch by a sufficient number of players is identified: the weaker players syndicate against the stronger player. This condition is referred to as the stronger player paradox. A strategic alternative available to the stronger player to overcome the anticipated undesirable results is to voluntarily lower his bargaining equilibrium claim. In doing so, the original strategic equilibrium is modified and vN-M discriminatory solutions may occur, but also a different stronger player may emerge who will eventually have to lower his equilibrium claim. A sequence of such measures converges to the equal
The canonical equilibrium of constrained molecular models
Echenique, Pablo; García-Risueño, Pablo
2011-01-01
In order to increase the efficiency of the computer simulation of biological molecules, it is very common to impose holonomic constraints on the fastest degrees of freedom; normally bond lengths, but also possibly bond angles. However, as any other element that affects the physical model, the imposition of constraints must be assessed from the point of view of accuracy: both the dynamics and the equilibrium statistical mechanics are model-dependent, and they will be changed if constraints are used. In this review, we investigate the accuracy of constrained models at the level of the equilibrium statistical mechanics distributions produced by the different dynamics. We carefully derive the canonical equilibrium distributions of both the constrained and unconstrained dynamics, comparing the two of them by means of a "stiff" approximation to the latter. We do so both in the case of flexible and hard constraints, i.e., when the value of the constrained coordinates depends on the conformation and when it is a cons...
Li, Changqiao; The ATLAS collaboration
2017-01-01
The $b$-tagging efficiency of the MV2c10 discriminant for track-jets and calorimeter-jets containing $b$-hadrons is measured using 36.5~fb$^{-1}$ of $pp$ collisions collected in 2015 and 2016 by ATLAS at $\\sqrt{s}$=13~TeV. The measurements are performed using a tag-and-probe method to select a control sample of jets enriched in $b$-jets, by keeping events with a final state consistent with the process $pp\\to t\\bar{t}\\to W^+bW^-\\bar{b} \\to e^\\pm \\mu^\\mp \
2016-01-01
NOTICE: This is the peer reviewed version of the following article: Elena Pazos, Manuel García-Algar, Cristina Penas, Moritz Nazarenus, Arnau Torruella, Nicolas Pazos-Perez, Luca Guerrini, M. Eugenio Vázquez, Eduardo Garcia-Rio*, José L. Macareñas* and Ramon A. Alvarez-Puebla* (2016), SERS Surface Selection Rules for the Proteomic Liquid Biopsy in Real Samples: Efficient Detection of the Oncoprotein c-MYC. J. Am. Chem. Soc., 138, 14206-14209 [DOI:10.1021/jacs.6b08957]. This article may be use...
Serra, Antonio; Monteduro, Anna Grazia; Padmanabhan, Sanosh Kunjalukkal; Licciulli, Antonio; Bonfrate, Valentina; Salvatore, Luca; Calcagnile, Lucio
2017-01-01
Mixed iron-manganese oxide nanoparticles, synthesized by a simple procedure, were used to remove nickel ion from aqueous solutions. Nanostructures, prepared by using different weight percents of manganese, were characterized by transmission electron microscopy, selected area diffraction, X-ray diffraction, Raman spectroscopy, and vibrating sample magnetometry. Adsorption/desorption isotherm curves demonstrated that manganese inclusions enhance the specific surface area three times and the pores volume ten times. This feature was crucial to decontaminate both aqueous samples and food extracts from nickel ion. Efficient removal of Ni2+ was highlighted by the well-known dimethylglyoxime test and by ICP-MS analysis and the possibility of regenerating the nanostructure was obtained by a washing treatment in disodium ethylenediaminetetraacetate solution. PMID:28804670
Li, Jun; He, Hao; Bi, Meihua; Hu, Weisheng
2014-05-01
We propose a physical-layer energy-efficient receiving method based on selective sampling in an orthogonal frequency division multiplexing access passive optical network (OFDMA-PON). By using the specially designed frame head, the receiver within an optical network unit (ONU) can identify the destination of the incoming frame. The receiver only samples at the time when the destination is in agreement with the ONU, while it stays in standby during the rest of the time. We clarify its feasibility through an experiment and analyze the downstream traffic delay by simulation. The results indicate that under limited delay conditions, ~60% energy can be saved compared with the traditional receiving method in the OFDMA-PON system with 512 ONUs.
An Equilibrium Analysis of Knaster’s Fair Division Procedure
Matt Van Essen
2013-01-01
In an incomplete information setting, we analyze the sealed bid auction proposed by Knaster (cf. Steinhaus (1948)). This procedure was designed to efficiently and fairly allocate multiple indivisible items when participants report their valuations truthfully. In equilibrium, players do not follow truthful bidding strategies. We find that, ex-post, the equilibrium allocation is still efficient but may not be fair. However, on average, participants receive the same outcome they would have received if everyone had reported truthfully; i.e., the mechanism is ex-ante fair.
Mauri-Aucejo, Adela; Amorós, Pedro; Moragues, Alaina; Guillem, Carmen; Belenguer-Sapiña, Carolina
2016-08-15
Solid-phase extraction is one of the most important techniques for sample purification and concentration. A wide variety of solid phases have been used for sample preparation over time. In this work, the efficiency of a new kind of solid-phase extraction adsorbent, which is a microporous material made from modified cyclodextrin bound to a silica network, is evaluated through an analytical method which combines solid-phase extraction with high-performance liquid chromatography to determine polycyclic aromatic hydrocarbons in water samples. Several parameters that affected the analytes' recovery, such as the amount of solid phase, the nature and volume of the eluent, or the influence of sample volume and concentration, have been evaluated. The experimental results indicate that the material possesses adsorption ability for the tested polycyclic aromatic hydrocarbons. Under the optimum conditions, the quantification limits of the method were in the range of 0.09-2.4 μg L(-1) and fine linear correlations between peak height and concentration were found around 1.3-70 μg L(-1). The method has good repeatability and reproducibility, with coefficients of variation under 8%. Due to the concentration results, this material may represent an alternative for trace analysis of polycyclic aromatic hydrocarbons in water through solid-phase extraction.
Magiera, Sylwia; Kwietniowska, Ewelina
2016-11-15
In this study, an easy, simple and efficient method for the determination of naringenin enantiomers in fruit juices after salting-out-assisted liquid-liquid extraction (SALLE) and high-performance liquid chromatography (HPLC) with diode-array detection (DAD) was developed. The sample treatment is based on the use of water-miscible acetonitrile as the extractant and acetonitrile phase separation under high-salt conditions. After extraction, juice samples were incubated with hydrochloric acid in order to achieve hydrolysis of naringin to naringenin. The hydrolysis parameters were optimized by using a half-fraction factorial central composite design (CCD). After sample preparation, chromatographic separation was obtained on a Chiralcel® OJ-RH column using the mobile phase consisting of 10mM aqueous ammonium acetate:methanol:acetonitrile (50:30:20; v/v/v) with detection at 288nm. The average recovery of the analyzed compounds ranged from 85.6 to 97.1%. The proposed method was satisfactorily used for the determination of naringenin enantiomers in various fruit juices samples.
Chen, Wenfeng; Zeng, Jingbin; Chen, Jinmei; Huang, Xiaoli; Jiang, Yaqi; Wang, Yiru; Chen, Xi
2009-12-25
A novel solid-phase microextraction (SPME) fiber coated with multiwalled carbon nanotubes (MWCNTs)/Nafion was developed and applied for the extraction of polar aromatic compounds (PACs) in natural water samples. The characteristics and the application of this fiber were investigated. Electron microscope photographs indicated that the MWCNTs/Nafion coating with average thickness of 12.5 μm was homogeneous and porous. The MWCNTs/Nafion coated fiber exhibited higher extraction efficiency towards polar aromatic compounds compared to an 85 μm commercial PA fiber. SPME experimental conditions, such as fiber coating, extraction time, stirring rate, desorption temperature and desorption time, were optimized in order to improve the extraction efficiency. The calibration curves were linear from 0.01 to 10 μg mL(-1) for five PACs studied except p-nitroaniline (from 0.005 to 10 μg mL(-1)) and m-cresol (from 0.001 to 10 μg mL(-1)), and detection limits were within the range of 0.03-0.57 ng mL(-1). Single fiber and fiber-to-fiber reproducibility were less than 7.5 (n=7) and 10.0% (n=5), respectively. The recovery of the PACs spiked in natural water samples at 1 μg mL(-1) ranged from 83.3 to 106.0%.
Nash equilibrium and multi criterion aerodynamic optimization
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory and its particular Nash Equilibrium (NE) are gaining importance in solving Multi Criterion Optimization (MCO) in engineering problems over the past decade. The solution of a MCO problem can be viewed as a NE under the concept of competitive games. This paper surveyed/proposed four efficient algorithms for calculating a NE of a MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on fixed point theorem. Specific virtual symmetric Nash game is also presented to set up an optimization strategy for single objective optimization problems. Two numerical examples are presented to verify proposed algorithms. One is mathematical functions' optimization to illustrate detailed numerical procedures of algorithms, the other is aerodynamic drag reduction of civil transport wing fuselage configuration by using virtual game. The successful application validates efficiency of algorithms in solving complex aerodynamic optimization problem.
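The fixed-point view of a Nash equilibrium can be made concrete with best-response iteration on a game whose best responses are known in closed form. A minimal sketch on a symmetric Cournot duopoly (illustrative parameter values; this is a generic example, not one of the paper's four algorithms):

```python
def nash_by_best_response(a=10.0, c=1.0, n_iter=100):
    """Gauss-Seidel best-response iteration for a Nash equilibrium of a
    symmetric Cournot duopoly with inverse demand p = a - (q1 + q2) and
    marginal cost c. Player i's best response is q_i = (a - c - q_j) / 2."""
    q1 = q2 = 0.0
    for _ in range(n_iter):
        q1 = (a - c - q2) / 2.0  # player 1 best-responds to q2
        q2 = (a - c - q1) / 2.0  # player 2 best-responds to the new q1
    return q1, q2

q1, q2 = nash_by_best_response()
# The analytic Nash equilibrium for these values is q_i = (a - c) / 3 = 3.0.
```

At the fixed point neither player can improve by deviating, which is exactly the equivalence between a Nash equilibrium and a fixed point of the combined best-response map that the paper's existence arguments rest on; for this game the iteration is a contraction and converges from any starting point.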
Boulyga, Sergei F. [Institute of Inorganic Chemistry and Analytical Chemistry, Johannes Gutenberg-University Mainz, Duesbergweg 10-14, 55099 Mainz (Germany)]. E-mail: sergei.boulyga@univie.ac.at; Heumann, Klaus G. [Institute of Inorganic Chemistry and Analytical Chemistry, Johannes Gutenberg-University Mainz, Duesbergweg 10-14, 55099 Mainz (Germany)
2006-07-01
A method based on inductively coupled plasma mass spectrometry (ICP-MS) was developed which allows the measurement of ²³⁶U at concentrations down to 3 × 10⁻¹⁴ g g⁻¹ and of extremely low ²³⁶U/²³⁸U isotope ratios, down to 10⁻⁷, in soil samples. By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5000 counts fg⁻¹ uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions, UH⁺/U⁺, down to a level of 10⁻⁶. An abundance sensitivity of 3 × 10⁻⁷ was observed for ²³⁶U/²³⁸U isotope ratio measurements at mass resolution 4000. The detection limit for ²³⁶U and the lowest detectable ²³⁶U/²³⁸U isotope ratio were improved by more than two orders of magnitude compared with the corresponding values for alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the ²³⁶U/²³⁸U isotope ratio is a much more sensitive and accurate marker for environmental contamination by spent uranium than the ²³⁵U/²³⁸U isotope ratio. The ICP-MS technique allowed, for the first time, detection of irradiated uranium in soil samples even at distances of more than 200 km to the north of the Chernobyl NPP (Mogilev region). The concentration of ²³⁶U in the upper 0-10 cm soil layers varied from 2 × 10⁻⁹ g g⁻¹ within radioactive spots close to the Chernobyl NPP to 3 × 10⁻¹³ g g⁻¹ at a sampling site located more than 200 km from Chernobyl.
Physicochemical Perturbations of Phase Equilibriums
Dobruskin, Vladimir Kh
2010-01-01
An alternative approach to the displacement of gas/liquid equilibrium is developed on the basis of the Clapeyron equation. The phase transition in a system with well-established properties is taken as a reference process to search for the parameters of the phase transition in the perturbed equilibrium system. The main equation, derived in the framework of both classical thermodynamics and statistical mechanics, establishes a correlation between the perturbation-induced variations of the enthalpy of evaporation, Δ(ΔH), and the equilibrium vapor pressures. The dissolution of a solute, a change in the surface shape, and the effect of the external field of adsorbents are considered as perturbing actions on the liquid phase. The model provides a unified method for studying (1) solutions, (2) membrane separations, (3) surface phenomena, and (4) the effect of the adsorption field; it leads to useful relations between Δ(ΔH), on the one hand, and the osmotic pressures, the Donnan poten...
General equilibrium of an ecosystem.
Tschirhart, J
2000-03-07
Ecosystems and economies are inextricably linked: ecosystem models and economic models are not linked. Consequently, using either type of model to design policies for preserving ecosystems or improving economic performance omits important information. Improved policies would follow from a model that links the systems and accounts for the mutual feedbacks by recognizing how key ecosystem variables influence key economic variables, and vice versa. Because general equilibrium economic models already are widely used for policy making, the approach used here is to develop a general equilibrium ecosystem model which captures salient biological functions and which can be integrated with extant economic models. In the ecosystem model, each organism is assumed to be a net energy maximizer that must exert energy to capture biomass from other organisms. The exerted energies are the "prices" that are paid to biomass, and each organism takes the prices as signals over which it has no control. The maximization problem yields the organism's demand for and supply of biomass to other organisms as functions of the prices. The demands and supplies for each biomass are aggregated over all organisms in each species which establishes biomass markets wherein biomass prices are determined. A short-run equilibrium is established when all organisms are maximizing and demand equals supply in every biomass market. If a species exhibits positive (negative) net energy in equilibrium, its population increases (decreases) and a new equilibrium follows. The demand and supply forces in the biomass markets drive each species toward zero stored energy and a long-run equilibrium. Population adjustments are not based on typical Lotka-Volterra differential equations in which one entire population adjusts to another entire population thereby masking organism behavior; instead, individual organism behavior is central to population adjustments. Numerical simulations use a marine food web in Alaska to
Incentives in Supply Function Equilibrium
Vetter, Henrik
2014-01-01
The author analyses delegation in homogeneous duopoly under the assumption that the firm-managers compete in supply functions. In supply function equilibrium, managers' decisions are strategic complements. This reverses earlier findings in that the author finds that owners give managers incentives to act in an accommodating way. As a result, optimal delegation reduces per-firm output and increases profits to above-Cournot profits. Moreover, in supply function equilibrium the mode of competition is endogenous. This means that the author avoids results that are sensitive with respect to assuming...
Equilibrium in a Production Economy
Chiarolla, Maria B., E-mail: maria.chiarolla@uniroma1.it [Universita di Roma 'La Sapienza', Dipartimento di Metodi e Modelli per l'Economia, il Territorio e la Finanza, Facolta di Economia (Italy)]; Haussmann, Ulrich G., E-mail: uhaus@math.ubc.ca [University of British Columbia, Department of Mathematics (Canada)]
2011-06-15
Consider a closed production-consumption economy with multiple agents and multiple resources. The resources are used to produce the consumption good. The agents derive utility from holding resources as well as consuming the good produced. They aim to maximize their utility while the manager of the production facility aims to maximize profits. With the aid of a representative agent (who has a multivariable utility function) it is shown that an Arrow-Debreu equilibrium exists. In so doing we establish technical results that will be used to solve the stochastic dynamic problem (a case with infinite dimensional commodity space so the General Equilibrium Theory does not apply) elsewhere.
Yousef Erfanifard; Joachim Saborowski; Kerstin Wiegand; Katrin M Meyer
2016-01-01
The efficiency of sample-based indices proposed to quantify the spatial distribution of trees is influenced by the structure of tree stands, environmental heterogeneity and the degree of aggregation. We evaluated 10 commonly used distance-based and 10 density-based indices using two structurally different stands of wild pistachio trees in the Zagros woodlands, Iran, to assess the reliability of each in revealing stand structure in woodlands. All trees were completely stem-mapped in a nearly pure (40 ha) and a mixed (45 ha) stand. First, the inhomogeneous pair correlation function g(r) and the Clark-Evans index (CEI) were used as references to reveal the true spatial arrangement of all trees in these stands. The sampled data were then evaluated using the 20 indices. Sampling was undertaken in a grid based on a square lattice, using square plots (30 m × 30 m) and nearest-neighbor distances at the sample points. The g(r) and CEI statistics showed that the wild pistachio trees were aggregated in both stands, although the degree of aggregation was markedly higher in the pure stand. Three distance- and six density-based indices statistically verified that the wild pistachio trees were aggregated in both stands. The distance-based Hines and Hines statistic (ht) and the density-based standardised Morisita (Ip), patchiness (IP) and Cassie (CA) indices revealed the aggregation of the trees in the two structurally different stands in the Zagros woodlands and the higher clumping in the pure stand, whereas the other indices were not sensitive enough.
A metastable equilibrium model for the relative abundances of microbial phyla in a hot spring.
Jeffrey M Dick
Many studies link the compositions of microbial communities to their environments, but the energetics of organism-specific biomass synthesis as a function of geochemical variables have rarely been assessed. We describe a thermodynamic model that integrates geochemical and metagenomic data for biofilms sampled at five sites along a thermal and chemical gradient in the outflow channel of the hot spring known as "Bison Pool" in Yellowstone National Park. The relative abundances of major phyla in individual communities sampled along the outflow channel are modeled by computing metastable equilibrium among model proteins with amino acid compositions derived from metagenomic sequences. Geochemical conditions are represented by temperature and activities of basis species, including pH and oxidation-reduction potential quantified as the activity of dissolved hydrogen. By adjusting the activity of hydrogen, the model can be tuned to closely approximate the relative abundances of the phyla observed in the community profiles generated from BLAST assignments. The findings reveal an inverse relationship between the energy demand to form the proteins at equal thermodynamic activities and the abundance of phyla in the community. The distance from metastable equilibrium of the communities, assessed using an equation derived from energetic considerations that is also consistent with the information-theoretic entropy change, decreases along the outflow channel. Specific divergences from metastable equilibrium, such as an underprediction of the relative abundances of phototrophic organisms at lower temperatures, can be explained by considering additional sources of energy and/or differences in growth efficiency. Although the metabolisms used by many members of these communities are driven by chemical disequilibria, the results support the possibility that higher-level patterns of chemotrophic microbial ecosystems are shaped by metastable equilibrium states that
Learning efficient correlated equilibria
Borowski, Holly P.
2014-12-15
The majority of distributed learning literature focuses on convergence to Nash equilibria. Correlated equilibria, on the other hand, can often characterize more efficient collective behavior than even the best Nash equilibrium. However, there are no existing distributed learning algorithms that converge to specific correlated equilibria. In this paper, we provide one such algorithm which guarantees that the agents' collective joint strategy will constitute an efficient correlated equilibrium with high probability. The key to attaining efficient correlated behavior through distributed learning involves incorporating a common random signal into the learning environment.
Miyashita, Shin-Ichi; Mitsuhashi, Hiroaki; Fujii, Shin-Ichiro; Takatsu, Akiko; Inagaki, Kazumi; Fujimoto, Toshiyuki
2017-02-01
In order to facilitate reliable and efficient determination of both the particle number concentration (PNC) and the size of nanoparticles (NPs) by single-particle ICP-MS (spICP-MS) without the need to correct for the particle transport efficiency (TE, a possible source of bias in the results), a total-consumption sample introduction system consisting of a large-bore, high-performance concentric nebulizer and a small-volume on-axis cylinder chamber was utilized. Such a system potentially permits a particle TE of 100 %, meaning that there is no need to include a particle TE correction when calculating the PNC and the NP size. When the particle TE through the sample introduction system was evaluated by comparing the frequency of sharp transient signals from the NPs in a measured NP standard of precisely known PNC to the particle frequency for a measured NP suspension, the TE for platinum NPs with a nominal diameter of 70 nm was found to be very high (i.e., 93 %), and showed satisfactory repeatability (relative standard deviation of 1.0 % for four consecutive measurements). These results indicated that employing this total consumption system allows the particle TE correction to be ignored when calculating the PNC. When the particle size was determined using a solution-standard-based calibration approach without an NP standard, the particle diameters of platinum and silver NPs with nominal diameters of 30-100 nm were found to agree well with the particle diameters determined by transmission electron microscopy, regardless of whether a correction was performed for the particle TE. Thus, applying the proposed system enables NP size to be accurately evaluated using a solution-standard-based calibration approach without the need to correct for the particle TE.
Financial Intermediation, Competition, and Risk : A General Equilibrium Exposition
Di Nicolo, G.; Lucchetta, M.
2010-01-01
We study a simple general equilibrium model in which investment in a risky technology is subject to moral hazard and banks can extract market power rents. We show that more bank competition results in lower economy-wide risk, lower bank capital ratios, more efficient production plans and Pareto-rank
Quantifying mixing using equilibrium reactions
Wheat, Philip M.; Posner, Jonathan D.
2009-03-01
A method of quantifying equilibrium reactions in a microchannel using a fluorometric reaction of Fluo-4 and Ca2+ ions is presented. Under the proper conditions, equilibrium reactions can be used to quantify fluid mixing without the challenges associated with constituent mixing measures such as limited imaging spatial resolution and viewing angle coupled with three-dimensional structure. Quantitative measurements of the mixing of CaCl2 and the calcium-indicating fluorescent dye Fluo-4 are made in Y-shaped microchannels. Reactant and product concentration distributions are modeled using Green's function solutions and a numerical solution to the advection-diffusion equation. Equilibrium reactions provide an unambiguous, quantitative measure of mixing when the reactant concentrations are greater than 100 times their dissociation constant and the diffusivities are equal. At lower concentrations and for dissimilar diffusivities, the area-averaged fluorescence signal reaches a maximum before the species have interdiffused, suggesting that reactant concentrations and diffusivities must be carefully selected to provide unambiguous, quantitative mixing measures. Fluorometric equilibrium reactions work over a wide range of pH and background concentrations, such that they can be used for a wide variety of fluid mixing measures, including industrial and microscale flows.
Understanding Thermal Equilibrium through Activities
Pathare, Shirish; Huli, Saurabhee; Nachane, Madhura; Ladage, Savita; Pradhan, Hemachandra
2015-01-01
Thermal equilibrium is a basic concept in thermodynamics. In India, this concept is generally introduced at the first year of undergraduate education in physics and chemistry. In our earlier studies (Pathare and Pradhan 2011 "Proc. episteme-4 Int. Conf. to Review Research on Science Technology and Mathematics Education" pp 169-72) we…
Financial equilibrium with career concerns
Amil Dasgupta
2006-03-01
What are the equilibrium features of a financial market where a sizeable proportion of traders face reputational concerns? This question is central to our understanding of financial markets, which are increasingly dominated by institutional investors. We construct a model of delegated portfolio management that captures key features of the US mutual fund industry and embed it in an asset pricing framework. We thus provide a formal model of financial equilibrium with career-concerned agents. Fund managers differ in their ability to understand market fundamentals, and in every period investors choose a fund. In equilibrium, the presence of career concerns induces uninformed fund managers to churn, i.e., to engage in trading even when they face a negative expected return. Churners act as noise traders and enhance the level of trading volume. The equilibrium relationship between fund return and net fund flows displays a skewed shape that is consistent with stylized facts. The robustness of our core results is probed from several angles.
Equilibrium theory : A salient approach
Schalk, S.
1999-01-01
Whereas the neoclassical models in General Equilibrium Theory focus on the existence of separate commodities, this thesis regards 'bundles of trade' as the unit objects of exchange. Apart from commodities and commodity bundles in the neoclassical sense, the term 'bundle of trade' includes, for
Concurrent fractional and equilibrium crystallisation
Sha, Lian-Kun
2012-06-01
This paper proposes the concept of concurrent fractional and equilibrium crystallisation (CFEC) in a multi-phase magmatic system in light of experimental results on diffusivities of elements and other species in minerals and melts. A group of equations is presented to describe how the concentrations of an element or isotope change in fractionated solid, equilibrated solid, melt, liquid, and gas phases, as well as in magma, as a function of distribution coefficients and mass fractions during the CFEC process. The CFEC model is a generalised and unified formulation that is valid, not only for pure fractional crystallisation (FC) and perfect equilibrium crystallisation (EC) singly, as two of its limiting end-member cases, but also for the geologically more important process of concurrent fractional and equilibrium crystallisation. The concept that both fractional and equilibrium crystallisation can operate concurrently in a magmatic system, for a given element, among different minerals, and even within different-sized crystal grains of the very same mineral phase, is of fundamental importance in deepening our current understanding of magmatic differentiation processes. CFEC probably occurs more frequently in the natural world than either pure fractional or perfect equilibrium crystallisation alone, as a result of the interplay of varying diffusivities of elements under diverse physicochemical conditions, different residence times and growth rates of mineral phases in magmas, and varying grain sizes within each phase and among different phases. The marked systematic variations in trace element concentrations in the melts of the Bishop Tuff have long been perplexing and difficult to reconcile with existing models of differentiation. CFEC, which explains these scattered trends more systematically than fractional crystallisation alone, is considered to be the cause.
De Grazia, Selenia; Gionfriddo, Emanuela; Pawliszyn, Janusz
2017-05-15
The current work presents the optimization of a protocol enabling direct extraction of avocado samples by a new matrix-compatible solid-phase microextraction (SPME) coating. In order to further extend the coating lifetime, pre-desorption and post-desorption washing steps were optimized for solvent type, time, and degree of agitation. Using optimized conditions, lifetime profiles of the coating were obtained for the extraction of a group of analytes bearing different physicochemical properties. Over 80 successive extractions were carried out to compare the efficiency of a 65 µm commercial PDMS/DVB coating with that of the PDMS/DVB/PDMS coating. The PDMS/DVB coating was more prone to irreversible matrix attachment on its surface, with a consequent reduction of its extractive performance after 80 consecutive extractions. Conversely, the PDMS/DVB/PDMS coating showed enhanced inertness towards matrix fouling due to its smooth outer PDMS layer. This work represents a first step towards the development of robust SPME methods for quantification of contaminants in avocado as well as other fatty matrices, with minimal sample pre-treatment prior to extraction. In addition, an evaluation of the attachment of matrix components to the coating surface, and of related artifacts created by desorption of the coating at high temperatures in the GC injector port, was performed by GC×GC-ToF/MS.
Zhao, Man; Chen, Xiaojing; Zhang, Hongtao; Yan, Husheng; Zhang, Huiqi
2014-05-12
A facile and highly efficient new approach (namely, RAFT coupling chemistry) to obtain well-defined hydrophilic molecularly imprinted polymer (MIP) microspheres with excellent specific recognition ability toward small organic analytes in real, undiluted biological samples is described. It involves first the synthesis of "living" MIP microspheres with surface-bound vinyl and dithioester groups via RAFT precipitation polymerization (RAFTPP), and then the grafting of hydrophilic polymer brushes by the simple coupling reaction of hydrophilic macro-RAFT agents (i.e., hydrophilic polymers with a dithioester end group) with vinyl groups on the "living" MIP particles in the presence of a free radical initiator. The successful grafting of hydrophilic polymer brushes onto the obtained MIP particles was confirmed by SEM, FT-IR, static contact angle and water dispersion studies, elemental analyses, and template binding experiments. Well-defined MIP particles with densely grafted hydrophilic polymer brushes (∼1.8 chains/nm²) of desired chemical structures and molecular weights were readily obtained, which showed significantly improved surface hydrophilicity and could thus function properly in real biological media. The origin of the high grafting densities of the polymer brushes was clarified and the general applicability of the strategy was demonstrated. In particular, the well-defined characteristics of the resulting hydrophilic MIP particles allowed the first systematic study of the effects of various structural parameters of the grafted hydrophilic polymer brushes on their water-compatibility, which is of great importance for rationally designing more advanced MIPs compatible with real biological samples.
Neal, R M
2000-01-01
Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal 'slice' defined by the current vertical position, or more generally, with some update that leaves the uniform distribution over this slice invariant. Variations on such 'slice sampling' methods are easily implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and more efficient than simple Metropolis updates, due to the ability of slice sampling to adaptively choose the magnitude of changes made. It is therefore attractive f...
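The alternation described above, a uniform draw in the vertical direction followed by a uniform draw from the horizontal slice, can be sketched for a univariate log-density. The stepping-out and shrinkage procedures below follow the standard scheme associated with this method; the width parameter w and the test density are illustrative choices:

```python
import math
import random

def slice_sample(logp, x0, w=1.0, n=1000):
    """Univariate slice sampler with stepping-out and shrinkage."""
    samples, x = [], x0
    for _ in range(n):
        # vertical step: a uniform height under the density, in log space
        logy = logp(x) + math.log(random.random())
        # step out: randomly position an interval of width w, then grow it
        # by w until both ends lie outside the slice
        left = x - w * random.random()
        right = left + w
        while logp(left) > logy:
            left -= w
        while logp(right) > logy:
            right += w
        # shrink: propose uniformly on the interval; rejected points
        # shrink the interval toward the current point
        while True:
            x_new = left + random.random() * (right - left)
            if logp(x_new) > logy:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples.append(x)
    return samples
```

For a standard normal target, logp(x) = -x²/2 (up to a constant), the chain's sample mean and variance approach 0 and 1; the adaptive interval size is what removes the step-size tuning required by simple Metropolis updates.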
Automatic, optimized interface placement in forward flux sampling simulations
Kratzer, Kai; Allen, Rosalind J
2013-01-01
Forward flux sampling (FFS) provides a convenient and efficient way to simulate rare events in equilibrium or non-equilibrium systems. FFS ratchets the system from an initial state to a final state via a series of interfaces in phase space. The efficiency of FFS depends sensitively on the positions of the interfaces. We present two alternative methods for placing interfaces automatically and adaptively in their optimal locations, on-the-fly as an FFS simulation progresses, without prior knowledge or user intervention. These methods allow the FFS simulation to advance efficiently through bottlenecks in phase space by placing more interfaces where the probability of advancement is lower. The methods are demonstrated both for a single-particle test problem and for the crystallization of Yukawa particles. By removing the need for manual interface placement, our methods both facilitate the setting up of FFS simulations and improve their performance, especially for rare events which involve complex trajectories thr...
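As a toy illustration of the staged structure of FFS (not of the authors' adaptive interface-placement methods), the sketch below estimates the probability that a drifting random walk started at the first interface reaches the last interface before falling back into the initial basin at x = 0. The rare-event probability is accumulated as a product of interface-to-interface crossing probabilities; the drift, noise level, interface positions, and trial counts are all illustrative choices:

```python
import random

def run_to(x, up, down, drift=-0.1, sigma=0.5):
    """Propagate the walker until it crosses `up` (success) or `down` (failure)."""
    while down < x < up:
        x += drift + random.gauss(0.0, sigma)
    return x >= up, x

def ffs_probability(interfaces, basin=0.0, trials=500):
    """Estimate P(reach the last interface before returning to the basin)
    as a product of per-interface conditional crossing probabilities."""
    configs = [interfaces[0]] * trials     # walkers seeded at lambda_0
    p_total = 1.0
    for lam in interfaces[1:]:
        hits = []
        for x in configs:
            ok, x_final = run_to(x, lam, basin)
            if ok:
                hits.append(x_final)
        if not hits:
            return 0.0                     # no walker advanced past this stage
        p_total *= len(hits) / len(configs)
        # resample the crossing configurations to seed the next stage
        configs = [random.choice(hits) for _ in range(trials)]
    return p_total
```

Because each stage only needs the walker to advance one interface, placing interfaces where advancement is least likely (as the paper's adaptive methods do) keeps every per-stage probability, and hence the statistical efficiency, manageable.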
On generalized operator quasi-equilibrium problems
Kum, Sangho; Kim, Won Kyu
2008-09-01
In this paper, we will introduce the generalized operator equilibrium problem and generalized operator quasi-equilibrium problem which generalize the operator equilibrium problem due to Kazmi and Raouf [K.R. Kazmi, A. Raouf, A class of operator equilibrium problems, J. Math. Anal. Appl. 308 (2005) 554-564] into multi-valued and quasi-equilibrium problems. Using a Fan-Browder type fixed point theorem in [S. Park, Foundations of the KKM theory via coincidences of composites of upper semicontinuous maps, J. Korean Math. Soc. 31 (1994) 493-519] and an existence theorem of equilibrium for 1-person game in [X.-P. Ding, W.K. Kim, K.-K. Tan, Equilibria of non-compact generalized games with L*-majorized preferences, J. Math. Anal. Appl. 164 (1992) 508-517] as basic tools, we prove new existence theorems on generalized operator equilibrium problem and generalized operator quasi-equilibrium problem which includes operator equilibrium problems.
Non-equilibrium chemistry in the atmospheres of brown dwarfs
Saumon, D S; Freedman, R S; Lodders, K
2002-01-01
Carbon monoxide and ammonia have been detected in the spectrum of Gl 229B at abundances that differ substantially from those obtained from chemical equilibrium. Vertical mixing in the atmosphere is a mechanism that can drive slowly reacting species out of chemical equilibrium. We explore the effects of vertical mixing as a function of mixing efficiency and effective temperature on the chemical abundances in the atmospheres of brown dwarfs and on their spectra. The models compare favorably with the observational evidence and indicate that vertical mixing plays an important role in brown dwarf atmospheres.
Protonation Equilibrium of Linear Homopolyacids
Požar J.
2015-07-01
The paper presents a short summary of investigations dealing with the protonation equilibrium of linear homopolyacids, particularly those of high charge density. Apart from a review of experimental results which can be found in the literature, a brief description is given of the theoretical models used in processing the dependence of protonation constants on the monomer dissociation degree and ionic strength (the cylindrical model based on the Poisson-Boltzmann equation, the cylindrical Stern model, and the models according to Ising, Högfeldt, Mandel and Katchalsky). The applicability of these models with regard to the polyion charge density, electrolyte concentration and counterion type is discussed. The results of Monte Carlo simulations of protonation equilibrium are also briefly mentioned. In addition, frequently encountered errors connected with the calibration of the glass electrode, and the related unreliability of the determined protonation constants, are pointed out.
Holding Costs and Equilibrium Arbitrage
Tuckman, Bruce; Vila, Jean-Luc
1993-01-01
This paper constructs a dynamic model of the equilibrium determination of relative prices when arbitragers face holding costs. The major findings are that 1) models based on riskless arbitrage arguments alone may not provide usefully tight bounds on observed prices, 2) arbitragers are often most effective in eliminating the mispricings of shorter-term assets, 3) arbitrage activity increases the mean reversion of changes in the mispricing process and reduces their conditional volatility, and 4...
Monetary policy as equilibrium selection
Gaetano Antinolfi; Costas Azariadis; Bullard, James B.
2007-01-01
Can monetary policy guide expectations toward desirable outcomes when equilibrium and welfare are sensitive to alternative, commonly held rational beliefs? This paper studies this question in an exchange economy with endogenous debt limits in which dynamic complementarities between dated debt limits support two Pareto-ranked steady states: a suboptimal, locally stable autarkic state and a constrained optimal, locally unstable trading state. The authors identify feedback policies that reverse ...
Korshunov instantons out of equilibrium
Titov, M.; Gutman, D. B.
2016-04-01
Zero-dimensional dissipative action possesses nontrivial minima known as Korshunov instantons. They have so far been known only in the imaginary-time representation, which is limited to equilibrium systems. In this work we reconstruct and generalise Korshunov instantons using the real-time Keldysh approach. This allows us to formulate the dissipative action theory for generic nonequilibrium conditions. Possible applications of the theory to transport in strongly biased quantum dots are discussed.
An introduction to equilibrium thermodynamics
Morrill, Bernard; Hartnett, James P; Hughes, William F
1973-01-01
An Introduction to Equilibrium Thermodynamics discusses classical thermodynamics and irreversible thermodynamics. It introduces the laws of thermodynamics and the connection between statistical concepts and observable macroscopic properties of a thermodynamic system. Chapter 1 discusses the first law of thermodynamics while Chapters 2 through 4 deal with statistical concepts. The succeeding chapters describe the link between entropy and the reversible heat process; the concept of entropy; the second law of thermodynamics; Legendre transformations; and Jacobian algebra. Finally, Chapter 10 provides a
The Fisher Market Game: Equilibrium and Welfare
Branzei, Simina; Chen, Yiling; Deng, Xiaotie
2014-01-01
The Fisher market model is one of the most fundamental resource allocation models in economics. In a Fisher market, the prices and allocations of goods are determined according to the preferences and budgets of buyers to clear the market. In a Fisher market game, however, buyers are strategic and report their preferences over goods; the market-clearing prices and allocations are then determined based on their reported preferences rather than their real preferences. We show that the Fisher market game always has a pure Nash equilibrium, for buyers with linear, Leontief, and Cobb-Douglas utility functions, which are three representative classes of utility functions in the important Constant Elasticity of Substitution (CES) family. Furthermore, to quantify the social efficiency, we prove Price of Anarchy bounds for the game when the utility functions of buyers fall into these three classes.
Multicomponent Equilibrium Models for Testing Geothermometry Approaches
Carl D. Palmer; Robert W. Smith; Travis L. McLing
2013-02-01
Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations using a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several "forward" geochemical models using The Geochemist's Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and the equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of the pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperature during exploration and early development.
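A classical example of the correlation-based geothermometers mentioned above is the quartz (no steam loss) silica geothermometer attributed to Fournier, which maps dissolved silica to an estimated reservoir temperature; the dissolved-silica value used below is purely illustrative:

```python
import math

def quartz_geothermometer(silica_mg_per_kg):
    """Quartz (no steam loss) silica geothermometer: estimated reservoir
    temperature in degrees C from dissolved SiO2 in mg/kg.
    The correlation is considered applicable up to roughly 250 degrees C."""
    return 1309.0 / (5.19 - math.log10(silica_mg_per_kg)) - 273.15
```

For example, a water carrying about 300 mg/kg dissolved silica implies a reservoir near 210 °C; this is exactly the kind of single-constituent estimate that the multicomponent equilibrium approach above seeks to improve upon, since boiling or volatile loss along the flow path biases the input concentration.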
Mesoscopic non-equilibrium thermodynamics
Rubí, José Miguel
2008-02-01
Basic concepts like energy, heat, and temperature acquired a precise meaning only after the development of thermodynamics. Thermodynamics provides the basis for understanding how heat and work are related and establishes the general rules that the macroscopic properties of systems at equilibrium obey. Outside equilibrium and away from macroscopic regimes, most of those rules cannot be applied directly. In this paper we present recent developments that extend the applicability of thermodynamic concepts deep into mesoscopic and irreversible regimes. We show how the probabilistic interpretation of thermodynamics, together with probability conservation laws, can be used to obtain kinetic equations describing the evolution of the relevant degrees of freedom. This approach provides a systematic method to obtain the stochastic dynamics of a system directly from knowledge of its equilibrium properties. A wide variety of situations can be studied in this way, including many that were thought to be out of reach of thermodynamic theories, such as non-linear transport in the presence of potential barriers, activated processes, slow relaxation phenomena, and basic processes in biomolecules, like translocation and stretching.
Molecular kinetic analysis of a local equilibrium Carnot cycle
Izumida, Yuki; Okuda, Koji
2017-07-01
We identify a velocity distribution function of ideal gas particles that is compatible with the local equilibrium assumption and the fundamental thermodynamic relation satisfying the endoreversibility. We find that this distribution is a Maxwell-Boltzmann distribution with a spatially uniform temperature and a spatially varying local center-of-mass velocity. We construct the local equilibrium Carnot cycle of an ideal gas, based on this distribution, and show that the efficiency of the present cycle is given by the endoreversible Carnot efficiency using the molecular kinetic temperatures of the gas. We also obtain an analytic expression of the efficiency at maximum power of our cycle under a small temperature difference. Our theory is also confirmed by a molecular dynamics simulation.
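The endoreversible efficiency at maximum power referenced here is commonly the Curzon-Ahlborn form, η = 1 − √(Tc/Th); a quick numerical check of its small-temperature-difference expansion (the 310 K / 300 K reservoirs are illustrative):

```python
import math

def carnot(th, tc):
    """Reversible Carnot efficiency."""
    return 1.0 - tc / th

def curzon_ahlborn(th, tc):
    """Efficiency at maximum power of an endoreversible engine."""
    return 1.0 - math.sqrt(tc / th)

th, tc = 310.0, 300.0              # small temperature difference
eta_c = carnot(th, tc)
eta_ca = curzon_ahlborn(th, tc)
# Leading-order expansion for small eta_c: eta_CA ~ eta_c/2 + eta_c**2/8
approx = eta_c / 2 + eta_c ** 2 / 8
```

For this 10 K difference the expansion agrees with the exact value to a few parts per million, consistent with an analytic small-ΔT result.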
Xin, Jing; Yiqi, Zhuang; Hualian, Tang; Li, Dai; Yongqian, Du; Li, Zhang; Hongbo, Duan
2014-02-01
A power-efficient 12-bit 40-MS/s pipeline analog-to-digital converter (ADC) implemented in a 0.13 μm CMOS technology is presented. A novel CMOS bootstrapping switch, which offers a constant on-resistance over the entire input signal range, is used at the sample-and-hold front-end to enhance the dynamic performance of the pipelined ADC. By implementing a 2.5-bit-per-stage architecture with simplified amplifier sharing between two successive pipeline stages, a very competitive power consumption and small die area can be achieved. Meanwhile, substrate-biasing-effect attenuated T-type switches are introduced to reduce the crosstalk between the two opamp-sharing successive stages. Moreover, a two-stage gain-boosted recycling folded cascode (RFC) amplifier with hybrid frequency compensation is developed to further reduce the power consumption while maintaining the ADC's performance. The measured results show that the ADC achieves a spurious-free dynamic range (SFDR) of 75.7 dB and a signal-to-noise-plus-distortion ratio (SNDR) of 62.74 dB with a 4.3 MHz input signal; the SNDR remains above 58.25 dB for input signals up to 19.3 MHz. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are -0.43 to +0.48 LSB and -1.62 to +1.89 LSB, respectively. The prototype ADC consumes 28.4 mW under a 1.2-V nominal power supply and 40 MHz sampling rate, corresponding to a figure-of-merit (FOM) of 0.63 pJ per conversion-step.
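The reported figure-of-merit follows from the measured numbers with the standard Walden definition FOM = P / (2^ENOB · fs), where ENOB = (SNDR − 1.76)/6.02:

```python
# Reproduce the reported FOM from the measured values in the abstract.
power = 28.4e-3        # W, total power consumption
fs = 40e6              # samples/s
sndr_db = 62.74        # dB, measured with a 4.3 MHz input

enob = (sndr_db - 1.76) / 6.02          # effective number of bits
fom = power / (2 ** enob * fs)          # J per conversion-step
fom_pj = fom * 1e12                     # ~0.63 pJ/step, matching the abstract
```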
Ucar, Murat; Guryildirim, Melike; Tokgoz, Nil; Kilic, Koray; Borcek, Alp; Oner, Yusuf; Akkan, Koray; Tali, Turgut
2014-01-01
To compare the accuracy of diagnosing aqueductal patency and image quality between high spatial resolution three-dimensional (3D) high-sampling-efficiency technique (sampling perfection with application optimized contrast using different flip angle evolutions [SPACE]) and T2-weighted (T2W) two-dimensional (2D) turbo spin echo (TSE) at 3-T in patients with hydrocephalus. This retrospective study included 99 patients diagnosed with hydrocephalus. T2W 3D-SPACE was added to the routine sequences, which consisted of T2W 2D-TSE, 3D-constructive interference steady state (CISS), and cine phase-contrast MRI (PC-MRI). Two radiologists independently evaluated the patency of the cerebral aqueduct and image quality on T2W 2D-TSE and T2W 3D-SPACE. PC-MRI and 3D-CISS were used as the reference for aqueductal patency and image quality, respectively. Inter-observer agreement was calculated using kappa statistics. The evaluations of aqueductal patency by T2W 3D-SPACE and T2W 2D-TSE were in agreement with PC-MRI in 100% (99/99; sensitivity, 100% [83/83]; specificity, 100% [16/16]) and 83.8% (83/99; sensitivity, 100% [67/83]; specificity, 100% [16/16]), respectively (p < 0.001). No significant difference in image quality between T2W 2D-TSE and T2W 3D-SPACE (p = 0.056) occurred. The kappa values for inter-observer agreement were 0.714 for T2W 2D-TSE and 0.899 for T2W 3D-SPACE. Three-dimensional-SPACE is superior to 2D-TSE for the evaluation of aqueductal patency in hydrocephalus. T2W 3D-SPACE may hold promise as a highly accurate alternative to PC-MRI for the physiological and morphological evaluation of aqueductal patency.
Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hutchison, Janine R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Deatherage Kaiser, Brooke L [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sydor, Michael A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Barrett, Christopher A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-03-31
The performance of a macrofoam-swab sampling method was evaluated using Bacillus anthracis Sterne (BAS) and Bacillus atrophaeus Nakamura (BG) spores applied at nine low target amounts (2-500 spores) to positive-control plates and test coupons (2 in. × 2 in.) of four surface materials (glass, stainless steel, vinyl tile, and plastic). Test results from cultured samples were used to evaluate the effects of surrogate, surface concentration, and surface material on recovery efficiency (RE), false negative rate (FNR), and limit of detection. For RE, surrogate and surface material had statistically significant effects, but concentration did not. Mean REs were the lowest for vinyl tile (50.8% with BAS, 40.2% with BG) and the highest for glass (92.8% with BAS, 71.4% with BG). FNR values ranged from 0 to 0.833 for BAS and 0 to 0.806 for BG, with values increasing as concentration decreased in the range tested (0.078 to 19.375 CFU/cm^2, where CFU denotes ‘colony forming units’). Surface material also had a statistically significant effect. An FNR-concentration curve was fit for each combination of surrogate and surface material. For both surrogates, the FNR curves tended to be lowest for glass and highest for vinyl tile. The FNR curves for BG tended to be higher than for BAS at lower concentrations, especially for glass. Results using a modified Rapid Viability-Polymerase Chain Reaction (mRV-PCR) analysis method were also obtained. The mRV-PCR results and comparisons to the culture results will be discussed in a subsequent report.
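The observed trend of FNR rising as surface concentration falls is what a simple Poisson plating model predicts: a false negative occurs when zero colonies are recovered. The sketch below is an illustrative model with an assumed recovery fraction, not the FNR curves fitted in the report:

```python
import math

def fnr_poisson(conc_cfu_per_cm2, area_cm2=25.8, recovery=0.5):
    """False-negative probability if detection requires at least one recovered
    colony: recovered counts ~ Poisson(conc * area * recovery).
    area_cm2 = 25.8 matches a 2 in. x 2 in. coupon; recovery=0.5 is an
    assumed round-number RE, not a value from the study."""
    lam = conc_cfu_per_cm2 * area_cm2 * recovery
    return math.exp(-lam)

concs = [0.078, 0.5, 2.0, 19.375]       # CFU/cm^2, spanning the tested range
fnrs = [fnr_poisson(c) for c in concs]  # monotonically decreasing with conc.
```

Lower recovery efficiency shifts the whole curve upward, consistent with the report's finding that FNR curves were highest for the low-RE surface (vinyl tile).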
Morphodynamic equilibrium of alluvial estuaries
Tambroni, Nicoletta; Bolla Pittaluga, Michele; Canestrelli, Alberto; Lanzoni, Stefano; Seminara, Giovanni
2014-05-01
The evolution of the longitudinal bed profile of an estuary, with given plan-form configuration, subject to given tidal forcing at the mouth and prescribed values of water and sediment supply from the river, is investigated numerically. Our main goal is to ascertain whether, starting from some initial condition, the bed evolution tends to reach a unique equilibrium configuration asymptotically in time. Also, we investigate the morphological response of an alluvial estuary to changes in the tidal range and hydrologic forcing (flow and sediment supply). Finally, the solution helps characterize the transition between the fluvially dominated region and the tidally dominated region of the estuary. All these issues also play an important role in interpreting how the facies changes along the estuary, thus helping to make correct paleo-environmental and sequence-stratigraphic interpretations of sedimentary successions (Dalrymple and Choi, 2007). Results show that the model is able to describe a wide class of settings ranging from tidally dominated estuaries to fluvially dominated estuaries. In the latter case, the solution is found to compare satisfactorily with the analytical asymptotic solution recently derived by Seminara et al. (2012) under the hypothesis of fairly 'small' tidal oscillations. Simulations indicate that the system always moves toward an equilibrium configuration in which the net sediment flux in a tidal cycle is constant throughout the estuary and equal to the constant sediment flux discharged from the river. For constant width, the bed equilibrium profile of the estuarine channel is characterized by two distinct regions: a steeper reach seaward, dominated by the tide, and a less steep upstream reach, dominated by the river and characterized by the undisturbed bed slope. Although the latter reach at equilibrium is not directly affected by the tidal wave, starting from an initial uniform stream with the constant 'fluvial' slope, the final
Niu, Hui; Yang, Yaqiong; Zhang, Huiqi
2015-12-15
Efficient one-pot synthesis of hydrophilic and fluorescent molecularly imprinted polymer (MIP) nanoparticles and their application as an optical chemosensor for direct drug quantification in real, undiluted biological samples are described. The general principle was demonstrated by preparing tetracycline (Tc, a broad-spectrum antibiotic)-imprinted fluorescent polymer nanoparticles bearing hydrophilic polymer brushes via poly(2-hydroxyethyl methacrylate) (PHEMA) macromolecular chain transfer agent-mediated reversible addition-fragmentation chain transfer (RAFT) precipitation polymerization in the presence of a fluorescent monomer. The introduction of hydrophilic PHEMA brushes and fluorescence labeling onto/into the MIP nanoparticles proved not only to significantly improve their surface hydrophilicity and lead to obvious specific binding and high selectivity toward Tc in undiluted bovine serum, but also to impart them with strong fluorescent properties. In particular, significant fluorescence quenching was observed upon their binding with Tc in such complex biological milieu, which makes these Tc-MIP nanoparticles a useful optical chemosensor with a detection limit of 0.26 μM. Furthermore, such an advanced functional MIP nanoparticle-based chemosensor was also successfully utilized for the direct, sensitive, and accurate determination of Tc in another biological medium (i.e., undiluted pig serum), with average recoveries ranging from 98% to 102%, even in the presence of several interfering drugs.
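Quenching-based sensing of this kind is typically quantified through a Stern-Volmer calibration, F0/F = 1 + Ksv·[Q], with the detection limit taken as 3σ of the blank divided by the calibration slope. The calibration numbers below are invented for illustration and do not come from the paper:

```python
import numpy as np

# Hypothetical Stern-Volmer calibration: F0/F vs quencher concentration.
conc_uM = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
# Assumed Ksv = 0.05 per uM plus small measurement noise (illustrative only).
f0_over_f = 1.0 + 0.05 * conc_uM + np.random.default_rng(0).normal(0, 1e-3, 5)

slope, intercept = np.polyfit(conc_uM, f0_over_f, 1)   # Ksv estimate
sigma_blank = 1e-3                                     # assumed blank noise
lod_uM = 3 * sigma_blank / slope                       # 3-sigma detection limit
```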
Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J
2014-05-01
In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
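The temporal stage can be illustrated with a per-pixel scalar Kalman filter on a static scene (a simplification: the paper's filter also includes an affine background-motion model and a process-noise variance estimate, both omitted here):

```python
import numpy as np

def temporal_kalman(frames, q=1e-4, r=0.04):
    """Per-pixel scalar Kalman filter: the state is the true intensity,
    modeled as a random walk with process-noise variance q, observed
    through measurements with noise variance r."""
    est = frames[0].astype(float)
    p = np.full_like(est, r)           # initial error covariance
    for z in frames[1:]:
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        est = est + k * (z - est)      # update with new frame
        p = (1.0 - k) * p
    return est

rng = np.random.default_rng(1)
truth = np.full((16, 16), 0.5)
frames = np.array([truth + rng.normal(0, 0.2, truth.shape) for _ in range(30)])
filtered = temporal_kalman(frames)
```

The residual noise left after this stage is spatially varying in general, which is what the adaptive Wiener filter stage of the paper exploits.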
Correlated Fluctuations in Strongly Coupled Binary Networks Beyond Equilibrium
Dahmen, David; Bos, Hannah; Helias, Moritz
2016-08-01
Randomly coupled Ising spins constitute the classical model of collective phenomena in disordered systems, with applications covering glassy magnetism and frustration, combinatorial optimization, protein folding, stock market dynamics, and social dynamics. The phase diagram of these systems is obtained in the thermodynamic limit by averaging over the quenched randomness of the couplings. However, many applications require the statistics of activity for a single realization of the possibly asymmetric couplings in finite-sized networks. Examples include reconstruction of couplings from the observed dynamics, representation of probability distributions for sampling-based inference, and learning in the central nervous system based on the dynamic and correlation-dependent modification of synaptic connections. The systematic cumulant expansion for kinetic binary (Ising) threshold units with strong, random, and asymmetric couplings presented here goes beyond mean-field theory and is applicable outside thermodynamic equilibrium; a system of approximate nonlinear equations predicts average activities and pairwise covariances in quantitative agreement with full simulations down to hundreds of units. The linearized theory yields an expansion of the correlation and response functions in collective eigenmodes, leads to an efficient algorithm solving the inverse problem, and shows that correlations are invariant under scaling of the interaction strengths.
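The zeroth order of such an expansion is the mean-field self-consistency m_i = tanh(Σ_j J_ij m_j + h_i) for a single realization of random, possibly asymmetric couplings; the paper's contribution is the systematic correction of this picture by pairwise covariances, which the sketch below omits (all parameters are illustrative):

```python
import numpy as np

# Mean-field activities for one realization of random asymmetric couplings
# between binary (+/-1) units; a sketch of the lowest order only.
rng = np.random.default_rng(2)
n = 100
g = 0.5                                   # coupling strength (subcritical)
J = rng.normal(0, g / np.sqrt(n), (n, n))
np.fill_diagonal(J, 0.0)                  # no self-coupling
h = rng.normal(0, 0.1, n)                 # heterogeneous local fields

m = np.zeros(n)
for _ in range(500):
    m_new = np.tanh(J @ m + h)
    if np.max(np.abs(m_new - m)) < 1e-10:
        m = m_new
        break
    m = 0.5 * m + 0.5 * m_new             # damped fixed-point iteration
```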
On static equilibrium and balance puzzler
Dey, Samrat; Saikia, Dipankar; Kalita, Deepjyoti; Debbarma, Anamika; Wahab, Shaheen Akhtar; Sarma, Saurabh
2012-01-01
The principles of static equilibrium are of special interest to civil engineers. For a rigid body to be in static equilibrium, the net force and the net torque acting on the body must both be zero. That clearly signifies that if equal weights are placed on either side of a balance, the balance should be in equilibrium even if its beam is not horizontal (we have considered the beam to be straight and of no thickness, an ideal case). Thus, although the weights are equal, a tilted beam makes them appear different, which is puzzling. This also shows that the concept of equilibrium is confusing; in particular, neutral equilibrium is often mistaken for stable equilibrium. The study not only throws more light on the concept of static equilibrium, but also clarifies that a structure need not be firm and steady even if it is in static equilibrium.
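The puzzler is easy to verify numerically: for a straight, massless beam pivoted at its center, equal weights give zero net torque at every tilt angle, so every orientation is an equilibrium (neutral, not stable):

```python
import math

def net_torque(w_left, w_right, arm, theta):
    """Net torque about the pivot of a straight, massless beam tilted by
    theta: each weight acts on a horizontal lever arm of arm*cos(theta)."""
    return (w_left - w_right) * arm * math.cos(theta)

# Equal weights: net torque vanishes at every tilt angle, so the beam can
# rest at any orientation, horizontal or not.
torques = [net_torque(1.0, 1.0, 0.3, t) for t in (0.0, 0.2, 0.7, 1.2)]
```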
Equilibrium thermodynamics - Callen’s postulational approach
Jongschaap, Robert J.J.; Öttinger, Hans Christian
2001-01-01
In order to provide the background for nonequilibrium thermodynamics, we outline the fundamentals of equilibrium thermodynamics. Equilibrium thermodynamics must not only be obtained as a special case of any acceptable nonequilibrium generalization but, through its shining example, it also elucidates
Simulating rare events in equilibrium or nonequilibrium stochastic systems
Allen, R.J.; Frenkel, D.; Wolde, P.R. ten
2006-01-01
We present three algorithms for calculating rate constants and sampling transition paths for rare events in simulations with stochastic dynamics. The methods do not require a priori knowledge of the phase-space density and are suitable for equilibrium or nonequilibrium systems in stationary state. A
The geometry of finite equilibrium sets
Balasko, Yves; Tvede, Mich
2009-01-01
We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely noncollinear.
Cao, Yupin; Deng, Biyang; Yan, Lizhen; Huang, Hongli
2017-05-15
An environmentally friendly and highly efficient gas pressure-assisted sample introduction system (GPASIS) was developed for inductively coupled plasma mass spectrometry (ICP-MS). A GPASIS consisting of a gas-pressure control device, a customized nebulizer, and a custom-made spray chamber was fabricated. The advantages of this GPASIS derive from its high nebulization efficiency, small sample volume requirements, low memory effects, good precision, and zero waste emission. A GPASIS can continuously and stably nebulize 10% NaCl solution for more than an hour without clogging. Sensitivity, detection limits, precision, long-term stability, double-charge and oxide ion levels, nebulization efficiencies, and matrix effects of the sample introduction system were evaluated. Experimental results indicated that the performance of this GPASIS was equivalent to, or better than, that obtained with conventional sample introduction systems. This GPASIS was successfully used to determine Cd and Pb in human plasma by ICP-MS.
Open problems in non-equilibrium physics
Kusnezov, D.
1997-09-22
The report contains viewgraphs on the following: approaches to non-equilibrium statistical mechanics; classical and quantum processes in chaotic environments; classical fields in non-equilibrium situations: real time dynamics at finite temperature; and phase transitions in non-equilibrium conditions.
The concept of equilibrium in organization theory
Gazendam, Henk W.M.
1997-01-01
Many organization theories consist of an interpretation frame and an idea about the ideal equilibrium state. This article explains how the equilibrium concept is used in four organization theories: the theories of Fayol, Mintzberg, Morgan, and Volberda. Equilibrium can be defined as balance, fit or
Equilibrium figures of dwarf planets
Rambaux, Nicolas; Chambat, Frederic; Castillo-Rogez, Julie; Baguet, Daniel
2016-10-01
Dwarf planets, including transneptunian objects (TNOs) and Ceres, are >500 km large and display a spheroidal shape. These protoplanets are left over from the formation of the Solar System about 4.6 billion years ago, and their study could improve our knowledge of the early Solar System. They could have formed in situ or migrated to their current positions as a consequence of large-scale Solar System dynamical evolution. Quantifying their internal composition would bring constraints on their accretion environment and migration history. That information may be inferred from studying their global shapes from stellar occultations or thermal infrared imaging. Here we model the equilibrium shapes of isolated dwarf planets under the assumption of hydrostatic equilibrium, which forms the basis for interpreting shape data in terms of interior structure. Deviations from hydrostaticity can shed light on the thermal and geophysical history of the bodies. The dwarf planets are generally fast rotators, spinning in a few hours, so modeling their shapes requires numerical integration of Clairaut's equations of rotational equilibrium, expanded up to third order in the small geodetic parameter m, to reach an accuracy better than a few kilometers depending on the spin velocity and mean density. We also show that the difference between a 500-km radius homogeneous model described by a Maclaurin ellipsoid and a stratified model assuming silicate and ice layers can reach several kilometers in the long and short axes, which could be measurable. This type of modeling will be instrumental in assessing hydrostaticity and thus detecting large non-hydrostatic contributions in the observed shapes.
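To first order (the paper works to third order), Clairaut's equation for a homogeneous body gives a flattening f = (5/4)m in the geodetic parameter m = ω²a³/GM. A sketch with rough Ceres-like figures (the spin period, radius, and mass below are approximate illustrative values, not the paper's inputs):

```python
import math

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2

def geodetic_parameter(period_s, radius_m, mass_kg):
    """m = omega^2 a^3 / (G M), the small rotational parameter."""
    omega = 2.0 * math.pi / period_s
    return omega ** 2 * radius_m ** 3 / (G * mass_kg)

def flattening_first_order(m):
    """Clairaut's equation to first order for a homogeneous body: f = (5/4) m."""
    return 1.25 * m

# Approximate Ceres-like values: ~9.07 h spin, ~470 km mean radius, ~9.4e20 kg
m = geodetic_parameter(9.07 * 3600, 470e3, 9.4e20)
f = flattening_first_order(m)
polar_shortening_km = f * 470.0    # equatorial-minus-polar radius, roughly
```

The tens-of-kilometers shortening this predicts is large enough that the few-kilometer third-order and stratification corrections discussed in the abstract are indeed at a measurable level.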
Risk premia in general equilibrium
Posch, Olaf
This paper shows that non-linearities can generate time-varying and asymmetric risk premia over the business cycle. These (empirical) key features become relevant and asset market implications improve substantially when we allow for non-normalities in the form of rare disasters. We employ explicit solutions of dynamic stochastic general equilibrium models, including a novel solution with endogenous labor supply, to obtain closed-form expressions for the risk premium in production economies. We find that the curvature of the policy functions affects the risk premium through controlling the individual's effective risk aversion.
Silverberg, Lee J.; Raff, Lionel M.
2015-01-01
Thermodynamic spontaneity-equilibrium criteria require that in a single-reaction system, reactions in either the forward or reverse direction at equilibrium be nonspontaneous. Conversely, the concept of dynamic equilibrium holds that forward and reverse reactions both occur at equal rates at equilibrium to the extent allowed by kinetic…
Pre-equilibrium plasma dynamics
Heinz, U.
1986-01-01
Approaches towards understanding and describing the pre-equilibrium stage of quark-gluon plasma formation in heavy-ion collisions are reviewed. Focus is on a kinetic theory approach to non-equilibrium dynamics, its extension to include the dynamics of color degrees of freedom when applied to the quark-gluon plasma, its quantum field theoretical foundations, and its relationship to both the particle formation stage at the very beginning of the nuclear collision and the hydrodynamic stage at late collision times. The usefulness of this approach for obtaining the transport coefficients of the quark-gluon plasma and for deriving the collective mode spectrum and damping rates in this phase is discussed. Comments are made on the general difficulty of finding appropriate initial conditions to start the kinetic theory, and a specific model is given that demonstrates that, once given such initial conditions, the system can be followed all the way through into the hydrodynamical regime.
Non-Equilibrium Effects on Hypersonic Turbulent Boundary Layers
Kim, Pilbum
Understanding non-equilibrium effects in hypersonic turbulent boundary layers is essential in order to build cost-efficient and reliable hypersonic vehicles. It is well known that non-equilibrium effects on the boundary layers are notable, but our understanding of these effects is limited. The overall goal of this study is to improve the understanding of non-equilibrium effects on hypersonic turbulent boundary layers. A new code has been developed for direct numerical simulations of spatially developing hypersonic turbulent boundary layers over a flat plate with finite-rate reactions. A fifth-order hybrid weighted essentially non-oscillatory scheme with a low-dissipation finite-difference scheme is utilized in order to capture stiff gradients while resolving small motions in turbulent boundary layers. The code has been validated by qualitative and quantitative comparisons of two different simulations of a non-equilibrium flow and a spatially developing turbulent boundary layer. With the validated code, direct numerical simulations of four different hypersonic turbulent boundary layers, perfect gas and non-equilibrium flows of pure oxygen and nitrogen, have been performed. In order to rule out uncertainties in comparisons, the same inlet conditions are imposed for each species, and mean and turbulence statistics as well as near-wall turbulence structures are compared at a downstream location. Based on those comparisons, it is shown that there are no direct energy exchanges between internal and turbulent kinetic energies due to thermal and chemical non-equilibrium processes in the flow field. Instead, these non-equilibria affect turbulent boundary layers by changing the temperature without changing the main characteristics of near-wall turbulence structures. This change in the temperature induces changes in the density and viscosity, and the mean flow fields are then adjusted to satisfy the conservation laws. The perturbation fields are modified according to
Sampling free energy surfaces as slices by combining umbrella sampling and metadynamics.
Awasthi, Shalini; Kapil, Venkat; Nair, Nisanth N
2016-06-15
Metadynamics (MTD) is a very powerful technique to sample high-dimensional free energy landscapes, and due to its self-guiding property, the method has been successful in studying complex reactions and conformational changes. MTD sampling is based on filling the free energy basins by biasing potentials and thus for cases with flat, broad, and unbound free energy wells, the computational time to sample them becomes very large. To alleviate this problem, we combine the standard Umbrella Sampling (US) technique with MTD to sample orthogonal collective variables (CVs) in a simultaneous way. Within this scheme, we construct the equilibrium distribution of CVs from biased distributions obtained from independent MTD simulations with umbrella potentials. Reweighting is carried out by a procedure that combines US reweighting and Tiwary-Parrinello MTD reweighting within the Weighted Histogram Analysis Method (WHAM). The approach is ideal for a controlled sampling of a CV in a MTD simulation, making it computationally efficient in sampling flat, broad, and unbound free energy surfaces. This technique also allows for a distributed sampling of a high-dimensional free energy surface, further increasing the computational efficiency in sampling. We demonstrate the application of this technique in sampling high-dimensional surface for various chemical reactions using ab initio and QM/MM hybrid molecular dynamics simulations. Further, to carry out MTD bias reweighting for computing forward reaction barriers in ab initio or QM/MM simulations, we propose a computationally affordable approach that does not require recrossing trajectories.
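The umbrella-window part of the reweighting can be sketched with plain 1D WHAM on a toy double-well potential (this omits the metadynamics bias and the Tiwary-Parrinello reweighting that the paper combines with it; all parameters and the sampler are illustrative):

```python
import numpy as np

# WHAM sketch: combine histograms from umbrella windows with harmonic biases
# w_k(x) = 0.5*kspr*(x - x0_k)^2 into one unbiased distribution (beta = 1/kBT
# in matching units; toy double-well U(x) = x^4 - 2x^2 with minima at x = +/-1).
rng = np.random.default_rng(3)
beta, kspr = 1.0, 10.0
centers = np.linspace(-2.0, 2.0, 9)           # umbrella window centers
edges = np.linspace(-3.0, 3.0, 61)
x_mid = 0.5 * (edges[:-1] + edges[1:])

def sample_window(x0, n=4000):
    """Toy biased samples for window at x0 via a simple Metropolis chain."""
    x = np.empty(n)
    xc = x0
    for i in range(n):
        prop = xc + rng.normal(0, 0.3)
        du = (prop**4 - 2*prop**2 + 0.5*kspr*(prop - x0)**2) \
           - (xc**4 - 2*xc**2 + 0.5*kspr*(xc - x0)**2)
        if du < 0 or rng.random() < np.exp(-beta * du):
            xc = prop
        x[i] = xc
    return x

hists = np.array([np.histogram(sample_window(c), bins=edges)[0]
                  for c in centers])
n_k = hists.sum(axis=1)
bias = 0.5 * kspr * (x_mid[None, :] - centers[:, None]) ** 2
f_k = np.zeros(len(centers))                  # per-window free energies
for _ in range(2000):                         # WHAM self-consistency loop
    denom = (n_k[:, None] * np.exp(beta * (f_k[:, None] - bias))).sum(axis=0)
    p = hists.sum(axis=0) / np.maximum(denom, 1e-300)
    p /= p.sum()                              # unbiased distribution estimate
    f_new = -np.log(np.maximum(
        (np.exp(-beta * bias) * p[None, :]).sum(axis=1), 1e-300)) / beta
    if np.max(np.abs(f_new - f_k)) < 1e-10:
        f_k = f_new
        break
    f_k = f_new
```

The recovered distribution peaks near the two wells at x = ±1, as expected for exp(−βU).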
Investment Irreversibility and Precautionary Savings in General Equilibrium
Ejarque, João
Partial equilibrium models suggest that when uncertainty increases, agents increase savings and at the same time reduce investment in irreversible goods. This paper characterizes this problem in general equilibrium with technology shocks, additive output shocks, and shocks to the marginal efficiency … if the shocks affect the marginal efficiency of investment. For all types of shocks, when concavity of the utility function is moderate or high, the irreversibility constraint never binds and the increase in variance has a negligible impact. Persistence in the shock process induces precautionary savings rather than irreversibility effects. If shocks are idiosyncratic and affect a cross section of agents over capital, an increase in their variance may induce an increase in aggregate investment even if all agents have an incentive to invest less, because zero investment is now an active lower bound for part…
Mukhopadhyay, Nitai D. [Department of Biostatistics, Virginia Commonwealth University, Richmond, VA 23298 (United States); Sampson, Andrew J. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Deniz, Daniel; Alm Carlsson, Gudrun [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Williamson, Jeffrey [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Malusek, Alexandr, E-mail: malusek@ujf.cas.cz [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Department of Radiation Dosimetry, Nuclear Physics Institute AS CR v.v.i., Na Truhlarce 39/64, 180 86 Prague (Czech Republic)
2012-01-15
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
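The shortest 95% confidence interval from a bootstrap distribution can be found by scanning all windows that contain 95% of the sorted replicates and keeping the narrowest. The gain replicates below are synthetic stand-ins for bootstrap resamples of an efficiency-gain estimate, not data from the study:

```python
import numpy as np

def shortest_ci(samples, level=0.95):
    """Shortest interval containing `level` of the sorted bootstrap samples."""
    s = np.sort(np.asarray(samples))
    k = int(np.ceil(level * len(s)))          # points the window must contain
    widths = s[k - 1:] - s[:len(s) - k + 1]   # width of every candidate window
    i = int(np.argmin(widths))
    return float(s[i]), float(s[i + k - 1])

rng = np.random.default_rng(4)
# Toy "efficiency gain" replicates: ratio of two noisy estimates, standing in
# for bootstrap resamples derived from a single Monte Carlo run.
gains = rng.normal(3.0, 0.5, 10000) / rng.normal(1.0, 0.05, 10000)
lo, hi = shortest_ci(gains)
```

For skewed distributions such as ratios, the shortest interval generally differs from the equal-tailed percentile interval, which is why the study estimates it directly from the bootstrap distribution.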
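The bootstrap procedure for the efficiency-gain confidence interval described above can be sketched as follows. The per-history scores are synthetic stand-ins (not data from the study), and the gain is computed here as a simple variance ratio at assumed-equal computing cost, purely for illustration:

```python
import random
import statistics

random.seed(0)

# Synthetic per-history scores standing in for a conventional run and a
# correlated-sampling run (illustrative only, not data from the study).
conventional = [random.gauss(1.0, 0.5) for _ in range(2000)]
correlated = [random.gauss(1.0, 0.1) for _ in range(2000)]

def efficiency_gain(a, b):
    # Gain taken here as var(conventional) / var(correlated),
    # i.e. assuming equal computing cost per history.
    return statistics.variance(a) / statistics.variance(b)

point = efficiency_gain(conventional, correlated)

# Bootstrap: resample histories with replacement to approximate the
# sampling distribution of the gain, then take a 95% percentile interval.
boots = sorted(
    efficiency_gain(
        random.choices(conventional, k=len(conventional)),
        random.choices(correlated, k=len(correlated)),
    )
    for _ in range(1000)
)
ci_low, ci_high = boots[24], boots[974]
print(f"gain = {point:.1f}, 95% bootstrap CI [{ci_low:.1f}, {ci_high:.1f}]")
```

A percentile interval is the simplest bootstrap choice; the paper's "shortest 95% interval" would instead scan all 95%-coverage windows of the sorted bootstrap values and keep the narrowest one.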
Equilibrium calculations of firework mixtures
Hobbs, M.L. [Sandia National Labs., Albuquerque, NM (United States); Tanaka, Katsumi; Iida, Mitsuaki; Matsunaga, Takehiro [National Inst. of Materials and Chemical Research, Tsukuba, Ibaraki (Japan)
1994-12-31
Thermochemical equilibrium calculations have been used to calculate detonation conditions for typical firework components including three report charges, two display charges, and black powder which is used as a fuse or launch charge. Calculations were performed with a modified version of the TIGER code which allows calculations with 900 gaseous and 600 condensed product species at high pressure. The detonation calculations presented in this paper are thought to be the first report on the theoretical study of firework detonation. Measured velocities for two report charges are available and compare favorably to predicted detonation velocities. However, the measured velocities may not be true detonation velocities. Fast deflagration rather than an ideal detonation occurs when reactants contain significant amounts of slow reacting constituents such as aluminum or titanium. Despite such uncertainties in reacting pyrotechnics, the detonation calculations do show the complex nature of condensed phase formation at elevated pressures and give an upper bound for measured velocities.
Equilibrium Analysis in Cake Cutting
Branzei, Simina; Miltersen, Peter Bro
2013-01-01
Cake cutting is a fundamental model in fair division; it represents the problem of fairly allocating a heterogeneous divisible good among agents with different preferences. The central criteria of fairness are proportionality and envy-freeness, and many of the existing protocols are designed to guarantee proportional or envy-free allocations when the participating agents follow the protocol. However, typically, all agents following the protocol is not guaranteed to result in a Nash equilibrium. In this paper, we initiate the study of equilibria of classical cake cutting protocols. We consider one of the simplest and most elegant continuous algorithms -- the Dubins-Spanier procedure, which guarantees a proportional allocation of the cake -- and study its equilibria when the agents use simple threshold strategies. We show that given a cake cutting instance with strictly positive value density functions…
Neoclassical equilibrium in gyrokinetic simulations
Garbet, X.; Dif-Pradalier, G.; Nguyen, C.; Sarazin, Y.; Grandgirard, V.; Ghendrih, Ph.
2009-06-01
This paper presents a set of model collision operators, which reproduce the neoclassical equilibrium and comply with the constraints of a full-f global gyrokinetic code. The assessment of these operators is based on an entropy variational principle, which allows one to perform a fast calculation of the neoclassical diffusivity and poloidal velocity. It is shown that the force balance equation is recovered at lowest order in the expansion parameter, the normalized gyroradius, hence allowing one to calculate correctly the radial electric field. Also, the conventional neoclassical transport and the poloidal velocity are reproduced in the plateau and banana regimes. The advantages and drawbacks of the various model operators are discussed in view of the requirements for neoclassical and turbulent transport.
Local non-equilibrium thermodynamics.
Jinwoo, Lee; Tanaka, Hajime
2015-01-16
Local Shannon entropy lies at the heart of modern thermodynamics, with much discussion of trajectory-dependent entropy production. When taken at both boundaries of a process in phase space, it reproduces the second law of thermodynamics over a finite time interval for small scale systems. However, given that entropy is an ensemble property, it has never been clear how one can assign such a quantity locally. Given such a fundamental omission in our knowledge, we construct a new ensemble composed of trajectories reaching an individual microstate, and show that locally defined entropy, information, and free energy are properties of the ensemble, or trajectory-independent true thermodynamic potentials. We find that the Boltzmann-Gibbs distribution and Landauer's principle can be generalized naturally as properties of the ensemble, and that trajectory-free state functions of the ensemble govern the exact mechanism of non-equilibrium relaxation.
Ringed accretion disks: equilibrium configurations
Pugliese, D
2015-01-01
We investigate a model of ringed accretion disk, made up of several rings rotating around a supermassive Kerr black hole attractor. Each toroid of the ringed disk is governed by the general relativity hydrodynamic Boyer condition of equilibrium configurations of rotating perfect fluids. Properties of the tori can then be determined by an appropriately defined effective potential reflecting the background Kerr geometry and the centrifugal effects. The ringed disks could be created in various regimes during the evolution of matter configurations around supermassive black holes. Therefore, both corotating and counterrotating rings have to be considered as being a constituent of the ringed disk. We provide constraints on the model parameters for the existence and stability of various ringed configurations and discuss occurrence of accretion onto the Kerr black hole and possible launching of jets from the ringed disk. We demonstrate that various ringed disks can be characterized by a maximum number of rings. We pr…
Walsh, S.J; McCallum, B.R
1998-01-01
… Size selectivity and trawl efficiency of two shrimp trawls used in the surveys of the Grand Bank yellowtail flounder, Pleuronectes ferruginea, were analyzed from a series of comparative fishing tows…
Physical Equilibrium Evaluation in Parkinson Disease
Schmidt, Paula da Silva
2011-04-01
Introduction: Parkinson disease can be among the multiple causes of alterations in physical equilibrium. Accordingly, this study aims to evaluate the physical equilibrium of patients with Parkinson disease. Method: Prospective study in which 12 individuals with Parkinson disease were evaluated with tests of static and dynamic equilibrium, dynamic posturography and vectoelectronystagmography. A matched control group was used for comparison of the dynamic posturography results. Results: Alterations in the Romberg-Barré, Unterberger and Walk tests were found. The vestibular exam revealed 6 normal cases, 4 cases of central vestibular syndrome and 2 cases of peripheral vestibular syndrome. In dynamic posturography, an equilibrium alteration was verified relative to the control group in all Sensorial Organization Tests, on average and in the use of the vestibular system. Conclusion: Patients with Parkinson disease present an alteration of physical equilibrium. Dynamic posturography was more sensitive than vectoelectronystagmography in detecting the equilibrium alterations.
A Constructive Generalization of Nash Equilibrium
Huang, Xiaofei
2009-01-01
In a society of multiple individuals, if everybody is only interested in maximizing his own payoff, will there exist any equilibrium for the society? John Nash proved more than 50 years ago that an equilibrium always exists such that nobody would benefit from unilaterally changing his strategy. Nash equilibrium is a central concept in game theory, which offers the mathematical foundation for social science and economics. However, the original definition is declarative, without including a method for finding equilibria. It was later found that computing a Nash equilibrium is difficult. Furthermore, a Nash equilibrium may be unstable, sensitive to the smallest variation of the payoff functions. Making the situation worse, a society with selfish individuals can have an enormous number of equilibria, making it extremely hard to find the globally optimal one. This paper offers a constructive generalization of Nash equilibrium to cover the case when the selfishness of individuals is reduced to lower level…
Equilibrium Solubility of CO2 in Alkanolamines
Waseem Arshad, Muhammad; Fosbøl, Philip Loldrup; von Solms, Nicolas
2014-01-01
The equilibrium solubility of CO2 was measured in aqueous solutions of monoethanolamine (MEA) and N,N-diethylethanolamine (DEEA). Equilibrium cells are generally used for these measurements; in this study, the equilibrium data were instead measured by calorimetry, using a reaction calorimeter (model CPA 122 from ChemiSens AB, Sweden). The advantage of this method is that both the heat of absorption and the equilibrium solubility of CO2 are measured at the same time. The measurements were performed for 30 mass % MEA and 5 M DEEA solutions as a function of CO2 loading at three different temperatures: 40, 80 and 120 ºC. The measured 30 mass % MEA and 5 M DEEA data were compared with literature data obtained from different equilibrium cells, which validated the use of calorimeters for equilibrium solubility measurements.
Mathematical models and equilibrium in irreversible microeconomics
Anatoly M. Tsirlin
2010-07-01
A set of equilibrium states in a system consisting of economic agents, economic reservoirs, and firms is considered. Methods of irreversible microeconomics are used. We show that direct sale/purchase leads to an equilibrium state which depends upon the coefficients of supply/demand functions. To reach the unique equilibrium state it is necessary to add either monetary exchange or an intermediate firm.
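As a minimal illustration of how a direct sale/purchase equilibrium depends on the coefficients of the supply/demand functions, one can solve a single market with linear supply and demand. The linear forms and the numerical coefficients below are assumptions made for the sketch, not taken from the paper:

```python
# Illustrative only: linear supply/demand forms are an assumption here,
# not the specific functions used in the paper.
def equilibrium_price(a, b, c, e):
    """Solve demand = supply, i.e. a - b*p = c + e*p, for the price p."""
    return (a - c) / (b + e)

# Demand: q_d = 100 - 2p; supply: q_s = 10 + p (hypothetical coefficients).
p = equilibrium_price(a=100.0, b=2.0, c=10.0, e=1.0)
q = 100.0 - 2.0 * p  # quantity traded at the equilibrium price
print(p, q)  # p = 30.0, q = 40.0
```

Changing any coefficient shifts the equilibrium point, which is the dependence the abstract refers to.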
Characteristics of equilibrium reaction of zolazepam.
Hong, W H; Szulczewski, D H
1981-06-01
The equilibrium reaction of zolazepam, a pyrazolodiazepinone, was studied and analyzed using the approach used previously for other pyrazolodiazepinone derivatives. The intrinsic ring closure equilibrium constant for this reaction was approximately 100 times larger than that observed for pyrazolodiazepinones studied previously. This study illustrates that the diazepinone ring can dominate in equilibrium mixtures formed at pH values far below the pKa of the corresponding form.
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Rasmussen, Thomas Kjær; Watling, David P.; Prato, Carlo Giacomo
…-off is strictly enforced: in a time-only model, if the current equilibrium travel time is 15.3 minutes, then adding a route with travel time of 15.4 minutes will have no impact on routing behaviour, whereas in practice (because of uncertainty, variability and unobserved attributes) the new route is likely… This issue is further complicated by the fact that typically only a sub-set of possible routes will be identified in numerical algorithms solving for SUE. In the current study, we present new alternative forms of SUE conditions that permit unused alternatives, accommodate behaviour on used alternatives… algorithms to the behaviourally sound SUE and the efficiency of solution algorithms to the DUE, we introduce a transformation of the cost function. This transformation function opens up a larger array of possible solution algorithms to the SUE, as it allows us to apply any path-based DUE solution algorithm…
National Center for Education Statistics (DHEW), Washington, DC.
A complex two-stage sample selection process was used in designing the National Longitudinal Study of the High School Class of 1972. The first-stage sampling frame used in the selection of schools was stratified by the following seven variables: public vs. private control, geographic region, grade 12 enrollment, proximity to institutions of higher…
Reflective Equilibrium: Epistemological or Political?
Andrew Lister
2016-01-01
One of the reasons for ongoing interest in the work of political philosopher John Rawls is that he developed novel methods for thinking systematically about the nature of justice. This paper examines the moral and epistemological motivations for Rawls's method of "reflective equilibrium," and the tension between them in Kai Nielsen's use of "wide reflective equilibrium" in the service of critical and emancipatory social theory.
Colin Rowe and ' Dynamic Equilibrium'
Pablo López Marín
2015-05-01
In 1944 Gyorgy Kepes published what is undoubtedly his most influential text, "The Language of Vision". What Kepes attempted was a guide to the grammar and syntax of vision, one which allows art to be approached as a purely sensory or visual experience, divested of any literary, semantic or sentimental meaning. Among all the concepts that Kepes develops in his essay, perhaps the most decisive is the so-called dynamic equilibrium, introduced in this work for the first time, verbalizing something that was in the air, orbiting around the whole of modern plastic art but so far only explained in an empirical way. Colin Rowe's recently read Kepesian ideas reverberate in his own writings "Transparency: Literal and Phenomenal" and "Neo-'Classicism' and Modern Architecture I and II", where the author tries to highlight the founding principles of the modern movement while refusing the plastic dimension of the discipline. This article will try to expose and explain this influence.
RINGED ACCRETION DISKS: EQUILIBRIUM CONFIGURATIONS
Pugliese, D.; Stuchlík, Z., E-mail: d.pugliese.physics@gmail.com, E-mail: zdenek.stuchlik@physics.cz [Institute of Physics and Research Centre of Theoretical Physics and Astrophysics, Faculty of Philosophy and Science, Silesian University in Opava, Bezručovo náměstí 13, CZ-74601 Opava (Czech Republic)
2015-12-15
We investigate a model of a ringed accretion disk, made up by several rings rotating around a supermassive Kerr black hole attractor. Each toroid of the ringed disk is governed by the general relativity hydrodynamic Boyer condition of equilibrium configurations of rotating perfect fluids. Properties of the tori can then be determined by an appropriately defined effective potential reflecting the background Kerr geometry and the centrifugal effects. The ringed disks could be created in various regimes during the evolution of matter configurations around supermassive black holes. Therefore, both corotating and counterrotating rings have to be considered as being a constituent of the ringed disk. We provide constraints on the model parameters for the existence and stability of various ringed configurations and discuss occurrence of accretion onto the Kerr black hole and possible launching of jets from the ringed disk. We demonstrate that various ringed disks can be characterized by a maximum number of rings. We present also a perturbation analysis based on evolution of the oscillating components of the ringed disk. The dynamics of the unstable phases of the ringed disk evolution seems to be promising in relation to high-energy phenomena demonstrated in active galactic nuclei.
Equilibrium avalanches in spin glasses
Le Doussal, Pierre; Müller, Markus; Wiese, Kay Jörg
2012-06-01
We study the distribution of equilibrium avalanches (shocks) in Ising spin glasses which occur at zero temperature upon small changes in the magnetic field. For the infinite-range Sherrington-Kirkpatrick (SK) model, we present a detailed derivation of the density ρ(ΔM) of the magnetization jumps ΔM. It is obtained by introducing a multicomponent generalization of the Parisi-Duplantier equation, which allows us to compute all cumulants of the magnetization. We find that ρ(ΔM) ~ ΔM^{-τ} with an avalanche exponent τ=1 for the SK model, originating from the marginal stability (criticality) of the model; this holds for jumps of size 1 ≪ ΔM in the SK model. For finite-range models, using droplet arguments, we obtain the prediction τ = (d_f + θ)/d_m, where d_f, d_m, and θ are the fractal dimension, magnetization exponent, and energy exponent of a droplet, respectively. This formula is expected to apply to other glassy disordered systems, such as the random-field model and pinned interfaces. We make suggestions for further numerical investigations, as well as experimental studies of the Barkhausen noise in spin glasses.
Linear irreversible heat engines based on local equilibrium assumptions
Izumida, Yuki; Okuda, Koji
2015-08-01
We formulate an endoreversible finite-time Carnot cycle model based on the assumptions of local equilibrium and constant energy flux, where the efficiency and the power are expressed in terms of the thermodynamic variables of the working substance. By analyzing the entropy production rate caused by the heat transfer in each isothermal process during the cycle, and using the endoreversible condition applied to the linear response regime, we identify the thermodynamic flux and force of the present system and obtain a linear relation that connects them. We calculate the efficiency at maximum power in the linear response regime by using the linear relation, which agrees with the Curzon-Ahlborn (CA) efficiency known as the upper bound in this regime. This reason is also elucidated by rewriting our model into the form of the Onsager relations, where our model turns out to satisfy the tight-coupling condition leading to the CA efficiency.
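The Curzon-Ahlborn efficiency at maximum power mentioned above, η_CA = 1 - √(T_c/T_h), can be checked numerically in the standard endoreversible setup with Newtonian heat transfer at both thermal contacts; the temperatures and conductance below are illustrative values, not the paper's:

```python
import math

# Endoreversible Carnot cycle with Newtonian heat transfer (the standard
# Curzon-Ahlborn setup); Th, Tc, k are illustrative values.
Th, Tc, k = 500.0, 300.0, 1.0

def cycle(x):
    """Hot-contact working temperature x -> (power, efficiency)."""
    # Cold-contact working temperature y from the endoreversible condition
    # qh / x = qc / y, with qh = k (Th - x) and qc = k (y - Tc).
    y = x * Tc / (2.0 * x - Th)
    qh = k * (Th - x)
    qc = k * (y - Tc)
    return qh - qc, 1.0 - y / x

# Scan x between Th/2 (where y diverges) and Th (where heat flux vanishes).
xs = [Th / 2.0 + 1.0 + 0.01 * i for i in range(int((Th / 2.0 - 1.0) / 0.01))]
best_p, best_eta = max(cycle(x) for x in xs)
print(best_eta, 1.0 - math.sqrt(Tc / Th))  # both approx. 0.2254
```

The grid maximum of the power lands at the efficiency 1 - √(T_c/T_h) to within the scan resolution, which is the CA result the abstract recovers analytically in the linear response regime.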
Mesoscopic non-equilibrium thermodynamic analysis of molecular motors.
Kjelstrup, S; Rubi, J M; Pagonabarraga, I; Bedeaux, D
2013-11-28
We show that the kinetics of a molecular motor fueled by ATP and operating between a deactivated and an activated state can be derived from the principles of non-equilibrium thermodynamics applied to the mesoscopic domain. The activation by ATP, the possible slip of the motor, as well as the forward stepping carrying a load are viewed as slow diffusion along a reaction coordinate. Local equilibrium is assumed in the reaction coordinate spaces, making it possible to derive the non-equilibrium thermodynamic description. Using this scheme, we find expressions for the velocity of the motor, in terms of the driving force along the spatial coordinate, and for the chemical reaction that brings about activation, in terms of the chemical potentials of the reactants and products which maintain the cycle. The second law efficiency is defined, and the velocity corresponding to maximum power is obtained for myosin movement on actin. Experimental results fitting the description are reviewed, giving a maximum efficiency of 0.45 at a myosin headgroup velocity of 5 × 10^{-7} m s^{-1}. The formalism allows the introduction and testing of meso-level models, which may be needed to explain experiments.
Approximate Equilibrium Problems and Fixed Points
H. Mazaheri
2013-01-01
We find a common element of the set of fixed points of a map and the set of solutions of an approximate equilibrium problem in a Hilbert space. Then, we show that one of the sequences weakly converges. Also we obtain some theorems about equilibrium problems and fixed points.
The Geometry of Finite Equilibrium Datasets
Balasko, Yves; Tvede, Mich
We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set...
Equilibrium Tail Distribution Due to Touschek Scattering
Nash,B.; Krinsky, S.
2009-05-04
Single large angle Coulomb scattering is referred to as Touschek scattering. In addition to causing particle loss when the scattered particles are outside the momentum aperture, the process also results in a non-Gaussian tail, which is an equilibrium between the Touschek scattering and radiation damping. Here we present an analytical calculation for this equilibrium distribution.
Zeroth Law, Entropy, Equilibrium, and All That
Canagaratna, Sebastian G.
2008-01-01
The place of the zeroth law in the teaching of thermodynamics is examined in the context of the recent discussion by Gislason and Craig of some problems involving the establishment of thermal equilibrium. The concept of thermal equilibrium is introduced through the zeroth law. The relation between the zeroth law and the second law in the…
System of Operator Quasi Equilibrium Problems
Suhel Ahmad Khan
2014-01-01
We consider a system of operator quasi equilibrium problems and a system of generalized quasi operator equilibrium problems in topological vector spaces. Using a maximal element theorem for a family of set-valued mappings as basic tool, we derive some existence theorems for solutions to these problems with and without involving Φ-condensing mappings.
Ghirardi, Marco; Marchetti, Fabio; Pettinari, Claudio; Regis, Alberto; Roletto, Ezio
2015-01-01
A didactic sequence is proposed for the teaching of chemical equilibrium law. In this approach, we have avoided the kinetic derivation and the thermodynamic justification of the equilibrium constant. The equilibrium constant expression is established empirically by a trial-and-error approach. Additionally, students learn to use the criterion of…
3D Equilibrium Reconstructions in DIII-D
Lao, L. L.; Ferraro, N. W.; Strait, E. J.; Turnbull, A. D.; King, J. D.; Hirshman, H. P.; Lazarus, E. A.; Sontag, A. C.; Hanson, J.; Trevisan, G.
2013-10-01
Accurate and efficient 3D equilibrium reconstruction is needed in tokamaks for study of 3D magnetic field effects on experimentally reconstructed equilibrium and for analysis of MHD stability experiments with externally imposed magnetic perturbations. A large number of new magnetic probes have been recently installed in DIII-D to improve 3D equilibrium measurements and to facilitate 3D reconstructions. The V3FIT code has been in use in DIII-D to support 3D reconstruction and the new magnetic diagnostic design. V3FIT is based on the 3D equilibrium code VMEC that assumes nested magnetic surfaces. V3FIT uses a pseudo-Newton least-square algorithm to search for the solution vector. In parallel, the EFIT equilibrium reconstruction code is being extended to allow for 3D effects using a perturbation approach based on an expansion of the MHD equations. EFIT uses the cylindrical coordinate system and can include the magnetic island and stochastic effects. Algorithms are being developed to allow EFIT to reconstruct 3D perturbed equilibria directly making use of plasma response to 3D perturbations from the GATO, MARS-F, or M3D-C1 MHD codes. DIII-D 3D reconstruction examples using EFIT and V3FIT and the new 3D magnetic data will be presented. Work supported in part by US DOE under DE-FC02-04ER54698, DE-FG02-95ER54309 and DE-AC05-06OR23100.
Bielecki, A; Saravanabhavan, G; Blais, E; Vincent, R; Kumarathasan, P
2012-01-01
Although several methods have been reported on the analysis of the oxidative stress marker 15(S)-8-iso-prostaglandin-F2alpha (8-iso-PGF2α) in biological fluids, they either involve extensive sample preparation and costly technology or require high sample volume. This study presents a sample preparation method that utilizes low sample volume for 8-iso-PGF2α analysis in plasma and urine by an enzyme immunoassay (EIA). In brief, 8-iso-PGF2α in deproteinized plasma or native urine sample is complexed with an antibody and then captured by molecular weight cut-off filtration. This method was compared with two other sample preparation methods that are typically used in the analysis of 8-iso-PGF2α by EIA: Cayman's affinity column purification method and solid-phase extraction on C-18. The immunoaffinity purification method described here was superior to the other two sample preparation methods and yielded recovery values of 99.8 and 54.1% for 8-iso-PGF2α in plasma and urine, respectively. Analytical precision (relative standard deviation) was ±5% for plasma and ±15% for urine. The analysis of healthy human plasma and urine resulted in basal 8-iso-PGF2α levels of 31.8 ± 5.5 pg/mL and 2.9 ± 2.0 ng/mg creatinine, respectively. The robustness and analytical performance of this method makes it a promising tool for high-throughput screening of biological samples for 8-iso-PGF2α.
Romain Guignard
OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study has been carried out to assess the reliability of a monitoring system based on regular quota sampling surveys for smoking prevalence. DESIGN / OUTCOME MEASURES: In 2010, current and daily tobacco smoking prevalences obtained in a quota survey of 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% among 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was slightly related to smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.
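A back-of-envelope check of the prevalence gap reported above can be done with a two-proportion z-test, treating the two estimates as simple independent proportions; this ignores the complex survey design effects, which is an assumption made only for illustration:

```python
import math

# Two-proportion z-test on the current-smoking prevalences reported in the
# abstract: 33.9% of n=27,653 (random sample) vs 30.2% of n=8,018 (quota).
p1, n1 = 0.339, 27653
p2, n2 = 0.302, 8018
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(z)  # z approx. 6.2: the gap is far too large to be sampling noise
```

Even before any design-effect correction, the 3.7-point difference between the two survey modes is many standard errors wide, consistent with the paper's conclusion that the sampling method itself drives the discrepancy.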
General equilibrium characteristics of a dual-lift helicopter system
Cicolani, L. S.; Kanning, G.
1986-01-01
The equilibrium characteristics of a dual-lift helicopter system are examined. The system consists of the cargo attached by cables to the endpoints of a spreader bar which is suspended by cables below two helicopters. Results are given for the orientation angles of the suspension system and its internal forces, and for the helicopter thrust vector requirements under general circumstances, including nonidentical helicopters, any accelerating or static equilibrium reference flight condition, any system heading relative to the flight direction, and any distribution of the load to the two helicopters. Optimum tether angles which minimize the sum of the required thrust magnitudes are also determined. The analysis does not consider the attitude degrees of freedom of the load and helicopters in detail, but assumes that these bodies are stable, and that their aerodynamic forces in equilibrium flight can be determined independently as functions of the reference trajectory. The ranges of these forces for sample helicopters and loads are examined and their effects on the equilibrium characteristics are given parametrically in the results.
Economic networks in and out of equilibrium
Squartini, Tiziano
2013-01-01
Economic and financial networks play a crucial role in various important processes, including economic integration, globalization, and financial crises. Of particular interest is understanding whether the temporal evolution of a real economic network is in a (quasi-)stationary equilibrium, i.e. characterized by smooth structural changes rather than abrupt transitions. Smooth changes in quasi-equilibrium networks can be generally controlled for, and largely predicted, via an appropriate rescaling of structural quantities, while this is generally not possible for abrupt transitions in non-stationary networks. Here we study whether real economic networks are in or out of equilibrium by checking their consistency with quasi-equilibrium maximum-entropy ensembles of graphs. As illustrative examples, we consider the International Trade Network (ITN) and the Dutch Interbank Network (DIN). We show that, despite the globalization process, the ITN is an almost perfect example of quasi-equilibrium network, while the DIN ...
Cosmological particle production and generalized thermodynamic equilibrium
Zimdahl, W
1998-01-01
With the help of a conformal, timelike Killing-vector we define generalized equilibrium states for cosmological fluids with particle production. For massless particles the generalized equilibrium conditions require the production rate to vanish, and the well known "global" equilibrium of standard relativistic thermodynamics is recovered as a limiting case. The equivalence between the creation rate for particles with nonzero mass and an effective viscous fluid pressure follows as a consequence of the generalized equilibrium properties. The implications of this equivalence for the cosmological dynamics are discussed, including the possibility of a power-law inflationary behaviour. For a simple gas a microscopic derivation of this kind of equilibrium is given on the basis of relativistic kinetic theory.
Disturbances in equilibrium function after major earthquake
Honma, Motoyasu; Endo, Nobutaka; Osada, Yoshihisa; Kim, Yoshiharu; Kuriyama, Kenichi
2012-10-01
Major earthquakes were followed by a large number of aftershocks, and significant outbreaks of dizziness occurred over a large area. However, it is unclear why a major earthquake causes dizziness. We conducted an intergroup trial on equilibrium dysfunction and the psychological states associated with equilibrium dysfunction in individuals exposed to repetitive aftershocks versus those who were rarely exposed. Greater equilibrium dysfunction was observed in the aftershock-exposed group under conditions without visual compensation. Equilibrium dysfunction in the aftershock-exposed group appears to have arisen from disturbance of the inner ear, as well as individual vulnerability to state anxiety enhanced by repetitive exposure to aftershocks. We indicate potential effects of autonomic stress on equilibrium function after a major earthquake. Our findings may contribute to risk management of psychological and physical health after major earthquakes with aftershocks, and allow development of a new empirical approach to disaster care after such events.
Conjectural Equilibrium in Water-filling Games
Su, Yi
2008-01-01
This paper considers a non-cooperative game in which competing users sharing a frequency-selective interference channel selfishly optimize their power allocation in order to improve their achievable rates. Previously, it was shown that a user having the knowledge of its opponents' channel state information can make foresighted decisions and substantially improve its performance compared with the case in which it deploys the conventional iterative water-filling algorithm, which does not exploit such knowledge. This paper discusses how a foresighted user can acquire this knowledge by modeling its experienced interference as a function of its own power allocation. To characterize the outcome of the multi-user interaction, the conjectural equilibrium is introduced, and the existence of this equilibrium for the investigated water-filling game is proved. Interestingly, both the Nash equilibrium and the Stackelberg equilibrium are shown to be special cases of the generalization of conjectural equilibrium. We develop...
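For context, the conventional iterative water-filling baseline mentioned in this abstract can be sketched as follows. This is a minimal illustrative implementation under an assumed two-user, per-tone channel model, not the authors' code; all gains and budgets below are invented numbers.

```python
import numpy as np

def waterfill(noise, budget, iters=60):
    """Single-user water-filling: bisect on the water level mu so that
    sum(max(mu - noise, 0)) over the tones equals the power budget."""
    lo, hi = 0.0, noise.max() + budget
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

def iterative_waterfilling(gain, cross, budget, rounds=50):
    """Two users repeatedly water-fill, each treating the other user's
    current transmission as additional noise (the myopic baseline).

    gain[k][n]  : direct channel gain of user k on tone n
    cross[k][n] : gain from the other user into user k's receiver
    """
    power = np.zeros_like(gain)
    for _ in range(rounds):
        for k in (0, 1):
            other = 1 - k
            eff_noise = (1.0 + cross[k] * power[other]) / gain[k]
            power[k] = waterfill(eff_noise, budget[k])
    return power
```

A foresighted user, by contrast, would model how `power[other]` reacts to its own allocation instead of treating it as fixed; that reaction model is what the conjectural-equilibrium framework formalizes.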
Entanglement structure of non-equilibrium steady states
Mahajan, Raghu; Mumford, Sam; Tubman, Norm; Swingle, Brian
2016-01-01
We study the problem of calculating transport properties of interacting quantum systems, specifically electrical and thermal conductivities, by computing the non-equilibrium steady state (NESS) of the system biased by contacts. Our approach is based on the structure of entanglement in the NESS. With reasonable physical assumptions, we show that a NESS close to local equilibrium is lightly entangled and can be represented via a computationally efficient tensor network. We further argue that the NESS may be found by dynamically evolving the system within a manifold of appropriate low entanglement states. A physically realistic law of dynamical evolution is Markovian open system dynamics, or the Lindblad equation. We explore this approach in a well-studied free fermion model where comparisons with the literature are possible. We study both electrical and thermal currents with and without disorder, and compute entropic quantities such as mutual information and conditional mutual information. We conclude with a di...
EKELAND’S PRINCIPLE FOR SET-VALUED VECTOR EQUILIBRIUM PROBLEMS
龚循华
2014-01-01
In this paper, we introduce a concept of quasi C-lower semicontinuity for set-valued mappings and provide a vector version of Ekeland’s theorem related to set-valued vector equilibrium problems. As applications, we derive an existence theorem of weakly efficient solutions for set-valued vector equilibrium problems without the assumption of convexity of the constraint set and the assumptions of convexity and monotonicity of the set-valued mapping. We also obtain an existence theorem of ε-approximate solutions for set-valued vector equilibrium problems without the assumptions of compactness and convexity of the constraint set.
Stabilized oil production conditions in the development equilibrium of a water-flooding reservoir
Renshi Nie
2016-12-01
Water injection can compensate for pressure depletion during production. This paper first investigates the equilibrium among water influx, water injection and production. The equilibrium principle is elaborated through deduction of an equilibrium equation and presentation of equilibrium curves with an "equilibrium point". Influences of artificially controllable factors (e.g. the well ratio of injection to production and the total well number) on equilibrium were analyzed using field data. It was found that these influences are mainly reflected in the movement of the equilibrium point as the factors change. The reservoir pressure maintenance level was then introduced to reveal how the liquid rate and oil rate vary as water cut rises. It was also found that, even if reservoir pressure is kept constant, the oil rate still inevitably declines. In the field, however, a stabilized oil rate is always pursued for development efficiency. Therefore, the equilibrium conditions for stabilized oil production were studied by probing effective measures to maintain oil rate stability after the increase of water cut in an example reservoir. Successful application indicated that the integrated approach is practical and feasible, and hence can be applied to other similar reservoirs.
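The pressure-equilibrium idea in this abstract reduces to a one-line material balance. The sketch below is a toy illustration with hypothetical formation volume factors, not the paper's equilibrium equation or field data:

```python
def equilibrium_injection(q_liquid, q_influx, b_l=1.05, b_w=1.0):
    """Injection rate at which produced reservoir volume is exactly replaced.

    q_liquid : surface liquid production rate (m3/day)
    q_influx : natural aquifer water influx rate (reservoir m3/day)
    b_l, b_w : liquid and injected-water formation volume factors
               (illustrative values, not from the paper)

    Pressure stabilizes (the "equilibrium point") when injected volume
    plus influx balances voidage:  q_inj * b_w + q_influx = q_liquid * b_l.
    """
    return max((q_liquid * b_l - q_influx) / b_w, 0.0)
```

For example, producing 1000 m3/day of liquid against 200 m3/day of natural influx requires injecting 850 m3/day under these assumed volume factors; if influx alone exceeds the voidage, no injection is needed.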
Shizuma, Kiyoshi, E-mail: shizuma@hiroshima-u.ac.jp [Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan); Oba, Yurika; Takada, Momo [Graduate School of Integrated Arts and Sciences, Hiroshima University, Higashi-Hiroshima 739-8521 (Japan)
2016-09-15
A method for determining the γ-ray full-energy peak efficiency at positions close to three Ge detectors and at the well port of a well-type detector was developed for measuring environmental volume samples containing {sup 137}Cs, {sup 134}Cs and {sup 40}K. The efficiency was estimated by considering two correction factors: coincidence-summing and self-absorption corrections. The coincidence-summing correction for a cascade transition nuclide was estimated by an experimental method involving measuring a sample at the far and close positions of a detector. The derived coincidence-summing correction factors were compared with those of analytical and Monte Carlo simulation methods and good agreements were obtained. Differences in the matrix of the calibration source and the environmental sample resulted in an increase or decrease of the full-energy peak counts due to the self-absorption of γ-rays in the sample. The correction factor was derived as a function of the densities of several matrix materials. The present method was applied to the measurement of environmental samples and also low-level radioactivity measurements of water samples using the well-type detector.
Commodity Money Equilibrium in a Convex Trading Post Economy with Transaction Costs
Ross M. Starr
2007-01-01
Existence and efficiency of general equilibrium with commodity money is investigated in an economy where N commodities are traded at N(N-1)/2 commodity-pairwise trading posts. Trade is a resource-using activity recovering transaction costs through the spread between bid (wholesale) and ask (retail) prices. Budget constraints, enforced at each trading post separately, imply demand for a carrier of value between trading posts. Existence of general equilibrium is established under conventiona...
Roperch, Jean-Pierre
2015-05-29
Background: Quantitative methylation-specific PCR (QM-MSP) is a promising method for colorectal cancer (CRC) diagnosis from stool samples. Difficulty in eliminating PCR inhibitors from this body fluid has been extensively reported. Here, spermidine is presented as a PCR facilitator for the detection of stool DNA methylation biomarkers using QM-MSP. We examined its effectiveness with NPY, PENK and WIF1, three biomarkers which we have previously shown to be of relevance to CRC. Results: We determined an optimal window for the amplification of the albumin (Alb) gene (100 ng of bisulfite-treated stool DNA with 1 mM spermidine added) at which spermidine acts as a PCR facilitator (AE = 1680%) for SG RT-PCR. We show that the amplification of methylated PENK, NPY and WIF1 by QM-MSP is considerably facilitated, as measured by an increase of the CMI (Cumulative Methylation Index, i.e. the sum of the three methylation values) by a factor of 1.5 to 23 fold in individual samples, and of 10 fold in a pool of five samples. Conclusions: We contend that spermidine greatly reduces the problems of PCR inhibition in stool samples. This observed feature, after validation on a larger sample set, could be used in the development of stool-based CRC diagnosis tests.
Silvia, Paul J; Kwapil, Thomas R; Walsh, Molly A; Myin-Germeys, Inez
2014-03-01
Experience-sampling research involves trade-offs between the number of questions asked per signal, the number of signals per day, and the number of days. By combining planned missing-data designs and multilevel latent variable modeling, we show how to reduce the items per signal without reducing the number of items. After illustrating different designs using real data, we present two Monte Carlo studies that explored the performance of planned missing-data designs across different within-person and between-person sample sizes and across different patterns of response rates. The missing-data designs yielded unbiased parameter estimates but slightly higher standard errors. With realistic sample sizes, even designs with extensive missingness performed well, so these methods are promising additions to an experience-sampler's toolbox.
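A toy Monte Carlo along the lines this abstract describes (simple person-mean estimation rather than the authors' multilevel latent variable models; all sample sizes and variances are made up) shows why planned missingness leaves estimates essentially unbiased while inflating standard errors:

```python
import numpy as np

def simulate(n_people=200, n_signals=30, n_items=9, keep=3, seed=0):
    """Compare person-mean estimates from complete vs planned-missing data."""
    rng = np.random.default_rng(seed)
    trait = rng.normal(0.0, 1.0, n_people)            # latent person means
    noise = rng.normal(0.0, 1.0, (n_people, n_signals, n_items))
    full = trait[:, None, None] + noise               # complete responses
    # Planned missingness: at each signal, administer only `keep` of the items,
    # chosen at random (missing completely at random by design).
    mask = np.zeros_like(full, dtype=bool)
    for i in range(n_people):
        for j in range(n_signals):
            mask[i, j, rng.choice(n_items, keep, replace=False)] = True
    est_full = full.mean(axis=(1, 2))
    est_miss = np.nanmean(np.where(mask, full, np.nan), axis=(1, 2))
    return trait, est_full, est_miss
```

Because the missingness is by design and unrelated to the responses, the reduced-item estimates center on the same latent values; they are simply noisier, matching the "unbiased but slightly higher standard errors" pattern reported.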
Phan Quoc Khanh
2014-01-01
The purpose of this paper is to introduce several types of Levitin-Polyak well-posedness for bilevel vector equilibrium and optimization problems with equilibrium constraints. Based on criteria and characterizations for these types of Levitin-Polyak well-posedness, we argue in terms of diameters and of Kuratowski’s, Hausdorff’s, or Istrǎtescu’s measures of noncompactness of approximate solution sets under suitable conditions, and we prove the Levitin-Polyak well-posedness of bilevel vector equilibrium and optimization problems with equilibrium constraints. We obtain a gap function for bilevel vector equilibrium problems with equilibrium constraints using the nonlinear scalarization function, and we consider relations between these types of LP well-posedness for bilevel vector optimization problems with equilibrium constraints and the corresponding types of Levitin-Polyak well-posedness for bilevel vector equilibrium problems with equilibrium constraints under suitable conditions.
Mirabilite solubility in equilibrium sea ice brines
Butler, Benjamin Miles; Papadimitriou, Stathys; Santoro, Anna; Kennedy, Hilary
2016-06-01
The sea ice microstructure is permeated by brine channels and pockets that contain concentrated seawater-derived brine. Cooling the sea ice results in further formation of pure ice within these pockets as thermal equilibrium is attained, resulting in a smaller volume of increasingly concentrated residual brine. The coupled changes in temperature and ionic composition result in supersaturation of the brine with respect to mirabilite (Na2SO4·10H2O) at temperatures below -6.38 °C, which consequently precipitates within the sea ice microstructure. Here, mirabilite solubility in natural and synthetic seawater derived brines, representative of sea ice at thermal equilibrium, has been measured in laboratory experiments between 0.2 and -20.6 °C, and hence we present a detailed examination of mirabilite dynamics within the sea ice system. Below -6.38 °C mirabilite displays particularly large changes in solubility as the temperature decreases, and by -20.6 °C its precipitation results in 12.90% and 91.97% reductions in the total dissolved Na+ and SO42- concentrations respectively, compared to that of conservative seawater concentration. Such large non-conservative changes in brine composition could potentially impact upon the measurement of sea ice brine salinity and pH, whilst the altered osmotic conditions may create additional challenges for the sympagic organisms that inhabit the sea ice system. At temperatures above -6.38 °C, mirabilite again displays large changes in solubility that likely aid in impeding its identification in field samples of sea ice. Our solubility measurements display excellent agreement with that of the FREZCHEM model, which was therefore used to supplement our measurements to colder temperatures. Measured and modelled solubility data were incorporated into a 1D model for the growth of first-year Arctic sea ice. Model results ultimately suggest that mirabilite has a near ubiquitous presence in much of the sea ice on Earth, and illustrate the
Zheng, Jian; Cao, Liguo; Tagami, Keiko; Uchida, Shigeo
2016-09-06
High yield fission products, (135)Cs and (137)Cs, have entered the environment as a result of anthropogenic nuclear activities. Analytical methods for ultratrace measurement of (135)Cs and (137)Cs are required for environmental geochemical and nuclear forensics studies. Here we report a highly sensitive method combining a desolvation sample introduction system (APEX-Q) with triple-quadrupole inductively coupled plasma mass spectrometry (APEX-ICPMS/MS) for the determination of (135)Cs and the (135)Cs/(137)Cs isotope ratio at femtogram levels. Using this system, we introduced only selected ions into the collision/reaction cell to react with N2O, significantly reducing the isobaric interferences ((135)Ba(+) and (137)Ba(+)) and polyatomic interferences ((95,97)Mo(40)Ar(+), (119)Sn(16)O(+), and (121)Sb(16)O(+)). Compared to the standard ICPMS/MS instrument setup, the APEX-ICPMS/MS enables a 10-fold sensitivity increase. In addition, an effective chemical separation scheme consisting of ammonium molybdophosphate (AMP) Cs-selective adsorption and two-stage ion-exchange chromatographic separation was developed to remove major matrix and interfering elements from environmental samples (10-40 g). This separation method showed high decontamination factors (10(4)-10(7)) for major matrix elements (Al, Ca, K, Mg, Na, and Si) and interfering elements (Ba, Mo, Sb, and Sn). The high sensitivity of APEX-ICPMS/MS and the effective removal of the sample matrix allowed reliable analysis of (135)Cs and (137)Cs with extremely low detection limits (0.002 pg mL(-1), corresponding to 0.006 Bq mL(-1) (137)Cs). The accuracy and applicability of the APEX-ICPMS/MS method was validated by analysis of seven standard reference materials (soils, sediment, and plants). For the first time, ultratrace determination of (135)Cs and the (135)Cs/(137)Cs isotope ratio in global fallout source environmental samples was achieved with the ICPMS technique.
Thermodynamics of the multicomponent vapor-liquid equilibrium under capillary pressure difference
Shapiro, Alexander; Stenby, Erling Halfdan
2001-01-01
We discuss the two-phase multicomponent equilibrium, provided that the phase pressures are different due to the action of capillary forces. We prove two general properties of such an equilibrium, which have previously been known for the single-component case but, to the best of our knowledge, not for multicomponent mixtures. The importance of the space of the intensive variables P, T and mu(i) is emphasized, where the laws of capillary equilibrium have a simple geometrical interpretation. We formulate thermodynamic problems specific to such an equilibrium, and outline changes to be introduced to common algorithms of flash calculations in order to solve these problems. Sample calculations show large variation of the capillary properties of the mixture in the very neighborhood of the phase envelope and the restrictive role of the spinodal surface as a boundary for possible equilibrium states with different phase pressures.
The unlikely Carnot efficiency.
Verley, Gatien; Esposito, Massimiliano; Willaert, Tim; Van den Broeck, Christian
2014-09-15
The efficiency of a heat engine is traditionally defined as the ratio of its average output work over its average input heat. Its highest possible value was discovered by Carnot in 1824 and is a cornerstone concept in thermodynamics. It led to the discovery of the second law and to the definition of the Kelvin temperature scale. Small-scale engines operate in the presence of highly fluctuating input and output energy fluxes. They are therefore much better characterized by fluctuating efficiencies. In this study, using the fluctuation theorem, we identify universal features of efficiency fluctuations. While the standard thermodynamic efficiency is, as expected, the most likely value, we find that the Carnot efficiency is, surprisingly, the least likely in the long time limit. Furthermore, the probability distribution for the efficiency assumes a universal scaling form when operating close to equilibrium. We illustrate our results analytically and numerically on two model systems.
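The notion of a fluctuating efficiency can be illustrated numerically with a crude caricature: per-cycle work and heat drawn as independent Gaussians, not the fluctuation-theorem analysis of the paper, and with invented numbers throughout.

```python
import numpy as np

def efficiency_samples(n_cycles, n_traj=10000, seed=1):
    """Stochastic efficiency eta = W_out / Q_in accumulated over n_cycles.

    Per-cycle heat input ~ N(1.0, 0.3) and work output ~ N(0.3, 0.3),
    so the macroscopic (average-over-average) efficiency is 0.3
    (illustrative numbers, not a physical engine model).
    """
    rng = np.random.default_rng(seed)
    q_in = rng.normal(1.0, 0.3, (n_traj, n_cycles)).sum(axis=1)
    w_out = rng.normal(0.3, 0.3, (n_traj, n_cycles)).sum(axis=1)
    return w_out / q_in
```

Over short trajectories the per-trajectory efficiency fluctuates widely (it can even exceed 1 or go negative in this toy), while over long trajectories its distribution sharpens around the macroscopic value, consistent with the standard efficiency being the most likely value in the long-time limit.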
Grant, Christopher V.; Yang, Yuan; Glibowicka, Mira; Wu, Chin H.; Park, Sang Ho; Deber, Charles M.; Opella, Stanley J.
2009-11-01
The design, construction, and performance of a cross-coil double-resonance probe for solid-state NMR experiments on lossy biological samples at high magnetic fields are described. The outer coil is a Modified Alderman-Grant Coil (MAGC) tuned to the 1H frequency. The inner coil consists of a multi-turn solenoid coil that produces a B1 field orthogonal to that of the outer coil. This results in a compact nested cross-coil pair with the inner solenoid coil tuned to the low frequency detection channel. This design has several advantages over multiple-tuned solenoid coil probes, since RF heating from the 1H channel is substantially reduced, it can be tuned for samples with a wide range of dielectric constants, and the simplified circuit design and high inductance inner coil provides excellent sensitivity. The utility of this probe is demonstrated on two electrically lossy samples of membrane proteins in phospholipid bilayers (bicelles) that are particularly difficult for conventional NMR probes. The 72-residue polypeptide embedding the transmembrane helices 3 and 4 of the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) (residues 194-241) requires a high salt concentration in order to be successfully reconstituted in phospholipid bicelles. A second application is to paramagnetic relaxation enhancement applied to the membrane-bound form of Pf1 coat protein in phospholipid bicelles where the resistance to sample heating enables high duty cycle solid-state NMR experiments to be performed.
Feekings, Jordan P.; Christensen, Asbjørn; Jonsson, Patrik;
2014-01-01
The primary aim of this study was to determine whether the information collected as part of the at-sea-sampling program could be used to identify hydrographical and environmental variables that are influential on catch rates of Norway lobster. Ultimately, we wanted to know whether environmental...
A general framework for ion equilibrium calculations in compacted bentonite
Birgersson, Martin
2017-03-01
An approach for treating chemical equilibrium between compacted bentonite and aqueous solutions is presented. The treatment is based on conceptualizing bentonite as a homogeneous mixture of water and montmorillonite, and assumes Gibbs-Donnan membrane equilibrium across interfaces to external solutions. An equation for calculating the electrostatic potential difference between bentonite and external solution (Donnan potential) is derived and solved analytically for some simple systems. The solutions are furthermore analyzed in order to illuminate the general mechanisms of ion equilibrium and their relation to measurable quantities. A method is suggested for estimating interlayer activity coefficients based on the notion of an interlayer ionic strength. Using this method, several applications of the framework are presented, giving a set of quantitative predictions which may be relatively simply tested experimentally, e.g.: (1) the relative amount of anions entering the bentonite depends approximately on the square-root of the external concentration for a 2:1 salt (e.g. CaCl2). For a 1:1 salt (e.g. NaCl) the dependence is approximately linear, and for a 1:2 salt (e.g. Na2SO4) the dependence is approximately quadratic. (2) Bentonite contains substantially more nitrate as compared to chloride if equilibrated with the two salt solutions at equal external concentration. (3) Potassium bentonite generally contains more anions as compared to sodium bentonite if equilibrated at the same external concentration. (4) The anion concentration ratio in two bentonite samples of different cations (but with the same density and cation exchange capacity) resembles the ion exchange selectivity coefficient for that specific cation pair. The results show that an adequate treatment of chemical equilibrium between interlayers and bulk solutions is essential when modeling compacted bentonite, and that activity corrections generally are required for relevant ion equilibrium calculations. It
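The concentration scalings in prediction (1) follow from ideal Gibbs-Donnan equilibrium. A minimal sketch for the 1:1 case (an illustrative ideal-solution calculation, ignoring the activity corrections the abstract argues are needed):

```python
import math

def interlayer_anion(c_ext, X):
    """Ideal Donnan equilibrium for a 1:1 salt (e.g. NaCl).

    c_ext : external salt concentration
    X     : fixed-charge (cation exchange) concentration in the clay

    Equal ion products inside and outside plus interlayer charge
    neutrality give a * (a + X) = c_ext**2 for the interlayer anion
    concentration a, so the relative anion uptake a / c_ext grows
    approximately linearly in c_ext when c_ext << X.
    """
    return (-X + math.sqrt(X * X + 4.0 * c_ext * c_ext)) / 2.0
```

In the dilute limit this reproduces both anion exclusion (the interlayer anion concentration is far below the external one) and the approximately linear dependence of the relative uptake on external concentration.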
Bazregar, Mohammad; Rajabi, Maryam; Yamini, Yadollah; Asghari, Alireza; Abdossalami asl, Yousef
2015-09-04
A simple and efficient extraction technique with sub-microliter organic solvent consumption, termed in-tube electro-membrane extraction (IEME), is introduced. This method is based upon the electro-kinetic migration of ionized compounds by the application of an electrical potential difference. For this purpose, a thin polypropylene (PP) sheet placed inside a tube acts as a support for the membrane solvent, and 30 μL of an aqueous acceptor solution is separated by this solvent from 1.2 mL of an aqueous donor solution. This method yielded high extraction recoveries (63-81%), and the consumption of the organic solvent used was only 0.5 μL. With this method, purification is high, and the use of the organic solvent as a mediator is simple and repeatable. The proposed method was evaluated by extraction of four synthetic food dyes (Amaranth, Ponceau 4R, Allura Red, and Carmoisine) as the model analytes. Optimization of the variables affecting the method was carried out in order to achieve the best extraction efficiency. These variables were the type of membrane solvent, applied extraction voltage, extraction time, pH range, and concentration of salt added. Under the optimized conditions, IEME-HPLC-UV provided good linearity in the range of 1.00-800 ng mL(-1), low limits of detection (0.3-1 ng mL(-1)), and good extraction repeatability (RSDs below 5.2%, n=5). This design appears well suited for automation of the method. Moreover, the sub-microliter consumption of organic solvent, together with the method's simplicity, high efficiency, and high purification, brings it closer to the objectives of green chemistry.
Electric Current Equilibrium in the Corona
Filippov, Boris
2013-01-01
A hyperbolic flux-tube configuration containing a null point below the flux rope is considered as a pre-eruptive state of coronal mass ejections that start simultaneously with flares. We demonstrate that this configuration is unstable and cannot exist for a long time in the solar corona. The inference follows from general equilibrium conditions and from analyzing simple models of the flux-rope equilibrium. A direct consequence of the stable flux-rope equilibrium in the corona are separatrices in the horizontal-field distribution in the chromosphere. They can be recognized as specific "herring-bone structures" in a chromospheric fibril pattern.
A Multi Period Equilibrium Pricing Model
Pirvu, Traian A
2012-01-01
In this paper, we propose an equilibrium pricing model in a dynamic multi-period stochastic framework with uncertain income streams. In an incomplete market, there exist two traded risky assets (e.g. stock/commodity and weather derivative) and a non-traded underlying (e.g. temperature). The risk preferences are of exponential (CARA) type with a stochastic coefficient of risk aversion. Both time consistent and time inconsistent trading strategies are considered. We obtain the equilibrium prices of a contingent claim written on the risky asset and non-traded underlying. By running numerical experiments we examine how the equilibrium prices vary in response to changes in model parameters.
THE STABILITY OF LIQUID EVAPORATION EQUILIBRIUM
SHIMIN ZHANG
2005-01-01
For the evaporation of the pure liquid under the condition of constant temperature and constant external pressure, the phase equilibrium of the liquid vapor in the bubble and the liquid outside the bubble is always a kind of stable equilibrium whether there is air or not in the bubble. If there is no air in the bubble, the bubble and liquid cannot coexist in the mechanical equilibrium when the vapor pressure of the liquid in the bubble is less than or equal to the external pressure; the bubbl...
Marcelino, Cleuton P.; Valentim, Adriano C.M.; Medeiros, Ana Catarina R. de; Girao, Joaquim H.S.; Barcia, Rosangela B. [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil)
2004-07-01
Water-soluble polymers have been used extensively in petroleum recovery, due to their ability to increase the viscosity of the injection water and to reduce the water/oil mobility ratio and the water relative permeability in the reservoir. This reduction acts favorably as a secondary effect, and it reestablishes part of the pressure in the reservoir after the flow of the polymer, causing a correction of the injection profile in the wells through the redistribution of the resident fluids in the porous media. Nevertheless, several parameters influence the effectiveness of this mechanism, such as petrophysical properties, chemical composition of the rock, adsorption, the resistance factor and the residual resistance factor. Many papers in the area of polymers applied to enhanced petroleum recovery indicate high efficiency for the injection of different partially hydrolysed polyacrylamides, in different concentrations, or even under different injection conditions, such as temperature and flow rate, among others. In this work, the behavior and efficiency of partially hydrolysed polyacrylamide flooding were evaluated on outcrop cores from Botucatu, Rio Bonito, Clashach and Assu, using core flow tests and computer simulations. (author)
Elviri, Lisa; Foresti, Ruben; Bianchera, Annalisa; Silvestri, Marco; Bettini, Ruggero
2016-08-01
The potential of 3D printing technology was exploited here to prepare tailored polylactic acid (PLA) supports for desorption electrospray ionization (DESI) experiments. PLA rough solid supports presenting wells of different shapes (i.e. cylindrical, cubic and hemispherical cavities) were designed to accommodate samples of different physical states. The potential of such supports in terms of sample loading capacity, sensitivity, and signal stability was tested by analysing a peptide (i.e. insulin) and an aminoglycoside antibiotic (i.e. gentamicin sulphate) from solution and from a chitosan-based gel. The results obtained were compared with those obtained using a traditional polytetrafluoroethylene (PTFE) support and discussed. Using the PLA support on the flat side, signal intensity improved almost twofold with respect to the PTFE support, whereas with spherical wells a five-fold improvement in signal sensitivity and good signal stability were achieved. These results support the use of 3D printing technology for the development of devices for a DESI source, presenting different shapes or configurations as a function of the sample type.
Anonymous
2001-01-01
A novel spherical macroporous epoxy-dicyandiamide chelate resin was synthesized simply and rapidly from epoxy resin and used for the preconcentration and separation of trace amounts of Au(III), Hg(II), Pd(IV) and Ru(III) ions from solution samples. The analyzed ions can be quantitatively concentrated by the resin at a flow rate of 2.0 mL/min at pH 4, and can be desorbed with 15 mL of 4 mol/L HCl + 0.3 g thiourea from the resin column with recoveries of 96.5%-99.0%. After the chelate resin was reused 7 times, the recoveries of these ions were still over 92%, and a 400-1000-fold excess of Fe(III), Al(III), Ni(II), Mn(II), Cr(III), Cu(II), Cd(II) and Pb(II) caused little interference with the determination of these ions by inductively coupled plasma optical emission spectrometry (ICP-OES). The capacities of the resin for the analytes are in the range of 0.35-0.92 mmol/g. The RSDs of the proposed method are in the range of 1.1%-4.0% for each of the analyzed ions. The recoveries of a standard added to real solution samples are between 96.5% and 98.5%, and the results for the analyzed ions in a powder sample are in good agreement with their reported values.
Githure John I
2009-09-01
Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, summarizes error estimates from multiple habitat locations, which makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected from July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction …
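The global autocorrelation statistic at the heart of the abstract above can be illustrated with a minimal global Moran's I computation. The transect layout, binary contiguity weights and larval counts below are hypothetical stand-ins, not the Karima field data:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for `values` under spatial weight matrix `weights`."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()              # deviations from the mean
    num = n * (z @ w @ z)         # spatially weighted cross-products
    den = w.sum() * (z @ z)
    return num / den

# Four habitat sites along a transect; neighbours are adjacent sites
# (binary contiguity weights; purely illustrative)
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
larval_counts = [2.0, 3.0, 9.0, 10.0]   # hypothetical larval densities

print(round(morans_i(larval_counts, w), 3))  # → 0.4 (positive clustering)
```

Values near +1 indicate clustered habitats, values near −1 alternation; the eigenfunction spatial filtering in the abstract decomposes exactly this statistic into orthogonal map-pattern components.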
Stochastic approach to equilibrium and nonequilibrium thermodynamics.
Tomé, Tânia; de Oliveira, Mário J
2015-04-01
We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) or continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself, and the second is the definition of the entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom, in or out of thermodynamic equilibrium, and show how the macroscopic laws are derived from the stochastic dynamics. These studies include quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions.
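The non-negative entropy production rate for master-equation dynamics can be made concrete with the standard Schnakenberg form, σ = ½ Σ_{ij} (W_ij p_j − W_ji p_i) ln(W_ij p_j / W_ji p_i). A minimal sketch, with hypothetical three-state cyclic rates chosen to break detailed balance (not a system from the paper):

```python
import numpy as np

# Transition rates W[i, j]: rate from state j to state i, for a biased
# 3-state cycle (hypothetical numbers; the bias breaks detailed balance)
W = np.array([[0.0, 1.0, 2.0],
              [2.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])

# Generator of the master equation dp/dt = L p
L = W - np.diag(W.sum(axis=0))

# Stationary distribution: null vector of L, normalized to sum to 1
vals, vecs = np.linalg.eig(L)
p = np.real(vecs[:, np.argmin(np.abs(vals))])
p /= p.sum()

# Schnakenberg entropy production rate: non-negative, and zero
# if and only if detailed balance holds
sigma = 0.0
for i in range(3):
    for j in range(3):
        if i != j:
            Jij = W[i, j] * p[j]
            Jji = W[j, i] * p[i]
            sigma += 0.5 * (Jij - Jji) * np.log(Jij / Jji)

print(sigma > 0)  # → True: a nonequilibrium steady state
```

For a cycle with reversed rates swapped (detailed balance restored), the same loop yields σ = 0, matching the second assumption in the abstract.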
Effect of Ultrasound on Desorption Equilibrium
秦炜; 原永辉; 戴猷元
2001-01-01
The effects of ultrasound on the intensification of separation processes were investigated through desorption equilibrium experiments. Tributyl phosphate (TBP) on NKA-X resin and phenol on a solvent-impregnated resin, CL-TBP resin, were used for the desorption processes. The desorption rate was measured with and without ultrasound. Desorption equilibrium was studied under various ultrasonic power densities or thermal infusion. Results showed that the desorption rate with ultrasound was much higher than that with normal thermal infusion. Both ultrasound and thermal infusion broke the desorption equilibrium that existed at room temperature. However, after the systems were cooled down, the amount of solute desorbed into the liquid phase in the presence of ultrasound was much higher than that at the temperature corresponding to the same ultrasound power. This proves that the initial desorption equilibrium was broken as a result of the spot-energy effect of ultrasound.
Equilibrium Analysis for Anycast in WDM Networks
唐矛宁; 王汉兴
2005-01-01
In this paper, the wavelength-routed WDM network was analyzed for the dynamic case, in which the arrival of anycast requests was modeled by a state-dependent Poisson process. An equilibrium analysis was also given for the UWNC algorithm.
POSITIVE EQUILIBRIUM SOLUTIONS OF SEMILINEAR PARABOLIC EQUATIONS
(no author listed)
2006-01-01
The author studies semilinear parabolic equations with initial and periodic boundary value conditions. In the presence of non-well-ordered sub- and supersolutions, i.e. when the ordering "subsolution ≤ supersolution" does not hold, the existence and stability/instability of equilibrium solutions are obtained.
Equilibrium fluctuation energy of gyrokinetic plasma
Krommes, J.A.; Lee, W.W.; Oberman, C.
1985-11-01
The thermal equilibrium electric-field fluctuation energy of the gyrokinetic model of magnetized plasma is computed, and found to be smaller than the well-known result …
Information equilibrium as an economic principle
2015-01-01
A general information equilibrium model in the case of ideal information transfer is defined and then used to derive the relationship between supply (information destination) and demand (information source) with the price as the detector of information exchange between demand and supply. We recover the properties of the traditional economic supply-demand diagram. Information equilibrium is then applied to macroeconomic problems, recovering some common macroeconomic models in particular limits...
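A common statement of the ideal information-transfer condition is that the price acts as the derivative P = dD/dS = k·D/S (this functional form is an assumption here; the abstract does not spell it out). Separating variables integrates it to the power law D/D0 = (S/S0)^k, which a crude numerical check confirms:

```python
import numpy as np

# Ideal information transfer: price P = dD/dS = k * D / S
# (assumed form; constants k, S0, D0 are hypothetical)
k, S0, D0 = 2.0, 1.0, 1.0

# Crude Euler integration of dD/dS = k*D/S from (S0, D0) to S = 3
S = np.linspace(S0, 3.0, 20_001)
h = S[1] - S[0]
D = D0
for s in S[:-1]:
    D += k * D / s * h

closed_form = D0 * (S[-1] / S0) ** k      # power-law solution, = 9.0 here
print(abs(D - closed_form) / closed_form < 1e-3)  # → True
```

Along such a demand curve the price k·D/S falls as supply S grows at fixed demand, reproducing the downward-sloping side of the textbook supply-demand diagram the abstract mentions.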
The Theory of Variances in Equilibrium Reconstruction
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kinds of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
OPTIMAL RESOURCE ALLOCATION IN GENERAL COURNOT-COMPETITIVE EQUILIBRIUM
Ervik, Inger Sommerfelt; Soegaard, Christian
2013-01-01
Conventional economic theory stipulates that output in Cournot competition is too low relative to that which is attained in perfect competition. We revisit this result in a General Cournot-competitive Equilibrium model with two industries that differ only in terms of productivity. We show that in general equilibrium, the more efficient industry produces too little and the less efficient industry produces too much compared to an optimal scenario with perfect competition.
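The partial-equilibrium version of the benchmark result, Cournot output falling short of the perfectly competitive level, can be checked with the textbook closed forms for linear demand (the numbers are hypothetical, and this is a single-market sketch, not the paper's two-industry general-equilibrium model):

```python
# Linear inverse demand P(Q) = a - b*Q with constant marginal cost c
# (all parameter values hypothetical)
a, b, c, n = 10.0, 1.0, 2.0, 2

# Symmetric n-firm Cournot equilibrium: each firm sells (a-c)/((n+1)*b)
q_cournot_total = n * (a - c) / ((n + 1) * b)

# Perfect competition: price driven down to marginal cost, P(Q) = c
q_competitive = (a - c) / b

print(q_cournot_total, q_competitive)
print(q_cournot_total < q_competitive)  # → True: Cournot under-produces
```

As n grows, the Cournot total n(a−c)/((n+1)b) approaches the competitive level (a−c)/b, which is the usual limiting argument behind the stipulation the abstract revisits.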
Liu Qing-you
2011-01-01
Abstract In this paper, a scalarization result for ε-weak efficient solutions of a vector equilibrium problem (VEP) is given. Using this scalarization result, the connectedness of the ε-weak efficient and ε-efficient solution sets for VEPs is proved under some suitable conditions in real Hausdorff topological vector spaces. The main results presented in this paper improve and generalize some known results in the literature.
Bi-objective network equilibrium, traffic assignment and road pricing
Wang, JYT; Ehrgott, M
2014-01-01
Multi-objective equilibrium models of traffic assignment state that users of road networks travel on routes that are efficient with respect to several objectives, such as travel time and toll. This concept provides a general framework for modelling traffic flow in tolled road networks. We present the concept of time surplus maximisation as a way of handling user preferences. Given a toll, users have a maximum time they are willing to spend for a trip. Time surplus is this maximum time minus a...
Roda, A.; Nachman, G.; Hosein, F.
2012-01-01
The red palm mite (Raoiella indica), an invasive pest of coconut, entered the Western hemisphere in 2004, then rapidly spread through the Caribbean and into Florida, USA. Developing effective sampling methods may aid in the timely detection of the pest in a new area. Studies were conducted to provide and compare the intra-tree spatial distribution of red palm mite populations on coconut in two different geographical areas, Trinidad and Puerto Rico, recently invaded by the mite. The middle stratum of a palm hosted significantly more mites than fronds from the upper or lower canopy, and fronds from …
Patra, Puneet Kumar; Bhattacharya, Baidurya
2016-10-01
The Crooks fluctuation theorem (CFT) and the Jarzynski equality (JE) are effective tools for obtaining the free-energy difference ΔF(λA→λB, T0) through a set of finite-time, protocol-driven nonequilibrium transitions between two equilibrium states A and B [parametrized by the time-varying protocol λ(t)] at the same temperature T0. Using the generalized dimensionless work function ΔWG, we extend the CFT to transitions between two nonequilibrium steady states (NESSs) created by a thermal gradient. We show that it is possible, provided the period over which the transitions occur is sufficiently long, to obtain ΔF(λA→λB, T0) for different values of T0 using the same set of finite-time transitions between these two NESSs. Our approach thus completely eliminates the need to generate new samples for each new T0. The generalized form of the JE arises naturally as the average of the exponentiated ΔWG. The results are demonstrated on two test cases: (i) a single-particle quartic oscillator with a known closed-form ΔF, and (ii) a one-dimensional φ4 chain. Each system is sampled from the canonical distribution at an arbitrary T′ with λ = λA, then subjected to a temperature gradient between its ends; after steady state is reached, the protocol change λA→λB is effected in time τ, following which ΔWG is computed. The reverse path likewise starts in equilibrium at T′ with λ = λB, and the protocol is time-reversed, leading to λ = λA and the reverse ΔWG. Our method is found to be more efficient than either the JE or the CFT when free-energy differences at multiple values of T0 are required for the same system.
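The equilibrium Jarzynski estimator that this work generalizes, ΔF = −kT ln⟨exp(−βW)⟩, can be sketched on synthetic Gaussian work samples (the work distribution and its parameters are assumptions for illustration, not the paper's quartic-oscillator or φ4-chain data). For Gaussian work the exact answer is ⟨W⟩ − βσ²/2:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0                      # k_B * T0 in reduced units
beta = 1.0 / kT

# Hypothetical work values from repeated finite-time protocol switches
# lambda_A -> lambda_B, assumed Gaussian for this sketch
mu, sigma = 2.0, 0.5
work = rng.normal(mu, sigma, size=200_000)

# Jarzynski equality: exp(-beta * dF) = <exp(-beta * W)>
dF_jarzynski = -kT * np.log(np.mean(np.exp(-beta * work)))
dF_exact = mu - beta * sigma**2 / 2.0   # closed form for Gaussian work

print(abs(dF_jarzynski - dF_exact) < 0.05)  # → True
```

The exponential average is dominated by rare low-work trajectories, which is why the estimator degrades for broad work distributions; the ΔWG construction in the abstract inherits the same average, applied between two NESSs rather than two equilibrium states.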
Herrero, P; Borrull, F; Pocurull, E; Marcé, R M
2013-09-27
An analytical method has been developed that allows the simultaneous determination of five benzotriazole (BTR), four benzothiazole (BT) and five benzenesulfonamide (BSA) derivatives. The method is based on tandem solid-phase extraction (SPE) with Oasis HLB followed by a clean-up step with Florisil. The chromatographic analysis was performed in less than 15 min, and detection was carried out with a triple quadrupole mass analyser operating in multiple reaction monitoring (MRM) mode. A comparison was performed between Oasis HLB and Oasis MAX sorbents for the solid-phase extraction, with Oasis HLB giving the highest recoveries, ranging between 75% and 106% depending on the compound and the matrix analysed. The proposed clean-up with Florisil sorbent reduced the matrix effect to below 20%. The repeatability (%RSD, 50-3000 ng/L, n=3) of the method was less than 15% for all of the compounds in all of the matrices. The limits of detection (LODs) achieved ranged from 1 ng/L for BTRs in river water up to 100 ng/L for BTs in influent sewage. All of the compounds were determined in environmental waters such as river water and sewage. The highest concentrations corresponded to influent sewage samples, in which the sum of concentrations for all compounds was between 4.6 μg/L and 8.0 μg/L. These concentrations were slightly reduced in secondary and tertiary effluent sewage. Moreover, samples from tertiary effluent sewage based on ultrafiltration membrane treatments were also analysed, and preliminary results seem to indicate that these treatments may be the most effective for removing BTR, BT and BSA derivatives.
1978-09-01
A general equilibrium model of an economy with market frictions is presented. A market is said to have frictions if buyers and sellers have trouble finding each other, if it is costly for them to search for each other, and if it is costly to wait to buy or sell. Equilibrium is a stationary probability distribution over the set of possible time paths of states of the economy. This equilibrium reflects rational expectations if all agents know the stationary distribution of the variables they observe and if they exploit this information. Prices are fixed and are not necessarily equilibrium prices.
Asgharinezhad, Ali Akbar; Ebrahimzadeh, Homeira
2016-02-26
In this study, for the first time, the 2-aminobenzothiazole monomer was polymerized on Fe3O4 nanoparticles (NPs) and on graphene oxide/Fe3O4 (GO/Fe3O4) and graphene/Fe3O4 (G/Fe3O4) nanocomposites. The synthesized magnetic nanosorbents were characterized by various techniques. The extraction abilities of these nanosorbents, including Fe3O4, GO/Fe3O4, G/Fe3O4, Fe3O4@poly(2-aminobenzothiazole) (Fe3O4@PABT), GO/Fe3O4@PABT and G/Fe3O4@PABT, were compared for the dispersive micro-solid-phase extraction of three non-steroidal anti-inflammatory drugs. The results revealed that the GO/Fe3O4@PABT nanocomposite demonstrates higher extraction efficiency for naproxen, diclofenac and ibuprofen as selected model analytes. Following the sorption and elution steps, the model analytes were quantified by high-performance liquid chromatography with photodiode array detection. Afterwards, a central composite design methodology combined with a desirability function approach was applied to find the optimal experimental conditions. Under the optimized conditions, the limits of detection and linear dynamic ranges were 0.07-0.3 μg L(-1) and 0.25-2000 μg L(-1), respectively. The extraction recoveries were 87.4%, 85.5% and 90.5% for naproxen, diclofenac and ibuprofen, respectively. The relative standard deviations (n=5) were 7.2%, 5.4% and 6.4% for naproxen, diclofenac and ibuprofen, respectively. Ultimately, this method was employed for urinary monitoring of the target analytes and satisfactory results were obtained.