Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty, derived by assuming a Beta distribution for scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes that optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
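As a toy illustration of penalized maximum likelihood in this spirit, the sketch below estimates a single correlation (a scale-free function of the covariances) with a mild Beta log-density penalty; the unit variances, shape parameter, and grid search are illustrative assumptions, not the authors' multivariate procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho_true = 40, 0.5
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_true], [rho_true, 1.0]], size=n)
x, y = xy[:, 0], xy[:, 1]
sxx, sxy, syy = (x * x).sum(), (x * y).sum(), (y * y).sum()

def penalized_loglik(r, shape=1.5):
    # bivariate-normal log-likelihood of the correlation r (unit variances assumed)
    ll = -0.5 * n * np.log(1.0 - r**2) - (sxx - 2.0 * r * sxy + syy) / (2.0 * (1.0 - r**2))
    # mild Beta(shape, shape) log-density penalty on the scale-free transform (1 + r)/2
    pen = (shape - 1.0) * (np.log((1.0 + r) / 2.0) + np.log((1.0 - r) / 2.0))
    return ll + pen

grid = np.linspace(-0.99, 0.99, 1981)
r_pen = grid[np.argmax(penalized_loglik(grid))]             # penalized estimate
r_ml = grid[np.argmax(penalized_loglik(grid, shape=1.0))]   # shape = 1: penalty vanishes
```

With shape > 1 the penalty shrinks the estimate mildly towards zero; shape = 1 makes the penalty vanish and recovers the plain maximum-likelihood estimate.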
Recommended Maximum Temperature For Mars Returned Samples
Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.
2016-01-01
The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned Martian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those, heating during sample acquisition (drilling) and heating while cached on the Martian surface, potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) 4He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gases, and (T-11) magnetic studies.
Maximum Likelihood Under Response Biased Sampling
Chambers, Raymond; Dorfman, Alan; Wang, Suojin
2003-01-01
Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...
EXACT SAMPLING DISTRIBUTION OF SAMPLE COEFFICIENT OF VARIATION
Dr.G.S.David Sam Jayakumar; A.Sulthan
2015-01-01
This paper proposes the sampling distribution of the sample coefficient of variation from the normal population. We have derived the relationship between the sample coefficient of variation, the standard normal and chi-square variates. We have derived the density function of the sample coefficient of variation in terms of the confluent hypergeometric distribution. Moreover, the first two moments of the distribution are derived and we have proved that the sample coefficient of variation (cv...
Maximum embryo absorbed dose from intravenous urography: interhospital variations
Damilakis, J.; Perisinakis, K. [University of Crete (Greece). Dept. of Medical Physics; Koukourakis, M. [University of Crete (Greece). Dept. of Radiology; Gourtsoyiannis, N. [University Hospital of Iraklion, Crete (Greece). Dept. of Radiotherapy
1997-12-01
The purpose of this study was to determine the maximum embryo dose during intravenous urography (IVU) examinations, when inadvertent irradiation of a pregnant woman occurs, and to investigate the variation of doses received at different institutions. Doses at average embryo depth from IVU examinations have been measured in four institutions using a Rando phantom and thermoluminescent crystals. In order to estimate the maximum range of embryo doses, radiologists were asked to carry out the examinations with the same technique as in female patients with acute ureteral obstruction. The range of doses estimated at embryo depth for the institutions participating in this study was 5.77 to 35.2 mGy. The considerable interhospital variation found in dose can be explained by the different equipment and techniques used. A simple method of estimating embryo dose from pelvic radiographs reported previously was found to be also applicable to IVU examinations. Absorbed dose at 6 cm, the average embryo depth, was found to be significantly less than 50 mGy.
EXACT SAMPLING DISTRIBUTION OF SAMPLE COEFFICIENT OF VARIATION
Dr.G.S.David Sam Jayakumar
2015-06-01
This paper proposes the sampling distribution of the sample coefficient of variation from the normal population. We have derived the relationship between the sample coefficient of variation, the standard normal and chi-square variates. We have derived the density function of the sample coefficient of variation in terms of the confluent hypergeometric distribution. Moreover, the first two moments of the distribution are derived and we have proved that the sample coefficient of variation (cv) is a biased estimator of the population coefficient of variation (CV). Moreover, the shape of the density function of the sample coefficient of variation is also visualized, and the critical points of the sample cv at the 5% and 1% levels of significance for different sample sizes have also been computed.
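The bias result is easy to illustrate by simulation; a minimal sketch assuming a normal population with hypothetical mean, standard deviation and sample size:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 10.0, 2.0, 5, 200_000
pop_cv = sigma / mu                                      # population CV = 0.2

samples = rng.normal(mu, sigma, size=(reps, n))
cv = samples.std(axis=1, ddof=1) / samples.mean(axis=1)  # sample cv for each replicate
bias = cv.mean() - pop_cv                                # clearly nonzero at n = 5
```

At n = 5 the average sample cv falls noticeably below the population CV, consistent with cv being a biased estimator; the bias shrinks as n grows.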
Nonuniform sampling and maximum entropy reconstruction in multidimensional NMR.
Hoch, Jeffrey C; Maciejewski, Mark W; Mobli, Mehdi; Schuyler, Adam D; Stern, Alan S
2014-02-18
NMR spectroscopy is one of the most powerful and versatile analytic tools available to chemists. The discrete Fourier transform (DFT) played a seminal role in the development of modern NMR, including the multidimensional methods that are essential for characterizing complex biomolecules. However, it suffers from well-known limitations: chiefly the difficulty in obtaining high-resolution spectral estimates from short data records. Because the time required to perform an experiment is proportional to the number of data samples, this problem imposes a sampling burden for multidimensional NMR experiments. At high magnetic field, where spectral dispersion is greatest, the problem becomes particularly acute. Consequently, multidimensional NMR experiments that rely on the DFT must either sacrifice resolution in order to be completed in reasonable time or use inordinate amounts of time to achieve the potential resolution afforded by high-field magnets. Maximum entropy (MaxEnt) reconstruction is a non-Fourier method of spectrum analysis that can provide high-resolution spectral estimates from short data records. It can also be used with nonuniformly sampled data sets. Since resolution is substantially determined by the largest evolution time sampled, nonuniform sampling enables high resolution while avoiding the need to uniformly sample at large numbers of evolution times. The Nyquist sampling theorem does not apply to nonuniformly sampled data, and artifacts that occur with the use of nonuniform sampling can be viewed as frequency-aliased signals. Strategies for suppressing nonuniform sampling artifacts include the careful design of the sampling scheme and special methods for computing the spectrum. Researchers now routinely report that they can complete an N-dimensional NMR experiment 3^(N-1) times faster (a 3D experiment in one-ninth of the time). As a result, high-resolution three- and four-dimensional experiments that were prohibitively time consuming are now practical.
Fast Forward Maximum entropy reconstruction of sparsely sampled data.
Balsgart, Nicholas M; Vosegaard, Thomas
2012-10-01
We present an analytical algorithm using fast Fourier transformations (FTs) for deriving the gradient needed as part of the iterative reconstruction of sparsely sampled datasets using the forward maximum entropy reconstruction (FM) procedure by Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it required one FT and one evaluation of the entropy per missing datapoint to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, thus achieving impressive time savings compared to the original procedure. An example: a 2D dataset with sparse sampling of the indirect dimension, with sampling of only 75 out of 512 complex points (15% sampling), would lack (512-75)×2 = 874 points per ν2 slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm is ∼450 times faster in this case, since it requires only two FTs. This allows reduction of the computational time from several hours to less than a minute. Even more impressive time savings may be achieved with 2D reconstructions of 3D datasets, where the original algorithm required days of CPU time on high-performance computing clusters, while the new algorithm requires only a few minutes of calculation on a regular laptop computer.
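The saving comes from pushing the chain rule through the DFT, so all N partial derivatives of a spectral functional arrive in a single extra transform. The sketch below checks this against a finite difference, using a smooth entropy-like functional and sizes chosen for illustration rather than the paper's actual FM entropy:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
t = rng.standard_normal(N)                 # stand-in for the time-domain data

def objective(t):
    p = np.abs(np.fft.fft(t))**2
    return np.sum(p * np.log(p + 1e-12))   # smooth entropy-like spectral functional

def gradient_two_ffts(t):
    F = np.fft.fft(t)                      # transform no. 1: the spectrum
    p = np.abs(F)**2
    dS_dp = np.log(p + 1e-12) + p / (p + 1e-12)
    # chain rule: dS/dt_m = 2 Re( DFT(dS/dp * conj(F))[m] ) -- transform no. 2
    return 2.0 * np.real(np.fft.fft(dS_dp * np.conj(F)))

g = gradient_two_ffts(t)

# finite-difference check of a single coordinate
h = 1e-6
e = np.zeros(N); e[3] = h
fd = (objective(t + e) - objective(t - e)) / (2.0 * h)
```

All N derivatives come from two transforms in total, instead of one transform per missing point as in the original FM scheme.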
IS THE SAMPLE COEFFICIENT OF VARIATION A GOOD ESTIMATOR FOR THE POPULATION COEFFICIENT OF VARIATION?
Mahmoudvand, Rahim; HASSANI, Hossein; Wilson, Rob
2007-01-01
In this paper, we obtain bounds for the population coefficient of variation (CV) in the Bernoulli, Discrete Uniform, Normal and Exponential distributions. We also show that the sample coefficient of variation (cv) is not an accurate estimator of the population CV in these distributions. Finally, we provide some suggestions based on maximum likelihood estimation to improve estimation of the population CV.
A general maximum entropy framework for thermodynamic variational principles
Dewar, Roderick C., E-mail: roderick.dewar@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)
2014-12-05
Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p̂, such that Ψ is a minimum at p̂ = p. Minimization of Ψ with respect to p̂ thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p̂ and p. Illustrative examples of min-Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min-Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law.
A general maximum entropy framework for thermodynamic variational principles
Dewar, Roderick C.
2014-12-01
Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p̂, such that Ψ is a minimum at p̂ = p. Minimization of Ψ with respect to p̂ thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p̂ and p. Illustrative examples of min-Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min-Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law.
Securing maximum diversity of Non Pollen Palynomorphs in palynological samples
Enevold, Renée; Odgaard, Bent Vad
2015-01-01
Palynology is no longer synonymous with the analysis of pollen plus a few fern spores. A wide range of Non Pollen Palynomorphs (NPPs) are now described and are potential palaeoenvironmental proxies in palynological surveys. The contribution of NPPs has proven important to the interpretation of palynological records (e.g. Schulz & Shumilovskikh 2013). Increasingly, it has become customary for palynologists to quantify at least some of the NPPs appearing on the pollen slides (e.g. Strother et al. 2015, Odgaard 1994). But are these samples representative of the initial NPP assemblages? The usual sample preparation method for pollen analysis is based on acetylization (Erdtman 1969) and HF treatment, which are of variable destructiveness to the NPPs. Some NPPs might completely vanish, and the prepared sample might hold less NPP diversity than the initial NPP assemblage. Consequently, it may be advisable to consider ...
Moroz, Adam
2009-06-11
The maximum energy dissipation principle is applied to nonlinear chemical thermodynamics in terms of a distance variable (generalized displacement) from the global equilibrium, using an optimal control interpretation to develop a variational formulation. The cost-like functional was chosen to support the suggestion that such a formulation corresponds to the maximum energy dissipation principle. Using this approach, a variational framework is proposed for nonlinear chemical thermodynamics, including a general cooperative kinetics model. The formulation is in good agreement with standard linear nonequilibrium chemical thermodynamics.
Xintao Xia
2013-07-01
This study proposed a bootstrap maximum-entropy method to evaluate the uncertainty of the starting torque of a slewing bearing. Addressing the variation coefficient of the slewing bearing starting torque under load, the probability density function, estimated true value and variation domain are obtained through experimental investigation of the slewing bearing starting torque under various loads. The probability density function is found to vary in shape, scale and location with load. In addition, the estimated true value and the variation domain vary from large to small with increasing load, indicating better evolution of the stability and reliability of the starting friction torque. Finally, a sensitive spot exists where the estimated true value and the variation domain rise abnormally, showing a fluctuation in the immunity and a degenerative disorder in the stability and reliability of the starting friction torque.
Two-dimensional maximum local variation based on image euclidean distance for face recognition.
Gao, Quanxue; Gao, Feifei; Zhang, Hailin; Hao, Xiu-Juan; Wang, Xiaogang
2013-10-01
Manifold learning concerns the local manifold structure of high dimensional data, and many related algorithms are developed to improve image classification performance. None of them, however, consider both the relationships among pixels in images and the geometrical properties of various images during learning the reduced space. In this paper, we propose a linear approach, called two-dimensional maximum local variation (2DMLV), for face recognition. In 2DMLV, we encode the relationships among pixels in images using the image Euclidean distance instead of conventional Euclidean distance in estimating the variation of values of images, and then incorporate the local variation, which characterizes the diversity of images and discriminating information, into the objective function of dimensionality reduction. Extensive experiments demonstrate the effectiveness of our approach.
Wu Fuxian; Wen Weidong
2016-01-01
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable from small samples, but not from very small samples. To overcome this weakness, the least squares maximum entropy quantile function method (LSMEQFM) and its variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM estimates the quantile function accurately from small samples but inaccurately from very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy for the confidence interval of the quantile function, with the LSMEQFMCC-based version being the most stable and accurate on very small samples (10 samples).
Hazoglou, Michael J; Walther, Valentin; Dixit, Purushottam D; Dill, Ken A
2015-08-07
There has been interest in finding a general variational principle for non-equilibrium statistical mechanics. We give evidence that Maximum Caliber (Max Cal) is such a principle. Max Cal, a variant of maximum entropy, predicts dynamical distribution functions by maximizing a path entropy subject to dynamical constraints, such as average fluxes. We first show that Max Cal leads to standard near-equilibrium results, including the Green-Kubo relations, Onsager's reciprocal relations of coupled flows, and Prigogine's principle of minimum entropy production, in a way that is particularly simple. We develop some generalizations of the Onsager and Prigogine results that apply arbitrarily far from equilibrium. Because Max Cal does not require any notion of "local equilibrium," or any notion of entropy dissipation, or temperature, or even any restriction to material physics, it is more general than many traditional approaches. It is also applicable to flows and traffic on networks, for example.
Moroz, Adam
2008-05-01
In this work we review the applicability of the optimal control and variational approach to the maximum energy dissipation (MED) principle in non-equilibrium thermodynamics. The optimal control analogies for the kinetic and potential parts of the thermodynamic Lagrangian (in the form of a sum of a positively defined thermodynamic potential and a positively defined dissipative function) are considered. An interpretation of thermodynamic momenta is discussed with respect to standard optimal control applications, which employ dynamic constraints. Also included is an interpretation in terms of the least action principle.
Adam Hartstone-Rose
2011-01-01
In a recent study, we quantified the scaling of ingested food size (Vb), the maximum size at which an animal consistently ingests food whole, and found that Vb scaled isometrically between species of captive strepsirrhines. The current study examines the relationship between Vb and body size within species, with a focus on the frugivorous Varecia rubra and the folivorous Propithecus coquereli. We found no overlap in Vb between the species (all V. rubra ingested larger pieces of food relative to those eaten by P. coquereli), and least-squares regression of Vb on three different measures of body mass showed no scaling relationship within each species. We believe that this lack of relationship results from the relatively narrow intraspecific body size variation and seemingly patternless individual variation in Vb within species, and take this study as further evidence that general scaling questions are best examined interspecifically rather than intraspecifically.
Dhara, Chirag; Kleidon, Axel
2015-01-01
Convective and radiative cooling are the two principal mechanisms by which the Earth's surface transfers heat into the atmosphere and that shape surface temperature. However, this partitioning is not sufficiently constrained by energy and mass balances alone. We use a simple energy balance model in which convective fluxes and surface temperatures are determined with the additional thermodynamic limit of maximum convective power. We then show that the broad geographic variation of heat fluxes and surface temperatures in the climatological mean compares very well with the ERA-Interim reanalysis over land and ocean. We also show that the estimates depend considerably on the formulation of longwave radiative transfer and that a spatially uniform offset is related to the assumed cold temperature sink at which the heat engine operates.
On the maximum and minimum of two modified Gamma-Gamma variates with applications
Al-Quwaiee, Hessa
2014-04-01
In this work, we derive the statistical characteristics of the maximum and the minimum of two modified Gamma-Gamma variates in closed form in terms of Meijer's G-function and the extended generalized bivariate Meijer's G-function. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity scheme undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and of (ii) a dual-hop free-space optical relay transmission system. Computer-based Monte-Carlo simulations verify our new analytical results.
Jones, R.B.; Cogan, J.D. Jr.
1991-09-19
An investigation was done to determine the maximum credible event (MCE) value for samples of explosives and disassembled components up to 1.2 g when stored in conductive plastic vials as packaged and handled, stored, or transported at Mound. The test was performed at Test Firing, with photographs taken before and after the test. The standard propagation test setup was used; a vial containing 1.2 g of PETN (pentaerythritol tetranitrate) was surrounded by other like vials containing 1.2-g samples of PETN. The 1.2-g PETN pellet was then ignited by an EX-12 detonator. The test showed that there was no propagation and that the MCE value for the handling tray is 1.2 g. The test also showed that when the tray is placed in a metal container the MCE value will still be 1.2 g.
Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?
McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael
2016-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin ...
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. © 2015 European Society For Evolutionary Biology.
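The REML-MVN step itself is simple: draw parameter vectors from a multivariate normal centred at the estimate and propagate them through the function of interest. A toy sketch with a hypothetical 2x2 G and a hypothetical diagonal sampling covariance standing in for the inverse information matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
theta_hat = np.array([1.0, 0.3, 0.5])   # distinct elements (g11, g12, g22) of G-hat
V = np.diag([0.01, 0.005, 0.005])       # hypothetical REML sampling covariance

draws = rng.multivariate_normal(theta_hat, V, size=10_000)
g11, g12, g22 = draws[:, 0], draws[:, 1], draws[:, 2]

# uncertainty of a derived statistic, e.g. the genetic correlation
r_g = g12 / np.sqrt(g11 * g22)
r_se = r_g.std()                        # REML-MVN standard error of r_g
```

In practice V would come from the inverse of the REML information matrix (as produced by a program such as WOMBAT), and the same draws serve for any function of G.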
Using Maximum Entropy Modeling for Optimal Selection of Sampling Sites for Monitoring Networks
Paul H. Evangelista
2011-05-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A "sample site" was defined as a 20 km × 20 km area (equal to NEON's airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
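The "select the most dissimilar site after each run" loop can be imitated with a greedy max-min distance rule in standardized environmental space; this is a simplified stand-in for the authors' MaxEnt-based dissimilarity measure, with made-up candidate sites:

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical candidate sites x 4 standardized environmental factors
sites = rng.standard_normal((500, 4))

def greedy_dissimilar(sites, k):
    chosen = [0]                          # seed with an arbitrary first site
    for _ in range(k - 1):
        # distance of every candidate to its nearest already-chosen site
        d = np.linalg.norm(sites[:, None, :] - sites[chosen][None, :, :], axis=2).min(axis=1)
        chosen.append(int(np.argmax(d)))  # pick the most dissimilar candidate
    return chosen

picked = greedy_dissimilar(sites, 8)      # eight sites, as in the NEON example
```

Each iteration adds the candidate farthest (in environmental space) from everything already selected, mirroring the paper's iterative coverage of the environmental envelope.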
Gutenberg-Richter b-value maximum likelihood estimation and sample size
Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.
2017-01-01
The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are aware of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimates of b. Our results give estimates of the probability of obtaining correct estimates of b, for a given desired precision, for samples of different sizes. We submit that all published studies reporting b-value estimates should include information about the size of the samples used.
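For reference, the estimator in question is b = log10(e) / (Mmean - (Mmin - dM/2)), Utsu's half-bin correction of Aki's formula for magnitudes rounded to dM. A small Monte Carlo sketch with hypothetical catalogue parameters shows the sample-size effect the authors quantify:

```python
import numpy as np

rng = np.random.default_rng(5)
b_true, m_min, dm = 1.0, 3.0, 0.1        # m_min = smallest reported (bin-centre) magnitude
beta = b_true * np.log(10.0)

def b_hat(mags):
    # Aki-Utsu ML estimator with the half-bin correction for rounded magnitudes
    return np.log10(np.e) / (mags.mean() - (m_min - dm / 2.0))

def catalogue(n):
    m = (m_min - dm / 2.0) + rng.exponential(1.0 / beta, n)  # Gutenberg-Richter magnitudes
    return np.round(m / dm) * dm                             # reported to the nearest dm

b_small, b_large = b_hat(catalogue(25)), b_hat(catalogue(50_000))
```

With tens of thousands of events the estimate sits close to the true b = 1; with a few tens it scatters widely, echoing the warning that small samples give unreliable b-values.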
Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?
McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan
2016-07-01
Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return.
Variational Approach to Enhanced Sampling and Free Energy Calculations
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
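The functional described above can be written down concretely. The following is a sketch of the published formulation (assuming the usual notation: F(s) is the free energy surface over the collective variables s, β = 1/kBT, and p(s) is a chosen target distribution):

```latex
\Omega[V] \;=\; \frac{1}{\beta}\,
  \ln \frac{\int \! ds\; e^{-\beta\left[F(s)+V(s)\right]}}
           {\int \! ds\; e^{-\beta F(s)}}
  \;+\; \int \! ds\; p(s)\, V(s)
```

Setting the functional derivative δΩ/δV(s) to zero forces the biased distribution to equal p(s), so the minimizing bias satisfies V(s) = −F(s) − (1/β) ln p(s) up to a constant; this is the sense in which the minimizer relates "in a simple way" to the free energy surface.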
Joint maximum likelihood estimation of carrier and sampling frequency offsets for OFDM systems
Kim, Y H
2010-01-01
In orthogonal frequency-division multiplexing (OFDM) systems, carrier and sampling frequency offsets (CFO and SFO, respectively) can destroy the orthogonality of the subcarriers and degrade system performance. In the literature, Nguyen-Le, Le-Ngoc, and Ko proposed a simple maximum-likelihood (ML) scheme using two long training symbols for estimating the initial CFO and SFO of a recursive least-squares (RLS) estimation scheme. However, Nguyen-Le's ML estimation shows poor performance relative to the Cramer-Rao bound (CRB). In this paper, we extend Moose's CFO estimation algorithm to joint ML estimation of CFO and SFO using two long training symbols. In particular, we derive CRBs for the mean square errors (MSEs) of CFO and SFO estimation. Simulation results show that the proposed ML scheme provides better performance than Nguyen-Le's ML scheme.
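As a concrete reference point, Moose's basic CFO estimator (which the paper extends to joint CFO/SFO estimation) correlates two identical training symbols and reads the frequency offset off the phase of the correlation. The sketch below is a simplified, noiseless illustration with assumed parameters, not the authors' joint ML scheme:

```python
# Toy illustration of a Moose-style CFO estimator from two repeated training
# symbols (SFO and noise omitted for simplicity; parameters are assumptions).
import cmath, math, random

def moose_cfo_estimate(r, N):
    """Estimate normalized CFO from 2N samples containing two identical
    length-N training symbols: eps_hat = angle(sum r[n+N]*conj(r[n]))/(2*pi)."""
    corr = sum(r[n + N] * r[n].conjugate() for n in range(N))
    return cmath.phase(corr) / (2 * math.pi)

# demo: random QPSK training symbol, true CFO of 0.10 subcarrier spacings
random.seed(0)
N = 64
s = [cmath.exp(1j * (math.pi / 4 + math.pi / 2 * random.randrange(4)))
     for _ in range(N)]
eps = 0.10
r = [s[n % N] * cmath.exp(2j * math.pi * eps * n / N) for n in range(2 * N)]
print(round(moose_cfo_estimate(r, N), 4))  # ≈ 0.10
```

In the noiseless case each product r[n+N]·conj(r[n]) equals exp(j2πε) exactly, so the estimate recovers ε up to floating-point error; the paper's contribution is the joint treatment of CFO and SFO and the corresponding CRBs, which this sketch does not attempt.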
Carlos A. L. Pires
2013-02-01
The Minimum Mutual Information (MinMI) Principle provides the least committed, maximum-joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate MI bounds (the MinMI values) generated by constraining sets Tcr comprising mcr linear and/or nonlinear joint expectations, computed from samples of N iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. N-asymptotic formulas are given both for the distribution of the cross-expectation estimation errors and for the MinMI estimation bias, its variance and distribution. A growing Tcr leads to an increasing MinMI, converging eventually to the total MI. Under N-sized samples, the MinMI increment relative to two encapsulated sets Tcr1 ⊂ Tcr2 (with numbers of constraints mcr1
Bøgh Andersen, Ida; Brasen, Claus L.; Christensen, Henry;
2015-01-01
BACKGROUND: According to current recommendations, blood samples should be taken in the morning after 15 minutes' resting time. Some components exhibit diurnal variation and, in response to pressures to expand opening hours and reduce waiting time, the aims of this study were to investigate… the impact of resting time prior to blood sampling and diurnal variation on biochemical components, including albumin, thyrotropin (TSH), total calcium and sodium in plasma. METHODS: All patients referred to an outpatient clinic for blood sampling were included in the period Nov 2011 until June 2014 (opening… hours: 7am-3pm). Each patient's arrival time and time of blood sampling were registered. The impact of resting time and the time of day for all components was analysed using simple linear regression. The "maximum allowable bias" was used as a quality indicator for the change in reference interval. RESULTS…
Espino, Susana; Schenk, H Jochen
2011-01-01
The maximum specific hydraulic conductivity (k(max)) of a plant sample is a measure of the ability of a plant's vascular system to transport water and dissolved nutrients under optimum conditions. Precise measurements of k(max) are needed in comparative studies of hydraulic conductivity, as well as for measuring the formation and repair of xylem embolisms. Unstable measurements of k(max) are a common problem when measuring woody plant samples, and it is commonly observed that k(max) declines from initially high values, especially when positive water pressure is used to flush out embolisms. This study was designed to test five hypotheses that could potentially explain declines in k(max) under positive pressure: (i) non-steady-state flow; (ii) swelling of pectin hydrogels in inter-vessel pit membranes; (iii) nucleation and coalescence of bubbles at constrictions in the xylem; (iv) physiological wounding responses; and (v) passive wounding responses, such as clogging of the xylem by debris. Prehydrated woody stems from Laurus nobilis (Lauraceae) and Encelia farinosa (Asteraceae), collected from plants grown in the Fullerton Arboretum in Southern California, were used to test these hypotheses using a xylem embolism meter (XYL'EM). Treatments included simultaneous measurements of stem inflow and outflow, enzyme inhibitors, stem debarking, low water temperatures, different water degassing techniques, and varied concentrations of calcium, potassium, magnesium, and copper salts in aqueous measurement solutions. Stable measurements of k(max) were observed at concentrations of calcium, potassium, and magnesium salts high enough to suppress bubble coalescence, as well as with deionized water that was degassed using a membrane contactor under strong vacuum. Bubble formation and coalescence under positive pressure in the xylem therefore appear to be the main cause of declining k(max) values. Our findings suggest that degassing of water is essential for achieving stable and
Lorenzo, C; Carretero, J M; Arsuaga, J L; Gracia, A; Martínez, I
1998-05-01
A sexual dimorphism more marked than in living humans has been claimed for European Middle Pleistocene humans, Neandertals and prehistoric modern humans. In this paper, body size and cranial capacity variation are studied in the Sima de los Huesos Middle Pleistocene sample. This is the largest sample of non-modern humans found to date from one single site, and with all skeletal elements represented. Since the techniques available to estimate the degree of sexual dimorphism in small palaeontological samples are all unsatisfactory, we have used the bootstrapping method to assess the magnitude of the variation in the Sima de los Huesos sample compared to modern human intrapopulational variation. We analyze size variation without attempting to sex the specimens a priori. Anatomical regions investigated are: scapular glenoid fossa; acetabulum; humeral proximal and distal epiphyses; ulnar proximal epiphysis; radial neck; proximal femur; humeral, femoral, ulnar and tibial shafts; lumbosacral joint; patella; calcaneum; and talar trochlea. In the Sima de los Huesos sample, only the humeral midshaft perimeter shows an unusually high variation (and only when it is expressed by the maximum ratio, not by the coefficient of variation). In spite of that, the cranial capacity range at Sima de los Huesos almost spans the rest of the European and African Middle Pleistocene range, and its maximum ratio lies in the central part of the distribution of modern human samples. Thus, the hypothesis of a greater sexual dimorphism in Middle Pleistocene populations than in modern populations is not supported by either the cranial or the postcranial evidence from Sima de los Huesos.
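The bootstrap comparison described above can be sketched in a few lines: resample a reference (modern) sample many times at the fossil sample's size and ask how often the resampled maximum ratio reaches the fossil value. All numbers below are invented for illustration and are not the paper's measurements:

```python
# Hypothetical bootstrap of the "maximum ratio" (largest/smallest value) as a
# small-sample dispersion measure; resampling details are assumptions, not
# the authors' exact protocol.
import random

def max_ratio(values):
    return max(values) / min(values)

def bootstrap_max_ratio(sample, n_draws, n_boot=2000, seed=42):
    """Distribution of the max ratio in resamples of size n_draws,
    drawn with replacement from a reference sample."""
    rng = random.Random(seed)
    return [max_ratio([rng.choice(sample) for _ in range(n_draws)])
            for _ in range(n_boot)]

# invented "modern" reference measurements vs. a small "fossil" sample
modern = [62, 64, 65, 66, 67, 68, 69, 70, 71, 74]
fossil = [63, 66, 70, 73]
boot = bootstrap_max_ratio(modern, n_draws=len(fossil))

# fraction of modern resamples at least as variable as the fossil sample
p = sum(r >= max_ratio(fossil) for r in boot) / len(boot)
print(round(max_ratio(fossil), 3), round(p, 2))
```

A small p would suggest the fossil sample is more variable than expected under modern intrapopulational variation; the paper's conclusion is that, for Sima de los Huesos, the observed variation falls comfortably within the modern range.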
Effect of sampling variation on error of rainfall variables measured by optical disdrometer
Liu, X. C.; Gao, T. C.; Liu, L.
2012-12-01
During the sampling of precipitation particles by optical disdrometers, the randomness of particle arrivals and sampling variability have a great impact on the accuracy of the measured precipitation variables. Based on a marked point process model of the raindrop size distribution, the effect of sampling variation on drop size distribution and velocity distribution measurements by optical disdrometers is analyzed by Monte Carlo simulation. The results show that the number of samples, rain rate, drop size distribution, and sampling size have different influences on the accuracy of the rainfall variables. The relative errors caused by sampling variation decrease in the following order: water concentration, mean diameter, mass-weighted mean diameter, mean volume diameter, radar reflectivity factor, and number density; these errors are basically independent of the number of samples. The relative errors are positively correlated with the margin probability, which in turn is positively correlated with the rain rate and the mean diameter of the raindrops. The sampling size is one of the main factors influencing the margin probability: as the sampling area decreases, and especially as the short side of the sampling cross-section shrinks, the probability of margin raindrops grows and hence so do the errors, with median-size raindrops showing the maximum error. To ensure that the relative errors of rainfall variables measured by an optical disdrometer stay below 1%, the width of the light beam should be at least 40 mm.
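The Monte Carlo logic, stripped to its core, looks like the following toy sketch: each simulated measurement draws a Poisson number of drops with random sizes, and the spread of a derived quantity across many measurements gives the sampling-induced relative error. The drop size distribution and sensor parameters here are assumptions for illustration, not the paper's marked point process model:

```python
# Toy Monte Carlo sketch of sampling variability: repeated "measurements"
# of mean drop diameter from a small sensing volume.  DSD and counts are
# invented; the point is that fewer drops per sample -> larger relative error.
import math, random

def simulate_mean_diameter(n_expected, rng):
    """One 'measurement': Poisson(n_expected) drops, exponential diameters."""
    n = 0  # draw Poisson(n_expected) by counting rate-1 interarrivals
    t = rng.expovariate(1.0)
    while t < n_expected:
        n += 1
        t += rng.expovariate(1.0)
    if n == 0:
        return None
    drops = [rng.expovariate(1.0 / 1.5) for _ in range(n)]  # mean 1.5 mm
    return sum(drops) / n

def relative_error(n_expected, trials=3000, seed=1):
    rng = random.Random(seed)
    vals = [v for v in (simulate_mean_diameter(n_expected, rng)
                        for _ in range(trials)) if v is not None]
    mean = sum(vals) / len(vals)
    sd = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return sd / mean

print(relative_error(20) > relative_error(200))  # True
```

With roughly 1/√n scaling, a sample averaging 20 drops shows about three times the relative error of one averaging 200, which mirrors the paper's finding that sampling size drives the error budget.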
A novel autofocus algorithm based on maximum total variation criteria for SAR images
MA Lun; LIAO Guisheng
2007-01-01
A novel autofocus algorithm for synthetic aperture radar (SAR) based on total variation is presented in this paper. Starting from a complex phase-degraded SAR image, the method introduces the phase-error model into the range-compressed phase-history domain and carries out phase-error correction by adjusting the focus until the total variation of the azimuth profile is maximized. Compared with the minimum-entropy autofocus algorithm, the proposed algorithm has less computational complexity and is easier to implement. Simulations and the processing results of measured data show the validity of the proposed method.
Wilson, Robert M.
1990-01-01
The level of skill in predicting the size of the sunspot cycle is investigated for the two types of precursor techniques, single variate and bivariate fits, both applied to cycle 22. The present level of growth in solar activity is compared to the mean level of growth (cycles 10-21) and to the predictions based on the precursor techniques. It is shown that, for cycle 22, both single variate methods (based on geomagnetic data) and bivariate methods suggest a maximum amplitude smaller than that observed for cycle 19, and possibly for cycle 21. Compared to the mean cycle, cycle 22 is presently behaving as if it were a +2.6 sigma cycle (maximum amplitude of about 225), which means that either it will be the first cycle not to be reliably predicted by the combined precursor techniques or its deviation relative to the mean cycle will substantially decrease over the next 18 months.
A Variational Approach to Enhanced Sampling and Free Energy Calculations
Parrinello, Michele
2015-03-01
The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling, which is based on the addition of an external bias that helps overcome the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However, constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples, which include the determination of a six-dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.
Maris, E.
1998-01-01
The sampling interpretation of confidence intervals and hypothesis tests is discussed in the context of conditional maximum likelihood estimation. Three different interpretations are discussed, and it is shown that confidence intervals constructed from the asymptotic distribution under the third sampling scheme discussed are valid for the first…
Crimi, Alessandro; Lillholm, Martin; Nielsen, Mads
2011-01-01
… and may lead to unreliable results. In this paper, we discuss regularization by prior knowledge using maximum a posteriori (MAP) estimates. We compare ML to MAP using a number of priors and to Tikhonov regularization. We evaluate the covariance estimates on both synthetic and real data, and we analyze the estimates' influence on a missing-data reconstruction task, where high-resolution vertebra and cartilage models are reconstructed from incomplete and lower-dimensional representations. Our results demonstrate that our methods outperform the traditional ML method and Tikhonov regularization…
Foulger, G.R.
1995-01-01
Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50°C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area.
Moment series for the coefficient of variation in Weibull sampling
Bowman, K.O.; Shenton, L.R.
1981-01-01
For the 2-parameter Weibull distribution function F(t) = 1 − exp[−(t/b)^c], t > 0, with c and b positive, a moment estimator c* for c is the solution of the equation Γ(1 + 2/c*)/Γ²(1 + 1/c*) = 1 + v*², where v* is the coefficient of variation in the form √m₂/m₁′, m₁′ being the sample mean and m₂ the sample second central moment (it is trivial in the present context to replace m₂ by the variance). One approach to the moments of c* (Bowman and Shenton, 1981) is to set up moment series for the scale-free v*. The series are apparently divergent and summation algorithms are essential; we consider methods due to Levin (1973) and one introduced by ourselves (Bowman and Shenton, 1976).
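For reference, the moment equation above is easy to solve numerically for c* given an observed coefficient of variation v*. A minimal bisection sketch (illustrative only; the paper's concern is the divergent moment series of c*, not this root-finding step):

```python
# Solve Gamma(1 + 2/c) / Gamma(1 + 1/c)^2 = 1 + v^2 for the Weibull shape
# estimate c* by bisection.  The theoretical CV of Weibull(c) decreases
# monotonically in c, so a sign change brackets the root.
import math

def weibull_shape_from_cv(v, lo=0.05, hi=50.0, tol=1e-10):
    """Return c* such that the theoretical CV of Weibull(c) matches v."""
    def f(c):  # positive when CV(c) > v, i.e. c is still too small
        return math.gamma(1 + 2 / c) / math.gamma(1 + 1 / c) ** 2 - 1 - v * v
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# sanity check: for c = 1 the Weibull is exponential, whose CV is exactly 1
print(round(weibull_shape_from_cv(1.0), 6))
```

In practice v* would be computed from the sample as √m₂/m₁′ and fed to this solver; the sampling distribution of the resulting c* is what the moment series in the paper characterize.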
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
Scheike, Thomas Harder; Juul, Anders
2004-01-01
…Insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease, where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used…
Oudyn, Frederik W; Lyons, David J; Pringle, M J
2012-01-01
Many scientific laboratories follow, as standard practice, a relatively short maximum holding time (within 7 days) for the analysis of total suspended solids (TSS) in environmental water samples. In this study we subsampled from bulk water samples stored at ∼4 °C in the dark, then analysed for TSS at time intervals up to 105 days after collection. The nonsignificant differences in TSS results observed over time demonstrate that storage at ∼4 °C in the dark is an effective method of preserving samples for TSS analysis, far past the 7-day standard practice. Extending the maximum holding time will ease the pressure on sample collectors and laboratory staff, who until now have had to determine TSS within an impractically short period.
Iijima, T.; Naito, H.
2017-04-01
Context. The outburst of the symbiotic recurrent nova V407 Cyg in 2010 has been studied by numerous authors. On the other hand, its spectral variations in the quiescent stage have not been well studied yet. This paper is probably the first report on the relation between the pulsation of the secondary Mira variable and the temperature of the primary hot component of V407 Cyg. Aims: The spectral variation in the post-outburst stage has been monitored to study the properties of this object. In the course of this work, we found some unexpected spectral variations around the light maximum of the secondary Mira variable in 2012. The relation between the mass transfer in the binary system and the pulsation of the secondary Mira variable is studied. Methods: High- and low-resolution optical spectra obtained at the Astronomical Observatories at Asiago were used. The photometric data are drawn from the VSNET database. Results: The secondary Mira variable reached its light maximum in 2012, when an absorption spectrum of a late-M-type giant developed and the emission line of Hδ became stronger than those of Hβ and Hγ, which are typical spectral features of Mira variables at light maxima. On the other hand, the intensity ratios to Hβ of the emission lines of He I, He II, [Fe VII], etc., which obviously depended on the temperature of the hot component, varied rapidly around the light maximum. The intensity ratios started to decrease at phase about 0.9 of the periodic light variation of the Mira variable. This phenomenon suggests that the mass transfer rate, as well as the mass accretion rate onto the hot component, decreased with the contraction of the Mira variable. However, these intensity ratios somewhat recovered just at the light maximum (phase 0.99). A temporary mass loss from the Mira variable might have occurred at that time. The intensity ratios decreased again after the light maximum, then recovered and returned to the normal level at phase about 0
The effects of disjunct sampling and averaging time on maximum mean wind speeds
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time… period before being saved. In either case, the extreme wind will be underestimated. This paper investigates the effects of the disjunct sampling interval and the averaging time on the attenuation of the extreme wind estimation by means of a simple theoretical approach as well as measurements…
Variation of Resin Properties Through the Thickness of Cured Samples
1984-01-01
It is the purpose of this work to gain knowledge of the glassy materials used as matrices in composites and to study the homogeneity resulting from the curing process. An attempt is made to link the glass transition quantitatively with the presence of a given material. Epoxy resins containing various amounts of hardener (TGDDM/DDS system) were cured in a muffle furnace at 473 K for seven hours. The glass transition temperature Tg versus weight-percent of hardener in the epoxy resin was measured. A limit in Tg was rapidly reached at only two percent hardener. Thus, the glass transition of the fully cured epoxy-amine matrix seems not much different from that of the epoxide-epoxide cure. The Tg versus cure time for the epoxide-epoxide reaction was also studied. MY 720 was cured by itself in an oil bath at 473 K for different lengths of time. The Tg was found to increase exponentially with the cure time, and a maximum Tg of about 450 K was reached after eleven hours. The reaction was found to be inhibited by running the sample under argon.
Draxler, Clemens; Alexandrowicz, Rainer W
2015-12-01
This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Variation in rank abundance replicate samples and impact of clustering
Neuteboom, J.H.; Struik, P.C.
2005-01-01
Calculating a single-sample rank abundance curve by using the negative-binomial distribution provides a way to investigate the variability within rank abundance replicate samples and yields a measure of the degree of heterogeneity of the sampled community. The calculation of the single-sample rank a
G. M. J. HASAN
2014-10-01
Climate, one of the major controlling factors for the well-being of the inhabitants of the world, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study examines the rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data for the period 1957-2006. It has been observed that a good correlation exists between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all the months. Some key statistical parameters, such as the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV), have been studied and found to be at variance. Monthly, yearly and seasonal variation of rainy days was also analysed to check for any significant changes.
Yang, X.; Huang, W.; Liu, Q.
2012-12-01
High-resolution geomagnetic field records spanning the Last Glacial Maximum to the Holocene, a period of notable climate change, are globally scarce. In this study, two gravity piston cores, ZSQD2 (114.16oE, 19.58oN, ~190 cm in length, water depth 681 m) and ZSQD34 (114.74oE, 19.05oN, ~184 cm in length, water depth 1820 m), situated in the northern South China Sea, were selected to study the secular variations (SV) of the geomagnetic field in this area. Radiocarbon ages of G. sacculifer suggest that the deposition rate was 56.1 cm/kyr during the Last Glacial and 3.7 cm/kyr during the Holocene. Rock magnetic results indicate that pseudo-single-domain magnetite with low coercivity dominates the magnetic properties of the sediments. The characteristic remanent magnetization (ChRM) values are evaluated using the 5-8 AF steps, for which MAD values are generally <5. Constrained by the radiocarbon chronology, secular variation curves since ~18 cal. kyr can be constructed using the ChRM directions and NRM/ARM ratios (as a proxy of relative intensity). Comparing the Holocene SV with that from terrestrial lakes in Southern China, the similar shapes corroborate the reliability of the records and a uniform pattern of the non-dipole field. Two significant features of the SV curves characterize the geomagnetic field from ~17 cal. kyr to the early Holocene. One is that the direction variations lag behind the relative intensity on the millennial time scale: a major direction shift occurred around 13 cal. kyr, while the relative intensity low was at about 14 cal. kyr. The other is the fast and frequent wiggles in both direction and intensity between ~17 and ~14.5 cal. kyr. During this period, two significant negative inclination anomalies, each associated with low intensity, occurred at ~16.4 and ~15.4 cal. kyr. Nevertheless, the corresponding declinations did not show reversed features, although they also had some slight fluctuations. The
Size variation in samples of fossil and recent murid teeth
Freudenthal, M.; Martín Suárez, E.
1990-01-01
The variability coefficient proposed by Freudenthal & Cuenca Bescós (1984) for samples of fossil cricetid teeth, is calculated for about 200 samples of fossil and recent murid teeth. The results are discussed, and compared with those obtained for the Cricetidae.
Rezaeian Mahdi
2015-01-01
Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, the releasable activity and the maximum permissible volumetric leakage rate of a cask containing fuel samples of the Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources (gases, volatiles, fines, and corrosion products) are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask, and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.
Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert
2014-02-01
The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long to be used for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by application of non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets then reconstructing missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that the MaxEnt and CS reconstructions successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and 5× to 1.25× NUS. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method. 2013 John Wiley & Sons, Ltd.
Variational level set segmentation for forest based on MCMC sampling
Yang, Tie-Jun; Huang, Lin; Jiang, Chuan-xian; Nong, Jian
2014-11-01
Environmental protection is one of the themes of today's world. The forest is a recycler of carbon dioxide and a natural oxygen bar. Protecting forests and monitoring forest growth are long-term tasks of environmental protection. It is very important to estimate forest coverage automatically from optical remote sensing images by computer, so that the status of the forest in an area can be assessed in a timely manner, free of tedious manual statistics. To address the computational complexity of global optimization via convexification, this paper proposes a level set segmentation method based on Markov chain Monte Carlo (MCMC) sampling and applies it to forest segmentation in remote sensing images. The presented method does not require any convexity transformation of the target energy functional, relying instead on an MCMC sampling method with global optimization capability. The local minima possible with the gradient descent method are thereby also avoided. There are three major contributions in this paper. Firstly, by using MCMC sampling, convexity of the energy functional is no longer necessary and global optimization can still be achieved. Secondly, by taking advantage of the data (texture) and knowledge (a priori color) to guide the construction of the Markov chain, the convergence rate of the Markov chain is improved significantly. Finally, a level set segmentation method for forest that integrates a priori color and texture is proposed. The experiments show that our method can efficiently and accurately segment forest in remote sensing images.
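The key advantage claimed, escaping local minima without convexification, is a generic property of Metropolis-type MCMC. The toy sketch below applies the Metropolis rule to a one-dimensional stand-in energy with two basins; it is not the paper's level-set energy functional, and all settings are illustrative:

```python
# Minimal Metropolis sampler over a toy multimodal "energy", standing in for
# a segmentation energy functional.  Uphill moves are accepted with
# probability exp(-dE/T), which lets the chain cross barriers that would
# trap gradient descent.  All parameters are illustrative assumptions.
import math, random

def energy(x):
    # shallow local minimum near x = -1, global minimum near x = 2
    return 0.5 * (x + 1) ** 2 * (x - 2) ** 2 - 0.3 * x

def metropolis_minimize(steps=20000, temperature=0.8, step_sd=0.5, seed=7):
    rng = random.Random(seed)
    x = best = -1.0                            # start in the shallow basin
    for _ in range(steps):
        cand = x + rng.gauss(0.0, step_sd)     # symmetric random-walk proposal
        d_e = energy(cand) - energy(x)
        # Metropolis rule: accept downhill always, uphill w.p. exp(-dE/T)
        if d_e <= 0 or rng.random() < math.exp(-d_e / temperature):
            x = cand
        if energy(x) < energy(best):           # track best state seen
            best = x
    return best

x_best = metropolis_minimize()
print(round(x_best, 2))
```

With these settings the chain typically crosses the barrier, so the best-seen state ends up in the global basin near x ≈ 2, whereas a pure descent from x = −1 would stay in the local basin; the paper applies the same principle to a level-set energy, with texture and color priors shaping the proposals.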
Kim, K; Lee, S K; Kim, Y H
2010-10-01
The weakening of trunk muscles is known to be related to a reduction of the stabilization function provided by the muscles to the lumbar spine; therefore, strengthening deep muscles might reduce the possibility of injury and pain in the lumbar spine. In this study, the effect of variation in the maximum forces of trunk muscles on the joint forces and moments in the lumbar spine was investigated. Accordingly, a three-dimensional finite element model of the lumbar spine that included the trunk muscles was used. The variation in maximum forces of specific muscle groups was then modelled, and joint compressive and shear forces, as well as resultant joint moments, which were presumed to be related to spinal stabilization from a mechanical viewpoint, were analysed. The increase in resultant joint moments occurred owing to decreases in the maximum forces of the multifidus, interspinales, intertransversarii, rotatores, iliocostalis, longissimus, psoas, and quadratus lumborum. In addition, joint shear forces and resultant joint moments were reduced as the maximum forces of the deep muscles were increased. These results from finite element analysis indicate that variation in the maximum forces exerted by trunk muscles could affect the joint forces and joint moments in the lumbar spine.
Gang Li
2012-01-01
Vertical patterns of early summer chlorophyll a (Chl a) concentration from the Indian Ocean are presented, as well as the variations of depth and size-fractionated Chl a in the deep chlorophyll maximum (DCM). A total of 38 stations were investigated from 12 April to 5 May 2011, with 8 discrete-depth samples (7 fixed and 1 variable, at the real DCM) measured at each station. Depth-integrated Chl a concentration (∑Chl a) varied from 11.5 to 26.8 mg m−2, whereas Chl a content at the DCM ranged from 0.17 to 0.57 μg L−1, with picophytoplankton (<3 μm) accounting for 82% to 93%. The DCM depth varied from 55.6 to 91 m and shoaled northward. Moreover, our results indicated that ∑Chl a could be underestimated by up to 9.3% with a routine sampling protocol that collects samples only at the 7 fixed depths, as the real DCM is missed. The underestimation was negatively correlated with the DCM depth when it varied from 55.6 to 71.3 m (r=−0.63, P<0.05) but positively correlated when it ranged from 75.8 to 91 m (r=0.68, P<0.01). This indicates that in the Indian Ocean, the greater the departure of the DCM from 75 m depth, the greater the underestimation of integrated Chl a concentration that could occur if the real DCM is missed.
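The underestimation effect can be reproduced numerically: depth-integrate a synthetic Chl a profile using only seven fixed bottle depths, then again after adding the one variable sample at the true DCM. The depths, peak shape, and magnitudes below are invented for illustration and are not the cruise's actual protocol.

```python
import numpy as np

def trapezoid(y, x):
    """Depth integration by the trapezoidal rule (mg m^-2 when y is in µg/L and x in m)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def chl(z):
    # Synthetic profile: 0.05 µg/L background plus a Gaussian DCM centred at 80 m.
    return 0.05 + 0.45 * np.exp(-((z - 80.0) / 10.0) ** 2)

fixed = np.array([0.0, 25.0, 50.0, 75.0, 100.0, 150.0, 200.0])  # 7 fixed depths (m)
sum_fixed = trapezoid(chl(fixed), fixed)

with_dcm = np.sort(np.append(fixed, 80.0))   # add the variable sample at the real DCM
sum_dcm = trapezoid(chl(with_dcm), with_dcm)

underestimate_pct = 100.0 * (sum_dcm - sum_fixed) / sum_dcm
```

With this synthetic peak the fixed-depth protocol misses roughly a tenth of the column inventory, the same order as the 9.3% reported in the abstract.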
Gossner, Martin M; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W; Zytynska, Sharon E
2016-01-01
There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solution we suggest ethylene glycol as a suitable sampling solution when genetic analysis
Variation of surface water spectral response as a function of in situ sampling technique
Davis, Bruce A.; Hodgson, Michael E.
1988-01-01
Tests were carried out to determine the spectral variation contributed by a particular sampling technique. A portable radiometer was used to measure the surface water spectral response. Variation due to the reflectance of objects near the radiometer (i.e., the boat side) during data acquisition was studied. Consideration was also given to the variation due to the temporal nature of the phenomena (i.e., wave activity).
Tianjiao Chu
Our goal was to test the hypothesis that inter-individual genomic copy number variation in control samples is a confounding factor in the non-invasive prenatal detection of fetal microdeletions via the sequence-based analysis of maternal plasma DNA. The database of genomic variants (DGV) was used to determine the "Genomic Variants Frequency" (GVF) for each 50kb region in the human genome. Whole genome sequencing of fifteen karyotypically normal maternal plasma and six CVS DNA control samples was performed. The coefficient of variation of relative read counts (cv.RTC) for these samples was determined for each 50kb region. Maternal plasma from two pregnancies affected with a chromosome 5p microdeletion was also sequenced, and analyzed using the GCREM algorithm. We found strong correlation between high variance in read counts and GVF amongst controls. Consequently we were unable to confirm the presence of the microdeletion via sequencing of maternal plasma samples obtained from two sequential affected pregnancies. Caution should be exercised when performing NIPT for microdeletions. It is vital to develop our understanding of the factors that impact the sensitivity and specificity of these approaches. In particular, benign copy number variation amongst controls is a major confounder, and their effects should be corrected bioinformatically.
Persson, Lennart; Elliott, J Malcolm
2013-05-01
The theory of cannibal dynamics predicts a link between population dynamics and individual life history. In particular, increased individual growth has, in both modeling and empirical studies, been shown to result from a destabilization of population dynamics. We used data from a long-term study of the dynamics of two leech (Erpobdella octoculata) populations to test the hypothesis that maximum size should be higher in a cycling population; one of the study populations exhibited a delayed feedback cycle while the other population showed no sign of cyclicity. A hump-shaped relationship between individual mass of 1-year-old leeches and offspring density the previous year was present in both populations. As predicted from the theory, the maximum mass of individuals was much larger in the fluctuating population. In contrast to predictions, the higher growth rate was not related to energy extraction from cannibalism. Instead, the higher individual mass is suggested to result from increased resource availability as the trophic niche widens with increasing individual body mass. The larger individual mass in the fluctuating population was related to a stronger correlation between the densities of 1-year-old individuals and 2-year-old individuals the following year in this population. Although cannibalism was the major mechanism regulating population dynamics, its importance was negligible in terms of providing cannibalizing individuals with energy subsequently increasing their fecundity. Instead, the study identifies a need for theoretical and empirical studies on the largely unstudied interplay between ontogenetic niche shifts and cannibalistic population dynamics.
Babin, Marcel; Morel, André; Claustre, Hervé; Bricaud, Annick; Kolber, Zbigniew; Falkowski, Paul G.
1996-08-01
Natural variability of the maximum quantum yield of carbon fixation (φC max), as determined from the initial slope of the photosynthesis-irradiance curve and from light absorption measurements, was studied at three sites in the northeast tropical Atlantic representing typical eutrophic, mesotrophic and oligotrophic regimes. At the eutrophic and mesotrophic sites, where the mixed layer extended deeper than the euphotic layer, all photosynthetic parameters were nearly constant with depth, and φC max averaged between 0.05 and 0.03 molC (mol quanta absorbed)-1, respectively. At the oligotrophic site, a deep chlorophyll maximum (DCM) existed and φC max varied from ca 0.005 in the upper nutrient-depleted mixed layer to 0.063 below the DCM in stratified waters. Firstly, φC max was found roughly to covary with nitrate concentration between sites and with depth at the oligotrophic site, and secondly, it was found to decrease with increasing relative concentrations of non-photosynthetic pigments. The extent of φC max variations directly related to nitrate concentration was inferred from variations in the fraction of functional PS2 reaction centers (f), measured using fast repetition rate fluorometry. Covariations between f and nitrate concentration indicate that the latter factor may be responsible for a 2-fold variation in φC max. Moreover, partitioning light absorption between photosynthetic and non-photosynthetic pigments suggests that the variable contribution of the non-photosynthetic absorption may explain a 3-fold variation in φC max, as indicated by variations in the effective absorption cross-section of photosystem 2 (σPS2). Results confirm the role of nitrate in φC max variation, and emphasize those of light and vertical mixing.
Garamszegi, László Z; Møller, Anders P
2010-11-01
Comparative analyses aim to explain interspecific variation in phenotype among taxa. In this context, phylogenetic approaches are generally applied to control for similarity due to common descent, because such phylogenetic relationships can produce spurious similarity in phenotypes (known as phylogenetic inertia or bias). On the other hand, these analyses largely ignore potential biases due to within-species variation. Phylogenetic comparative studies inherently assume that species-specific means from intraspecific samples of modest sample size are biologically meaningful. However, within-species variation is often significant, because measurement errors, within- and between-individual variation, seasonal fluctuations, and differences among populations can all reduce the repeatability of a trait. Although simulations revealed that low repeatability can increase the type I error in a phylogenetic study, researchers only exercise great care in accounting for similarity in phenotype due to common phylogenetic descent, while problems posed by intraspecific variation are usually neglected. A meta-analysis of 194 comparative analyses all adjusting for similarity due to common phylogenetic descent revealed that only a few studies reported intraspecific repeatabilities, and hardly any considered or partially dealt with errors arising from intraspecific variation. This is intriguing, because the meta-analytic data suggest that the effect of heterogeneous sampling can be as important as phylogenetic bias, and thus they should be equally controlled in comparative studies. We provide recommendations about how to handle such effects of heterogeneous sampling.
Hauschke, D; Steinijans, W V; Diletti, E; Schall, R; Luus, H G; Elze, M; Blume, H
1994-07-01
Bioequivalence studies are generally performed as crossover studies and, therefore, information on the intrasubject coefficient of variation is needed for sample size planning. Unfortunately, this information is usually not presented in publications on bioequivalence studies, and only the pooled inter- and intrasubject coefficient of variation for either test or reference formulation is reported. Thus, the essential information for sample size planning of future studies is not made available to other researchers. In order to overcome such shortcomings, the presentation of results from bioequivalence studies should routinely include the intrasubject coefficient of variation. For the relevant coefficients of variation, theoretical background together with modes of calculation and presentation are given in this communication with particular emphasis on the multiplicative model.
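Under the multiplicative (log-normal) model the abstract refers to, the intrasubject CV relates to the within-subject log-scale variance (the MSE from the crossover ANOVA) by CV = sqrt(exp(s²w) − 1). A minimal sketch of that conversion and its inverse follows; the numeric value is purely illustrative.

```python
import math

def intrasubject_cv(sigma2_w):
    """Intrasubject CV under the multiplicative (log-normal) model: sqrt(exp(s2_w) - 1)."""
    return math.sqrt(math.exp(sigma2_w) - 1.0)

def within_variance(cv):
    """Inverse mapping: log-scale within-subject variance implied by an intrasubject CV."""
    return math.log(1.0 + cv**2)

cv = intrasubject_cv(0.04)   # e.g. a within-subject log-scale variance of 0.04 -> CV of about 20%
```

Reporting the intrasubject CV (rather than only the pooled inter- plus intrasubject CV) is what allows this conversion, and hence sample size planning, in subsequent studies.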
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Zhong, Kun; Wang, Wei; He, Falin; Wang, Zhiguo
2015-02-01
The aim of this article is to specify quality control requirements for preanalytical variation in the determination of lead in samples of human origin, and thereby to reduce the influence of preanalytical variation on test results. Based on Clinical and Laboratory Standards Institute documents on the control of preanalytical variation in trace element determinations, on analytical procedures for the determination of lead in blood and urine, and on other references and guidelines, quality control methods for lead determination were developed. These cover the factors to be considered before collection, preservation, transportation and other preanalytical factors, as well as the abilities and responsibilities of laboratory staff.
Weekday variation in triglyceride concentrations in 1.8 million blood samples
Jaskolowski, Jörn; Ritz, Christian; Sjödin, Anders Mikael
2017-01-01
BACKGROUND: Triglyceride (TG) concentration is used as a marker of cardio-metabolic risk. However, diurnal and possibly weekday variation exists in TG concentrations. OBJECTIVE: To investigate weekday variation in TG concentrations among 1.8 million blood samples drawn between 2008 and 2015 from… …variations in TG concentrations were recorded for out-patients between the ages of 9 and 26 years, with up to 20% higher values on Mondays compared to Fridays (all P…). Triglyceride concentrations were highest after the weekend and gradually declined during the week. We suggest that unhealthy…
Werne, J. P.; Halbur, J.; Rubesch, M.; Brown, E. T.; Ortega, B.; Caballero, M.; Correa-Metrio, A.; Lozano, S.
2013-05-01
The water balance of the Southwestern United States and most of Mexico is dependent on regional climate systems, including the Mexican (or North American) Monsoon. The Mexican Monsoon leads to significant summer rainfall across a broad swath of the continent, which constitutes the major source of annual precipitation over much of this region. The position of the ITCZ and the strength of the accompanying monsoon are affected by variability in insolation. Stronger northern hemisphere summer insolation shifts the ITCZ northward, bringing about a more intense monsoon. Here we discuss a new geochemical climate record from Lake Chalco, Mexico, which couples inorganic (X-ray fluorescence) and organic (biomarkers and stable isotopes) geochemical proxies to reconstruct temperature and aridity over the past 45,000 years, as well as the response of terrestrial vegetation to such climate changes. The Basin of Mexico is a high altitude closed lacustrine basin (20°N, 99°W; 2240 m.a.s.l.) in the Trans Mexican Volcanic Belt. The plain of Lake Chalco, located near Mexico City in the southern sub-basin, has an area of 120 km2 and a catchment of 1100 km2. Though the present-day lake has been reduced to a small marsh due to historic diversion of its waters, over longer timescales the lake has been a sensitive recorder of hydroclimatic variations. Low Ca concentrations indicate more arid periods during the late glacial (34 - 15 kybp) compared to the last interstadial or early Holocene. This observation is supported by the ratio of terrestrial to aquatic lipid biomarkers (long vs. short chain n-alkanes), which indicate greater relative inputs of aquatic biomarkers during wetter periods. The changes in aridity as shown in these geochemical proxies are compared with temperature as reflected in glycerol dialkyl glycerol tetraether (GDGT) based paleotemperature proxies to assess the extent to which insolation may have driven aridity variations, and with terrestrial and aquatic biomarker
Vandergoes, Marcus J.; Newnham, Rewi M.; Denton, George H.; Blaauw, Maarten; Barrell, David J. A.
2013-08-01
Westland occurred sometime between ca 18,490 and ca 17,370 cal. yr BP. A similar general pattern of stadials and interstadials is seen, to varying degrees of resolution but generally with lesser chronological control, in many other paleoclimate proxy records from the New Zealand region. This highly resolved chronology of vegetation changes from southwestern New Zealand contributes to the examination of past climate variations in the southwest Pacific region. The stadial and interstadial episodes defined by south Westland pollen records represent notable climate variability during the latter part of the Last Glaciation. Similar climatic patterns recorded farther afield, for example from Antarctica and the Southern Ocean, imply that climate variations during the latter part of the Last Glaciation and the transition to the Holocene interglacial were inter-regionally extensive in the Southern Hemisphere and thus important to understand in detail and to place into a global context.
Designing a multiple dependent state sampling plan based on the coefficient of variation.
Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan
2016-01-01
A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of the quality characteristic which follows a normal distribution with unknown mean and variance. The optimal plan parameters of the proposed plan are solved by a nonlinear optimization model, which satisfies the given producer's risk and consumer's risk at the same time and minimizes the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally an example is given to illustrate the proposed plan.
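The decision structure of an MDS plan can be sketched as follows. The acceptance threshold c_a, rejection threshold c_r, and history depth m below are placeholder values, not the optimised parameters produced by the paper's nonlinear program; the point is only the chained dependence on preceding lots.

```python
def sample_cv(xs):
    """Sample coefficient of variation s / x-bar."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return var ** 0.5 / mean

def mds_decision(current_cv, history, c_a, c_r, m):
    """Multiple dependent state rule: accept outright if the CV is at most c_a;
    reject outright if it exceeds c_r; in between, accept only when the
    preceding m lots were all accepted (the 'dependent state' condition)."""
    if current_cv <= c_a:
        return "accept"
    if current_cv > c_r:
        return "reject"
    recent = history[-m:]
    return "accept" if len(recent) == m and all(h == "accept" for h in recent) else "reject"
```

Relative to a single sampling plan with the same risks, exploiting the history of neighbouring lots is what lets an MDS plan meet the producer's and consumer's risks with a smaller sample size.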
Mittermeyer, Gabriele; Malinowsky, Katharina; Beese, Christian; Höfler, Heinz; Schmalfeldt, Barbara; Becker, Karl-Friedrich; Avril, Stefanie
2013-01-01
Although the expression of cell signaling proteins is used as prognostic and predictive biomarker, variability of protein levels within tumors is not well studied. We assessed intratumoral heterogeneity of protein expression within primary ovarian cancer. Full-length proteins were extracted from 88 formalin-fixed and paraffin-embedded tissue samples of 13 primary high-grade serous ovarian carcinomas with 5-9 samples each. In addition, 14 samples of normal fallopian tube epithelium served as reference. Quantitative reverse phase protein arrays were used to analyze the expression of 36 cell signaling proteins including HER2, EGFR, PI3K/Akt, and angiogenic pathways as well as 15 activated (phosphorylated) proteins. We found considerable intratumoral heterogeneity in the expression of proteins with a mean coefficient of variation of 25% (range 17-53%). The extent of intratumoral heterogeneity differed between proteins (p<0.005). Interestingly, there were no significant differences in the extent of heterogeneity between phosphorylated and non-phosphorylated proteins. In comparison, we assessed the variation of protein levels amongst tumors from different patients, which revealed a similar mean coefficient of variation of 21% (range 12-48%). Based on hierarchical clustering, samples from the same patient clustered more closely together compared to samples from different patients. However, a clear separation of tumor versus normal tissue by clustering was only achieved when mean expression values of all individual samples per tumor were analyzed. While differential expression of some proteins was detected independently of the sampling method used, the majority of proteins only demonstrated differential expression when mean expression values of multiple samples per tumor were analyzed. Our data indicate that assessment of established and novel cell signaling proteins as diagnostic or prognostic markers may require sampling of serous ovarian cancers at several distinct
Gabriele Mittermeyer
Full Text Available Although the expression of cell signaling proteins is used as prognostic and predictive biomarker, variability of protein levels within tumors is not well studied. We assessed intratumoral heterogeneity of protein expression within primary ovarian cancer. Full-length proteins were extracted from 88 formalin-fixed and paraffin-embedded tissue samples of 13 primary high-grade serous ovarian carcinomas with 5-9 samples each. In addition, 14 samples of normal fallopian tube epithelium served as reference. Quantitative reverse phase protein arrays were used to analyze the expression of 36 cell signaling proteins including HER2, EGFR, PI3K/Akt, and angiogenic pathways as well as 15 activated (phosphorylated proteins. We found considerable intratumoral heterogeneity in the expression of proteins with a mean coefficient of variation of 25% (range 17-53%. The extent of intratumoral heterogeneity differed between proteins (p<0.005. Interestingly, there were no significant differences in the extent of heterogeneity between phosphorylated and non-phosphorylated proteins. In comparison, we assessed the variation of protein levels amongst tumors from different patients, which revealed a similar mean coefficient of variation of 21% (range 12-48%. Based on hierarchical clustering, samples from the same patient clustered more closely together compared to samples from different patients. However, a clear separation of tumor versus normal tissue by clustering was only achieved when mean expression values of all individual samples per tumor were analyzed. While differential expression of some proteins was detected independently of the sampling method used, the majority of proteins only demonstrated differential expression when mean expression values of multiple samples per tumor were analyzed. Our data indicate that assessment of established and novel cell signaling proteins as diagnostic or prognostic markers may require sampling of serous ovarian cancers at
Sérgio Luiz Gomes Antunes
2012-03-01
Full Text Available Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL. When acid-fast bacilli (AFB are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.
Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes
2012-03-01
Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.
Jennerjahn, T. C.; Gesierich, K.; Schefuß, E.; Mohtadi, M.
2014-12-01
Global climate change is a mosaic of regional changes to a large extent determined by region-specific feedbacks between climate and ecosystems. At present the ocean is forming a major sink in the global carbon cycle. Organic matter (OM) storage in sediments displays large regional variations and varied over time during the Quaternary. Upwelling regions are sites of high primary productivity and major depocenters of organic carbon (OC), the least understood of which is the Indian Ocean upwelling off Indonesia. In order to reconstruct the burial and composition of OM during the Late Quaternary, we analyzed five sediment cores from the Indian Ocean continental margin off the Indonesian islands Sumatra to Flores spanning the last 20,000 years (20 kyr). Sediments were analyzed for bulk composition, stable carbon and nitrogen isotopes of OM, amino acids and hexosamines and terrestrial plant wax n-alkanes and their stable carbon isotope composition. Sedimentation rates hardly varied over time in the western part of the transect. They were slightly lower in the East during the Last Glacial Maximum (LGM) and deglaciation, but increased strongly during the Holocene. The amount and composition of OM was similar along the transect with maximum values during the deglaciation and the late Holocene. High biogenic opal covarying with OM content indicates upwelling-induced primary productivity dominated by diatoms to be a major control of OM burial in sediments in the East during the past 20 kyr. The content of labile OM was low throughout the transect during the LGM and increased during the late Holocene. The increase was stronger and the OM less degraded in the East than in the West indicating that continental margin sediments off Java and Flores were the major depocenter of OC burial along the Indian Ocean margin off SW Indonesia. Temporal variations probably resulted from changes in upwelling intensity and terrestrial inputs driven by variations in monsoon strength.
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
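A toy instance of such a successive-approximations ascent procedure is Fisher scoring for a Bernoulli parameter (used here only as a stand-in for the paper's setting): the iterate moves along the score scaled by the inverse information, and the 0-to-2 step-size window for local convergence is visible directly, since the update reduces to p ← p + step·(x̄ − p).

```python
def mle_scoring(xs, p0=0.5, step=1.0, iters=50):
    """Scaled gradient (Fisher scoring) ascent for a Bernoulli parameter:
    p <- p + step * score / information. Locally convergent for step in (0, 2);
    step = 1 is plain Fisher scoring."""
    n, s = len(xs), sum(xs)
    p = p0
    for _ in range(iters):
        score = s / p - (n - s) / (1.0 - p)       # d/dp of the log-likelihood
        info = n / (p * (1.0 - p))                # Fisher information
        p += step * score / info
        p = min(max(p, 1e-9), 1.0 - 1e-9)         # keep the iterate inside (0, 1)
    return p

data = [1, 0, 1, 1, 0, 1, 0, 1]                   # 5 successes in 8 trials; MLE is 5/8
p_hat = mle_scoring(data, step=0.8)
```

Because the update is a contraction with factor |1 − step|, any step size strictly between 0 and 2 converges to the MLE here, with step = 1 converging in a single iteration, mirroring the abstract's convergence claim.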
Kelley, Ken
2007-11-01
The accuracy in parameter estimation approach to sample size planning is developed for the coefficient of variation, where the goal of the method is to obtain an accurate parameter estimate by achieving a sufficiently narrow confidence interval. The first method allows researchers to plan sample size so that the expected width of the confidence interval for the population coefficient of variation is sufficiently narrow. A modification allows a desired degree of assurance to be incorporated into the method, so that the obtained confidence interval will be sufficiently narrow with some specified probability (e.g., 85% assurance that the 95% confidence interval width will be no wider than a specified number of units). Tables of necessary sample size are provided for a variety of scenarios, to help researchers planning a study where the coefficient of variation is of interest to choose a sample size that yields a sufficiently narrow confidence interval, optionally with some specified assurance of the interval being sufficiently narrow. Freely available computer routines have been developed that allow researchers to easily implement all of the methods discussed in the article.
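A rough large-sample stand-in for this kind of planning can be sketched as follows. Kelley's method works with the noncentral t distribution; here the normal-theory asymptotic variance Var(cv̂) ≈ cv²(0.5 + cv²)/n is assumed instead, so the resulting n is only a first-order approximation, and the planning values are illustrative.

```python
import math
from statistics import NormalDist

def n_for_cv_width(cv, width, conf=0.95):
    """Smallest n whose large-sample Wald CI for the CV has expected width <= `width`,
    using the normal-theory approximation Var(cv_hat) ~ cv^2 * (0.5 + cv^2) / n."""
    z = NormalDist().inv_cdf((1.0 + conf) / 2.0)
    n = (2.0 * z) ** 2 * cv**2 * (0.5 + cv**2) / width**2
    return math.ceil(n)

# Planning value cv = 0.20, desired full CI width 0.05 at 95% confidence.
n = n_for_cv_width(cv=0.20, width=0.05)
```

Halving the target width quadruples the required n, the usual inverse-square behaviour of width-based planning; the assurance modification in the abstract would further inflate n to guarantee the realised (not just expected) width.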
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.
2016-08-01
We study the average energy - or particle - density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.
Al-Quwaiee, Hessa
2016-01-07
In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed form, in terms of Meijer's G-function, Fox's H-function, and the extended generalized bivariate Meijer's G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. We then rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection-combining diversity scheme and (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results for the bit error rate of the two systems in the high-SNR regime. Computer-based Monte Carlo simulations verify our new analytical results.
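The closed-form results concern the maximum and minimum of two independent variates, for which the CDF of the maximum factorizes as P(max ≤ x) = F1(x)·F2(x). A Monte Carlo sketch of that identity, using the standard GG(a, d, p) parametrization (the paper's "modified" variant may differ):

```python
import numpy as np

def gen_gamma(rng, a, d, p, size):
    """Generalized gamma GG(a, d, p) via the gamma transform:
    if G ~ Gamma(d/p, 1) then a * G**(1/p) ~ GG(a, d, p)."""
    return a * rng.gamma(d / p, 1.0, size) ** (1.0 / p)

rng = np.random.default_rng(42)
N = 200_000
x1 = gen_gamma(rng, a=1.0, d=2.0, p=1.5, size=N)
x2 = gen_gamma(rng, a=1.2, d=1.5, p=2.0, size=N)

# For independent branches the CDF of the maximum factorizes:
# P(max <= x) = F1(x) * F2(x); check empirically at x = 1.
x = 1.0
lhs = np.mean(np.maximum(x1, x2) <= x)
rhs = np.mean(x1 <= x) * np.mean(x2 <= x)
```

The same factorization applied to the survival functions gives the distribution of the minimum, which underlies the selection-combining analysis.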
Contributions from the data samples in NOC technique on the extracting of the Sq variation
Wu, Yingyan; Xu, Wenyao
2015-04-01
The solar quiet daily variation, Sq, is a rather regular variation usually observed at mid-to-low latitudes on magnetically quiet or less-disturbed days. It results mainly from dynamo currents in the ionospheric E region, which are driven by atmospheric tidal winds and other processes and flow as two current whorls, one in each of the northern and southern hemispheres[1]. Sq exhibits a conspicuous day-to-day (DTD) variability in daily range (or strength), shape (or phase), and current focus. This variability is mainly attributed to changes in ionospheric conductivity and tidal winds, varying with solar radiation and ionospheric conditions; Sq also presents seasonal and solar-cycle variations[2-4]. In general, Sq is expressed as the average over the five international magnetically quiet days. Using data from global magnetic stations, the equivalent current system of the daily variation can be constructed to reveal characteristics of the currents[5]. In addition, using the difference of the H component at two stations on the north and south sides of the Sq current focus, Sq can be extracted more reliably[6]. Recently, the method of Natural Orthogonal Components (NOC) has been used to decompose the magnetic daily variation into a sum of eigenmodes, with the first NOC eigenmode identified as the solar quiet daily variation and the second as the disturbance daily variation[7-9]. The NOC technique can help reveal simpler patterns within a complex set of variables without prescribed basis functions such as those of the FFT, but the physical interpretation of the NOC eigenmodes depends greatly on the number of data samples and on their regularity. Using the NOC method, we focus our present study on the analysis of the hourly means of the H component at the BMT observatory in China from 2001 to 2008. The contributions of the number and regularity of the data samples to which eigenmode corresponds to Sq are analyzed, by
Sanna, Daria; Pala, Maria; Cossu, Piero; Dedola, Gian Luca; Melis, Sonia; Fresu, Giovanni; Morelli, Laura; Obinu, Domenica; Tonolo, Giancarlo; Secchi, Giannina; Triunfo, Riccardo; Lorenz, Joseph G.; Scheinfeldt, Laura; Torroni, Antonio; Robledo, Renato; Francalacci, Paolo
2011-01-01
We report a sampling strategy based on Mendelian Breeding Units (MBUs), representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits. PMID:21734814
Field and Lab Methods to Reduce Sampling Variation in Soil Carbon
Mattson, K. G.; Zhang, J.
2015-12-01
Natural variability in soil and detrital carbon sampling is typically large enough that it hinders accurate assessment of standing stocks and of changes that may occur following disturbances and experimental treatments. We are developing carbon budgets in forests of Northern California and wish to see how experimental canopy thinning may affect carbon cycling in these forests. In the pre-treatment phase, we have sought methods to quantify detrital carbon pools in an accurate and efficient manner. We have found that small soil excavations, 15 cm in diameter to a depth of 10 cm, work very well to reduce variation and avoid introducing sampling biases. We carefully excavate a pit of uniform dimensions using cutting chisels and scoops. We fill the void using small pebbles contained in a small net and then weigh the pebbles to obtain a volume estimate of the soil collected. The samples are sorted moist through a series of sieves of 6, 4, and 2 mm into rocks, live roots, dead roots, woody debris, and remaining soil and its organic matter. From a single sample, we estimate proportional rock volume, fine soil bulk density (bulk density of the 2 mm fraction), live roots, dead roots, woody debris, and the proportion of organic matter in the 2 mm fraction. The standard deviations of soil measures (soil carbon, loss on ignition, bulk density, rock volume, live and dead root mass) were universally reduced relative to similar measures taken with soil corers, in some instances by up to 5-fold. Coefficients of variation using excavation pits are typically 5 to 10%, whereas those for cores were 20 to 30%. We have observed that variation in soil organic matter is more a function of variation in soil bulk density than of variation in percent soil organic matter content. As a result, we often see increased soil organic matter stores at depths below 10 cm. Soils beneath highly decayed logs show increases in mineral soil carbon, suggesting woody debris is a source of soil carbon. Below
Yafeng Wang
Little is known about tree height and height growth (measured as annual shoot elongation of the apical part of vertical stems) of coniferous trees growing at various altitudes on the Tibetan Plateau, which provides a high-elevation natural platform for assessing tree growth performance in relation to future climate change. We investigated the variation of maximum tree height and annual height increment of Smith fir (Abies georgei var. smithii) in seven forest plots (30 m × 40 m) along two altitudinal transects between 3,800 m and 4,200/4,390 m above sea level (a.s.l.) in the Sygera Mountains, southeastern Tibetan Plateau. Four plots were located on north-facing slopes and three on southeast-facing slopes. At each site, annual shoot growth was obtained by measuring the distance between successive terminal bud scars along the main stem of 25 trees that were between 2 and 4 m tall. Maximum/mean tree height and mean annual height increment of Smith fir decreased with increasing altitude up to the tree line, indicative of a stress gradient (the dominant temperature gradient) along the altitudinal transect. Above-average mean minimum summer (particularly July) temperatures affected height increment positively, whereas precipitation had no significant effect on shoot growth. The time series of annual height increments of Smith fir can be used for the reconstruction of past climate on the southeastern Tibetan Plateau. In addition, the rising summer temperatures observed in the recent past and anticipated for the future can be expected to enhance Smith fir's growth throughout its altitudinal distribution range.
Seasonal Variation, Chemical Composition and Antioxidant Activity of Brazilian Propolis Samples
Érica Weinstein Teixeira
2010-01-01
Total phenolic contents, antioxidant activity and chemical composition of propolis samples from three localities of Minas Gerais state (southeast Brazil) were determined. Total phenolic contents were determined by the Folin–Ciocalteu method, antioxidant activity was evaluated by DPPH, using BHT as reference, and chemical composition was analyzed by GC/MS. Propolis from the Itapecerica and Paula Cândido municipalities were found to have high phenolic contents and pronounced antioxidant activity. From these extracts, 40 substances were identified, among them simple phenylpropanoids, prenylated phenylpropanoids, and sesqui- and diterpenoids. Quantitatively, the main constituent of both samples was allyl-3-prenylcinnamic acid. A sample from the Virginópolis municipality had no detectable phenolic substances and contained mainly triterpenoids, the main constituents being α- and β-amyrins. Methanolic extracts from Itapecerica and Paula Cândido exhibited pronounced scavenging activity towards DPPH, indistinguishable from BHT activity, whereas the extract from the Virginópolis sample exhibited no antioxidant activity. Total phenolic substances, GC/MS analyses and antioxidant activity of samples from Itapecerica collected monthly over a period of one year revealed considerable variation. No correlation was observed between antioxidant activity and either total phenolic contents or contents of artepillin C and other phenolic substances, as assayed by GC/MS analysis.
Męczykowska, Hanna; Kobylis, Paulina; Stepnowski, Piotr; Caban, Magda
2017-05-04
Passive sampling is one of the most efficient methods of monitoring pharmaceuticals in environmental water. The reliability of the process relies on a correctly performed calibration experiment and a well-defined sampling rate (Rs) for target analytes. Therefore, in this review the state-of-the-art methods of passive sampler calibration for the most popular pharmaceuticals: antibiotics, hormones, β-blockers and non-steroidal anti-inflammatory drugs (NSAIDs), along with the sampling rate variation, were presented. The advantages and difficulties in laboratory and field calibration were pointed out, according to the needs of control of the exact conditions. Sampling rate calculating equations and all the factors affecting the Rs value - temperature, flow, pH, salinity of the donor phase and biofouling - were discussed. Moreover, various calibration parameters gathered from the literature published in the last 16 years, including the device types, were tabled and compared. What is evident is that the sampling rate values for pharmaceuticals are impacted by several factors, whose influence is still unclear and unpredictable, while there is a big gap in experimental data. It appears that the calibration procedure needs to be improved, for example, there is a significant deficiency of PRCs (Performance Reference Compounds) for pharmaceuticals. One of the suggestions is to introduce correction factors for Rs values estimated in laboratory conditions.
On the sampling distribution of the coefficient of L-variation for hydrological applications
A. Viglione
2010-08-01
The coefficient of L-variation (L-CV) is commonly used in statistical hydrology, in particular in regional frequency analysis, as a measure of the steepness of the frequency curve. The aim of this work is to infer the full frequency distribution of the sample L-CV (and, consequently, its confidence intervals) for small samples and without making assumptions on the underlying parent distribution of the hydrological variable of interest. Several two-parameter candidate distributions are compared for a wide range of cases using Monte Carlo simulations. A distribution-free method, recently proposed to estimate the variance structure of sample L-moments, is used to provide the parameters for the candidate distributions. It is shown that the log-Student t distribution approximates best, in most cases, the distribution of the sample L-CV, and that a simple correction of the bias of the sample L-CV and its variance improves the fit. The parametric method proposed here is also demonstrated to perform better than the non-parametric bootstrap. An example of how this result can be used in hydrology is presented, namely in the comparison of methods for regional flood frequency analysis.
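The object of study above, the sampling distribution of the sample L-CV, is straightforward to simulate. A minimal sketch using the standard unbiased estimators of the first two L-moments; for the exponential distribution the population L-CV is exactly 0.5, so small-sample behavior can be inspected directly:

```python
import numpy as np

def l_cv(x):
    """Sample coefficient of L-variation: t = l2 / l1, using the
    standard unbiased estimators of the first two L-moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    l1, l2 = b0, 2.0 * b1 - b0
    return l2 / l1

# Monte Carlo sampling distribution of the L-CV for a small sample.
# For the exponential distribution the population L-CV is 0.5.
rng = np.random.default_rng(7)
n, reps = 20, 20_000
t = np.array([l_cv(rng.exponential(1.0, n)) for _ in range(reps)])
```

Plotting a histogram of `t` (or fitting candidate two-parameter distributions to it) mirrors the comparison carried out in the article.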
Variation in marital quality in a national sample of divorced women.
James, Spencer L
2015-06-01
Previous work has compared marital quality between stably married and divorced individuals. Less work has examined the possibility of variation among divorcés in trajectories of marital quality as divorce approaches. This study addressed that gap by examining, first, whether distinct trajectories of marital quality can be discerned among women whose marriages ended in divorce and, second, the profile of women who experienced each trajectory. Latent class growth analyses with longitudinal data from a nationally representative sample were used to "look backward" from the time of divorce. Although demographic and socioeconomic variables from this national sample did not predict the trajectories well, nearly 66% of divorced women reported relatively high levels of both happiness and communication and either low or moderate levels of conflict. Future research including personality or interactional patterns may lead to theoretical insights about patterns of marital quality in the years leading to divorce.
Pratt, R G
2005-10-01
Leaf samples of forage bermudagrass with symptoms of infection by species of Bipolaris, Curvularia, and Exserohilum (dematiaceous hyphomycetes) were collected from three swine waste application sites in Mississippi at eight sampling times during each of 3 years. Samples were assayed for pathogens by observing sporulation on plated leaf tissue. Among 3,600 leaves assayed, eight species of the three genera were observed. Features and criteria for the practical identification of species on plated leaf samples are described. Sporulation by dematiaceous hyphomycetes was observed on 97% of leaves; a single pathogen was observed on 20% and two to five pathogens were observed on 77% of leaves. Distributions of leaves among classes with one to five pathogens per leaf, for sites within years, always differed significantly (P = 0.01) from a Poisson distribution and usually included fewer leaves than expected with four or five pathogens. Significant (P = 0.05) variation in frequencies of occurrence of pathogens among 72 samples of 50 leaves each was attributed to pathogen species, sampling times, and species-time interactions. Exserohilum rostratum, Curvularia lunata, and Bipolaris cynodontis were the most frequent pathogens across years and sites; B. spicifera and C. geniculata were intermediate; and B. hawaiiensis, B. sorokiniana, and B. stenospila were least frequent. For the five most common pathogens, significant differences in frequency among locations were commonplace. Six pathogens exhibited significant (P = 0.05) positive and negative correlations with others in overall frequencies of occurrence across years, sampling times, and sites. However, chi(2) tests of dual occurrence indicated that interactions between specific pairs of pathogens in or on leaves are not likely to be major causes for overall frequency correlations. Results indicate that dematiaceous hyphomycetes typically infect forage bermudagrass on swine waste application sites in complexes rather
Wigman, J T W; Wardenaar, K J; Wanders, R B K; Booij, S H; Jeronimus, B F; van der Krieke, L; Wichers, M; de Jonge, P
2017-05-01
Mild psychotic experiences are common in the general population. Although transient and benign in most cases, these experiences are predictive of later mental health problems for a significant minority. The goal of the present study was to examine the dimensional and discrete variations in individuals' reporting of subclinical positive and negative psychotic experiences in a unique Dutch internet-based sample from the general population. Positive and negative subclinical psychotic experiences were measured with the Community Assessment of Psychic Experiences in 2870 individuals. First, the prevalence of these experiences and their associations with demographics, affect, psychopathology and quality of life were investigated. Next, latent class analysis was used to identify data-driven subgroups with different symptom patterns, which were subsequently compared on the aforementioned variables. Subclinical psychotic experiences were commonly reported. Both positive and negative psychotic experiences were associated with younger age, more negative affect, anxiety and depression, as well as less positive affect and poorer quality of life. Seven latent classes ('Low psychotic experiences', 'Lethargic', 'Blunted', 'Distressed', 'Paranormal', 'Distressed/grandiose' and 'Distressed/positive psychotic experiences') were identified, demonstrating both dimensional differences in the number/severity of psychotic experiences and discrete differences in the patterns of reported experiences. Subclinical psychotic experiences thus show both dimensional severity variations and discrete symptom-pattern variations across individuals. To understand and capture all interindividual variations in subclinical psychotic experiences, their number, nature and context (co-occurrence patterns) should be considered at the same time. Only some psychotic experiences may lie on a true psychopathological psychosis continuum. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Evidence of linkage of HDL level variation to APOC3 in two samples with different ascertainment.
Gagnon, France; Jarvik, Gail P; Motulsky, Arno G; Deeb, Samir S; Brunzell, John D; Wijsman, Ellen M
2003-11-01
The APOA1-C3-A4-A5 gene complex encodes genes whose products are implicated in the metabolism of HDL and/or triglycerides. Although the relationship between polymorphisms in this gene cluster and dyslipidemias was first reported more than 15 years ago, association and linkage results have remained inconclusive. This is due, in part, to the oligogenic and multivariate nature of dyslipidemic phenotypes. Therefore, we investigate evidence of linkage of APOC3 and HDL using two samples of dyslipidemic pedigrees: familial combined hyperlipidemia (FCHL) and isolated low-HDL (ILHDL). We used a strategy that deals with several difficulties inherent in the study of complex traits: by using a Bayesian Markov Chain Monte Carlo (MCMC) approach we allow for oligogenic trait models, as well as simultaneous incorporation of covariates, in the context of multipoint analysis. By using this approach on extended pedigrees we provide evidence of linkage of APOC3 and HDL level variation in two samples with different ascertainment. In addition to APOC3, we estimate that two to three genes, each with a substantial effect on total variance, are responsible for HDL variation in both data sets. We also provide evidence, using the FCHL data set, for a pleiotropic effect between HDL, HDL3 and triglycerides at the APOC3 locus.
Laínez, José M; Orcun, Seza; Pekny, Joseph F; Reklaitis, Gintaras V; Suvannasankha, Attaya; Fausel, Christopher; Anaissie, Elias J; Blau, Gary E
2014-01-01
Variable metabolism, dose-dependent efficacy, and a narrow therapeutic target of cyclophosphamide (CY) suggest that dosing based on individual pharmacokinetics (PK) will improve efficacy and minimize toxicity. Real-time individualized CY dose adjustment was previously explored using a maximum a posteriori (MAP) approach based on a five serum-PK sampling in patients with hematologic malignancy undergoing stem cell transplantation. The MAP approach resulted in an improved toxicity profile without sacrificing efficacy. However, extensive PK sampling is costly and not generally applicable in the clinic. We hypothesize that the assumption-free Bayesian approach (AFBA) can reduce sampling requirements, while improving the accuracy of results. Retrospective analysis of previously published CY PK data from 20 patients undergoing stem cell transplantation. In that study, Bayesian estimation based on the MAP approach of individual PK parameters was accomplished to predict individualized day-2 doses of CY. Based on these data, we used the AFBA to select the optimal sampling schedule and compare the projected probability of achieving the therapeutic end points. By optimizing the sampling schedule with the AFBA, an effective individualized PK characterization can be obtained with only two blood draws at 4 and 16 hours after administration on day 1. The second-day doses selected with the AFBA were significantly different than the MAP approach and averaged 37% higher probability of attaining the therapeutic targets. The AFBA, based on cutting-edge statistical and mathematical tools, allows an accurate individualized dosing of CY, with simplified PK sampling. This highly accessible approach holds great promise for improving efficacy, reducing toxicities, and lowering treatment costs. © 2013 Pharmacotherapy Publications, Inc.
Application of mobile sampling to investigate spatial variation in fine particle composition
Li, Hugh Z.; Dallmann, Timothy R.; Gu, Peishi; Presto, Albert A.
2016-10-01
Long-term exposure to particulate matter (PM) is a major contributor to air pollution related deaths. Evidence indicates that metals play an important role in harming human health due to their redox potential. We conducted a mobile sampling campaign in 2013 summer and winter in Pittsburgh, PA to characterize spatial variation in PM2.5 mass and composition. Thirty-six sites were chosen based on three stratification variables: traffic density, proximity to point sources, and elevation. We collected filters in three time sessions (morning, afternoon, and overnight) in each season. X-ray fluorescence (XRF) was used to analyze concentrations of 26 elements: Na, Mg, Al, Si, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Br, Rb, Sr, Zr, Cd, Sb, and Pb. Trace elements had a broad range of concentrations from 0 to 300 ng/m3. Comparison of data from mobile sampling filters with stationary monitors suggested that the mobile sampling strategy did not lead to a biased dataset. We developed Land Use Regression (LUR) models to describe spatial variation of PM2.5, Si, S, Cl, K, Ca, Ti, Cr, Fe, Cu, and Zn. Using ArcGIS-10.3 (ESRI, Redlands, CA), we extracted different independent variables related to traffic influence, land-use type, and facility emissions based on the National Emission Inventory (NEI). To validate LUR models, we used regression diagnostics such as leave-one-out cross validation (LOOCV), mean studentized prediction residual (MSPR), and root mean square of studentized residuals (RMS). The number of predictors in final LUR models ranged from 1 to 6. Models had an average R2 of 0.57 (SD = 0.16). Traffic related variables explained the most variability with an average R2 contribution of 0.20 (SD = 0.20). Overall, these results demonstrated significant intra-urban spatial variability of fine particle composition.
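The LOOCV diagnostic used above to validate the LUR models has a closed form for ordinary least squares via the hat matrix. A sketch on synthetic data; the predictors, coefficients, and site count here are invented stand-ins, not the study's variables:

```python
import numpy as np

def loocv_r2(X, y):
    """Leave-one-out cross-validated R^2 for OLS, using the hat-matrix
    shortcut: the LOO residual is e_i / (1 - h_ii), so no model refits
    are needed."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    XtX = X1.T @ X1
    H = X1 @ np.linalg.solve(XtX, X1.T)          # hat (projection) matrix
    beta = np.linalg.solve(XtX, X1.T @ y)
    resid = y - X1 @ beta
    loo_resid = resid / (1.0 - np.diag(H))       # exact LOO residuals
    ss_res = np.sum(loo_resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic example: 36 "sites", 2 predictors (e.g. traffic, elevation)
rng = np.random.default_rng(3)
X = rng.normal(size=(36, 2))
y = 10 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=36)
r2_cv = loocv_r2(X, y)
```

Because each LOO residual comes from the full-data fit and leverages, this matches an explicit refit-per-site loop while being far cheaper.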
Oranje, Andreas
2006-01-01
A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…
Francescon, Paolo; Beddar, Sam; Satariano, Ninfa; Das, Indra J.
2014-01-01
Purpose: Evaluate the ability of different dosimeters to correctly measure the dosimetric parameters percentage depth dose (PDD), tissue-maximum ratio (TMR), and off-axis ratio (OAR) in water for small fields. Methods: Monte Carlo (MC) simulations were used to estimate the variation of k_{Qclin,Qmsr}^{fclin,fmsr} for several types of microdetectors as a function of depth and distance from the central axis for PDD, TMR, and OAR measurements. The variation of k_{Qclin,Qmsr}^{fclin,fmsr} enables one to evaluate the ability of a detector to reproduce the PDD, TMR, and OAR in water and consequently determine whether it is necessary to apply correction factors. The correctness of the simulations was verified by assessing the ratios between the PDDs and OARs of 5- and 25-mm circular collimators used with a linear accelerator measured with two different types of dosimeters (the PTW 60012 diode and PTW PinPoint 31014 microchamber) and the PDDs and the OARs measured with the Exradin W1 plastic scintillator detector (PSD) and comparing those ratios with the corresponding ratios predicted by the MC simulations. Results: MC simulations reproduced results with acceptable accuracy compared to the experimental results; therefore, MC simulations can be used to successfully predict the behavior of different dosimeters in small fields. The Exradin W1 PSD was the only dosimeter that reproduced the PDDs, TMRs, and OARs in water with high accuracy. With the exception of the EDGE diode, the stereotactic diodes reproduced the PDDs and the TMRs in water with a systematic error of less than 2% at depths of up to 25 cm; however, they produced OAR values that were significantly different from those in water, especially in the tail region (lower than 20% in some cases). The microchambers could be used for PDD measurements for fields greater than those produced using a 10-mm collimator. However, with the detector stem parallel to the beam axis, the microchambers could be used for TMR measurements for all
Nagarajan, Rajakumar; Iqbal, Zohaib; Burns, Brian; Wilson, Neil E; Sarma, Manoj K; Margolis, Daniel A; Reiter, Robert E; Raman, Steven S; Thomas, M Albert
2015-11-01
The overlap of metabolites is a major limitation in one-dimensional (1D) spectral-based single-voxel MRS and multivoxel-based MRSI. By combining echo planar spectroscopic imaging (EPSI) with a two-dimensional (2D) J-resolved spectroscopic (JPRESS) sequence, 2D spectra can be recorded in multiple locations in a single slice of prostate using four-dimensional (4D) echo planar J-resolved spectroscopic imaging (EP-JRESI). The goal of the present work was to validate two different non-linear reconstruction methods independently using compressed sensing-based 4D EP-JRESI in prostate cancer (PCa): maximum entropy (MaxEnt) and total variation (TV). Twenty-two patients with PCa with a mean age of 63.8 years (range, 46-79 years) were investigated in this study. A 4D non-uniformly undersampled (NUS) EP-JRESI sequence was implemented on a Siemens 3-T MRI scanner. The NUS data were reconstructed using two non-linear reconstruction methods, namely MaxEnt and TV. Using both TV and MaxEnt reconstruction methods, the following observations were made in cancerous compared with non-cancerous locations: (i) higher mean (choline + creatine)/citrate metabolite ratios; (ii) increased levels of (choline + creatine)/spermine and (choline + creatine)/myo-inositol; and (iii) decreased levels of (choline + creatine)/(glutamine + glutamate). We have shown that it is possible to accelerate the 4D EP-JRESI sequence by four times and that the data can be reliably reconstructed using the TV and MaxEnt methods. The total acquisition duration was less than 13 min and we were able to detect and quantify several metabolites.
Iskender, Ilker; Kadioglu, Salih Zeki; Kosar, Altug; Atasalihi, Ali; Kir, Altan
2011-06-01
The maximum standardized uptake value (SUV(max)) varies among positron emission tomography-integrated computed tomography (PET/CT) centers in the staging of non-small cell lung cancer. We evaluated the ratio of the optimum SUV(max) cut-off for the lymph nodes to the median SUV(max) of the primary tumor (ratioSUV(max)) to determine SUV(max) variations between PET/CT scanners. The previously described PET predictive ratio (PPR) was also evaluated. PET/CT and mediastinoscopy and/or thoracotomy were performed on 337 consecutive patients between September 2005 and March 2009. Thirty-six patients were excluded from the study. The pathological results were correlated with the PET/CT findings. Histopathological examination was performed on 1136 N2 lymph nodes from 10 different PET/CT centers. The majority of patients (group A: 240) used the same PET/CT scanner model at four different centers. Other patients were categorized as group B. The ratioSUV(max) for groups A and B was 0.18 and 0.22, respectively. The same ratio for centers 1, 2, 3 and 4 was 0.2, 0.21, 0.21, and 0.23, respectively. The optimal cut-off value of the PPR to predict mediastinal lymph node pathology for malignancy was 0.49 (likelihood ratio +2.02; sensitivity 70%, specificity 65%). We conclude that the ratioSUV(max) was similar for different scanners. Thus, SUV(max) is a valuable cut-off for comparing centers.
Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.
2010-05-01
One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments is acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives.
Davis, P B; Yee, R L; Millar, J
1994-08-01
Medical practice variation is extensive and well documented, particularly for surgical interventions, and raises important questions for health policy. To date, however, little work has been carried out on interpractitioner variation in prescribing activity in the primary care setting. An analytical model of medical variation is derived from the literature and relevant indicators are identified from a study of New Zealand general practice. The data are based on nearly 9,500 completed patient encounter records drawn from over a hundred practitioners in the Waikato region of the North Island, New Zealand. The data set represents a 1% sample of all weekday general practice office encounters in the Hamilton Health District recorded over a 12-month period. Overall levels of prescribing, and the distribution of drug mentions across diagnostic groupings, are broadly comparable to results drawn from international benchmark data. A multivariate analysis is carried out on seven measures of activity in the areas of prescribing volume, script detail, and therapeutic choice. The analysis indicates that patient, practitioner and practice attributes exert little systematic influence on the prescribing task. The principal influences are diagnosis, followed by practitioner identity. The pattern of findings suggests also that the prescribing task cannot be viewed as an undifferentiated activity. It is more usefully considered as a process of decision-making in which 'core' judgements--such as the decision to prescribe and the choice of drug--are highly predictable and strongly influenced by diagnosis, while 'peripheral' features of the task--such as choosing a combination drug or prescribing generically--are less determinate and more subject to the exercise of clinical discretion.(ABSTRACT TRUNCATED AT 250 WORDS)
Spatial scales of variation in lichens: implications for sampling design in biomonitoring surveys.
Giordani, Paolo; Brunialti, Giorgio; Frati, Luisa; Incerti, Guido; Ianesch, Luca; Vallone, Emanuele; Bacaro, Giovanni; Maccherini, Simona
2013-02-01
The variability of biological data is a main constraint affecting the quality and reliability of lichen biomonitoring surveys for estimating the effects of atmospheric pollution. Although most epiphytic lichen bioindication surveys focus on between-site differences at the landscape level, associated with the large-scale effects of atmospheric pollution, current protocols are based on multilevel sampling, thus adding further sources of variation and affecting the error budget. We test the hypothesis that assemblages of lichen communities vary at each spatial scale examined, in order to determine which scales should be included in future monitoring studies. We compared four sites in Italy, along gradients of atmospheric pollution and climate, to partition the variance components of lichen diversity across spatial scales (from trunks to landscapes). Despite environmental heterogeneity, we observed comparable spatial variance across sites. However, residual variation often exceeded between-plot variability, leading to biased estimation of atmospheric pollution effects.
Coarse graining from variationally enhanced sampling applied to the Ginzburg-Landau model.
Invernizzi, Michele; Valsson, Omar; Parrinello, Michele
2017-03-28
A powerful way to deal with a complex system is to build a coarse-grained model capable of catching its main physical features, while being computationally affordable. Inevitably, such coarse-grained models introduce a set of phenomenological parameters, which are often not easily deducible from the underlying atomistic system. We present a unique approach to the calculation of these parameters, based on the recently introduced variationally enhanced sampling method. It allows us to obtain the parameters from atomistic simulations, providing thus a direct connection between the microscopic and the mesoscopic scale. The coarse-grained model we consider is that of Ginzburg-Landau, valid around a second-order critical point. In particular, we use it to describe a Lennard-Jones fluid in the region close to the liquid-vapor critical point. The procedure is general and can be adapted to other coarse-grained models.
Lee, Sharon X; McLachlan, Geoffrey J; Pyne, Saumyadipta
2016-01-01
We present an algorithm for modeling flow cytometry data in the presence of large inter-sample variation. Large-scale cytometry datasets often exhibit some within-class variation due to technical effects such as instrumental differences and variations in data acquisition, as well as subtle biological heterogeneity within the class of samples. Failure to account for such variations in the model may lead to inaccurate matching of populations across a batch of samples and poor performance in classification of unlabeled samples. In this paper, we describe the Joint Clustering and Matching (JCM) procedure for simultaneous segmentation and alignment of cell populations across multiple samples. Under the JCM framework, a multivariate mixture distribution is used to model the distribution of the expressions of a fixed set of markers for each cell in a sample, such that the components in the mixture model may correspond to the various populations of cells with similar marker expressions (that is, clusters) in the composition of the sample. For each class of samples, an overall class template is formed by adopting random-effects terms to model the inter-sample variation within a class. The construction of a parametric template for each class allows for direct quantification of the differences between the template and each sample, and also between each pair of samples, both within and between classes. The classification of a new unclassified sample is then undertaken by assigning it to the class that minimizes the distance between the sample's fitted mixture density and the class density provided by the class template. For illustration, we use a symmetric form of the Kullback-Leibler divergence as a distance measure between two densities, but other distance measures can also be applied. We demonstrate on four real datasets how the JCM procedure can be used to carry out the tasks of automated clustering and alignment of cell populations across samples.
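As a minimal sketch of the classification step described above: fit a density to the unlabeled sample and assign it to the class template with the smallest symmetric Kullback-Leibler divergence. The sketch below uses univariate Gaussians (for which KL has a closed form) rather than the paper's multivariate mixtures, and the template values are invented for illustration.

```python
import numpy as np

def kl_normal(m0, s0, m1, s1):
    # closed-form KL divergence KL( N(m0, s0^2) || N(m1, s1^2) )
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def sym_kl(m0, s0, m1, s1):
    # symmetric form used as a distance between two fitted densities
    return kl_normal(m0, s0, m1, s1) + kl_normal(m1, s1, m0, s0)

def classify(sample, templates):
    # fit a Gaussian to the unlabeled sample, assign to the nearest template
    m, s = sample.mean(), sample.std(ddof=1)
    dists = {name: sym_kl(m, s, tm, ts) for name, (tm, ts) in templates.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
templates = {"class_A": (0.0, 1.0), "class_B": (5.0, 1.0)}  # hypothetical (mean, sd)
sample = rng.normal(4.8, 1.1, size=200)
print(classify(sample, templates))
```

Replacing the Gaussians with fitted mixture densities (and the closed-form KL with a sampled estimate) recovers the structure of the JCM classification rule.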
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
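The central fact the abstract builds on is that the log-likelihood gradient of a maximum entropy model is the difference between data moments and model moments. The toy below shows plain steepest ascent (the baseline the paper improves on, not its rectified algorithm) for independent ±1 spins, where the model moment has the closed form tanh(h).

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic binary (+1/-1) data generated from known fields
true_h = np.array([0.5, -1.0, 0.2])
p_up = 1 / (1 + np.exp(-2 * true_h))          # P(s = +1) for field h
data = np.where(rng.random((5000, 3)) < p_up, 1, -1)
target = data.mean(axis=0)                     # empirical moments to match

# steepest ascent on the average log-likelihood of an independent-spin model:
# gradient_i = <s_i>_data - <s_i>_model, with <s_i>_model = tanh(h_i)
h = np.zeros(3)
for _ in range(500):
    h += 0.5 * (target - np.tanh(h))

print(np.round(np.tanh(h) - target, 6))        # model moments match the data
```

For pairwise Ising models the model moment has no closed form, which is where the Gibbs-sampling noise and the curvature problems discussed in the abstract enter.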
Variations among animals when estimating the undegradable fraction of fiber in forage samples
Cláudia Batista Sampaio
2014-10-01
The objective of this study was to assess the variability among animals regarding the critical time to estimate the undegradable fraction of fiber (ct) using an in situ incubation procedure. Five rumen-fistulated Nellore steers were used to estimate the degradation profile of fiber. Animals were fed a standard diet with an 80:20 forage:concentrate ratio. Sugarcane, signal grass hay, corn silage and fresh elephant grass samples were assessed. Samples were put in F57 Ankom® bags and were incubated in the rumens of the animals for 0, 6, 12, 18, 24, 48, 72, 96, 120, 144, 168, 192, 216, 240 and 312 hours. The degradation profiles were interpreted using a mixed non-linear model in which a random effect was associated with the degradation rate. For sugarcane, signal grass hay and corn silage, there were no significant variations among animals regarding the fractional degradation rate of neutral and acid detergent fiber; consequently, the ct required to estimate the undegradable fiber fraction did not vary among animals for those forages. However, a significant variability among animals was found for the fresh elephant grass. The results seem to suggest that the variability among animals regarding the degradation rate of fibrous components can be significant.
The contribution of simple random sampling to observed variations in faecal egg counts.
Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I
2012-09-10
It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conformed to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasite diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise for faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided.
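The width of those confidence intervals is easy to see numerically. A sketch, assuming an exact Poisson interval for the raw slide count scaled by the slide multiplication factor (50 is used here for illustration; the factor depends on the exact protocol):

```python
from scipy.stats import chi2

def mcmaster_ci(eggs_counted, factor=50, alpha=0.05):
    """Exact Poisson CI for eggs per gram from a McMaster slide count.
    `factor` is the slide multiplication factor (assumed 50 here)."""
    n = eggs_counted
    lo = 0.0 if n == 0 else chi2.ppf(alpha / 2, 2 * n) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * n + 2) / 2
    return n * factor, lo * factor, hi * factor

epg, lo, hi = mcmaster_ci(4)          # only 4 eggs seen on the slide
print(epg, round(lo), round(hi))      # wide interval around 200 epg
```

Even a perfectly mixed sample counted without error produces intervals this wide, which is the abstract's point about "uniform egg counts" being unattainable.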
Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun
2016-05-05
An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis.
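For reference, the F-measure used to rank the callers above is the harmonic mean of precision and recall. The counts in this sketch are hypothetical (the abstract reports only the precision figures), chosen to mimic two callers with equal sensitivity but different precision:

```python
def f_measure(tp, fp, fn):
    # harmonic mean of precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical call sets: same sensitivity, different precision
print(round(f_measure(tp=60, fp=11, fn=10), 3))   # high-precision caller
print(round(f_measure(tp=60, fp=25, fn=10), 3))   # lower-precision caller
```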
Hamilton City Board of Education (Ontario).
Suggestions for studying the topic of variation of individuals and objects (balls) to help develop elementary school students' measurement, comparison, classification, evaluation, and data collection and recording skills are made. General suggestions of variables that can be investigated are made for the study of human variation. Twelve specific…
Biomarkers for monitoring pre-analytical quality variation of mRNA in blood samples.
Hui Zhang
There is an increasing need for proper quality control tools in the pre-analytical phase of the molecular diagnostic workflow. The aim of the present study was to identify biomarkers for monitoring pre-analytical mRNA quality variations in two different types of blood collection tubes, K2EDTA (EDTA) tubes and PAXgene Blood RNA Tubes (PAXgene tubes). These tubes are extensively used both in the diagnostic setting and for research biobank samples. Blood specimens collected in the two different blood collection tubes were stored for varying times at different temperatures, and microarray analysis was performed on the resultant extracted RNA. A large set of potential mRNA quality biomarkers for monitoring post-phlebotomy gene expression changes and mRNA degradation in blood was identified. qPCR assays for the potential biomarkers and a set of relevant reference genes were generated and used to pre-validate a sub-set of the selected biomarkers. The assay precision of the potential qPCR-based biomarkers was determined, and a final validation of the selected quality biomarkers using the developed qPCR assays and blood samples from 60 additional healthy subjects was performed. In total, four mRNA quality biomarkers (USP32, LMNA, FOSB, TNFRSF10C) were successfully validated. We suggest the use of these blood mRNA quality biomarkers for validating an experimental pre-analytical workflow. These biomarkers were further evaluated in the 2nd ring trial of the SPIDIA-RNA Program, which demonstrated that they can be used as quality control tools for mRNA analyses from blood samples.
Chattopadhyay, Bhargab; Kelley, Ken
2016-01-01
The coefficient of variation is an effect size measure with many potential uses in psychology and related disciplines. We propose a general theory for sequential estimation of the population coefficient of variation that considers both the sampling error and the study cost, importantly without specific distributional assumptions. Fixed sample size planning methods, commonly used in psychology and related fields, cannot simultaneously minimize both the sampling error and the study cost. The sequential procedure we develop is the first sequential sampling procedure developed for estimating the coefficient of variation. We first present a method of planning a pilot sample size after the research goals are specified by the researcher. Then, after collecting a sample as large as the estimated pilot sample size, a check is performed to assess whether the conditions necessary to stop the data collection have been satisfied. If not, an additional observation is collected and the check is performed again. This process continues, sequentially, until a stopping rule involving a risk function is satisfied. Our method ensures that the sampling error and the study costs are considered simultaneously so that the cost is not higher than necessary for the tolerable sampling error. We also demonstrate a variety of properties of the distribution of the final sample size for five different distributions under a variety of conditions with a Monte Carlo simulation study. In addition, we provide freely available functions via the MBESS package in R to implement the methods discussed.
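The two-stage structure (pilot sample, then one observation at a time until a stopping rule holds) can be sketched as follows. The stopping rule here, a target CI half-width for the CV using the normal-theory variance approximation Var(cv) ≈ cv²(0.5 + cv²)/n, is an illustrative stand-in for the paper's risk-function rule, not the authors' procedure:

```python
import numpy as np

def sequential_cv(draw, pilot_n=30, target_halfwidth=0.02, z=1.96, max_n=100000):
    """Collect a pilot sample, then add one observation at a time until the
    approximate CI half-width for the CV falls below the target.
    Uses Var(cv) ~ cv^2 (0.5 + cv^2) / n, a normal-theory assumption made
    for illustration only."""
    x = [draw() for _ in range(pilot_n)]
    while True:
        a = np.asarray(x)
        cv = a.std(ddof=1) / a.mean()
        halfwidth = z * cv * np.sqrt(0.5 + cv**2) / np.sqrt(len(x))
        if halfwidth <= target_halfwidth or len(x) >= max_n:
            return cv, len(x)
        x.append(draw())            # stopping rule not met: collect one more

rng = np.random.default_rng(2)
cv_hat, n_final = sequential_cv(lambda: rng.normal(100, 15))
print(round(cv_hat, 3), n_final)
```

Note how the final sample size is itself random, which is why the paper studies its distribution by simulation.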
Presentation of coefficient of variation for bioequivalence sample-size calculation.
Lee, Yi Lin; Mak, Wen Yao; Looi, Irene; Wong, Jia Woei; Yuen, Kah Hay
2017-07-01
The current study aimed to further contribute information on the intrasubject coefficient of variation (CV) from 43 bioequivalence studies conducted by our center. Consistent with Yuen et al. (2001), the current work also attempted to evaluate the effect of different parameters (AUC0-t, AUC0-∞, and Cmax) used in the estimation of the study power. Furthermore, we estimated the number of subjects required for each study from the intrasubject CV of AUC0-∞, taking into consideration the minimum sample-size requirement set by the US FDA. A total of 37 immediate-release and 6 extended-release formulations from 28 different active pharmaceutical ingredients (APIs) were evaluated. Out of the total number of studies conducted, 10 studies did not achieve satisfactory statistical power on two or more parameters; 4 studies consistently scored poorly across all three parameters. In general, intrasubject CV values calculated from Cmax were more variable than those from either AUC0-t or AUC0-∞. 20 out of 43 studies did not achieve more than 80% power when the value was calculated from Cmax, compared to only 11 (AUC0-∞) and 8 (AUC0-t) studies. This finding is consistent with Steinijans et al. (1995) [2] and Yuen et al. (2001) [3]. In conclusion, the CV values obtained from AUC0-t and AUC0-∞ were similar, while those derived from Cmax were consistently more variable. Hence, CV derived from AUC rather than Cmax should be used in sample-size calculation to achieve a sufficient, yet practical, test power.
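To make the CV-to-sample-size link concrete: a common normal-approximation for a 2x2 crossover bioequivalence study with 80-125% acceptance limits, assuming the true ratio is exactly 1, is N ≈ 2(z₁₋α + z₁₋β/₂)²·σ²w / ln(1.25)², with σ²w = ln(CV² + 1). This textbook approximation is an assumption of the sketch, not taken from the study above, and real planning would iterate with t-quantiles:

```python
from math import ceil, log
from scipy.stats import norm

def crossover_n(cv, alpha=0.05, power=0.80):
    """Approximate total N for a 2x2 crossover bioequivalence study
    (TOST, 80-125% limits), assuming the true ratio is exactly 1.
    Normal approximation for illustration only."""
    s2 = log(cv**2 + 1)                          # within-subject log-scale variance
    z = norm.ppf(1 - alpha) + norm.ppf(1 - (1 - power) / 2)
    n = 2 * z**2 * s2 / log(1.25)**2
    return 2 * ceil(n / 2)                       # round up to an even total

print(crossover_n(0.15), crossover_n(0.30))      # higher CV, much larger study
```

The quadratic dependence on CV is why using the more variable Cmax-derived CV inflates the planned sample size relative to AUC-derived values.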
Two research studies funded and overseen by EPA have been conducted since October 2006 on soil gas sampling methods and variations in shallow soil gas concentrations with the purpose of improving our understanding of soil gas methods and data for vapor intrusion applications. Al...
Variation in drug injection frequency among out-of-treatment drug users in a national sample.
Singer, M; Himmelgreen, D; Dushay, R; Weeks, M R
1998-05-01
This article analyzes data on drug injection frequency in a sample of more than 13,000 out-of-treatment drug injectors interviewed across 21 U.S. cities and Puerto Rico through the National Institute on Drug Abuse (NIDA) Cooperative Agreement for AIDS Community-Based Outreach/Intervention Research Program. The goals of the article are to present findings on injection frequency and to predict variation in terms of a set of variables suggested by previous research, including location, ethnicity, gender, age, educational attainment, years since first use of alcohol and marijuana, income, living arrangement, homelessness, drugs injected, and duration of injection across drugs. Three models were tested. Significant intersite differences were identified in injection frequency, although most of the other predictor variables we tested accounted for little of the variance. Ethnicity and drugs injected, however, were found to be significant. Taken together, location, ethnicity, and type of drug injected provide a configuration that differentiated and (for the variables available for the analysis) best predicted injection frequency. The public health implications of these findings are presented.
Seat Belt Detection Based on Maximum Local Variation
杨鹏
2016-01-01
The effectiveness of traditional edge-detection algorithms depends largely on the choice of threshold value. To address this, a new edge-detection method based on maximum local variation and 2D OTSU is proposed. The method computes, within each local area of an image, the differences between the gray values of all pixels and that of the center pixel, and uses the largest difference in each local area to describe the edge-distribution information of the image. This yields an edge-distribution information image, which is then binarized with the 2D OTSU method to obtain a binary edge image. Based on this binary edge image and some prior information about the vehicle, an algorithm for locating the car window area and the driver area is proposed. Finally, seat-belt detection is performed by testing whether the driver area contains a line that satisfies the prior characteristics of a seat belt. The experimental results show that the method can accurately locate the car window edges and the driver area, can be applied to seat-belt detection, and thus has practical value.
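The "maximum local variation" map itself is straightforward to compute. A minimal sketch in NumPy (the subsequent 2D OTSU binarization and the line-detection stage are omitted), using a synthetic step edge:

```python
import numpy as np

def max_local_variation(img, radius=1):
    """Edge-strength map: for each pixel, the maximum absolute difference
    between the centre pixel and any neighbour in a (2r+1)x(2r+1) window."""
    h, w = img.shape
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # window shifted by (dy, dx) relative to the original image
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            out = np.maximum(out, np.abs(shifted - img))
    return out

# a vertical step edge: variation peaks on the two columns at the boundary
img = np.zeros((5, 6))
img[:, 3:] = 255
edges = max_local_variation(img)
print(edges[2])
```

Thresholding this map (2D OTSU in the paper) then yields the binary edge image used for window and driver-area localization.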
Eldridge, Sandra M; Ashby, Deborah; Kerry, Sally
2006-10-01
Cluster randomized trials are increasingly popular. In many of these trials, cluster sizes are unequal. This can affect trial power, but standard sample size formulae for these trials ignore this. Previous studies addressing this issue have mostly focused on continuous outcomes or methods that are sometimes difficult to use in practice. We show how a simple formula can be used to judge the possible effect of unequal cluster sizes for various types of analyses and both continuous and binary outcomes. We explore the practical estimation of the coefficient of variation of cluster size required in this formula and demonstrate the formula's performance for a hypothetical but typical trial randomizing UK general practices. The simple formula provides a good estimate of sample size requirements for trials analysed using cluster-level analyses weighting by cluster size and a conservative estimate for other types of analyses. For trials randomizing UK general practices the coefficient of variation of cluster size depends on variation in practice list size, variation in incidence or prevalence of the medical condition under examination, and practice and patient recruitment strategies, and for many trials is expected to be approximately 0.65. Individual-level analyses can be noticeably more efficient than some cluster-level analyses in this context. When the coefficient of variation is <0.23, the effect of adjustment for variable cluster size on sample size is negligible. Most trials randomizing UK general practices and many other cluster randomized trials should account for variable cluster size in their sample size calculations.
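One widely cited form of the adjustment described above inflates an individually-randomized sample size by a design effect that includes the coefficient of variation of cluster size, DE = 1 + ((cv² + 1)·m̄ − 1)·ICC. The sketch below assumes this form (consistent with Eldridge et al.'s account, though the exact formula here is the author's reading, and the input numbers are hypothetical):

```python
from math import ceil

def inflated_sample_size(n_individual, mean_cluster_size, icc, cv=0.0):
    """Inflate an individually-randomized sample size for cluster
    randomization, allowing unequal cluster sizes via their coefficient
    of variation. cv=0 recovers the usual equal-cluster design effect."""
    design_effect = 1 + ((cv**2 + 1) * mean_cluster_size - 1) * icc
    # round() guards against float noise before taking the ceiling
    return ceil(round(n_individual * design_effect, 6))

n = 300                      # hypothetical individually-randomized requirement
equal = inflated_sample_size(n, mean_cluster_size=20, icc=0.05)
unequal = inflated_sample_size(n, mean_cluster_size=20, icc=0.05, cv=0.65)
print(equal, unequal)        # unequal clusters demand more participants
```

With cv = 0.65, the value the abstract suggests is typical for UK general practices, the premium over equal clusters is noticeable; below the abstract's cv < 0.23 threshold it is negligible.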
Variation in orgasm occurrence by sexual orientation in a sample of U.S. singles.
Garcia, Justin R; Lloyd, Elisabeth A; Wallen, Kim; Fisher, Helen E
2014-11-01
Despite recent advances in understanding orgasm variation, little is known about ways in which sexual orientation is associated with men's and women's orgasm occurrence. To assess orgasm occurrence during sexual activity across sexual orientation categories, data were collected by Internet questionnaire from 6,151 men and women (ages 21-65+ years) as part of a nationally representative sample of single individuals in the United States. Analyses were restricted to a subsample of 2,850 singles (1,497 men, 1,353 women) who had experienced sexual activity in the past 12 months. Participants reported their sex/gender, self-identified sexual orientation (heterosexual, gay/lesbian, bisexual), and what percentage of the time they experience orgasm when having sex with a familiar partner. Mean occurrence rate for experiencing orgasm during sexual activity with a familiar partner was 62.9% among single women and 85.1% among single men, a significant difference (F1,2848 = 370.6). For men, mean occurrence rate of orgasm did not vary significantly by sexual orientation: heterosexual men 85.5%, gay men 84.7%, bisexual men 77.6% (F2,1494 = 2.67, P = 0.07, η(2) = 0.004). For women, however, mean occurrence rate of orgasm varied significantly by sexual orientation: heterosexual women 61.6%, lesbian women 74.7%, bisexual women 58.0% (F2,1350 = 10.95). These findings suggest that women, regardless of sexual orientation, have less predictable, more varied orgasm experiences than do men, and that for women, but not men, the likelihood of orgasm varies with sexual orientation. These findings demonstrate the need for further investigations into the comparative sexual experiences and sexual health outcomes of sexual minorities. © 2014 International Society for Sexual Medicine.
Scogin, J. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-24
Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated. A temperature for the TGA-MS analysis is determined that reduces the complications caused by the condensation of volatile materials. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of the total moisture discussed in this report can be made.
Fouz, R.; Vilar, M.J.; Yus, E.; Sanjuán, M.L.; Diéguez, F.J.
2016-11-01
The objective of this study was to investigate the variability in cow's milk somatic cell counts (SCC) depending on the type of milk meter used by dairy farms for official milk recording. The study was performed in 2011 and 2012 in the major cattle area of Spain. In total, 137,846 lactations of Holstein-Friesian cows were analysed at 1,912 farms. A generalised least squares regression model was used for data analysis. The model showed that the milk meter had a substantial effect on the SCC for individual milk samples obtained for official milk recording. The results suggested an overestimation of the SCC in milk samples from farms that had electronic devices in comparison with farms that used portable devices, and an underestimation when volumetric meters are used. A weak positive correlation was observed between the SCC and the percentage of fat in individual milk samples. The results underline the importance of considering this variable when using SCC data from milk recording in the dairy herd improvement program or in quality milk programs.
Moes, C.C.M.
2007-01-01
The pressure distribution and the location of the points of maximum pressure, usually below the ischial tuberosities, was measured for subjects sitting on a flat, hard and horizontal support, and varying angle of the rotation of the pelvis. The pressure data were analyzed for force- and pressure-rel
Pesch, Hans-Josef
2013-01-01
The purpose of the present paper is to show that the most prominent results in optimal control theory, namely the distinction between state and control variables, the maximum principle, and the principle of optimality (Bellman's equation), are immediate consequences of Carathéodory's achievements, published about two decades before optimal control theory saw the light of day.
Angelbeck-Schulze, Mandy; Mischke, Reinhard; Rohn, Karl; Hewicker-Trautwein, Marion; Naim, Hassan Y.; Bäumer, Wolfgang
2014-01-01
Background Previously, we evaluated a minimally invasive epidermal lipid sampling method called skin scrub, which achieved reproducible and comparable results to skin scraping. The present study aimed at investigating regional variations in canine epidermal lipid composition using the skin scrub technique and its suitability for collecting skin lipids in dogs suffering from certain skin diseases. Eight different body sites (5 highly and 3 lowly predisposed for atopic lesions) were sampled by ...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
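The MAF transform described above can be sketched numerically: it solves the generalized eigenproblem S_d w = λ S w, where S is the data covariance and S_d the covariance of spatially shifted differences, so that the first factors have maximal autocorrelation. The demo below stands in unit-lag differences of a multivariate series for the spatial shift used with irregularly sampled field data, and the test signal is synthetic:

```python
import numpy as np
from scipy.linalg import eigh

def maf(X):
    """Maximum autocorrelation factors of an (n_samples, n_vars) array.
    Solves S_d w = lambda S w; small lambda corresponds to high
    autocorrelation, so factors come out ordered MAF1 first."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    Sd = np.cov(np.diff(Xc, axis=0), rowvar=False)   # unit-lag differences
    lam, W = eigh(Sd, S)                             # ascending lambda
    return Xc @ W, 1 - lam / 2                       # factors, autocorrelations

rng = np.random.default_rng(3)
smooth = np.cumsum(rng.normal(size=500))             # strongly autocorrelated signal
noise = rng.normal(size=(500, 2))                    # white-noise channels
X = np.column_stack([smooth + noise[:, 0], smooth - noise[:, 0], noise[:, 1]])
factors, autocorr = maf(X)
print(np.round(autocorr, 2))   # first factor isolates the smooth signal
```

Unlike a non-spatial factor analysis of the same data, the ordering here is driven by spatial (here, serial) structure rather than variance, which is the contrast the abstract draws.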
Cholet, M.; Minerbe, F.; Oliviero, G.; Pestel, V. [Université de Caen, 6 bd du Mal Juin, 14050 Caen Cedex (France); Frémont, F., E-mail: francois.fremont@ensicaen.fr [Centre de Recherche sur les Ions, les Matériaux et la Photonique, Unité Mixte Université de Caen-CEA-CNRS-EnsiCaen, 6 bd du Mal Juin, 14050 Caen Cedex 4 (France)
2014-08-15
Highlights: • Young-type interferences with electrons are revisited. • Oscillations in the angular distribution of the energy maximum of Auger spectra are evidenced. • Model calculations are in good agreement with the experimental result. • The position of the Auger spectra oscillates in counterphase with the total intensity. Abstract: In this article, we present experimental evidence of a particular electron-interference phenomenon. The electrons are provided by autoionization of 2l2l′ doubly excited He atoms following the capture of H₂ electrons by a slow He²⁺ incoming ion. We observe that the position of the energy maximum of the Auger structures oscillates with the detection angle. A calculation based on a simple model that includes interferences clearly shows that the present oscillations are due to Young-type interferences caused by electron scattering on both H⁺ centers.
Large-Sample Theory for Generalized Linear Models with Non-natural Link and Random Variates
Jie-li Ding; Xi-ru Chen
2006-01-01
For generalized linear models (GLM), in the case that the regressors are stochastic and have different distributions and the observations of the responses may have different dimensionality, the asymptotic theory of the maximum likelihood estimate (MLE) of the parameters is studied under the assumption of a non-natural link function.
A. V. Belov
Ulysses, launched in October 1990, began its second out-of-ecliptic orbit in September 1997. In 2000/2001 the spacecraft passed from the south to the north polar regions of the Sun in the inner heliosphere. In contrast to the first rapid pole-to-pole passage in 1994/1995, close to solar minimum, Ulysses now experiences solar maximum conditions. The Kiel Electron Telescope (KET) also measures protons and alpha-particles in the energy range from 5 MeV/n to >2 GeV/n. To derive radial and latitudinal gradients for >2 GeV/n protons and alpha-particles, data from the Chicago instrument on board IMP-8 and the neutron monitor network have been used to determine the corresponding time profiles at Earth. We obtain a spatial distribution at solar maximum which differs greatly from the solar minimum distribution. A steady-state approximation, characterized by a small radial and a significant latitudinal gradient at solar minimum, has given way to a highly variable one with a large radial and a small (consistent with zero) latitudinal gradient. A significant deviation from a spherically symmetric cosmic ray distribution following the reversal of the solar magnetic field in 2000/2001 has not been observed yet. A small deviation has only been observed at northern polar regions, showing an excess of particles instead of the expected depression. This indicates that the reconfiguration of the heliospheric magnetic field, caused by the reappearance of the northern polar coronal hole, begins to dominate the modulation of galactic cosmic rays even at solar maximum.
Key words: Interplanetary physics (cosmic rays; energetic particles); Space plasma physics (charged particle motion and acceleration)
Taylor, Wendy; Stacey, Kaye
2014-01-01
This article presents "The Two Children Problem," published by Martin Gardner, who wrote a famous and widely-read math puzzle column in the magazine "Scientific American," and a problem presented by puzzler Gary Foshee. This paper explains the paradox of Problems 2 and 3 and many other variations of the theme. Then the authors…
Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar; Ghosh, N C
2015-06-01
The design of a water quality monitoring network (WQMN) is a complicated decision-making process, because each sampling site involves high installation, operational, and maintenance costs. Therefore, data with the highest information content should be collected. The effect of seasonal variation in point and diffuse pollution loadings on river water quality may have a significant impact on the optimal selection of sampling locations, but this possible effect has never been addressed in the evaluation and design of monitoring networks. The present study proposes a systematic approach for siting an optimal number and location of river water quality sampling stations based on seasonal or monsoonal variations in both point and diffuse pollution loadings. The proposed approach conceptualizes water quality monitoring as a two-stage process: the first stage considers all potential water quality sampling sites, selected based on the existing guidelines or frameworks and on the locations of both point and diffuse pollution sources. The monitoring at all sampling sites thus identified should be continued for an adequate period of time to account for the effect of the monsoon season. In the second stage, the monitoring network is designed separately for monsoon and non-monsoon periods by optimizing the number and locations of sampling sites, using a modified Sanders approach. The impacts of human interventions on the design of the sampling network are quantified geospatially by estimating diffuse pollution loads and verified with a land use map. To demonstrate the proposed methodology, the Kali River basin in the western Uttar Pradesh state of India was selected as a study area. The final design suggests consequential pre- and post-monsoonal changes in the location and priority of water quality monitoring stations based on the seasonal variation of point and diffuse pollution loadings.
Spatial Variation of Soil Lead in an Urban Community Garden: Implications for Risk-Based Sampling.
Bugdalski, Lauren; Lemke, Lawrence D; McElmurry, Shawn P
2014-01-01
Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small-scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1-10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform.
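The Monte Carlo comparison of sampling strategies described above can be sketched as follows. This is a minimal illustration on a synthetic lead map, not the study's interpolated data; the grid values and hot-spot location are assumptions, and only the 400 ppm threshold comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 15 x 30 m lead map on a 1 m grid (illustrative values, not the
# study's data): ~150 ppm background with a hypothetical hot spot in one corner.
grid = rng.normal(150, 30, size=(15, 30))
grid[2:5, 2:6] = rng.normal(500, 50, size=(3, 4))
hot = grid > 400  # 400 ppm total-lead threshold, as in the abstract

def p_miss_hotspot(n_samples, trials=2000):
    """Estimate the type II error: probability that n random point samples
    all land outside the hot spot."""
    flat_hot = hot.ravel()
    misses = 0
    for _ in range(trials):
        idx = rng.choice(flat_hot.size, size=n_samples, replace=False)
        if not flat_hot[idx].any():
            misses += 1
    return misses / trials

# More individual samples -> lower chance of overlooking the hot spot.
assert p_miss_hotspot(5) > p_miss_hotspot(40)
```

Extending the sketch to composited samples (averaging several points into one analysis) would show the trade-off reported in the abstract: the mean estimate stabilizes while the chance of missing the hot spot grows.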
Strain variation within Campylobacter species in fecal samples from dogs and cats
Koene, M.G.J.; Houwers, D.J.; Dijkstra, J.R.; Duim, B.; Wagenaar, J.A.
2009-01-01
To investigate the incidence of co-colonization of different strains of Campylobacter species present in canine and feline stool samples, isolates were recovered by culture from 40 samples from dogs (n = 34) and cats (n = 6). Animals were of different ages, with diarrhoea or without clinical signs.
Caboux, Elodie; Lallemand, Christophe; Ferro, Gilles; Hémon, Bertrand; Mendy, Maimuna; Biessy, Carine; Sims, Matt; Wareham, Nick; Britten, Abigail; Boland, Anne; Hutchinson, Amy; Siddiq, Afshan; Vineis, Paolo; Riboli, Elio; Romieu, Isabelle; Rinaldi, Sabina; Gunter, Marc J.; Peeters, Petra H. M.; van der Schouw, Yvonne T.; Travis, Ruth; Bueno-de-Mesquita, H. Bas; Canzian, Federico; Sánchez, Maria-José; Skeie, Guri; Olsen, Karina Standahl; Lund, Eiliv; Bilbao, Roberto; Sala, Núria; Barricarte, Aurelio; Palli, Domenico; Navarro, Carmen; Panico, Salvatore; Redondo, Maria Luisa; Polidoro, Silvia; Dossus, Laure; Boutron-Ruault, Marie Christine; Clavel-Chapelon, Françoise; Trichopoulou, Antonia; Trichopoulos, Dimitrios; Lagiou, Pagona; Boeing, Heiner; Fisher, Eva; Tumino, Rosario; Agnoli, Claudia; Hainaut, Pierre
2012-01-01
The European Prospective Investigation into Cancer and nutrition (EPIC) is a long-term, multi-centric prospective study in Europe investigating the relationships between cancer and nutrition. This study has served as a basis for a number of Genome-Wide Association Studies (GWAS) and other types of genetic analyses. Over a period of 5 years, 52,256 EPIC DNA samples have been extracted using an automated DNA extraction platform. Here we have evaluated the pre-analytical factors affecting DNA yield, including anthropometric, epidemiological and technical factors such as center of subject recruitment, age, gender, body-mass index, disease case or control status, tobacco consumption, number of aliquots of buffy coat used for DNA extraction, extraction machine or procedure, DNA quantification method, degree of haemolysis and variations in the timing of sample processing. We show that the largest significant variations in DNA yield were observed with degree of haemolysis and with center of subject recruitment. Age, gender, body-mass index, cancer case or control status and tobacco consumption also significantly impacted DNA yield. Feedback from laboratories which have analyzed DNA with different SNP genotyping technologies demonstrate that the vast majority of samples (approximately 88%) performed adequately in different types of assays. To our knowledge this study is the largest to date to evaluate the sources of pre-analytical variations in DNA extracted from peripheral leucocytes. The results provide a strong evidence-based rationale for standardized recommendations on blood collection and processing protocols for large-scale genetic studies. PMID:22808065
Tim A. Moore
2016-01-01
DOI: 10.17014/ijog.3.1.29-51. Stratified sampling of coal seams for petrographic analysis using block samples is a viable alternative to standard methods of channel sampling and particulate pellet mounts. Although petrographic analysis of particulate pellets is employed widely, it is time consuming and does not allow variation within sampling units to be assessed, an important measure in any study, whether for paleoenvironmental reconstruction or for obtaining estimates of industrial attributes. Also, samples taken as intact blocks provide additional information, such as texture and botanical affinity, that cannot be gained using particulate pellets. Stratified sampling can be employed on both 'fine'- and 'coarse'-grained coal units. Fine-grained coals are defined as those coal intervals that do not contain vitrain bands greater than approximately 1 mm in thickness (as measured perpendicular to bedding). In fine-grained coal seams, a reasonably sized block sample (with a polished surface area of ~3 cm2) can be taken that encapsulates the macroscopic variability. However, for coarse-grained coals (vitrain bands >1 mm), a different system has to be employed in order to accurately account for the larger particles. Macroscopic point counting of vitrain bands can accurately account for particles >1 mm within a coal interval. This point counting can be conducted using something as simple as a string laid on the coal face, with marked intervals greater than the largest particle expected to be encountered (although new technologies are being developed to capture this type of information digitally). Comparative analyses of particulate pellets and blocks on the same interval show less than 6% variation between the two sample types when blocks are recalculated to include macroscopic counts of vitrain. Therefore, even in coarse-grained coals, stratified sampling can be used effectively and representatively.
Halyo, Nesim; Direskeneli, Haldun; Barkstrom, Bruce R.
1991-01-01
Satellite measurements are subject to a wide range of uncertainties due to their temporal, spatial, and directional sampling characteristics. An information-theory approach is suggested to examine the nonuniform temporal sampling of ERB measurements. The information (i.e., its entropy or uncertainty) before and after the measurements is determined, and information gain (IG) is defined as a reduction in the uncertainties involved. A stochastic model for the diurnal outgoing flux variations that affect the ERB is developed. Using Gaussian distributions for the a priori and measured radiant exitance fields, the IG is obtained by computing the a posteriori covariance. The IG for the monthly outgoing flux measurements is examined for different orbital parameters and orbital tracks, using the Earth Observing System orbital parameters as specific examples. Variations in IG due to changes in the orbit's inclination angle and the initial ascending node local time are investigated.
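The information gain described above, i.e. the entropy reduction from prior to posterior under Gaussian assumptions, can be sketched for a linear measurement model. The covariances below are illustrative placeholders, not ERB values.

```python
import numpy as np

def information_gain(P_prior, H, R):
    """Entropy reduction (nats) from a linear Gaussian measurement y = H x + v,
    v ~ N(0, R), starting from prior covariance P_prior.  For Gaussians the
    gain is 0.5 * log(det(P_prior) / det(P_post))."""
    P_post = np.linalg.inv(np.linalg.inv(P_prior) + H.T @ np.linalg.inv(R) @ H)
    _, logdet_prior = np.linalg.slogdet(P_prior)
    _, logdet_post = np.linalg.slogdet(P_post)
    return 0.5 * (logdet_prior - logdet_post)

# Hypothetical two-component exitance field observed once with unit noise:
P = np.diag([4.0, 1.0])   # a priori covariance (placeholder values)
H = np.eye(2)             # direct observation
R = np.eye(2)             # measurement noise covariance
ig = information_gain(P, H, R)
assert ig > 0  # a measurement cannot increase the uncertainty
```

Varying H and R in such a sketch mimics the paper's question: how orbital parameters (which change what is observed and how often) change the gain.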
Wellens, H.L.L.; Kuijpers-Jagtman, A.M.; Halazonetis, D.J.
2013-01-01
This investigation aimed to quantify craniofacial variation in a sample of modern humans. In all, 187 consecutive orthodontic patients were included, of which 79 were male (mean age 13.3, SD 3.7, range 7.5-40.8) and 99 were female (mean age 12.3, SD 1.9, range 8.7-19.1). The male and female subgroups…
Ljubiša Stanković
2015-01-01
An approach to sparse signal reconstruction that treats the missing measurements/samples as variables has recently been proposed. The number and positions of the missing samples determine the uniqueness of the solution. It is assumed that the analyzed signals are sparse in the discrete Fourier transform (DFT) domain. A theorem for a simple uniqueness check is proposed, in two forms: for an arbitrary sparse signal and for an already reconstructed signal. The results are demonstrated on illustrative and statistical examples.
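A minimal sketch of reconstruction with missing samples treated as implied variables, under the simplifying assumption that the active DFT bins are known (the paper's theorem addresses uniqueness without this assumption; the signal length, support, and missing positions below are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                               # signal length (arbitrary)
support = np.array([3, 7])           # active DFT bins (assumed known here)
coeffs = rng.normal(size=2) + 1j * rng.normal(size=2)

n = np.arange(N)
A = np.exp(2j * np.pi * np.outer(n, support) / N)  # inverse-DFT columns
x = A @ coeffs                       # signal that is 2-sparse in the DFT domain

missing = np.array([2, 5, 11])       # positions of missing samples
avail = np.setdiff1d(n, missing)

# Least squares on the available samples only; the missing samples are then
# implied by the sparse model rather than observed.
c_hat, *_ = np.linalg.lstsq(A[avail], x[avail], rcond=None)
x_rec = A @ c_hat

assert np.allclose(x_rec[missing], x[missing])  # missing samples recovered
```

With 13 available samples and sparsity 2, the restricted system is overdetermined and generically has a unique solution, which is the flavor of condition the uniqueness theorem formalizes.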
Influence of common preanalytical variations on the metabolic profile of serum samples in biobanks
Fliniaux, Ophelie [University of Picardie Jules Verne, Laboratoire de Phytotechnologie EA 3900-BioPI (France); Gaillard, Gwenaelle [Biobanque de Picardie (France); Lion, Antoine [University of Picardie Jules Verne, Laboratoire de Phytotechnologie EA 3900-BioPI (France); Cailleu, Dominique [Batiment Serres-Transfert, rue de Mai/rue Dallery, Plateforme Analytique (France); Mesnard, Francois, E-mail: francois.mesnard@u-picardie.fr [University of Picardie Jules Verne, Laboratoire de Phytotechnologie EA 3900-BioPI (France); Betsou, Fotini [Integrated Biobank of Luxembourg (Luxembourg)
2011-12-15
A blood pre-centrifugation delay of 24 h at room temperature influenced the proton NMR spectroscopic profiles of human serum. A blood pre-centrifugation delay of 24 h at 4 °C did not influence the spectroscopic profile as compared with 4 h delays at either room temperature or 4 °C. Five or ten serum freeze-thaw cycles also influenced the proton NMR spectroscopic profiles. Certain common in vitro preanalytical variations occurring in biobanks may impact the metabolic profile of human serum.
Svendsen, Jon C; Tirsgaard, Bjørn; Cordero, Gerardo A; Steffensen, John F
2015-01-01
Intraspecific variation and trade-off in aerobic and anaerobic traits remain poorly understood in aquatic locomotion. Using gilthead sea bream (Sparus aurata) and Trinidadian guppy (Poecilia reticulata), both axial swimmers, this study tested four hypotheses: (1) gait transition from steady to unsteady (i.e., burst-assisted) swimming is associated with anaerobic metabolism evidenced as excess post exercise oxygen consumption (EPOC); (2) variation in swimming performance (critical swimming speed; U crit) correlates with metabolic scope (MS) or anaerobic capacity (i.e., maximum EPOC); (3) there is a trade-off between maximum sustained swimming speed (U sus) and minimum cost of transport (COTmin); and (4) variation in U sus correlates positively with optimum swimming speed (U opt; i.e., the speed that minimizes energy expenditure per unit of distance traveled). Data collection involved swimming respirometry and video analysis. Results showed that anaerobic swimming costs (i.e., EPOC) increase linearly with the number of bursts in S. aurata, with each burst corresponding to 0.53 mg O2 kg(-1). Data are consistent with a previous study on striped surfperch (Embiotoca lateralis), a labriform swimmer, suggesting that the metabolic cost of burst swimming is similar across various types of locomotion. There was no correlation between U crit and MS or anaerobic capacity in S. aurata, indicating that other factors, including morphological or biomechanical traits, influenced U crit. We found no evidence of a trade-off between U sus and COTmin. In fact, data revealed significant negative correlations between U sus and COTmin, suggesting that individuals with high U sus also exhibit low COTmin. Finally, there were positive correlations between U sus and U opt. Our study demonstrates the energetic importance of anaerobic metabolism during unsteady swimming, and provides intraspecific evidence that superior maximum sustained swimming speed is associated with superior swimming economy and optimum swimming speed.
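The reported linear relation between EPOC and burst count (0.53 mg O2 kg(-1) per burst in S. aurata) amounts to a one-line model; a sketch:

```python
# Linear anaerobic-cost model for S. aurata, using the per-burst value
# reported in the abstract.
COST_PER_BURST = 0.53  # mg O2 per kg body mass per burst

def epoc_from_bursts(n_bursts):
    """EPOC (mg O2 kg^-1) attributable to burst-assisted swimming."""
    return COST_PER_BURST * n_bursts

# e.g. 20 bursts during a swim trial imply ~10.6 mg O2 kg^-1 of EPOC
assert abs(epoc_from_bursts(20) - 10.6) < 1e-9
```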
Diwan, Vishal; Stålsby Lundborg, Cecilia; Tamhankar, Ashok J.
2013-01-01
The presence of antibiotics in the environment and their subsequent impact on resistance development has raised concerns globally. Hospitals are a major source of antibiotics released into the environment. To reduce these residues, research to improve knowledge of the dynamics of antibiotic release from hospitals is essential. Therefore, we undertook a study to estimate seasonal and temporal variation in antibiotic release from two hospitals in India over a period of two years. For this, 6 sampling sessions of 24 hours each were conducted in the three prominent seasons of India, at all wastewater outlets of the two hospitals, using continuous and grab sampling methods. An in-house wastewater sampler was designed for continuous sampling. Eight antibiotics from four major antibiotic groups were selected for the study. To understand the temporal pattern of antibiotic release, each of the 24-hour sessions was divided into three sub-sampling sessions of 8 hours each. Solid phase extraction followed by liquid chromatography/tandem mass spectrometry (LC-MS/MS) was used to determine the antibiotic residues. Six of the eight antibiotics studied were detected in the wastewater samples. Both continuous and grab sampling methods indicated that the highest quantities of fluoroquinolones were released in winter, followed by the rainy season and the summer. No temporal pattern in antibiotic release was detected. In general, in a common timeframe, continuous sampling showed lower concentrations of antibiotics in wastewater than grab sampling. It is suggested that continuous sampling should be the method of choice, as grab sampling gives erroneous results, being indicative only of the quantities of antibiotics present in wastewater at the time of sampling. Based on our studies, calculations indicate that from hospitals in India, an estimated 89, 1 and 25 ng/L/day of fluoroquinolones, metronidazole and sulfamethoxazole, respectively, might be getting released into the environment.
Depression and Racial/Ethnic Variations within a Diverse Nontraditional College Sample
Hudson, Richard; Towey, James; Shinar, Ori
2008-01-01
The study's objective was to ascertain whether rates of depression were significantly higher for Dominican, Puerto Rican, South and Central American and Jamaican/Haitian students than for African American and White students. The sample consisted of 987 predominantly nontraditional college students. The depression rate for Dominican students was…
Matsumoto, Yasutoshi; Ozawa, Yasuaki; Imafuku, Yuji; Yoshida, Hiroshi
2011-11-01
Diurnal variations in serum iron concentration were examined to investigate the influence of sampling time in hemodialysis (HD) patients and healthy subjects. The serum iron concentration and TIBC of HD patients decreased significantly (p < …), while the serum ferritin concentration of HD patients increased significantly (p < …), suggesting a diminished iron transport system: under such conditions, intracellular iron transition out into the peripheral blood stream is low in HD patients. Serum iron concentrations in samples collected in the evening decreased significantly in HD patients (p < …) as well as in healthy subjects, and the decrements were almost similar in both groups. In HD patients, serum iron concentrations of blood samples collected on the morning of the third day after HD and on the morning of the second day after HD were compared to examine the influence of changes in circulating plasma volume. The serum iron concentration and Hct value in the second-day sampling increased significantly compared with the third-day sampling (p < …), and the serum iron concentration corrected by Hct in the second-day sampling also increased significantly (p < …). We conclude that serum iron concentrations vary with sampling time in HD patients as well as in healthy subjects. We also deduce that there may be other factors related to changes in circulating plasma volume.
Zvolensky, Michael J; Sachs-Ericsson, Natalie; Feldner, Matthew T; Schmidt, Norman B; Bowman, Carrie J
2006-03-30
The present study evaluated a moderational model of neuroticism on the relation between smoking level and panic disorder using data from the National Comorbidity Survey. Participants (n=924) included current regular smokers, as defined by a report of smoking regularly during the past month. Findings indicated that a generalized tendency to experience negative affect (neuroticism) moderated the effects of maximum smoking frequency (i.e., number of cigarettes smoked per day during the period when smoking the most) on lifetime history of panic disorder even after controlling for drug dependence, alcohol dependence, major depression, dysthymia, and gender. These effects were specific to panic disorder, as no such moderational effects were apparent for other anxiety disorders. Results are discussed in relation to refining recent panic-smoking conceptual models and elucidating different pathways to panic-related problems.
Miller, A. J.; Nagatani, R. M.; Laver, J. D.; Korty, B.
1979-01-01
Midlatitude 100-mb height fields are employed to determine the effects of ground-based sampling locations on measurements of variations in the total ozone content of the atmosphere. The precision of the zonal average heights computed by the technique of Angell and Korshover (1978) from data over ozone sampling areas at 50 deg N is compared to the zonal average computed from the entire data set. Linear regressions of ozone contents, determined by an analysis of backscatter UV satellite data, with respect to 100-mb heights are utilized to transform zonal differences in height to ozone levels. The zonal average total ozone sampling error is found to be on the order of 2% for midlatitudes of the Northern Hemisphere, indicating that the general shape of ozone trends determined by ground-based observations appears to be real and that the increase of ozone from the mid-1960s to the early 1970s may be greater than previously suggested.
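The height-to-ozone conversion step can be sketched as an ordinary linear regression followed by error propagation. All numbers below (the slope, noise levels, and the assumed 20 m height discrepancy) are illustrative placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired data: 100-mb height anomaly (m) vs. total ozone (DU).
# The -0.3 DU/m slope and noise levels are placeholders, not the study's fit.
height = rng.normal(0, 50, size=200)
ozone = 320 - 0.3 * height + rng.normal(0, 5, size=200)

slope, intercept = np.polyfit(height, ozone, 1)

# Propagate an assumed 20 m zonal-average height sampling discrepancy
# into an ozone sampling error, expressed as a percentage:
ozone_error_du = abs(slope) * 20.0
percent_error = 100 * ozone_error_du / ozone.mean()
assert percent_error < 5  # on the order of a few percent
```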
Signes-Pastor, Antonio J; Carey, Manus; Carbonell-Barrachina, Angel A; Moreno-Jiménez, Eduardo; Green, Andy J; Meharg, Andrew A
2016-07-01
This study investigated total arsenic and arsenic speciation in rice using ion chromatography with mass spectrometric detection (IC-ICP-MS), covering the main rice-growing regions of the Iberian Peninsula in Europe. The main arsenic species found were inorganic arsenic and dimethylarsinic acid. Samples surveyed were soil, shoots and field-collected rice grain. From this information, soil-to-plant arsenic transfer was investigated, as well as the distribution of arsenic in rice across the geographical regions of Spain and Portugal. Commercial polished rice was also obtained from each region and tested for arsenic speciation, showing a positive correlation with field-obtained rice grain. Commercial polished rice had the lowest i-As content in Andalucia, Murcia and Valencia, while Extremadura had the highest concentrations. About 26% of commercial rice samples exceeded the permissible concentration for infant food production as governed by the European Commission. Some cadmium data, available from the ICP-MS analyses, are also presented and show low concentrations in rice samples.
Variations in admission practices for adolescents with anorexia nervosa: a North American sample.
Schwartz, Beth I; Mansbach, Jonathan M; Marion, Jenna G; Katzman, Debra K; Forman, Sara F
2008-11-01
The purpose of this study was to assess the variability in admission practices and medical inpatient care for adolescent patients with anorexia nervosa (AN). Participants consisted of members of the 2001-2003 Eating Disorder Special Interest Group from the Society for Adolescent Medicine who completed a structured telephone interview about their admission practices and patterns of inpatient care for teens with AN. Questions focused on admission thresholds for heart rate (HR) and percentage of ideal body weight (% IBW), and on refeeding protocols. Case vignettes were used. Of 95 eligible practitioners, 51 (53%) agreed to participate. Participants represented 25 American states, one Canadian province, and 45 different adolescent programs. The majority of physicians reported they would hospitalize an AN patient with HR below a given cut-off; there were no differences in admission practices based on number of years in practice, gender of physician, or practice setting. Regional differences in admission practices were noted, with physicians in the western United States less likely to admit patients with HR ≥ 40 beats per minute (p = .018). Physicians described 28 different methods of advancing a diet during an admission. Only 37% of physicians were aware of a standardized refeeding protocol in their institution. This study indicates variability in admission criteria and refeeding practices and shows evidence of geographic variations in admission standards. These data provide a baseline for outcome trials investigating medical admissions for adolescents with AN.
Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil
2015-03-17
An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of container orientation has been demonstrated. This scheme utilized an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and gathering of the Raman photon just beneath the container. In the case of either backscattering or transmission measurements, Raman sampling volumes for an internal sample vary when the orientation of an oval container changes; therefore, the Raman intensities of acquired spectra are inconsistent. The generated Raman photons traverse the same bottom of the container in the APO scheme; the Raman sampling volumes can be relatively more consistent under the same situation. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations and then the accuracies of the determination of the alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the optimal offset distance that was observed. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Sampath, Srimurali; Selvaraj, Krishna Kumar; Shanmugam, Govindaraj; Krishnamoorthy, Vimalkumar; Chakraborty, Paromita; Ramaswamy, Babu Rajendran
2017-02-01
Usage of phthalates as plasticizers has resulted in their worldwide occurrence and is becoming a serious concern for human health and the environment. However, studies on phthalates in the Indian atmosphere are lacking. Therefore, we studied the spatio-temporal trends of six major phthalates in Tamil Nadu, southern India, using passive air samplers. Phthalates were ubiquitously detected in all samples, and average total phthalate concentrations decreased in the order pre-monsoon (61 ng m(-3)) > summer (52 ng m(-3)) > monsoon (17 ng m(-3)). The most heavily used phthalates, dibutyl phthalate (DBP) and diethylhexyl phthalate (DEHP), were predominant in all seasons, contributing 11-31% and 59-68%, respectively. The highest total phthalate concentration was observed in summer at an urban location (836 ng m(-3)). Furthermore, principal component analysis identified potential sources as emissions from plasticizer additives in the polymer industry and from the production of adhesives, building materials, and vinyl flooring. Although the inhalation exposure of infants was higher than that of other population segments (toddlers, children, and adults), exposure levels were found to be safe for people of all ages based on reference dose (RfD) and tolerable daily intake (TDI) values. This study is the first to report seasonal trends from atmospheric monitoring by passive air sampling together with exposure risk.
Field Scale Variation in Water Dispersible Colloids from Aggregates and Intact Soil Samples
Nørgaard, Trine; Møldrup, Per; Ferré, Ty P A
Colloid-facilitated transport can play an important role in the transport of chemicals through the soil profile. Their negative surface charge and large surface area make colloids ideal carriers for strongly sorbing chemicals, such as phosphorus and certain pesticides, in highly structured soils. … It is, however, difficult to quantify the amount of colloids readily available to participate in colloid-facilitated transport. In the literature, the part of the colloidal fraction that readily disperses into suspension is referred to as water-dispersible clay (WDC). In this study we used two methods … cm intact soil columns sampled from the same field grid also showed that the largest mass of particles and phosphorus leached from this part of the field. Thus, the presented WDC method comparison and results appear highly relevant to field-scale mapping of leaching risk in regard to colloid…
Brown, Samuel M; Tate, M Quinn; Jones, Jason P; Kuttler, Kathryn G; Lanspa, Michael J; Rondina, Matthew T; Grissom, Colin K; Mathews, V J
2015-10-01
To determine whether variability of coarsely sampled heart rate and blood pressure early in the course of severe sepsis and septic shock predicts successful resuscitation, defined as vasopressor independence at 24 hours after admission. In an observational study of patients admitted with severe sepsis or septic shock from 2009 to 2011 to either of 2 intensive care units (ICUs) at a tertiary-care hospital, in whom blood pressure was measured via an arterial catheter, we sampled heart rate and blood pressure every 30 seconds over the first 6 hours of ICU admission and calculated the coefficient of variation of those measurements. The primary outcome was vasopressor independence at 24 hours; the secondary outcome was 28-day mortality. We studied 165 patients, of whom 97 (59%) achieved vasopressor independence at 24 hours. Overall 28-day mortality was 15%. Significant predictors of vasopressor independence at 24 hours included the coefficient of variation of heart rate, age, Acute Physiology and Chronic Health Evaluation II score, the number of increases in vasopressor dose, mean vasopressin dose, mean blood pressure, and the time-pressure integral of mean blood pressure less than 60 mm Hg. Lower sampling frequencies (down to once every 5 minutes) did not affect the findings. Increased variability of coarsely sampled heart rate was associated with vasopressor independence at 24 hours after controlling for possible confounders; sampling once every 5 minutes may perform similarly to sampling once every 30 seconds. © The Author(s) 2014.
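The coefficient of variation used in the study above is straightforward to reproduce; a minimal sketch (the heart-rate series and the downsampling factor are illustrative, not the study's data) might look like:

```python
import statistics

def coefficient_of_variation(samples):
    """CV = sample standard deviation divided by the mean."""
    return statistics.stdev(samples) / statistics.fmean(samples)

def downsample(samples, step):
    """Keep every `step`-th value; e.g. step=10 turns a 30-second
    sampling interval into a 5-minute one."""
    return samples[::step]

# Hypothetical heart-rate series sampled every 30 s (beats per minute)
hr = [88, 92, 85, 90, 95, 91, 87, 93, 89, 94, 90, 86]
cv_fine = coefficient_of_variation(hr)
cv_coarse = coefficient_of_variation(downsample(hr, 3))
```

The study's observation that coarser sampling gave similar results corresponds to `cv_coarse` tracking `cv_fine` when the underlying variability is slow relative to the sampling interval.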
Angelbeck-Schulze, Mandy; Mischke, Reinhard; Rohn, Karl; Hewicker-Trautwein, Marion; Naim, Hassan Y; Bäumer, Wolfgang
2014-07-10
Previously, we evaluated a minimally invasive epidermal lipid sampling method called skin scrub, which achieved reproducible results comparable to skin scraping. The present study aimed to investigate regional variations in canine epidermal lipid composition using the skin scrub technique and its suitability for collecting skin lipids in dogs suffering from certain skin diseases. Eight different body sites (5 highly and 3 lowly predisposed to atopic lesions) were sampled by skin scrub in 8 control dogs with normal skin. Additionally, lesional and non-lesional skin was sampled from 12 atopic dogs and 4 dogs with other skin diseases by skin scrub. Lipid fractions were separated by high-performance thin-layer chromatography and analysed densitometrically. No significant differences in total lipid content were found among the body sites tested in the control dogs. However, the pinna, lip, and caudal back contained significantly lower concentrations of ceramides, whereas the palmar metacarpus and the axillary region contained significantly higher amounts of ceramides and cholesterol than most other body sites. The amounts of total lipids and ceramides, including all ceramide classes, were significantly lower in both lesional and non-lesional skin of atopic dogs compared to normal skin, with the reduction being more pronounced in lesional skin. Sampling by skin scrub was relatively painless and caused only slight erythema at the sampled areas but no oedema. Histological examination of skin biopsies at 2 skin-scrubbed areas revealed potential lipid extraction from the transition zone between stratum corneum and stratum granulosum. The present study revealed regional variations in epidermal lipid and ceramide composition in dogs without skin abnormalities but no connection between lipid composition and predilection sites for canine atopic dermatitis lesions. The skin scrub technique proved to be a practicable sampling method for canine epidermal lipids, revealed
Wu, Di; Wang, Kangcheng; Wei, Dongtao; Chen, Qunlin; Du, Xue; Yang, Junyi; Qiu, Jiang
2017-02-01
In its maladaptive respects, perfectionism reflects an individual's excessive concern over making mistakes and doubt about the quality of his or her own actions, which can affect emotion. However, little is known about the neural mechanisms linking perfectionism and negative affect. In this study, voxel-based morphometry was performed to identify the brain regions underlying individual differences in perfectionism, measured with the Chinese Frost Multidimensional Perfectionism Scale (CFMPS), in a large sample of nonclinical young adults. Our results showed that two subdimensions of perfectionism, concern over mistakes (CM) and doubts about actions (DA), were both positively correlated with self-reported anxiety and depression as well as with gray matter volume (GMV) in the anterior cingulate cortex (ACC), a pivotal brain region in cognitive control, affective states, and emotion regulation. Moreover, CM, DA, and organization scores were each correlated with distributed brain regions involved in multiple cognitive and emotional processes. Our results further revealed that the DA score mediated the relationship between the GMV of the ACC and self-rated negative affect (anxiety and depression). Taken together, these results suggest a neuroanatomical basis of perfectionism and an association among perfectionism, negative emotion, and brain architecture. This study emphasizes that perfectionism could play a crucial role in the arousal of negative affect.
Barn owl feathers as biomonitors of mercury: sources of variation in sampling procedures.
Roque, Inês; Lourenço, Rui; Marques, Ana; Coelho, João Pedro; Coelho, Cláudia; Pereira, Eduarda; Rabaça, João E; Roulin, Alexandre
2016-04-01
Given their central role in mercury (Hg) excretion and their suitability as reservoirs, bird feathers are useful Hg biomonitors. Nevertheless, the interpretation of Hg concentrations is still questioned as a result of poor knowledge of feather physiology and of the mechanisms affecting Hg deposition. Given the constraints that feather availability places on ecotoxicological studies, we tested the effect of intra-individual differences in Hg concentrations according to feather type (body vs. flight feathers), position in the wing, and size (mass and length), in order to understand how these factors could affect Hg estimates. We measured the Hg concentration of 154 feathers from 28 un-moulted barn owls (Tyto alba) collected dead on roadsides. Median Hg concentration was 0.45 (0.076-4.5) mg kg(-1) in body feathers, 0.44 (0.040-4.9) mg kg(-1) in primary, and 0.60 (0.042-4.7) mg kg(-1) in secondary feathers, and we found only a weak effect of feather type on intra-individual Hg levels. We also found a negative effect of wing feather mass on Hg concentration, but no effect of feather length or of position in the wing. We hypothesize that differences in feather growth rate may be the main driver of between-feather differences in Hg concentrations, which can have implications for the interpretation of Hg concentrations in feathers. Finally, we recommend that, whenever possible, several feathers from the same individual be analysed. The five innermost primaries have the lowest mean deviations from both the between-feather and the intra-individual mean Hg concentration and thus should be selected under restrictive sampling scenarios.
Balulla, Shama; Padmanabhan, E.; Over, Jeffrey
2015-07-22
This study demonstrates the significant lithologic variations that occur between two shale samples from the Chittenango Member of the Marcellus Shale formation in western New York State, in terms of mineralogical composition, type of lamination, pyrite occurrence, and fossil content, using detailed thin-section description and field-emission scanning electron microscopy (FESEM) with energy-dispersive X-ray spectroscopy (EDX). The samples are classified as laminated clayshale and fossiliferous carbonaceous shale. The most important detrital constituents of these shales are the clay minerals illite and chlorite, quartz, organic matter, carbonate minerals, and pyrite. The laminated clayshale has lower amounts of quartz and carbonate minerals than the fossiliferous carbonaceous shale, and higher amounts of clay minerals (chlorite and illite) and organic matter. FESEM analysis confirms the presence of chlorite and illite. The fossil content of the laminated clayshale is much lower than that of the fossiliferous carbonaceous shale. These observations provide greater insight into variations in the depositional and environmental factors that influenced deposition and, combined with sufficient data, can help in designing horizontal wells and placing hydraulic fractures in shale gas exploration and production.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
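As a sketch of the idea (not the paper's implementation; the linear model, the Gaussian kernel width `sigma`, and the regularization weight `lam` are illustrative assumptions), maximizing correntropy under a half-quadratic scheme reduces each iteration to a weighted ridge regression in which large-residual, likely mislabeled samples receive exponentially small weights:

```python
import numpy as np

def mcc_linear_predictor(X, y, sigma=1.0, lam=0.1, n_iter=20):
    """Learn w maximizing sum_i exp(-(y_i - x_i.w)^2 / (2 sigma^2)) - lam*||w||^2
    by half-quadratic optimization: each step solves a weighted ridge
    regression where small-residual (clean) samples get weights near 1
    and outlying labels are down-weighted toward 0."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        r = y - X @ w                          # residuals under current w
        a = np.exp(-r**2 / (2 * sigma**2))     # per-sample correntropy weights
        A = X.T @ (a[:, None] * X) + lam * np.eye(d)
        w = np.linalg.solve(A, X.T @ (a * y))  # weighted ridge update
    return w

# Toy data with one corrupted (outlying) label
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, -2.0])
y[0] += 25.0                                   # noisy label
w = mcc_linear_predictor(X, y)
```

Because the outlier's weight decays as exp(-r²/2σ²), the recovered `w` stays close to the true coefficients, which a plain least-squares fit would not.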
Ohnaka, Keiichi; Hofmann, Karl-Heinz
2016-01-01
Our recent visible polarimetric images of the well-studied AGB star W Hya taken at pre-maximum light (phase 0.92) with VLT/SPHERE-ZIMPOL have revealed clumpy dust clouds close to the star at ~2 Rstar. We present second-epoch SPHERE-ZIMPOL observations of W Hya at minimum light (phase 0.54) in the continuum (645, 748, and 820 nm), in the Halpha line (656.3 nm), and in the TiO band (717 nm) as well as high-spectral resolution long-baseline interferometric observations in 2.3 micron CO lines with the AMBER instrument at the Very Large Telescope Interferometer (VLTI). The high-spatial resolution polarimetric images have allowed us to detect clear time variations in the clumpy dust clouds as close as 34--50~mas (1.4--2.0 Rstar) to the star. We detected the formation of a new dust cloud and the disappearance of one of the dust clouds detected at the first epoch. The Halpha and TiO emission extends to ~150 mas (~6 Rstar), and the Halpha images reveal time variations. The degree of linear polarization is higher at mi...
Weber, Stefan; Waller, Erik H; Kaiser, Christoph; von Freymann, Georg
2017-06-26
We present a real-time measurement technique, based on time-stretching, for measuring the temporal dynamics of ultrafast absorption variations with a sampling rate of up to 1.1 TS/s. The single-shot captured data are stretched in a resonator-based time-stretch system with a variable stretch factor of up to 13.8. The time window of the time-stretch system for capturing the signal of interest is about 800 ps, with an update rate of 10 MHz. An adapted optical backpropagation algorithm is introduced for reconstructing the original unstretched event. As a proof of principle, the temporal characteristic of a picosecond semiconductor saturable absorber mirror is measured: the real-time results agree well with those of a conventional pump-probe experiment. The time-stretch technique potentially gives access to a large field of ultrafast absorption variations, such as semiconductor charge-carrier dynamics, irreversible polymerization processes, and saturable absorber materials.
Wellens, H L L; Kuijpers-Jagtman, A M; Halazonetis, D J
2013-01-01
This investigation aimed to quantify craniofacial variation in a sample of modern humans. In all, 187 consecutive orthodontic patients were collected, of which 79 were male (mean age 13.3, SD 3.7, range 7.5–40.8) and 99 were female (mean age 12.3, SD 1.9, range 8.7–19.1). The male and female subgroups were tested for differences in mean shapes and ontogenetic trajectories, and shape variability was characterized using principal component analysis. The hypothesis of modularity was tested for six different modularity scenarios. The results showed that there were subtle but significant differences in the male and female Procrustes mean shapes. Males were significantly larger. Mild sexual ontogenetic allometric divergence was noted. Principal component analysis indicated that, of the four retained biologically interpretable components, the two most important sources of variability were (i) vertical shape variation (i.e. dolichofacial vs. brachyfacial growth patterns) and (ii) sagittal relationships (maxillary prognatism vs. mandibular retrognathism, and vice versa). The mandible and maxilla were found to constitute one module, independent of the skull base. Additionally, we were able to confirm the presence of an anterior and posterior craniofacial columnar module, separated by the pterygomaxillary plane, as proposed by Enlow. These modules can be further subdivided into four sub-modules, involving the posterior skull base, the ethmomaxillary complex, a pharyngeal module, and the anterior part of the jaws. PMID:23425043
Hustedt, Jason T; Vu, Jennifer A; Bargreen, Kaitlin N; Hallam, Rena A; Han, Myae
2017-09-01
The federal Early Head Start program provides a relevant context to examine families' experiences with stress since participants qualify on the basis of poverty and risk. Building on previous research that has shown variations in demographic and economic risks even among qualifying families, we examined possible variations in families' perceptions of stress. Family, parent, and child data were collected to measure stressors and risk across a variety of domains in families' everyday lives, primarily from self-report measures, but also including assay results from child cortisol samples. A cluster analysis was employed to examine potential differences among groups of Early Head Start families. Results showed that there were three distinct subgroups of families, with some families perceiving that they experienced very high levels of stress while others perceived much lower levels of stress despite also experiencing poverty and heightened risk. These findings have important implications in that they provide an initial step toward distinguishing differences in low-income families' experiences with stress, thereby informing interventions focused on promoting responsive caregiving as a possible mechanism to buffer the effects of family and social stressors on young children. © 2017 Michigan Association for Infant Mental Health.
da Silva, J Gomes; Bonfils, X
2010-01-01
We used four known chromospheric activity indicators to measure long-term activity variations in a sample of 23 M-dwarf stars from the HARPS planet search program. We compared the indices using weighted Pearson correlation coefficients and found that, in general, (i) the correlation between S_CaII and Na I is very strong and does not depend on the activity level of the stars, (ii) the correlation between S_CaII and Hα seems to depend on the activity level of the stars, and (iii) there is no strong correlation between S_CaII and He I for this type of star.
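A weighted Pearson coefficient of the kind used to compare the indices can be sketched as follows (the choice of weights, e.g. inverse measurement variances per star, is an assumption for illustration, not the paper's stated scheme):

```python
import math

def weighted_pearson(x, y, w):
    """Pearson correlation with per-point weights: weighted means,
    weighted covariance, and weighted variances replace the unweighted
    moments of the ordinary coefficient."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)
```

With all weights equal, this reduces exactly to the ordinary Pearson coefficient.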
Ohnaka, K.; Weigelt, G.; Hofmann, K.-H.
2017-01-01
Aims: Our recent visible polarimetric images of the well-studied AGB star W Hya taken at pre-maximum light (phase 0.92) with VLT/SPHERE-ZIMPOL have revealed clumpy dust clouds close to the star at 2 R⋆. We present second-epoch SPHERE-ZIMPOL observations of W Hya at minimum light (phase 0.54) as well as high-spectral resolution long-baseline interferometric observations with the AMBER instrument at the Very Large Telescope Interferometer (VLTI). Methods: We observed W Hya with VLT/SPHERE-ZIMPOL at three wavelengths in the continuum (645, 748, and 820 nm), in the Hα line at 656.3 nm, and in the TiO band at 717 nm. The VLTI/AMBER observations were carried out in the wavelength region of the CO first overtone lines near 2.3 μm with a spectral resolution of 12 000. Results: The high-spatial resolution polarimetric images obtained with SPHERE-ZIMPOL have allowed us to detect clear time variations in the clumpy dust clouds as close as 34-50 mas (1.4-2.0 R⋆) to the star. We detected the formation of a new dust cloud as well as the disappearance of one of the dust clouds detected at the first epoch. The Hα and TiO emission extends to 150 mas ( 6 R⋆), and the Hα images obtained at two epochs reveal time variations. The degree of linear polarization measured at minimum light, which ranges from 13 to 18%, is higher than that observed at pre-maximum light. The power-law-type limb-darkened disk fit to the AMBER data in the continuum results in a limb-darkened disk diameter of 49.1 ± 1.5 mas and a limb-darkening parameter of 1.16 ± 0.49, indicating that the atmosphere is more extended with weaker limb-darkening compared to pre-maximum light. Our Monte Carlo radiative transfer modeling shows that the second-epoch SPHERE-ZIMPOL data can be explained by a shell of 0.1 μm grains of Al2O3, Mg2SiO4, and MgSiO3 with a 550 nm optical depth of 0.6 ± 0.2 and an inner and outer radii of 1.3 R⋆ and 10 ± 2R⋆, respectively. Our modeling suggests the predominance of small (0
Variation Trend of Maximum Daily Precipitation in Recent 50 Years in China
陆桂华; 陈金明; 吴志勇; 肖恒
2013-01-01
In order to diagnose the impacts of climate change on regional extreme precipitation events, the Mann-Kendall test, a linear tendency test, and the Hurst exponent, which reflects the persistence of a hydrological series, are applied in this study. Based on observed precipitation at meteorological stations from 1960 to 2009, seasonal maximum daily precipitation variations in the western, northern, and southern parts of China are analyzed, and the changing characteristics of the maximum daily precipitation before and after the 1980s are further examined. The results demonstrate that the maximum daily precipitation increases in every season in the western part, increases in winter in the northern part, and increases in winter and summer but decreases in the other two seasons in the southern part of China. Compared with the maximum daily precipitation before the 1980s, precipitation after the 1980s increases in winter for the whole of China, in summer for the western and southern parts, and in autumn only for the western part. The Hurst exponent indicates that the maximum daily precipitation series has characteristics
Leijala, Ulpu; Björkqvist, Jan-Victor; Johansson, Milla M.; Pellikka, Havu
2017-04-01
Future coastal management continuously strives for more location-exact and precise methods to investigate possible extreme sea level events and to face flooding hazards in the most appropriate way. Evaluating future flooding risks by understanding the joint effect of sea level variations and wind waves is one means of making a more comprehensive flooding hazard analysis, and may at first seem a straightforward task. Nevertheless, challenges and limitations such as the availability of time series for the sea level and wave height components, the quality of the data, significant locational variability of coastal wave height, and the assumptions that must be made for a given study location complicate the task. In this study, we present a statistical method for combining location-specific probability distributions of water level variations (including local sea level observations and global mean sea level rise) and wave run-up (based on wave buoy measurements). The goal of our method is to account for waves in coastal flooding hazard analysis more accurately than does the common approach of adding a separate, fixed wave-action height on top of sea-level-based flood risk estimates. The method yields maximum elevation heights, with associated return periods, of the continuous water mass produced by the combination of both phenomena, "the green water". We also introduce a sensitivity analysis to evaluate the properties and functioning of our method. The sensitivity test uses theoretical wave distributions representing different alternatives of wave behaviour in relation to sea level variations. As these wave distributions are merged with the sea level distribution, we obtain information on how different wave height conditions and the shape of the wave height distribution influence the joint results. Our method can be used as an advanced tool to minimize over- and
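One simple way to realize such a combination, assuming the sea level and wave run-up variations are independent and are supplied as discretized probability densities on grids starting at zero, is to convolve the two densities to obtain the distribution of the total water level (the independence assumption and the function below are our illustration, not necessarily the authors' exact procedure):

```python
import numpy as np

def combined_exceedance(p_sea, p_wave, dx):
    """Convolve two independent discretized PDFs, each defined on a grid
    starting at 0 with spacing dx, into the PDF of the total water level.
    Returns the elevation grid, the total-level PDF, and the exceedance
    curve P(total level > z)."""
    p_total = np.convolve(p_sea, p_wave) * dx   # PDF of the sum of the two levels
    z = dx * np.arange(p_total.size)            # elevation grid for the sum
    exceedance = 1.0 - np.cumsum(p_total) * dx  # tail probability at each z
    return z, p_total, exceedance
```

Dividing the mean event rate by an exceedance probability from this curve gives the return period of a given maximum elevation.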
Kuzuha, Yasuhisa; Sivapalan, Murugesu; Tomosugi, Kunio; Kishii, Tokuo; Komatsu, Yosuke
2006-04-01
Eagleson's classical regional flood frequency model is investigated. Our intention was not to improve the model, but to reveal previously unidentified important and dominant hydrological processes in it. The change of the coefficient of variation (CV) of annual maximum discharge with catchment area can be viewed as representing the spatial variance of floods in a homogeneous region. Several researchers have reported that the CV decreases as the catchment area increases, at least for large areas. On the other hand, Eagleson's classical studies have been known as pioneer efforts that combine the concept of similarity analysis (scaling) with the derived flood frequency approach. As we have shown, the classical model can reproduce the empirical relationship between the mean annual maximum discharge and catchment area, but it cannot reproduce the empirical decreasing CV-catchment area curve. Therefore, we postulate that previously unidentified hydrological processes would be revealed if the classical model were improved to reproduce the decreasing of CV with catchment area. First, we attempted to improve the classical model by introducing a channel network, but this was ineffective. However, the classical model was improved by introducing a two-parameter gamma distribution for rainfall intensity. What is important is not the gamma distribution itself, but those characteristics of spatial variability of rainfall intensity whose CV decreases with increasing catchment area. Introducing the variability of rainfall intensity into the hydrological simulations explains how the CV of rainfall intensity decreases with increasing catchment area. It is difficult to reflect the rainfall-runoff processes in the model while neglecting the characteristics of rainfall intensity from the viewpoint of annual flood discharge variances.
Nguyen, Ha T; Hutyra, Lucy R; Hardiman, Brady S; Raciti, Steve M
2016-03-01
Tropical peat swamp forests (PSF) are one of the most carbon dense ecosystems on the globe and are experiencing substantial natural and anthropogenic disturbances. In this study, we combined direct field sampling and airborne LiDAR to empirically quantify forest structure and aboveground live biomass (AGB) across a large, intact tropical peat dome in Northwestern Borneo. Moving up a 4 m elevational gradient, we observed increasing stem density but decreasing canopy height, crown area, and crown roughness. These findings were consistent with hypotheses that nutrient and hydrological dynamics co-influence forest structure and stature of the canopy individuals, leading to reduced productivity towards the dome interior. Gap frequency as a function of gap size followed a power law distribution with a shape factor (λ) of 1.76 ± 0.06. Ground-based and dome-wide estimates of AGB were 217.7 ± 28.3 Mg C/ha and 222.4 ± 24.4 Mg C/ha, respectively, which were higher than previously reported AGB for PSF and tropical forests in general. However, dome-wide AGB estimates were based on height statistics, and we found the coefficient of variation on canopy height was only 0.08, three times less than stem diameter measurements, suggesting LiDAR height metrics may not be a robust predictor of AGB in tall tropical forests with dense canopies. Our structural characterization of this ecosystem advances the understanding of the ecology of intact tropical peat domes and factors that influence biomass density and landscape-scale spatial variation. This ecological understanding is essential to improve estimates of forest carbon density and its spatial distribution in PSF and to effectively model the effects of disturbance and deforestation in these carbon dense ecosystems.
Wang, Ruiliang; Zhang, Shuichang; Brassell, Simon; Wang, Jiaxue; Lu, Zhengyuan; Ming, Qingzhong; Wang, Xiaomei; Bian, Lizeng
2012-07-01
Stable carbon isotope composition (δ13C) of carbonate sediments and the molecular (biomarker) characteristics of a continuous Permian-Triassic (PT) layer in southern China were studied to obtain geochemical signals of global change at the Permian-Triassic boundary (PTB). Carbonate carbon isotope values shifted toward positive before the end of the Permian period and then shifted negative above the PTB into the Triassic period. Molecular carbon isotope values of biomarkers followed the same trend at and below the PTB and remained negative in the Triassic layer. These biomarkers were acyclic isoprenoids, ranging from C15 to C40, steranes (C27 dominates) and terpenoids that were all significantly more abundant in samples from the Permian layer than those from the Triassic layer. The Triassic layer was distinguished by the dominance of higher molecular weight (waxy) n-alkanes. Stable carbon isotope values of individual components, including n-alkanes and acyclic isoprenoids such as phytane, isop-C25, and squalane, are depleted in δ13C by up to 8-10‰ in the Triassic samples as compared to the Permian. Measured molecular and isotopic variations of organic matter in the PT layers support the generally accepted view of Permian oceanic stagnation followed by a massive upwelling of toxic deep waters at the PTB. A series of large-scale (global) outgassing events may be associated with the carbon isotope shift we measured. This is also consistent with the lithological evidence we observed of white thin-clay layers in this region. Our findings, in context with a generally accepted stagnant Permian ocean, followed by massive upwelling of toxic deep waters might be the major causes of the largest global mass extinction event that occurred at the Permian-Triassic boundary.
Jones Suzanne P
2007-08-01
Background: Historically there has been wide variation in the proportion of inadequate smears between general practices. Cervical screening in the UK is undergoing a fundamental change, moving from conventional to liquid-based cytology (LBC). The main driver for this change has been a predicted reduction in the proportion of inadequate samples. This study investigates the effect of LBC on the variation in the proportion of inadequate samples between general practices using Shewhart's theory of variation and control charts. Methods: Routinely collected cervical cytology data were obtained for all general practices in two localities in South Staffordshire for periods before and after the introduction of liquid-based cytology. Control charts of the proportion of inadequate smears were plotted for the practices, stratified by laboratory. A standardised measure of variation for all of the practices in each laboratory and each time period was also calculated. Results: Following the introduction of liquid-based cytology, the overall proportion of inadequate samples in the two localities fell from 11.8% to 1.3%. Conclusion: A reduction in the proportion of inadequate samples has been realised in these localities, accompanied by a reduction in variation between GP practices.
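The control-chart approach described above can be sketched with standard 3-sigma Shewhart limits for a proportion; the pooled rate and the practice's smear counts below are illustrative numbers, not the study's data:

```python
import math

def p_chart_limits(p_bar, n):
    """3-sigma Shewhart control limits for a proportion, given the pooled
    proportion p_bar across all practices and the number of smears n
    submitted by one practice (limits clipped to [0, 1])."""
    sigma = math.sqrt(p_bar * (1.0 - p_bar) / n)
    return max(0.0, p_bar - 3.0 * sigma), min(1.0, p_bar + 3.0 * sigma)

# Illustration only: the pooled pre-LBC inadequate rate (11.8%) and a
# hypothetical practice with 80 inadequate smears out of 400 submitted.
lcl, ucl = p_chart_limits(0.118, 400)
flagged = (80 / 400) > ucl   # special-cause variation if outside the limits
```

Practices falling outside these limits show special-cause rather than common-cause variation, which is the distinction Shewhart charts are designed to make.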
Zhang, Fujin; He, Jiang; Yao, Yiping; Hou, Dekun; Jiang, Cai; Zhang, Xinxin; Di, Caixia; Otgonbayar, Khureldavaa
2013-08-01
The spatial variability and temporal trends in concentrations of the organochlorine pesticides (OCPs) hexachlorocyclohexane (HCH) and dichlorodiphenyltrichloroethane (DDT) in soils and agricultural crops were investigated in an intensive horticulture area in Hohhot, North-West China, from 2008 to 2011. The most frequently found and most abundant pesticides were DDT and its metabolites (p,p'-DDE, p,p'-DDT, o,p'-DDT and p,p'-DDD). Total DDT concentrations ranged from ND (not detectable) to 507.41 ng/g and were higher than total HCH concentrations, which ranged from 4.84 to 281.44 ng/g. There were significant positive correlations between ∑DDT and ∑HCH concentrations (r² > 0.74) in soils; no significant correlation was found between OCP concentrations in soils and clay content, while a relatively strong correlation was found between total OCP concentrations and total organic carbon (TOC). β-HCH was the main HCH isomer and was detected in all samples, with the highest mean proportion of ∑HCHs (54%), suggesting its persistence. The α/γ-HCH ratio was between 0.89 and 5.39, which signifies the combined influence of technical HCHs and lindane. Low p,p'-DDE/p,p'-DDT ratios at sites N1, N3 and N9 reflect fresh input of DDTs, while relatively high o,p'-DDT/p,p'-DDT ratios indicate the agricultural application of dicofol. Ratios of DDT/(DDE+DDD) in soils do not indicate recent inputs of DDT into the Hohhot farmland soil environment. Seasonal variations of OCPs featured higher concentrations in autumn and lower concentrations in spring, likely associated with temperature-driven re-volatilization and the application of dicofol in late spring.
Kapp, Joshua R.; Diss, Tim; Spicer, James; Gandy, Michael; Schrijver, Iris; Jennings, Lawrence J.; Li, Marilyn M.; Tsongalis, Gregory J.; de Castro, D G; Bridge, Julia A.; Wallace, Andrew; Deignan, Joshua L; Hing, Sandra; Butler, Rachel; Verghese, Eldo
2014-01-01
AIMS: Mutation detection accuracy has been described extensively; however, pre-PCR processing of formalin-fixed paraffin-embedded (FFPE) samples has surprisingly not been systematically assessed in a clinical context. We designed a RING trial to (i) investigate pre-PCR variability, (ii) correlate pre-PCR variation with EGFR/BRAF mutation testing accuracy and (iii) investigate causes of observed variation. METHODS: 13 molecular pathology laboratories were recruited. 104 blinded FFPE cur...
2009-01-01
The concepts of sample sphere radius and sample density are proposed in this paper to illustrate that different vector transformations result in different sample densities for the same sample ensemble, which ultimately affects assimilation performance. Several numerical experiments using a one-dimensional (1-D) soil water equation and synthetic observations are conducted to evaluate this new theory in land data assimilation.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
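The idea of propagating Gaussian uncertainty in a constraint value through the MaxEnt solution can be sketched on the textbook die example; the 4.5 ± 0.1 mean constraint is an illustrative choice, not a case taken from the paper:

```python
import math
import random

FACES = range(1, 7)

def maxent_die(mean):
    """Classic MaxEnt distribution over die faces 1..6 with a fixed mean:
    p_i proportional to exp(lam * i), with lam found by bisection so that
    the distribution's mean matches the constraint."""
    def gap(lam):  # monotonically increasing in lam
        w = [math.exp(lam * i) for i in FACES]
        return sum(i * wi for i, wi in zip(FACES, w)) / sum(w) - mean
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gap(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in FACES]
    total = sum(w)
    return [wi / total for wi in w]

# Uncertain constraint: the observed mean 4.5 is known only to +/- 0.1
# (Gaussian). Monte Carlo over constraint values turns the MaxEnt point
# solution into a density, summarised here by the spread of p(face = 6).
random.seed(1)
means = [min(5.95, max(1.05, random.gauss(4.5, 0.1))) for _ in range(1000)]
p6 = [maxent_die(m)[5] for m in means]
p6_mean = sum(p6) / len(p6)
p6_sd = (sum((p - p6_mean) ** 2 for p in p6) / len(p6)) ** 0.5
```

A mean of exactly 3.5 recovers the uniform distribution, and the nonzero spread of p6 is precisely the "density over MaxEnt probabilities" the abstract describes.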
Fuglsang, Karsten; Pedersen, Niels Hald; Larsen, Anna Warberg; Astrup, Thomas Fruergaard
2014-02-01
A dedicated sampling and measurement method was developed for long-term measurements of biogenic and fossil-derived CO2 from thermal waste-to-energy processes. Based on long-term sampling of CO2 and 14C determination, plant-specific emission factors can be determined more accurately, and the annual emission of fossil CO2 from waste-to-energy plants can be monitored according to carbon trading schemes and renewable energy certificates. Weekly and monthly measurements were performed at five Danish waste incinerators. Significant variations between fractions of biogenic CO2 emitted were observed, not only over time, but also between plants. From the results of monthly samples at one plant, the annual mean fraction of biogenic CO2 was found to be 69% of the total annual CO2 emissions. From weekly samples, taken every 3 months at the five plants, significant seasonal variations in biogenic CO2 emissions were observed (between 56% and 71% biogenic CO2). These variations confirmed that biomass fractions in the waste can vary considerably, not only from day to day but also from month to month. An uncertainty budget for the measurement method itself showed that the expanded uncertainty of the method was ±4.0 pmC (95% confidence interval) at 62 pmC. The long-term sampling method was found to be useful for waste incinerators for determination of annual fossil and biogenic CO2 emissions with relatively low uncertainty.
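The biogenic fraction follows from the pmC measurement because fossil CO2 is 14C-free; a minimal sketch, assuming an illustrative contemporary-biomass reference of 110 pmC (the true reference is waste- and year-specific and is not given in the abstract):

```python
def biogenic_fraction(pmc_sample, pmc_ref=110.0):
    """Biogenic fraction of emitted CO2 from a 14C measurement in pmC
    (percent modern carbon). Fossil CO2 contains no 14C, so the biogenic
    share is pmc_sample / pmc_ref, where pmc_ref is the 14C content of the
    contemporary biomass in the waste (110 pmC is an assumed, illustrative
    reference; the real value is waste- and year-specific)."""
    return pmc_sample / pmc_ref

# The abstract's example measurement: 62 pmC with an expanded uncertainty of
# +/- 4.0 pmC; the uncertainty propagates linearly to the fraction.
f_bio = biogenic_fraction(62.0)
f_bio_err = 4.0 / 110.0
```

With this assumed reference, 62 pmC corresponds to roughly 56% biogenic CO2, at the lower end of the 56-71% seasonal range reported.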
Tarte, Stephen R.; Schmidt, A.R.; Sullivan, Daniel J.
1992-01-01
A floating sample-collection platform is described for stream sites where the vertical or horizontal distance between the stream-sampling point and a safe location for the sampler exceeds the suction head of the sampler. The platform allows continuous water sampling over the entire storm-runoff hydrograph. The platform was developed for a site in southern Illinois.
Anonymous
2008-01-01
Ecological systems in the headwaters of the Yellow River, characterized by harsh natural environmental conditions, are very vulnerable to climatic change. In recent decades, this area has attracted considerable public attention because of its increasingly deteriorating environmental conditions. Based on tree-ring samples from the Xiqing Mountain and A'nyêmagên Mountains at the headwaters of the Yellow River in the northeastern Tibetan Plateau, we reconstructed minimum temperatures in the winter half-year over the last 425 years and maximum temperatures in the summer half-year over the past 700 years in this region. The minimum temperature in the winter half-year was relatively stable over 1578-1940, followed by an abrupt warming trend since 1941. However, there is no significant warming trend for the maximum temperature in the summer half-year over the 20th century: the minimum and maximum temperatures have varied asymmetrically over the past 425 years. The two series nevertheless show similar variation patterns, with the minimum temperatures varying about 25 years earlier than the maximum temperatures. If this pattern continues over the next 30 years, the maximum temperature in this region will increase significantly.
McKinney, Cushla; Fanciulli, Manuela; Merriman, Marilyn E.; Phipps-Green, Amanda; Alizadeh, Behrooz Z.; Koeleman, Bobby P. C.; Dalbeth, Nicola; Gow, Peter J.; Harrison, Andrew A.; Highton, John; Jones, Peter B.; Stamp, Lisa K.; Steer, Sophia; Barrera, Pilar; Coenen, Marieke J. H.; Franke, Barbara; van Riel, Piet L. C. M.; Vyse, Tim J.; Aitman, Tim J.; Radstake, Timothy R. D. J.; Merriman, Tony R.
2010-01-01
Objective: There is increasing evidence that variation in gene copy number (CN) influences clinical phenotype. The low-affinity Fc gamma receptor 3B gene (FCGR3B), located in the FCGR gene cluster, is a CN-polymorphic gene involved in the recruitment to sites of inflammation and the activation of polymorphonuclear cells.
Toon Rosseel
In 2011, a novel orthobunyavirus was identified in cattle and sheep in Germany and The Netherlands. This virus was named Schmallenberg virus (SBV). Later, presence of the virus was confirmed using real-time RT-PCR in cases of congenital malformations of bovines and ovines in several European countries, including Belgium. In the absence of specific sequencing protocols for this novel virus, we confirmed its presence in RT-qPCR-positive field samples using DNase SISPA-next-generation sequencing (NGS), a virus discovery method based on random amplification and next-generation sequencing. An in vitro transcribed RNA was used to construct a standard curve allowing the quantification of viral RNA in the field samples. Two field samples of aborted lambs containing 7.66 and 7.64 log10 RNA copies per µL total RNA allowed unambiguous identification of SBV. One sample yielded 192 SBV reads covering about 81% of the L segment, 56% of the M segment and 13% of the S segment. The other sample resulted in 8 reads distributed over the L and M segments. Three weakly positive field samples (one from an aborted calf, two from aborted lambs) containing virus quantities equivalent to 4.27-4.89 log10 RNA copies per µL did not allow identification using DNase SISPA-NGS. The partial sequence information was compared to the whole-genome sequence of SBV isolated from bovines in Germany, identifying several sequence differences. The applied virus discovery method allowed the confirmation of SBV in RT-qPCR-positive brain samples. However, the failure to confirm SBV in weakly PCR-positive samples illustrates the importance of selecting properly targeted and fresh field samples in any virus discovery method. The partial sequences derived from the field samples showed several differences compared to the sequences from bovines in Germany, indicating sequence divergence within the epidemic.
Sedliak, M; Finni, T; Cheng, S; Haikarainen, T; Häkkinen, K
2008-03-01
This study aimed to compare the day-to-day repeatability of diurnal variation in strength and power. Thirty-two men were measured at four time points (07:00-08:00, 12:00-13:00, 17:00-18:00, and 20:30-21:30 h) on two consecutive days (day 1 and day 2). Power during loaded squat jumps, and torque and EMG during maximal (MVC) and submaximal (MVC40) voluntary isometric knee extension contractions, were measured. The EMG/torque ratio during MVC and MVC40 was calculated to evaluate neuromuscular efficiency. A significant time-of-day effect with repeatable diurnal patterns was found in power. In MVC, a significant time-of-day effect was present on day 2, whereas day 1 showed a typical but nonsignificant diurnal pattern. EMG and antagonist co-activation during MVC remained statistically unaltered, whereas neuromuscular efficiency improved from day 1 to day 2. A similar trend was observed in MVC40 neuromuscular efficiency, with significant time-of-day and day-to-day effects. Unaltered agonist and antagonist activity during MVC suggests that modification at the muscular level was the primary source of the diurnal variation in peak torque. A learning effect seemed to affect the MVC40 data. In conclusion, the second consecutive test day showed typical diurnal variation in both maximum strength and power, with no day-to-day effect of cumulative fatigue.
Veraart, Almut
…and present a new estimator for the asymptotic 'variance' of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we study the impact of the jump activity, the jump size of the jumps … in the price and the presence of additional independent or dependent jumps in the volatility on the finite sample performance of the various estimators. We find that the finite sample performance of realised variance, and in particular of the log-transformed realised variance, is generally good, whereas …
Housila P. Singh
2013-05-01
In this paper a double (two-phase) sampling version of the Singh and Tailor (2005) estimator is suggested, along with its properties under large-sample approximation. It is shown that the estimator due to Kawathekar and Ajgaonkar (1984) is a member of the proposed class of estimators. Realistic conditions are obtained under which the proposed estimator is better than the usual unbiased estimator, the usual double sampling ratio (tRd) and product (tPd) estimators, and the Kawathekar and Ajgaonkar (1984) estimator. This is also demonstrated through an empirical study.
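The usual double sampling ratio (tRd) and product (tPd) estimators referred to above can be sketched as follows; the population and the phase sample sizes are synthetic, chosen only to illustrate the two-phase structure:

```python
import random

def double_sampling_estimators(x_phase1, xy_phase2):
    """Usual two-phase (double) sampling ratio and product estimators of the
    population mean of y with auxiliary variable x:
        ratio:   tRd = ybar * (xbar1 / xbar2)
        product: tPd = ybar * (xbar2 / xbar1)
    xbar1 is the first-phase mean of x (large sample, x only); xbar2 and
    ybar come from the second-phase subsample, where y is also observed."""
    xbar1 = sum(x_phase1) / len(x_phase1)
    xbar2 = sum(x for x, _ in xy_phase2) / len(xy_phase2)
    ybar = sum(y for _, y in xy_phase2) / len(xy_phase2)
    return ybar * (xbar1 / xbar2), ybar * (xbar2 / xbar1)

# Synthetic population in which y is nearly proportional to x, the situation
# where the ratio estimator outperforms the usual unbiased sample mean.
random.seed(42)
pop = [(float(x), 2.0 * x + random.gauss(0.0, 1.0)) for x in range(1, 201)]
phase1 = random.sample(pop, 80)      # first phase: x recorded only
phase2 = random.sample(phase1, 20)   # second phase: x and y both recorded
t_rd, t_pd = double_sampling_estimators([x for x, _ in phase1], phase2)
```

With y roughly proportional to x (true mean of y near 201 here), tRd exploits the cheap first-phase information on x; tPd is the appropriate form when x and y are negatively correlated.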
Raju, M
2011-11-01
…optimization of core sampling procedures for carbon isotope analysis in Eucalyptus, and variation in carbon isotope ratios across species and growth conditions. Methods (Expt 1): cores were taken from periphery to pith in 5-year-old Eucalyptus trees; five half-sib families of Eucalyptus grandis and E. urophylla were used; cores were further subdivided into 5 fragments…
R. L. Margraf
2012-01-01
Multisample, nonindexed pooling combined with next-generation sequencing (NGS) was used to discover RET proto-oncogene sequence variation within a cohort known to be unaffected by multiple endocrine neoplasia type 2 (MEN2). DNA samples (113 Caucasians, 23 persons of other ethnicities) were amplified for RET intron 9 to intron 16 and then divided into 5 pools of <30 samples each before library preparation and NGS. Two controls were included in this study: a single sample and a pool of 50 samples that had been previously sequenced by the same NGS methods. All 59 variants previously detected in the 50-pool control were present. Of the 61 variants detected in the unaffected cohort, 20 were novel changes. Several variants were validated by high-resolution melting analysis and Sanger sequencing, and their allelic frequencies correlated well with those determined by NGS. The results from this unaffected cohort will be added to the RET MEN2 database.
Yelu Zeng
2015-01-01
A sampling strategy that defines elementary sampling units (ESUs) for an entire site at the kilometer scale is an important step in the validation process for moderate-resolution leaf area index (LAI) products. Current LAI sampling strategies are unable to consider seasonal changes in vegetation and are better suited to single-date LAI product validation, whereas the increasingly used wireless sensor network for LAI measurement (LAINet) requires an optimal sampling strategy across both spatial and temporal scales. In this study, we developed an efficient and robust LAI Sampling strategy based on Multi-temporal Prior knowledge (SMP) for long-term, fixed-position LAI observations. The SMP approach employs multi-temporal vegetation index (VI) maps and a vegetation classification map as a priori knowledge. It minimizes the multi-temporal bias of the VI frequency histogram between the ESUs and the entire site, and maximizes the nearest-neighbor index to ensure that ESUs are dispersed in geographical space. The SMP approach was compared with four sampling strategies (random sampling, systematic sampling, sampling based on the land-cover map, and sampling based on vegetation index prior knowledge) using a PROSAIL model-based simulation analysis in the Heihe River basin. The results indicate that the ESUs selected using the SMP method spread more evenly in both the multi-temporal feature space and geographical space over the vegetation cycle. By considering temporal changes in heterogeneity, the average root-mean-square error (RMSE) of the LAI reference maps can be reduced from 0.12 to 0.05, and the relative error from 6.1% to 2.2%. The SMP technique was applied to assign the LAINet ESU locations at the Huailai Remote Sensing Experimental Station in Beijing, China, from 4 July to 28 August 2013, to validate three MODIS C5 LAI products. The results suggest that the average R2, RMSE, bias and relative
Siedlecki, Sandra L
2009-03-01
The incidence of chronic pain is similar in African-American and Caucasian populations; however, depression and disability secondary to unrelieved chronic pain are higher in African-American populations. In light of this difference, it is important to understand racial variation in response to chronic pain treatments, including complementary therapies such as music. The purpose of this study was to examine racial variation in response to music in an adult population with chronic pain, and specifically to determine whether post-treatment pain scores differed by race. Secondary analysis of a previously reported randomized controlled trial (n = 60) was used to answer the research questions. The music intervention consisted of listening to music for 1 hour a day for 7 consecutive days. Pain was measured with the McGill Pain Questionnaire short form and a 100-mm visual analog scale. Univariate and multivariate analyses were used to examine differences between groups. Music groups, regardless of race, experienced a decrease in pain and depression at posttest compared with the control group; however, this difference was statistically significant only for the Caucasian music group. Although our findings suggest that music may be an effective intervention for individuals with chronic nonmalignant pain, individuals from different racial backgrounds may respond differently. Further studies are needed to understand these differences in response to music.
Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B
2017-08-15
In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC) and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage of a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
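The WPC/BPC decomposition maps onto a design effect; the sketch below uses the standard design effect for the two-period cross-sectional CRXO design, DE = 1 + (m - 1)·WPC - m·BPC, with an invented effect size and correlations (not the ANZICS-APD estimates):

```python
import math
from statistics import NormalDist

def crxo_n_per_arm(delta, sd, m, wpc, bpc, alpha=0.05, power=0.8):
    """Subjects per intervention arm for a two-period, cross-sectional CRXO
    design with a continuous outcome. The usual two-sample size is inflated
    by DE = 1 + (m - 1)*WPC - m*BPC, where m is the number of subjects per
    cluster-period. A sketch of the standard approach only; the tutorial's
    formulae also cover binary outcomes and varying cluster sizes."""
    z = NormalDist()
    n_ind = 2.0 * (sd / delta) ** 2 * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2
    de = 1.0 + (m - 1.0) * wpc - m * bpc
    return math.ceil(n_ind * de)

# Invented illustration: detect a 0.25-SD difference with 30 subjects per
# cluster-period, WPC = 0.05, BPC = 0.02.
n_arm = crxo_n_per_arm(delta=0.25, sd=1.0, m=30, wpc=0.05, bpc=0.02)
# With BPC = 0 the design effect reverts to the parallel cluster trial's
# 1 + (m - 1)*WPC, so the required size is larger, as the abstract notes.
n_arm_parallel = crxo_n_per_arm(delta=0.25, sd=1.0, m=30, wpc=0.05, bpc=0.0)
```

Raising the BPC towards the WPC shrinks the design effect, which is the precision gained from the crossover aspect of the design.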
McNeill, A; Jedicke, R; Wainscoat, R; Denneau, L; Veres, P; Magnier, E; Chambers, K C; Kaiser, N; Waters, C
2016-01-01
The rotational state of asteroids is controlled by various physical mechanisms, including collisions, internal damping and the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect. We have analysed the changes in magnitude between consecutive detections of approximately 60,000 asteroids measured by the Pan-STARRS1 survey during its first 18 months of operations. We have attempted to explain the derived brightness changes physically and through the application of a simple model. We find a tendency toward smaller magnitude variations with decreasing diameter for objects of 1 km < D < 8 km. Assuming the shape distribution of objects in this size range to be independent of size and composition, our model suggests a population with average axial ratios 1 : 0.85 ± 0.13 : 0.71 ± 0.13, with larger objects more likely to have spin axes perpendicular to the orbital plane.
Siwatt Pongpiachan
2012-01-01
This paper provides new results on the impacts of diurnal variation, vertical distribution, and emission source on the sulfur K-edge XANES spectra of aerosol samples. All aerosol samples used in the diurnal variation experiment were preserved using anoxic preservation stainless cylinders (APSCs) and pressure-controlled glove boxes (PCGBs), which were specially designed to prevent oxidation of the sulfur states in PM10. Investigation of sulfur K-edge XANES spectra revealed that PM10 samples were dominated by S(VI), even when preserved in anoxic conditions. The "emission source" effect on the sulfur oxidation state of PM10 was examined by comparing sulfur K-edge XANES spectra collected from various emission sources in southern Thailand, while "vertical distribution" effects were examined with samples collected at three different altitudes from the rooftops of the tallest buildings in three major cities in Thailand. The analytical results demonstrate that neither emission source nor vertical distribution appreciably contributes to the characteristic fingerprint of the sulfur K-edge XANES spectrum in PM10.
Schuhmacher, M; Agramunt, M C; Bocio, A; Domingo, J L; de Kok, H A M
2003-07-01
In May 2000, the levels of a number of metals (As, Cd, Co, Cr, Cu, Hg, Mn, Ni, Pb, Sn, Tl, V and Zn) and of polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) were determined in soil and herbage samples collected near a cement plant in Sta. Margarida i els Monjos (Catalonia, Spain). To determine the temporal variation in the concentrations of metals and PCDD/PCDFs, soil and herbage samples were again collected at the same sampling points in May 2001 and analyzed. In general terms, metal concentrations in soils did not change between May 2000 and May 2001, while significant decreases in the levels of Cr, Ni and V were found in herbage. No significant differences in the mean I-TEQ values of PCDD/PCDFs were found in soil and herbage samples. The results of this survey show that, according to the annual variation in the levels of metals and PCDD/PCDFs, the environmental impact of the cement plant on the area under its direct influence is not relevant.
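The I-TEQ values compared above are toxicity-weighted sums over congeners; a minimal sketch (the I-TEF weights shown are the standard NATO/CCMS international factors for these congeners, but the soil concentrations are invented, not the survey's data):

```python
# Each congener concentration is multiplied by its international toxic
# equivalency factor (I-TEF) and the products are summed to give I-TEQ.
I_TEF = {
    "2,3,7,8-TCDD": 1.0,
    "1,2,3,7,8-PeCDD": 0.5,
    "2,3,7,8-TCDF": 0.1,
    "OCDD": 0.001,
}

def i_teq(concentrations_ng_kg):
    """Toxic equivalents (ng I-TEQ/kg): sum of concentration x I-TEF."""
    return sum(c * I_TEF[name] for name, c in concentrations_ng_kg.items())

# Invented soil concentrations (ng/kg) for illustration only.
soil = {"2,3,7,8-TCDD": 0.1, "1,2,3,7,8-PeCDD": 0.2, "2,3,7,8-TCDF": 1.0, "OCDD": 50.0}
teq = i_teq(soil)   # 0.1 + 0.1 + 0.1 + 0.05 = 0.35 ng I-TEQ/kg
```

The weighting explains why abundant but weakly toxic congeners such as OCDD contribute little to the I-TEQ compared with trace amounts of 2,3,7,8-TCDD.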
Pae, Chi-Un; Drago, Antonio; Kim, Jung-Jin; Patkar, Ashwin A; Jun, Tae-Youn; Lee, Chul; Mandelli, Laura; De Ronchi, Diana; Paik, In-Ho; Serretti, Alessandro
2008-09-01
We recently reported an association between variations in TAAR6 (trace amine-associated receptor 6 gene) and schizophrenia (SZ). We now report an association between a set of TAAR6 variations and clinical presentation and outcome in a sample of 240 Korean patients with SZ. Patients were selected by a Structured Clinical Interview for DSM-IV Axis I disorders - Clinical Version (SCID-CV). Other psychiatric or neurologic disorders, as well as medical diseases, were exclusion criteria. To assess symptom severity, patients were administered the CGI scale and the PANSS at baseline and at discharge, on average one month later. The TAAR6 variations rs6903874, rs7452939, rs8192625 and rs4305745 were investigated; rs6903874, rs7452939 and rs8192625 entered the statistical analysis after LD analysis. Rs8192625 G/G homozygosity was found to be significantly associated both with a worse clinical presentation on PANSS total and positive scores and with a shorter period of illness before hospitalization. No significant haplotype findings were observed. The present study supports a role for TAAR6 in the clinical presentation of SZ. Moreover, our results suggest that this genetic effect may be counteracted by appropriate treatment. Haplotype analysis was not informative in our sample, probably in part because of the incomplete SNP coverage of the gene.
Ruelas-Mayorga, A; Trujillo-Lara, M; Nigoche-Netro, A; Echevarría, J; García, A M; Ramírez-Vélez, J
2016-01-01
In this paper we carry out a preliminary study of the dependence of the Tully-Fisher Relation (TFR) on the width and intensity level of the absolute magnitude interval of a limited sample of 2411 galaxies taken from Mathewson & Ford (1996). The galaxies in this sample do not differ significantly in morphological type and are distributed over a ~11-magnitude interval (-24.4 < I < -13.0). We take as directives the papers by Nigoche-Netro et al. (2008, 2009, 2010), in which they study the dependence of the Kormendy (KR), Fundamental Plane (FPR) and Faber-Jackson (FJR) relations on the magnitude interval within which the observed galaxies used to derive these relations are contained. We were able to characterise the behaviour of the TFR coefficients (α, β) with respect to the width of the magnitude interval, as well as with the brightness of the galaxies within this interval. We conclude that the TFR for this specific sample of galaxies depends on observational ...
Chauhan, Pooja; Chauhan, Rishi Pal
2014-01-01
People are exposed to ionizing radiation from radionuclides present in different types of natural sources, of which phosphate fertilizer is one of the most important. Fertilizers are commonly used in agricultural fields worldwide to enhance crop yield. In the present investigation, a controlled study was carried out on lady's finger (okra) plants grown in earthen pots. To observe the effect of fertilizers, equal amounts were added to the soil just before plantation. Alpha track densities were measured using solid-state nuclear track detectors (SSNTDs), which are sensitive detectors for alpha particles. The measured alpha track densities (tracks cm⁻² d⁻¹) on the top and bottom faces of leaves after 30, 50 and 70 days of plantation varied from 49 ± 11 to 206 ± 2.6, 49 ± 16 to 248 ± 16 and 57 ± 8.5 to 265 ± 32, respectively, in various leaf samples. The alpha track densities varied with the nature of the fertilizer added to the soil, and an increase was also observed with time. Alpha track densities were also measured in soil samples mixed with different fertilizers. The radon exhalation rates in various soil samples and the soil-to-plant transfer factor (TF) of alpha tracks were also calculated.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Chand, H; Srianand, R; Aracil, B; Chand, Hum; Petitjean, Patrick; Srianand, Raghunathan; Aracil, Bastien
2004-01-01
We report a new constraint on the variation of the fine-structure constant based on the analysis of 15 Si IV doublets selected from an ESO-UVES sample. We find Δα/α = +(0.15 ± 0.43) × 10⁻⁵ over the redshift range 1.59 < z < 2.92, consistent with no variation in α. This result represents a factor-of-three improvement over published constraints on Δα/α based on Si IV doublets. The alkali doublet method used here avoids the implicit assumptions of the many-multiplet method that chemical and ionization inhomogeneities are negligible and that isotopic abundances are close to the terrestrial values.
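The alkali doublet constraint rests on the α² scaling of the doublet splitting; a first-order sketch (the Si IV rest wavelengths are standard laboratory values, while the redshifted "observation" is a synthetic consistency check, not survey data):

```python
def delta_alpha_over_alpha(lam1_obs, lam2_obs, lam1_lab, lam2_lab):
    """Alkali-doublet estimate of the fractional variation of the
    fine-structure constant. The doublet separation scales as alpha^2, so
    to first order  da/a = 0.5 * [(dlam/lam)_obs / (dlam/lam)_lab - 1],
    where dlam/lam is the splitting over the mean wavelength; the (1 + z)
    factor cancels in the observed ratio."""
    r_obs = (lam2_obs - lam1_obs) / (0.5 * (lam1_obs + lam2_obs))
    r_lab = (lam2_lab - lam1_lab) / (0.5 * (lam1_lab + lam2_lab))
    return 0.5 * (r_obs / r_lab - 1.0)

# Si IV doublet laboratory rest wavelengths (Angstroms, standard values).
LAM1_LAB, LAM2_LAB = 1393.755, 1402.770

# Synthetic consistency check: a doublet redshifted to z = 2 with unchanged
# alpha must return da/a = 0, since the redshift factor cancels exactly.
z = 2.0
da = delta_alpha_over_alpha((1 + z) * LAM1_LAB, (1 + z) * LAM2_LAB,
                            LAM1_LAB, LAM2_LAB)
```

Detecting a shift at the quoted 10⁻⁵ level thus amounts to measuring the doublet splitting ratio to a few parts in 10⁵, which sets the wavelength-calibration requirement on the spectrograph.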
Allton, J. H.; Kuhlman, K. R.; Allums, K. K.; Gonzalez, C. P.; Jurewicz, A. J. G.; Burnett, D. S.; Woolum, D. S.
2015-01-01
The recovered Genesis collector fragments are heavily contaminated with crash-derived particulate debris. However, megasonic treatment with ultra-pure water (UPW; resistivity greater than 18 megohm-cm) removes essentially all particulate contamination greater than 5 microns in size [e.g. 1] and is thus of considerable importance. Optical imaging of Si sample 60336 revealed the presence of a large C-rich particle after UPW treatment that was not present prior to UPW. Such handling contamination is occasionally observed, but such contaminants are normally easily removed by UPW cleaning. The 60336 particle was exceptional in that, surprisingly, it was not removed by additional UPW or by hot xylene or by aqua regia treatment. It was eventually removed by treatment with NH3-H2O2. Our best interpretation of the origin of the 60336 particle was that it was adhesive from the Post-It notes used to stabilize samples for transport from Utah after the hard landing. It is possible that the insoluble nature of the 60336 particle comes from interaction of the Post-It adhesive with UPW. An occasional bit of Post-It adhesive is not a major concern, but C particulate contamination also occurs from the heat shield of the Sample Return Capsule (SRC), and this is mixed with inorganic contamination from the SRC and the Utah landing site. If UPW exposure also produced an insoluble residue from SRC C, this would be a major problem in chemical treatments to produce clean surfaces for analysis. This paper reports experiments to test whether particulate contamination was removed more easily if UPW treatment was not used.
Albuquerque, Antonio Morais de Sa; Fragoso, Maria Conceicao de Farias; Oliveira, Mercia L. [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2011-07-01
In nuclear medicine practice, accurate knowledge of the activity of the radiopharmaceuticals that will be administered to subjects is an important factor in ensuring the success of diagnosis or therapy. The instrument used for this purpose is the radionuclide calibrator. The radiopharmaceuticals are usually contained in glass vials or syringes. However, the radionuclide calibrator's response is sensitive to the measurement geometry. In addition, the calibration factors supplied by manufacturers are valid only for a single sample geometry. To minimize the uncertainty associated with activity measurements, it is important to use the appropriate correction factors for each radionuclide in the specific geometry in which the measurement is to be made. The aims of this work were to evaluate the behavior of radionuclide calibrators when the geometry of the radioactive sources is varied and to determine experimentally the correction factors for the different volumes and container types commonly used in nuclear medicine practice. The measurements were made in two ionization chambers from different manufacturers (Capintec and Biodex), using four radionuclides with different photon energies: {sup 18}F, {sup 99m}Tc, {sup 131}I and {sup 201}Tl. The results confirm the significant dependence of radionuclide calibrator readings on sample geometry, showing the need to use correction factors in order to minimize the errors that affect activity measurements. (author)
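The arithmetic of applying such a geometry correction factor can be sketched in a few lines. The activities and geometry below are invented for illustration, not taken from the study; real factors come from calibrations like those described above:

```python
# Sketch of a geometry/volume correction (values invented for illustration,
# not a vendor procedure): the correction factor is the ratio of the known
# activity in the reference geometry to the calibrator reading for the
# same source measured in the alternative geometry.
def correction_factor(ref_activity_mbq, reading_mbq):
    return ref_activity_mbq / reading_mbq

def corrected_activity(reading_mbq, cf):
    return reading_mbq * cf

# Hypothetical Tc-99m source: 500 MBq in the calibration vial reads
# 485 MBq when drawn into a 5 mL syringe on the same calibrator.
cf = correction_factor(500.0, 485.0)
print(round(corrected_activity(485.0, cf), 1))  # 500.0
```

Once determined for a radionuclide and container type, the factor is simply multiplied into subsequent readings made in that geometry.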
Naimo, T.J.; Monroe, E.M.
1999-01-01
With the development of techniques to non-lethally biopsy tissue from unionids, a new method is available to measure changes in biochemical, contaminant, and genetic constituents in this imperiled faunal group. However, before its widespread application, information on the variability of biochemical components within and among tissues needs to be evaluated. We measured glycogen concentrations in foot and mantle tissue in Amblema plicata plicata (Say, 1817) to determine if glycogen was evenly distributed within and between tissues and to determine which tissue might be more responsive to the stress associated with relocating mussels. Glycogen was measured in two groups of mussels: those sampled from their native environment (undisturbed mussels) and quickly frozen for analysis and those relocated into an artificial pond (relocated mussels) for 24 months before analysis. In both undisturbed and relocated mussels, glycogen concentrations were evenly distributed within foot, but not within mantle tissue. In mantle tissue, concentrations of glycogen varied about 2-fold among sections. In addition, glycogen varied significantly between tissues in undisturbed mussels, but not in relocated mussels. Twenty-four months after relocation, glycogen concentrations had declined by 80% in mantle tissue and by 56% in foot tissue relative to the undisturbed mussels. These data indicate that representative biopsy samples can be obtained from foot tissue, but not mantle tissue. We hypothesize that mantle tissue could be more responsive to the stress of relocation due to its high metabolic activity associated with shell formation.
Egede, Leonard E; Gebregziabher, Mulugeta; Hunt, Kelly J; Axon, Robert N; Echols, Carrae; Gilbert, Gregory E; Mauldin, Patrick D
2011-04-01
We performed a retrospective analysis of a national cohort of veterans with diabetes to better understand regional, geographic, and racial/ethnic variation in diabetes control as measured by HbA(1c). A retrospective cohort study was conducted in a national cohort of 690,968 veterans with diabetes receiving prescriptions for insulin or oral hypoglycemic agents in 2002 that were followed over a 5-year period. The main outcome measures were HbA(1c) levels (as continuous and dichotomized at ≥8.0%). Relative to non-Hispanic whites (NHWs), HbA(1c) levels remained 0.25% higher in non-Hispanic blacks (NHBs), 0.31% higher in Hispanics, and 0.14% higher in individuals with other/unknown/missing racial/ethnic group after controlling for demographics, type of medication used, medication adherence, and comorbidities. Small but statistically significant geographic differences were also noted with HbA(1c) being lowest in the South and highest in the Mid-Atlantic. Rural/urban location of residence was not associated with HbA(1c) levels. For the dichotomous outcome poor control, results were similar with race/ethnic group being strongly associated with poor control (i.e., odds ratios of 1.33 [95% CI 1.31-1.35] and 1.57 [1.54-1.61] for NHBs and Hispanics vs. NHWs, respectively), geographic region being weakly associated with poor control, and rural/urban residence being negligibly associated with poor control. In a national longitudinal cohort of veterans with diabetes, we found racial/ethnic disparities in HbA(1c) levels and HbA(1c) control; however, these disparities were largely, but not completely, explained by adjustment for demographic characteristics, medication adherence, type of medication used to treat diabetes, and comorbidities.
de la Torre Bartolomé
2006-11-01
Background: Previous studies of the relationship between job strain and blood or saliva cortisol levels have been small and based on selected occupational groups. Our aim was to examine the association between job strain and saliva cortisol levels in a population-based study in which a number of potential confounders could be adjusted for. Methods: The material derives from a population-based study in Stockholm on mental health and its potential determinants. Two data collections were performed three years apart, with more than 8500 subjects responding to a questionnaire in both waves. In this paper our analyses are based on 529 individuals who held a job, participated in both waves as well as in an interview linked to the second wave. They gave saliva samples at awakening, half an hour later, at lunchtime and before going to bed on a weekday in close connection with the interview. Job control and job demands were assessed from the questionnaire in the second wave. Mixed models were used to analyse the association between the demand-control model and saliva cortisol. Results: Women in low strain jobs (high control and low demands) had significantly lower cortisol levels half an hour after awakening than women in high strain (low control and high demands), active (high control and high demands) or passive jobs (low control and low demands). There were no significant differences between the groups during other parts of the day, and furthermore there was no difference between the high strain, active and passive groups. For men, no differences were found between demand-control groups. Conclusion: This population-based study, on a relatively large sample, weakly supports the hypothesis that the demand-control model is associated with saliva cortisol concentrations.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
TAO Qianye; LI Yumei; WANG Guo'an; QIAO Yuhui; LIU Tung-Sheng
2009-01-01
Semi-sealed preservation of soil samples at different moisture contents of 4% and 23%, respectively, was simulated to observe the variations of soil microbial communities and to determine the contents and isotopic compositions of the total organic carbon and total nitrogen on the 7th and 30th day, respectively. The results show that during preservation the quantity of microbial communities tended to increase first and then decrease, with a wider variation range at the higher moisture (23%). At the moisture content of 23%, the microbial communities became more active on the 7th day, but less so after 30 days, while their activity was stable with little fluctuation at the moisture content of 4%. However, there were no significant changes in the contents and isotopic compositions of the total organic carbon and total nitrogen. During preservation, the responses of soil microbes to the environment are more sensitive than the changes in the total nitrogen and organic carbon contents. It is thus suggested that the variations of microbial communities have not exerted remarkable impacts on the isotopic compositions of the total nitrogen and total organic carbon.
The Sampling Distribution and Hypothesis Test of the Coefficient of Variation
赵彦晖; 张水若; 邢瑞芳
2011-01-01
The coefficient of variation is a reliability index, widely used in evaluating the reliability of existing structures, in hospital statistics, in insurance theory, and elsewhere, so hypothesis testing of the coefficient of variation is of practical significance. For a sample drawn from a general normal population, the relationship between the sample mean X-bar, the sample standard deviation S and the coefficient of variation v = sigma/mu is used to construct a sampling distribution containing the coefficient of variation, Z = v X-bar / S ~ z(v, n-1), with density function f_Z(z) = [(n-1)/2]^{(n-1)/2} / ( sqrt(pi v^2 / (2n)) Gamma((n-1)/2) ) * Integral_0^infinity y^{n-1} exp( -(1/2) [ (n-1) y^2 + (yz-1)^2 / (v^2/n) ] ) dy. A small-sample hypothesis testing method for the coefficient of variation is then given: under the null hypothesis, the test statistic Z = v_0 X-bar / S is determined from this sampling distribution, and a small-probability event that holds under the alternative hypothesis is constructed from the statistic, from which the rejection region or rejection condition is obtained.
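The small-sample behaviour of the statistic Z = v0 * X-bar / S under the null hypothesis can be checked with a short Monte Carlo sketch. This is our own illustration, not the paper's derivation; the population parameters are invented:

```python
# Monte Carlo sketch (our illustration, not the paper's derivation) of the
# small-sample behaviour of Z = v0 * Xbar / S under H0: v = v0, where
# v = sigma/mu is the population coefficient of variation.
import random
import statistics

def simulate_Z(mu, sigma, n, reps, seed=1):
    rng = random.Random(seed)
    v0 = sigma / mu
    zs = []
    for _ in range(reps):
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        zs.append(v0 * statistics.mean(x) / statistics.stdev(x))
    return zs

zs = simulate_Z(mu=100.0, sigma=30.0, n=10, reps=5000)
# Under H0 the statistic concentrates near 1, since Xbar/S estimates
# mu/sigma = 1/v0:
print(round(statistics.median(zs), 2))
```

A rejection region for the test would then correspond to values of Z falling in the tails of this simulated (or the derived) null distribution.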
Klitz, W; Brautbar, C; Schito, A M; Barcellos, L F; Oksenberg, J R
2001-05-01
The chemokine receptor 5 (CCR5) serves as a fusion cofactor for macrophage-tropic strains of HIV-1. In addition, CCR5 has been shown to mediate the entry of poxviruses into target cells. Individuals homozygous for the Delta32 deletion-mutation have no surface expression of CCR5 and are highly protected against HIV-1 infection. To gain insights into the evolution of the mutation in modern populations, the relatively high frequency of the Delta32-ccr5 allele in some European and Jewish populations is explored here by examining haplotypes of 3p21.3 constructed of five polymorphic marker loci surrounding CCR5. By sampling Ashkenazi, non-Ashkenazi and non-Jewish populations, we utilize the natural experiment that occurred as a consequence of the Jewish Diaspora, and demonstrate that a single mutation was responsible for all copies of Delta32. This mutation must have moved from Northern European populations to the Ashkenazi Jews, where evidence suggests that Delta32 carriers of both groups were favored by the repeated occurrence of epidemic smallpox beginning in the 8th century AD.
Genomic distribution and inter-sample variation of non-CpG methylation across human cell types.
Michael J Ziller
2011-12-01
DNA methylation plays an important role in development and disease. The primary sites of DNA methylation in vertebrates are cytosines in the CpG dinucleotide context, which account for roughly three quarters of the total DNA methylation content in human and mouse cells. While the genomic distribution, inter-individual stability, and functional role of CpG methylation are reasonably well understood, little is known about DNA methylation targeting CpA, CpT, and CpC (non-CpG) dinucleotides. Here we report a comprehensive analysis of non-CpG methylation in 76 genome-scale DNA methylation maps across pluripotent and differentiated human cell types. We confirm non-CpG methylation to be predominantly present in pluripotent cell types and observe a decrease upon differentiation and near-complete absence in various somatic cell types. Although no function has been assigned to it in pluripotency, our data highlight that non-CpG methylation patterns reappear upon iPS cell reprogramming. Intriguingly, the patterns are highly variable and show little conservation between different pluripotent cell lines. We find a strong correlation of non-CpG methylation and DNMT3 expression levels while showing statistical independence of non-CpG methylation from pluripotency-associated gene expression. In line with these findings, we show that knockdown of DNMT3A and DNMT3B in hESCs results in a global reduction of non-CpG methylation. Finally, non-CpG methylation appears to be spatially correlated with CpG methylation. In summary, these results contribute further to our understanding of cytosine methylation patterns in human cells using a large representative sample set.
Lind, Lars [Department of Medical Sciences, Cardiovascular Epidemiology, Uppsala University, Uppsala (Sweden); Penell, Johanna [Department of Medical Sciences, Occupational and Environmental Medicine, Uppsala University, Uppsala (Sweden); Syvänen, Anne-Christine; Axelsson, Tomas [Department of Medical Sciences, Molecular Medicine and Science for Life Laboratory, Uppsala University, Uppsala (Sweden); Ingelsson, Erik [Department of Medical Sciences, Molecular Epidemiology and Science for Life Laboratory, Uppsala University, Uppsala (Sweden); Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford (United Kingdom); Morris, Andrew P.; Lindgren, Cecilia [Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford (United Kingdom); Salihovic, Samira; Bavel, Bert van [MTM Research Centre, School of Science and Technology, Örebro University, Örebro (Sweden); Lind, P. Monica, E-mail: monica.lind@medsci.uu.se [Department of Medical Sciences, Occupational and Environmental Medicine, Uppsala University, Uppsala (Sweden)
2014-08-15
Several of the polychlorinated biphenyls (PCBs), i.e. the dioxin-like PCBs, are known to induce the P450 enzymes CYP1A1, CYP1A2 and CYP1B1 by activating the aryl hydrocarbon receptor (Ah)-receptor. We evaluated if circulating levels of PCBs in a population sample were related to genetic variation in the genes encoding these CYPs. In the population-based Prospective Investigation of the Vasculature in Uppsala Seniors (PIVUS) study (1016 subjects all aged 70), 21 SNPs in the CYP1A1, CYP1A2 and CYP1B1 genes were genotyped. Sixteen PCB congeners were analysed by high-resolution chromatography coupled to high-resolution mass spectrometry (HRGC/HRMS). Of the investigated relationships between SNPs in the CYP1A1, CYP1A2 and CYP1B1 and six PCBs (congeners 118, 126, 156, 169, 170 and 206) that capture >80% of the variation of all PCBs measured, only the relationship between CYP1A1 rs2470893 was significantly related to PCB118 levels following strict adjustment for multiple testing (p=0.00011). However, there were several additional SNPs in the CYP1A2 and CYP1B1 that showed nominally significant associations with PCB118 levels (p-values in the 0.003–0.05 range). Further, several SNPs in the CYP1B1 gene were related to both PCB156 and PCB206 with p-values in the 0.005–0.05 range. Very few associations with p<0.05 were seen for PCB126, PCB169 or PCB170. Genetic variation in the CYP1A1 was related to circulating PCB118 levels in the general elderly population. Genetic variation in CYP1A2 and CYP1B1 might also be associated with other PCBs. - Highlights: • We studied the relationship between PCBs and the genetic variation in the CYP genes. • Cross sectional data from a cohort of elderly were analysed. • The PCB levels were evaluated versus 21 SNPs in three CYP genes. • PCB 118 was related to variation in the CYP1A1 gene.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
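The retrieval principle described above can be illustrated with a toy sketch. This is our own construction, not the paper's architecture: under a symmetric bit-error model, the most likely stored word given a partially erased query is the one with minimum Hamming distance on the known positions.

```python
# Toy sketch (our illustration, not the paper's construction) of maximum
# likelihood retrieval: under a symmetric bit-error model, the most likely
# stored word given a partially erased query is the one with minimum
# Hamming distance on the non-erased positions.
MEMORY = ["101101", "010011", "111000", "000111"]

def ml_retrieve(query):
    """Retrieve the most likely stored word; '?' marks erased bits."""
    def dist(word):
        return sum(1 for q, w in zip(query, word) if q != "?" and q != w)
    return min(MEMORY, key=dist)

print(ml_retrieve("1?11??"))  # '101101' matches all three known bits
```

Ties are broken by storage order here; a full treatment would also quantify the residual error rate as a function of the erasure pattern.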
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Ashok Sahai
2016-02-01
This paper addresses the issue of finding the most efficient estimator of the normal population mean when the population coefficient of variation (C.V.) is 'rather very large' though unknown, using a small sample (sample size ≤ 30). The paper proposes an efficient iterative estimation algorithm exploiting the sample C.V. for efficient normal mean estimation. The MSEs of the estimators under this strategy have very intricate algebraic expressions depending on the unknown values of the population parameters, and hence are not amenable to an analytical study determining the extent of the gain in their relative efficiencies with respect to the usual unbiased estimator (the sample mean, say 'UUE'). Nevertheless, we examine these relative efficiencies of our estimators with respect to the usual unbiased estimator by means of an illustrative simulation study. MATLAB 7.7.0.471 (R2008b) is used in programming this illustrative simulated empirical numerical study. DOI: 10.15181/csat.v4i1.1091
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
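The regularizer at the heart of this objective can be sketched with a simple discretized estimator of the mutual information between true labels and classification responses. The paper embeds an entropy-based MI estimate inside a full learning objective; only the MI term is shown here, as our own illustration:

```python
# Sketch of the regularizer described above: an empirical, discretized
# estimate of the mutual information I(Y; Yhat) between true labels and
# classification responses (only the MI term of the objective is shown).
import math
from collections import Counter

def mutual_information(y_true, y_pred):
    n = len(y_true)
    p_joint = Counter(zip(y_true, y_pred))
    p_t, p_p = Counter(y_true), Counter(y_pred)
    mi = 0.0
    for (t, p), c in p_joint.items():
        pj = c / n
        mi += pj * math.log2(pj / ((p_t[t] / n) * (p_p[p] / n)))
    return mi

# Perfect agreement on balanced binary labels yields I = H(Y) = 1 bit:
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0
```

In the proposed method this term would be maximized (with an appropriate weight) alongside minimizing the classification error and the classifier complexity.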
Mabe, Jeffrey A.; Moring, J. Bruce
2008-01-01
The U.S. Geological Survey, in cooperation with the Houston-Galveston Area Council and the Galveston Bay Estuary Program under the authority of the Texas Commission on Environmental Quality, did a study in 2007 to assess the variation in biotic assemblages (benthic macroinvertebrate and fish communities) and stream-habitat data with sampling strategy and method in tidal segments of Highland Bayou and Marchand Bayou in Galveston County. Data were collected once in spring and once in summer 2007 from four stream sites (reaches) (short names Hitchcock, Fairwood, Bayou Dr, and Texas City) of Highland Bayou and from one reach (short name Marchand) in Marchand Bayou. Only stream-habitat data from summer 2007 samples were used for this report. Additional samples were collected at the Hitchcock, Fairwood, and Bayou Dr reaches (multisample reaches) during summer 2007 to evaluate variation resulting from sampling intensity and location. Graphical analysis of benthic macroinvertebrate community data using a multidimensional scaling technique indicates there are taxonomic differences between the spring and summer samples. Seasonal differences in communities primarily were related to decreases in the abundance of chironomids and polychaetes in summer samples. Multivariate Analysis of Similarities tests of additional summer 2007 benthic macroinvertebrate samples from Hitchcock, Fairwood, and Bayou Dr indicated significant taxonomic differences between the sampling locations at all three reaches. In general, the deepwater samples had the smallest numbers for benthic macroinvertebrate taxa richness and abundance. Graphical analysis of species-level fish data indicates no consistent seasonal difference in fish taxa across reaches. Increased seining intensity at the multisample reaches did not result in a statistically significant difference in fish communities. Increased seining resulted in some changes in taxa richness and community diversity metrics. Diversity increases
Emma Lightfoot
Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation in such a way as to be a useful technique for identifying migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, differential access to water sources and so on) causing variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants is dependent on the data but is likely to be the IQR or the median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals' homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals. In the absence of local previous analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific
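The IQR and MAD outlier rules favoured in this study can be sketched as follows. The thresholds (1.5 IQR, 3 scaled MADs) are common conventions and the δ18O values are invented for illustration, not taken from the paper:

```python
# Sketch of two outlier rules for flagging possible migrants in a set of
# d18O values: the Tukey IQR fence and the median absolute deviation (MAD).
# Thresholds (1.5 * IQR, 3 scaled MADs) are common conventions.
import statistics

def iqr_outliers(xs):
    q = statistics.quantiles(xs, n=4)          # q[0] = Q1, q[2] = Q3
    q1, q3 = q[0], q[2]
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return [x for x in xs if x < lo or x > hi]

def mad_outliers(xs, k=3.0):
    med = statistics.median(xs)
    mad = statistics.median([abs(x - med) for x in xs])
    return [x for x in xs if abs(x - med) > k * 1.4826 * mad]

d18o = [26.1, 26.4, 26.2, 26.8, 26.5, 26.3, 29.9, 26.6]  # one clear outlier
print(iqr_outliers(d18o), mad_outliers(d18o))  # both flag 29.9
```

Both rules are robust to the outlier itself, unlike mean-and-standard-deviation screening, which is why they behave better on small archaeological samples.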
Sarah Elizabeth Gutowsky
2015-11-01
Marine ecologists and managers need to know the spatial extent of at-sea areas most frequented by the groups of wildlife they study or manage. Defining group-specific ranges and distributions (i.e. space use) at the level of species, population, age-class, etc. can help to identify the source or severity of common or distinct threats among different at-risk groups. In biologging studies, this is accomplished by estimating the space use of a group based on a sample of tracked individuals. A major assumption of these studies is consistency in individual movements among members of a group. The implications of scaling up individual-level tracking data to infer higher-level spatial patterns for groups (i.e. size and extent of areas used, overlap or segregation among groups) are not well documented for wide-ranging pelagic species with high potential for individual variation in space use. We present a case study exploring the effects of sampling (i.e. number and identity of individuals contributing to an analysis) on defining group-specific space use with year-round multi-colony tracking data from two highly vagile species, Laysan (Phoebastria immutabilis) and black-footed (P. nigripes) albatrosses. The results clearly demonstrate that caution is warranted when defining space use for a specific species-colony-period group based on datasets of small, intermediate, or relatively large sample sizes (ranging from n=3-42 tracked individuals) due to a high degree of individual-level variation in movements. Overall, we provide further support to the recommendation that biologging studies aiming to define higher-level patterns in space use exercise restraint in the scope of inference, particularly when pooled Kernel Density Estimation techniques are applied to small datasets for wide-ranging species. Transparent reporting with respect to the potential limitations of the data can in turn better inform both biological interpretations and science-based management
Rowe, Rachel E; Townend, John; Brocklehurst, Peter; Knight, Marian; Macfarlane, Alison; McCourt, Christine; Newburn, Mary; Redshaw, Maggie; Sandall, Jane; Silverton, Louise; Hollowell, Jennifer
2014-05-29
To explore whether service configuration and obstetric unit (OU) characteristics explain variation in OU intervention rates in 'low-risk' women. Ecological study using funnel plots to explore unit-level variations in adjusted intervention rates and simple linear regression, stratified by parity, to investigate possible associations between unit characteristics/configuration and adjusted intervention rates in planned OU births. Characteristics considered: OU size, presence of an alongside midwifery unit (AMU), proportion of births in the National Health Service (NHS) trust planned in midwifery units or at home and midwifery 'under' staffing. 36 OUs in England. 'Low-risk' women with a 'term' pregnancy planning vaginal birth in a stratified, random sample of 36 OUs. Adjusted rates of intrapartum caesarean section, instrumental delivery and two composite measures capturing birth without intervention ('straightforward' and 'normal' birth). Funnel plots showed unexplained variation in adjusted intervention rates. In NHS trusts where proportionately more non-OU births were planned, adjusted intrapartum caesarean section rates in the planned OU births were significantly higher (nulliparous: R(2)=31.8%, coefficient=0.31, p=0.02; multiparous: R(2)=43.2%, coefficient=0.23, p=0.01), and for multiparous women, rates of 'straightforward' (R(2)=26.3%, coefficient=-0.22, p=0.01) and 'normal' birth (R(2)=17.5%, coefficient=-0.24, p=0.01) were lower. The size of the OU (number of births), midwifery 'under' staffing levels (the proportion of shifts where there were more women than midwives) and the presence of an AMU were associated with significant variation in some interventions. Trusts with greater provision of non-OU intrapartum care may have higher intervention rates in planned 'low-risk' OU births, but at a trust level this is likely to be more than offset by lower intervention rates in planned non-OU births. Further research using high quality data on unit characteristics and
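The funnel plots used in this design can be sketched by computing approximate binomial control limits around an overall rate. All numbers below are invented for illustration, not taken from the study:

```python
# Sketch of funnel-plot control limits: for an overall proportion p,
# approximate 95% limits at a unit with n births are
#   p +/- 1.96 * sqrt(p * (1 - p) / n).
# All numbers are invented for illustration, not taken from the study.
import math

def funnel_limits(p, n, z=1.96):
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

P_OVERALL = 0.11  # hypothetical overall caesarean rate in 'low-risk' births
for n, rate in [(400, 0.13), (2500, 0.13)]:
    lo, hi = funnel_limits(P_OVERALL, n)
    flag = "within" if lo <= rate <= hi else "outside"
    print(n, round(lo, 3), round(hi, 3), flag)
```

The same observed rate sits within the limits for a small unit but outside them for a large one: the narrowing funnel is what separates sampling noise from genuinely unexplained unit-level variation.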
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
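For the Mean Energy Model mentioned above, the maximum-entropy distribution on a finite state space under the constraint E[E] = m has the Gibbs form p_i ∝ exp(−βE_i). A small sketch that finds β by bisection (the finite energy values and bracket are illustrative assumptions, not from the paper):

```python
import math

def maxent_distribution(energies, mean_energy, tol=1e-10):
    """Maximum-entropy distribution on a finite state space under the
    moment constraint sum_i p_i * E_i = mean_energy (Mean Energy Model).
    The solution has the Gibbs form p_i proportional to exp(-beta * E_i);
    beta is found by bisection."""
    def avg_energy(beta):
        weights = [math.exp(-beta * e) for e in energies]
        z = sum(weights)
        return sum(w * e for w, e in zip(weights, energies)) / z

    lo, hi = -50.0, 50.0  # avg_energy is decreasing in beta on this bracket
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if avg_energy(mid) > mean_energy:
            lo = mid  # beta too small: too much weight on high-energy states
        else:
            hi = mid
    beta = (lo + hi) / 2.0
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]
```

For symmetric energies and a central mean (e.g., energies 0, 1, 2 and mean 1), β = 0 and the result is the uniform distribution, the familiar unconstrained entropy maximizer.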
Zimmerman, Marc J.; Massey, Andrew J.; Campo, Kimberly W.
2005-01-01
During four periods from April 2002 to June 2003, pore-water samples were taken from river sediment within a gaining reach (Mill Pond) of the Sudbury River in Ashland, Massachusetts, with a temporary pushpoint sampler to determine whether this device is an effective tool for measuring small-scale spatial variations in concentrations of volatile organic compounds and selected field parameters (specific conductance and dissolved oxygen concentration). The pore waters sampled were within a subsurface plume of volatile organic compounds extending from the nearby Nyanza Chemical Waste Dump Superfund site to the river. Samples were collected from depths of 10, 30, and 60 centimeters below the sediment surface along two 10-meter-long, parallel transects extending into the river. Twenty-five volatile organic compounds were detected at concentrations ranging from less than 1 microgram per liter to hundreds of micrograms per liter (for example, 1,2-dichlorobenzene, 490 micrograms per liter; cis-1,2-dichloroethene, 290 micrograms per liter). The most frequently detected compounds were either chlorobenzenes or chlorinated ethenes. Many of the compounds were detected only infrequently. Quality-control sampling indicated a low incidence of trace concentrations of contaminants. Additional samples collected with passive-water-diffusion-bag samplers yielded results comparable to those collected with the pushpoint sampler and to samples collected in previous studies at the site. The results demonstrate that the pushpoint sampler can yield distinct samples from sites in close proximity; in this case, sampling sites were 1 meter apart horizontally and 20 or 30 centimeters apart vertically. Moreover, the pushpoint sampler was able to draw pore water when inserted to depths as shallow as 10 centimeters below the sediment surface without entraining surface water. The simplicity of collecting numerous samples in a short time period (routinely, 20 to 30 per day) validates the use of a
Yadav, Shweta; Tandon, Ankit; Attri, Arun K.
2014-12-01
Nicotine, an organic tracer for Environmental Tobacco Smoke (ETS), was determined in PM10 samples collected from the ambient environment of the Delhi region in an appropriately designed investigation spanning four years (2006-2009), with the aims to: (1) comprehend seasonal and inter-annual variations in the nicotine present in PM10; (2) extract a regression-based linear trend profile for nicotine in PM10; (3) determine the non-linear trend timeline from the nicotine data and compare it with the linear trend; and (4) suggest the possible use of the designed experiment and analysis for a qualitative appraisal of tobacco smoking activity in the sampling region. The PM10 samples were collected in a monthly time series at a known receptor site. Quantitative estimates of nicotine (ng m-3) were made using Thermal Desorption Gas Chromatography Mass Spectrometry (TD-GC/MS). The annual average concentrations of nicotine (ng m-3) were 516 ± 302 (2008) > 494 ± 301 (2009) > 438 ± 250 (2007) > 325 ± 149 (2006). The estimated linear trend of 5.4 ng m-3 month-1 corresponded to a 16.3% per annum increase in PM10-associated nicotine. The industrial production index of India's tobacco, normalized to the Delhi region's consumption, pegged the increase at 10.5% per annum over this period.
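The regression-based linear trend in aim (2) amounts to an ordinary least-squares fit of monthly concentrations against time. A minimal sketch (the series below is synthetic, for illustration only):

```python
def linear_trend(y):
    """OLS slope and intercept of y regressed on t = 0, 1, ..., len(y)-1
    (e.g., monthly nicotine concentrations in ng/m3; slope is then the
    trend in ng/m3 per month)."""
    n = len(y)
    t_mean = (n - 1) / 2.0
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return slope, y_mean - slope * t_mean

slope, intercept = linear_trend([300, 310, 330, 335, 355, 360])  # synthetic months
```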
Functional Maximum Autocorrelation Factors
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA...
Annhild Mosdøl
2009-11-01
ABSTRACT: Semi-quantitative food frequency data from a nation-wide, representative sample of 2677 Norwegian men and women were analysed to identify food categories contributing most to absolute intake and between-person variation in intake of energy and nine nutrients. The 149 food categories in the questionnaire were ranked according to their contribution to absolute nutrient intake, and categories contributing at least 0.5% to the average absolute intake were included in a stepwise regression model. The number of food categories explaining 90% of the between-person variation varied from 2 categories for β-carotene to 33 for α-tocopherol. The models accounted for 53–76% of the estimated absolute nutrient intakes. These analyses present a meaningful way of restricting the number of food categories in questionnaires aimed at capturing the between-person variation in energy or specific nutrient intakes.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process was varied and the voltage was raised until I{sub c} degradation or burnout occurred. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for the samples, the total length of CC needed in the design of an SFCL can be determined.
Shi, Zhengguo; Liu, Xiaodong; An, Zhisheng [Chinese Academy of Sciences, State Key Laboratory of Loess Quaternary Geology (SKLLQG), Institute of Earth Environment, Xi' an (China); Yi, Bingqi; Yang, Ping [Texas A and M University, College Station, TX (United States); Mahowald, Natalie [Cornell University, Ithaca, NY (United States)
2011-12-15
Northern Tibetan Plateau uplift and global climate change are regarded as two important factors responsible for a remarkable increase in dust concentration originating from inner Asian deserts during the Pliocene-Pleistocene period. Dust cycles during the mid-Pliocene, last glacial maximum (LGM), and present day are simulated with a global climate model, based on reconstructed dust source scenarios, to evaluate the relative contributions of the two factors to the increment of dust sedimentation fluxes. In the focused downwind regions of the Chinese Loess Plateau/North Pacific, the model generally produces a light eolian dust mass accumulation rate (MAR) of 7.1/0.28 g/cm{sup 2}/kyr during the mid-Pliocene, a heavier MAR of 11.6/0.87 g/cm{sup 2}/kyr at present, and the heaviest MAR of 24.5/1.15 g/cm{sup 2}/kyr during the LGM. Our results are in good agreement with marine and terrestrial observations. These MAR increases can be attributed to both regional tectonic uplift and global climate change. Comparatively, the climatic factors, including the ice sheet and sea surface temperature changes, have modulated the regional surface wind field and controlled the intensity of sedimentation flux over the Loess Plateau. The impact of the Tibetan Plateau uplift, which increased the areas of inland deserts, is more important over the North Pacific. The dust MAR has been widely used in previous studies as an indicator of inland Asian aridity; however, the present results indicate that its interpretation requires greater caution, because the MAR is controlled not only by the source areas but also by the surface wind velocity. (orig.)
Albareti, Franco D; Gutiérrez, Carlos M; Prada, Francisco; Pâris, Isabelle; Schlegel, David; López-Corredoira, Martín; Schneider, Donald P; Manchado, Arturo; García-Hernández, D A; Petitjean, Patrick; Ge, Jian
2015-01-01
From the Sloan Digital Sky Survey Data Release 12, which covers the full Baryonic Oscillation Spectroscopic Survey (BOSS) footprint, we investigate the possible variation of the fine-structure constant over cosmological time scales. We analyze the largest quasar sample considered so far in the literature, which contains 10,363 spectra with $z<1$. All the BOSS quasar spectra are selected from a visually inspected quasar catalog. We apply the emission line method on the [O III] doublet (4960, 5008 A) and obtain $\\Delta\\alpha/\\alpha= \\left(1.4 \\pm 2.3\\right)\\times10^{-5}$ for the relative variation of the fine-structure constant. We also investigate the possible sources of systematics: misidentification of the lines, sky OH lines, H$\\beta$ and broad line contamination, optimal wavelength range for the Gaussian fits, chosen polynomial order for the continuum spectrum, signal-to-noise ratio and good quality of the fits. The uncertainty of the measurement is dominated by the sky subtraction. The results presente...
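The emission-line method used above exploits the fact that the fractional separation of the [O III] doublet, R = (λ2 − λ1)/(λ2 + λ1), scales approximately as α², so Δα/α ≈ (R_obs/R_lab − 1)/2. A minimal sketch of that relation (vacuum rest wavelengths assumed; this is not the paper's actual pipeline, which fits Gaussians to each line and models the continuum):

```python
def delta_alpha_over_alpha(l1_obs, l2_obs, l1_lab=4960.295, l2_lab=5008.240):
    """Estimate the relative variation of the fine-structure constant from
    measured [O III] doublet wavelengths (Angstrom): the fractional
    separation R = (l2 - l1)/(l2 + l1) scales ~ alpha^2, and R is
    independent of redshift because both wavelengths scale by (1 + z)."""
    r_obs = (l2_obs - l1_obs) / (l2_obs + l1_obs)
    r_lab = (l2_lab - l1_lab) / (l2_lab + l1_lab)
    return (r_obs / r_lab - 1.0) / 2.0
```

Because R cancels the (1 + z) factor, the estimate can be formed directly from observed wavelengths without an external redshift measurement.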
Prosser, Ryan S; Brain, Richard A; Malia Andrus, J; Hosmer, Alan J; Solomon, Keith R; Hanson, Mark L
2015-08-01
Lotic systems in agriculturally intensive watersheds can experience short-term pulsed exposures to pesticides as a result of runoff associated with rainfall events following field applications. Of special interest are herbicides that could potentially impair communities of primary producers, such as those associated with periphyton. Therefore, this study examined agroecosystem-derived lotic periphyton to assess (1) variation in community sensitivity to, and ability to recover from, acute (48 h) exposure to the photosystem II (PSII)-inhibiting herbicide atrazine across sites and time, and (2) the variables (e.g., community structure, hydrology, water quality measures) that were predictive of observed differences in sensitivity and recovery. Periphyton were sampled from six streams in the Midwestern U.S. on four different dates in 2012 (April to August). Field-derived periphyton were exposed in the laboratory to concentrations of atrazine ranging from 10 to 320 µg/L for 48 h, followed by untreated media for a 48 h evaluation of recovery. Effective quantum yield of PSII was measured after 24 h and 48 h of exposure and 24 h and 48 h after replacement of media. EC50 values for inhibition of PSII ranged from 53 to >320 µg/L. The majority of periphyton samples (16 out of 22) exposed to atrazine at up to 320 µg/L recovered completely by 48 h after replacement of media. Percent inhibition of effective quantum yield of PSII in the periphyton samples (6 of 22) exposed to 320 µg/L atrazine that remained significantly lower than controls after 48 h ranged from 2% to 24%. No distinct spatial or temporal trends in sensitivity and recovery potential were observed over the course of the study. Conditional inference forest analysis and variation partitioning were used to investigate potential associations between periphyton sensitivity to and ability to recover from exposure to atrazine. Although certain environmental variables (i.e., proximity of high flow/velocity events and
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
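As a point of reference for the detectors compared above, exact maximum-likelihood sequence detection for an intersymbol-interference channel can be written as an exhaustive search over candidate symbol sequences; near-ML detectors approximate this search at lower cost. A brute-force sketch for BPSK (the channel taps below are illustrative, not from the paper):

```python
from itertools import product

def ml_detect(received, h, n_symbols):
    """Exhaustive maximum-likelihood sequence detection of BPSK symbols
    (+1/-1) sent through an ISI channel with taps h: minimize the squared
    Euclidean distance between the received samples and each candidate's
    noiseless channel output."""
    best, best_metric = None, float("inf")
    for cand in product((-1, 1), repeat=n_symbols):
        # noiseless channel output for this candidate (linear convolution)
        y = [sum(h[j] * cand[k - j]
                 for j in range(len(h)) if 0 <= k - j < n_symbols)
             for k in range(len(received))]
        metric = sum((r - yk) ** 2 for r, yk in zip(received, y))
        if metric < best_metric:
            best, best_metric = cand, metric
    return list(best)

# channel h = [1.0, 0.5]; sending [1, -1, 1] yields noiseless [1, -0.5, 0.5, 0.5]
detected = ml_detect([1.1, -0.4, 0.6, 0.4], [1.0, 0.5], 3)
```

The search cost is 2^n, which is why practical detectors (Viterbi, near-ML, equalized variants) prune or approximate it.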
R.C. Clipes
2005-02-01
The esophageal extrusa and hand-plucking methods of forage sampling were compared to evaluate elephant grass and mombaça grass pastures under rotational grazing. The chemical composition, the fractions of nitrogenous and carbohydrate compounds, and the in vitro dry matter digestibility were evaluated. For elephant grass and mombaça grass, 15 and 13 paddocks were used, respectively, with a three-day occupation period, and samples were obtained on the third, second and first days of the occupation period. The sampling methodologies were compared within forage species by Student's t test, in a paired arrangement. The contents of total carbohydrates, neutral detergent fiber, acid detergent fiber, cellulose, lignin, and the slow-degradation and undegradable fractions of carbohydrates were higher (P<.05) when esophageal extrusa was used, for both grasses. The non-fibrous carbohydrates were higher (P<.05) in hand-plucked samples. Higher values (P<.05) were found for
J. J. RAMÍREZ
Spatial and temporal variation of climatic and physical characteristics in a shallow tropical reservoir in the city of São Paulo, Southeastern Brazil, and their possible influence on the dynamics of the phytoplankton population. Samples were taken at 5 depths of the water column (subsurface, 1% Io, 10% Io, 2 m, and bottom) and at 4-hour intervals (6:00, 10:00, 14:00, 18:00, 22:00, 2:00, and 6:00 h) during summer (March 3-4), fall (June 13-14), winter (August 29-30), and spring (November 29-30) of 1994 at a single sampling station. Garças Reservoir (23º39'S, 46º37'W) is a kinetic turbulent system, highly influenced by winds, with stratification that may last for days or weeks, and which undergoes mixing periods more than once a year. A thermal pattern of this type is comparable to the warm discontinuous polymictic type. Considering its optical properties, the water body was classified as an ecosystem with moderate turbidity, which decreases basically due to increased phaeopigment concentration during the spring. Also, the reservoir is an ecosystem whose phytoplanktonic community is subjected to stress, the degree of which depends on the level of light penetration.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
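A minimal two-daisy Watson–Lovelock model of the kind described above can be sketched as follows (a sketch only, with standard textbook parameter values, not the exact model of any particular study):

```python
def daisyworld(luminosity, steps=20000, dt=0.01):
    """Minimal Watson-Lovelock daisyworld: areas of black and white daisies
    evolve under temperature-dependent growth, feeding back on planetary
    albedo; returns final daisy areas and planetary temperature (K)."""
    S, sigma = 917.0, 5.67e-8               # insolation (W/m2), Stefan-Boltzmann
    alb_w, alb_b, alb_g = 0.75, 0.25, 0.5   # albedos: white, black, bare ground
    q, gamma, t_opt = 20.0, 0.3, 295.5      # heat transfer, death rate, optimum T
    a_b = a_w = 0.01                        # initial (seed) daisy areas
    t_p = 0.0
    for _ in range(steps):
        x = max(0.0, 1.0 - a_b - a_w)                  # bare ground fraction
        alb_p = x * alb_g + a_b * alb_b + a_w * alb_w  # planetary albedo
        t_p = (luminosity * S * (1.0 - alb_p) / sigma) ** 0.25
        t_b = t_p + q * (alb_p - alb_b)                # local temperatures
        t_w = t_p + q * (alb_p - alb_w)
        beta_b = max(0.0, 1.0 - 0.003265 * (t_opt - t_b) ** 2)  # growth rates
        beta_w = max(0.0, 1.0 - 0.003265 * (t_opt - t_w) ** 2)
        a_b = max(0.01, a_b + dt * a_b * (x * beta_b - gamma))
        a_w = max(0.01, a_w + dt * a_w * (x * beta_w - gamma))
    return a_b, a_w, t_p
```

Sweeping `luminosity` over a range reproduces the classic self-regulation result: the daisy populations shift so that the planetary temperature stays near the growth optimum over a wide band of solar forcing.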
Zanolli, Clément
2013-01-01
This contribution reports fifteen human fossil dental remains found during the last two decades in the Sangiran Dome area, in Central Java, Indonesia. Among this sample, only one of the specimens had already been briefly described, with the other fourteen remaining unreported. Seven of the fifteen isolated teeth were found in a secure stratigraphic context in the late Lower-early Middle Pleistocene Kabuh Formation. The remaining elements were surface finds which, based on coincidental sources of information, were inferred to come from the Kabuh Formation. Mainly composed of permanent molars, but also including one upper incisor and one upper premolar, this dental sample brings additional evidence for a marked degree of size variation and time-related structural reduction in Javanese H. erectus. This is notably expressed by a significant decrease of the mesiodistal diameter, frequently associated with the reduction or even loss of the lower molar distal cusp (hypoconulid) and with a more square occlusal outline. In addition to the hypoconulid reduction or loss, this new sample also exhibits a low frequency of the occlusal Y-groove pattern, with a dominance of the X and, to a lesser extent, of the + patterns. This combination is rare in the Lower and early Middle Pleistocene paleoanthropological record, including in the early Javanese dental assemblage from the Sangiran Dome. On the other hand, similar dental features are found in Chinese H. erectus and in H. heidelbergensis. As a whole, this new record confirms the complex nature of the intermittent exchanges that occurred between continental and insular Southeast Asia through the Pleistocene.
Thelma Suely Okay
2009-03-01
INTRODUCTION: Performance variation among PCR systems in detecting Toxoplasma gondii has been extensively reported and associated with target genes, primer composition, amplification parameters, treatment during pregnancy, host genetic susceptibility and the genotypes of parasites found in different geographical regions. PATIENTS: A total of 467 amniotic fluid samples from T. gondii IgM- and IgG-positive Brazilian pregnant women who had been treated for 1 to 6 weeks at the time of amniocentesis (gestational ages of 14 to 25 weeks). METHODS: One nested-B1-PCR and three one-round amplification systems targeting rDNA, AF146527 and the B1 gene were employed. RESULTS: Of the 467 samples, 189 (40.47%) were positive in one-round amplifications: 120 (63.49%) for the B1 gene, 24 (12.69%) for AF146527, 45 (23.80%) for both AF146527 and the B1 gene, and none for rDNA. Fifty previously negative one-round PCR samples were chosen by computer-assisted randomization and re-tested by nested-B1-PCR, during which nine additional cases were detected (9/50 or 18%). DISCUSSION: The B1 gene PCR was far more sensitive than the AF146527 PCR, and the rDNA PCR was the least effective even though rDNA has the most repetitive sequence. Considering that the four amplification systems were equally affected by treatment, that the amplification conditions were optimized for the target genes and that most of the primers have already been reported, it is plausible that the striking differences found among PCR performances could be associated with genetic diversity in patients and/or with different Toxoplasma gondii genotypes occurring in Brazil. CONCLUSION: The use of PCR for the diagnosis of fetal Toxoplasma infection in Brazil should target the B1 gene when only one gene can be amplified, preferably by nested amplification with primers B22/B23.
João Tavares Filho
2008-04-01
Monitoring the state of soil compaction periodically by assessing soil penetration resistance is a practical way of evaluating the effects of different management systems on soil structure and crop root development. This study aimed to evaluate the variation of soil penetration resistance in response to the number of replications (sample population) of different field sampling forms of an Oxisol under three management types: no-tillage (PD), perennial crop (CP) and conventional tillage (PC). The experiment was carried out in Northern Paraná State, Brazil. Samples were collected in three sub-areas of 1 ha to determine soil penetration resistance at different depths (0-0.10, 0.10-0.20, 0.20-0.40, and 0.40-0.60 m). Sampling was carried out as follows: systematic sampling (grid points, spaced 25 m apart) and completely randomized sampling, with 1, 3, 5, 10, 15, 20, 30, 40, and 50 replications. For all points and depths, the average value of penetration resistance (MPa), the confidence interval and the estimation accuracy (D) of the penetrometer measurements were determined through classical statistical theory, based on the number of samples (n) and the sample standard deviation (S), at a significance level of 0.05. For the given experimental conditions (sub-areas of 1 ha of an Oxisol under three different managements), results indicated that the number of representative samples needed to determine soil penetration resistance did not vary with the management system or the sampling depth. The best representativeness of the mean penetration resistance results occurred for n > 15 (PD and CP) or n > 20 (0-0.10 m) and 15 (0.20-0.60 m) in the case of PC. A sample population of n > 10 in the 0-0.60 m layer, for both sampling forms and soil managements, provided high data accuracy, making the statistical parameters more reliable, with homogeneous results and linearity in the sample-population curves from a sampling error of 10%.
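The classical sample-size reasoning used above (confidence-interval half-width held within a tolerated sampling error of the mean) can be sketched as follows (the standard deviation and mean below are illustrative, not the study's values):

```python
import math

def required_samples(std, mean, rel_error=0.10, z=1.96):
    """Number of replications n so that the confidence-interval half-width
    z * std / sqrt(n) is within rel_error * mean (classical theory):
    n = (z * CV / rel_error)^2, with coefficient of variation CV = std/mean."""
    cv = std / mean
    return math.ceil((z * cv / rel_error) ** 2)

# e.g., penetration resistance with mean 2.0 MPa, std 0.5 MPa, 10% error
n = required_samples(std=0.5, mean=2.0)
```

Tightening the tolerated sampling error from 10% to 5% roughly quadruples the required number of replications, which is why the study's required n grows for the more variable layers.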
Veraart, Almut
2011-01-01
This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures, such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and ... of a highly active jump process. Finally, we investigate the impact of jumps on inference on volatility empirically, where we study high-frequency data from the Standard & Poor's Depository Receipt (SPY).
Ogawa, Y; Wada, B; Taniguchi, K; Miyasaka, S; Imaizumi, K
2015-12-01
This study clarifies the anthropometric variation of the Japanese face by presenting large-sample population data of photo-anthropometric measurements. The measurements can be used as standard reference data for the personal identification of facial images in forensic practice. To this end, three-dimensional (3D) facial images of 1126 Japanese individuals (865 male and 261 female, aged 19-60 years) were acquired as samples using an already validated 3D capture system, and normative anthropometric analysis was carried out. In this analysis, first, anthropological landmarks (22 items, e.g., entocanthion (en), alare (al), cheilion (ch), zygion (zy), gonion (go), sellion (se), gnathion (gn), labrale superius (ls), stomion (sto), labrale inferius (li)) were positioned on each 3D facial image (the direction of which had been adjusted to the Frankfort horizontal plane as the standard position for appropriate anthropometry), and anthropometric absolute measurements (19 items, e.g., bientocanthion breadth (en-en), nose breadth (al-al), mouth breadth (ch-ch), bizygomatic breadth (zy-zy), bigonial breadth (go-go), morphologic face height (se-gn), upper-lip height (ls-sto), lower-lip height (sto-li)) were exported using computer software for the measurement of a 3D digital object. Second, anthropometric indices (21 items, e.g., (se-gn)/(zy-zy), (en-en)/(al-al), (ls-li)/(ch-ch), (ls-sto)/(sto-li)) were calculated from these exported measurements. As a result, basic statistics, such as the mean values, standard deviations, and quartiles, and details of the distributions of these anthropometric results were obtained. All of the results except the upper/lower lip ratio (ls-sto)/(sto-li) were normally distributed. They were acquired as carefully as possible employing a 3D capture system and 3D digital imaging technologies. The sample of images was much larger than any Japanese sample used before for the purpose of personal identification. The
Jang, Y D; Lindemann, M D; Agudelo-Trujillo, J H; Escobar, C S; Kerr, B J; Inocencio, N; Cromwell, G L
2014-10-01
the TC method and does not provide the same treatment difference as the TC digestibility for energy and nutrients that are not highly impacted by the dietary treatment. For the IM, ATTD values and fecal Cr concentrations stabilize by d 5 after initial feeding of diets containing Cr2O3. Pooling feces over at least 2 d for the IM appears to be needed to provide greater accuracy and lower variation than a single grab sample.
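The index-marker (IM) calculation referenced above relies on the marker (here Cr2O3) being indigestible, so digestibility follows from the marker and nutrient concentrations in diet and feces. A small sketch of the standard relation (the concentrations below are illustrative):

```python
def attd_indicator(marker_diet, marker_feces, nutrient_diet, nutrient_feces):
    """Apparent total tract digestibility by the index-marker method:
    ATTD = 1 - (marker_diet / marker_feces) * (nutrient_feces / nutrient_diet).
    All concentrations must be on the same basis (e.g., % of dry matter)."""
    return 1.0 - (marker_diet / marker_feces) * (nutrient_feces / nutrient_diet)

# e.g., Cr 4x more concentrated in feces than in diet, nutrient 0.8x:
attd = attd_indicator(marker_diet=0.25, marker_feces=1.0,
                      nutrient_diet=18.0, nutrient_feces=14.4)
```

Because the formula uses concentration ratios only, no total fecal collection (TC) is needed, which is the method's practical appeal.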
Pressure Stimulated Currents (PSC) in marble samples
F. Vallianatos
2004-06-01
The electrical behaviour of marble samples from Penteli Mountain was studied while the samples were subjected to uniaxial stress. The application of consecutive impulsive variations of uniaxial stress to thirty connatural samples produced Pressure Stimulated Currents (PSC). The linear relationship between the recorded PSC and the applied variation rate was investigated. The main results are the following: as long as the samples were under pressure corresponding to their elastic region, the maximum PSC value obeyed a linear law with respect to the pressure variation. In the plastic region, deviations were observed, which were due to variations of Young's modulus. Furthermore, a special burst form of PSC recordings during failure is presented. The latter is emitted when irregular longitudinal splitting is observed during failure.
Czesla, S.; Klocová, T.; Khalafinejad, S.; Wolter, U.; Schmitt, J. H. M. M.
2015-10-01
The center-to-limb variation (CLV) describes the brightness of the stellar disk as a function of the limb angle. Across strong absorption lines, the CLV can vary quite significantly. We obtained a densely sampled time series of high-resolution transit spectra of the active planet host star HD 189733 with UVES. Using the passing planetary disk of the hot Jupiter HD 189733 b as a probe, we study the CLV in the wings of the Ca ii H and K and Na i D1 and D2 Fraunhofer lines, which are not strongly affected by activity-induced variability. In agreement with model predictions, our analysis shows that the wings of the studied Fraunhofer lines are limb brightened with respect to the (quasi-)continuum. The strength of the CLV-induced effect can be on the same order as signals found for hot Jupiter atmospheres. Therefore, a careful treatment of the wavelength dependence of the stellar CLV in strong absorption lines is highly relevant in the interpretation of planetary transit spectroscopy. Based on observations made with UVES at the ESO VLT Kueyen telescope under program 089.D-0701(A).
Luengo-Oroz, Natividad; Torres, Pedro A.; Moure, David; D'Alessandro, Walter
2014-05-01
On 10 October 2011, a submarine volcanic eruption started 2 km south of El Hierro Island (Canary Islands, Spain). Since July 2011, a dense multiparametric monitoring network had been deployed all over the island by the Instituto Geográfico Nacional (IGN). By the time the eruption started, almost 10,000 earthquakes had been located and the deformation analyses showed a maximum deformation of more than 5 cm. After the end of the submarine eruption and up to now, several volcanic unrest processes have taken place on the island. The most relevant ones started in June 2012 and March 2013. Each of these periods has been evidenced by intense seismicity and ground deformation. In the framework of this volcanic surveillance program, the IGN team began to periodically sample five groundwater sampling sites. Some parameters have been determined directly in the field (temperature, pH, electric conductivity and alkalinity), and collected samples have been analysed in the laboratory for major (Na, K, NH4, Ca, Mg, SO4, Cl, HCO3, CO3, NO3, NO2, PO4, SiO2, Br, F) and trace element (Be, Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Ag, Cd, Ba, Hg, Tl, Pb, Th, U) contents. In a few cases, samples for the chemical analysis of dissolved gases and for the determination of the isotopic composition of He have been collected at two of the sites. Significant increases in alkalinity have been recorded at all sampling sites, correlated both with the eruptive period and with the following unrest episodes. Such increases are probably related to the dissolution of magmatic CO2 exsolved from the rising magma batches. The magmatic contribution is confirmed by the isotopic composition of dissolved He, showing values in the range from 7.76 to 8.91 R/Ra. Since July 2011, only one important CO2 soil degassing anomaly has been detected. This anomalous flux (620 g/m2·d) was measured in a small area (0.36 km2) before the beginning of the submarine eruption and has not been detected again after the eruption onset.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package, built on the PyEvolve toolkit, that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring divergence, Vestige expands the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions; examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genomes of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational ...
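Model-based footprinting as implemented in Vestige is beyond a short sketch, but the baseline it improves on, scoring conservation along an alignment of orthologous regions, can be illustrated. A minimal sketch under stated assumptions: the toy alignment and the `window_identity` helper are invented for illustration, and a gap-aware column-identity score stands in for Vestige's likelihood-based models.

```python
# Naive sliding-window conservation score over a multiple alignment:
# a column counts as conserved only if all sequences agree and none has
# a gap. Vestige itself uses probabilistic evolutionary models; this
# only illustrates the window-identity baseline.

def window_identity(alignment, window=4):
    """Fraction of fully conserved columns in each window."""
    length = len(alignment[0])
    assert all(len(seq) == length for seq in alignment)
    conserved = [len(set(col)) == 1 and '-' not in col
                 for col in zip(*alignment)]
    return [sum(conserved[i:i + window]) / window
            for i in range(length - window + 1)]

aln = ["ACGTACGTAA",
       "ACGTACGAAA",
       "ACGTTCGTAA"]
scores = window_identity(aln, window=4)
print(scores)  # peaks mark candidate conserved ("footprint") regions
```

Windows with identity well above the neutral background would be flagged as candidate footprints; a model-based approach replaces the identity score with a per-window divergence estimate.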
Torre de la Sanchez, M. L.; Grande Gil, J. A.; Garrido Morillo, R. [Escuela Politecnica Superior La Rabida. Palos de La Frontera. Huelva (Spain)
1999-05-01
Porous ceramic capsules are a very useful tool for water sampling in unsaturated zones. However, several authors have found variations in the pH of the water collected by such capsules. This paper compares the variation in pH in distilled water and in saline solutions of different concentrations. A large increase in pH was found, due to the release and subsequent precipitation of calcium ions from the ceramic. (Author) 5 refs.
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
49 CFR, Title 49 (Transportation), Vol. 3, 2010 edition. Transportation of Hazardous Liquids by Pipeline, Operation and Maintenance, § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a ...
Primary oxidation variation and distribution of uranium and thorium in a lava flow.
Watkins, N D; Holmes, C W; Haggerty, S E
1967-02-03
An Icelandic basalt lava flow has a systematic oxidation variation, formed during the initial cooling, with a resultant maximum oxidation just below the center of the lava. The ratio of thorium to uranium shows a clear dependence on this primary oxidation variation. Between-lava comparisons of thorium and uranium may be critically dependent on the position of the samples in each lava.
A Maximum-Entropy Method for Estimating the Spectrum
Anonymous
2007-01-01
Based on the maximum-entropy (ME) principle, a new power spectral estimator for random waves is derived in the form S̃(ω) = (a/8) H̄² (2π)^(d+1) ω^(−(d+2)) exp[−b(2π/ω)^n], by solving a variational problem subject to some quite general constraints. This robust method is comprehensive enough to describe wave spectra even in extreme wave conditions, and is superior to the periodogram method, which is unsuitable for comparatively short or strongly unsteady signals because of its severe boundary effects and some inherent defects of the FFT. The newly derived estimator works fairly well even when the sample data sets are very short and unsteady, and its reliability and efficiency have been preliminarily demonstrated.
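The closed-form estimator can be evaluated directly once its parameters are known. A minimal sketch, assuming illustrative (not fitted) values for a, b, d and n, confirming that the form has the expected single interior peak:

```python
import math

# Evaluate the ME spectral form
#   S(w) = (a/8) * H**2 * (2*pi)**(d+1) * w**(-(d+2)) * exp(-b*(2*pi/w)**n)
# with illustrative (not fitted) parameter values.

def me_spectrum(w, H=1.0, a=1.0, b=1.0, d=4.0, n=2.0):
    return ((a / 8.0) * H**2 * (2 * math.pi)**(d + 1)
            * w**(-(d + 2)) * math.exp(-b * (2 * math.pi / w)**n))

# The exponential factor suppresses low frequencies and the power law
# suppresses high ones, so the spectrum peaks once, at
# w = 2*pi*(b*n/(d+2))**(1/n).
ws = [0.1 * k for k in range(1, 200)]
vals = [me_spectrum(w) for w in ws]
peak = ws[vals.index(max(vals))]
print(round(peak, 2))
```

In a real application a, b, d and n would be fitted to sample moments of the wave record, as the paper describes.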
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$-time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs distribution \cite{V99}, to design a faster algori...
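For context, the classical baseline these running times improve on is augmenting-path search. A minimal sketch of Kuhn's O(VE) augmenting-path algorithm for the bipartite special case (the general-graph blossom machinery of \cite{MV80} is far more involved); the function and variable names are invented:

```python
# Kuhn's augmenting-path algorithm for bipartite maximum matching.
# Left vertices are 0..len(adj)-1; adj[u] lists adjacent right vertices.

def max_bipartite_matching(adj, n_right):
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its partner can be rematched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            size += 1
    return size

# 3 left and 3 right vertices; a perfect matching exists.
adj = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(adj, 3))  # 3
```

Each call to `try_augment` either extends the matching or proves no augmenting path starts at that vertex, which is the invariant the faster algorithms exploit in bulk.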
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length greater than 476.70 mm were definitely male and those less than 379.99 mm definitely female, while for left bones, femora with maximum length greater than 484.49 mm were definitely male and those less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
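The demarking-point rule reported above is simply a pair of thresholds with an indeterminate zone in between. A sketch of its application to the right side, using the threshold values from the abstract (the classification function is invented for illustration):

```python
# Demarking-point classification for right femora, thresholds (in mm)
# taken from the abstract. Lengths between the two demarking points
# cannot be sexed with certainty and stay indeterminate.

RIGHT_DP_MALE = 476.70    # above this: definitely male
RIGHT_DP_FEMALE = 379.99  # below this: definitely female

def classify_right_femur(length_mm):
    if length_mm > RIGHT_DP_MALE:
        return "male"
    if length_mm < RIGHT_DP_FEMALE:
        return "female"
    return "indeterminate"

print(classify_right_femur(480.0))  # male
print(classify_right_femur(450.0))  # indeterminate
print(classify_right_femur(375.0))  # female
```

The wide indeterminate band explains the low identification percentages reported: most specimens fall between the two demarking points.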
Wada, Yasuhiko; Koizumi, Akio; Yoshinaga, Takeo; Harada, Kouji; Inoue, Kayoko; Morikawa, Akiko; Muroi, Junko; Inoue, Sumiko; Eslami, Bita; Hirosawa, Iwao; Hirosawa, Akitsu; Fujii, Shigeo; Fujimine, Yoshinori; Hachiya, Noriyuki; Koda, Shigeki; Kusaka, Yukinori; Murata, Katsuyuki; Nakatsuka, Haruo; Omae, Kazuyuki; Saito, Norimitsu; Shimbo, Shinichiro; Takenaka, Katsunobu; Takeshita, Tatsuya; Todoriki, Hidemi; Watanabe, Takao; Ikeda, Masayuki
2005-05-01
A retrospective exposure assessment among the general population for polybrominated diphenyl ethers (PBDEs) was conducted using dietary surveys. We analyzed samples of food duplicate portions collected in the early 1980s (1980 survey: N=40) and the mid 1990s (1995 survey: N=39) from female subjects (5 participants from each of 8 sites per survey except for one site) living throughout Japan, from the north (Hokkaido) to the south (Okinawa). The study populations in the 1980 and 1995 surveys were different, but lived in the same communities. We measured four PBDE congeners [2,2',4,4'-tetrabrominated diphenyl ether (tetraBDE): #47; 2,2',4,4',5-pentaBDE: #99; 2,2',4,4',6-pentaBDE: #100; and 2,2',4,4',5,5'-hexaBDE: #153] in the diet. #99 was the most abundant congener in the diet (49% of the total PBDEs), followed by #47 (33%), #100 (12%) and #153 (6%). Regional variations found in the 1980 survey decreased in the 1995 survey. The total daily intake of PBDEs (ng/d) [GM (GSD)] in the 1980 survey [91.4 (4.1)] was not significantly different from that in the 1995 survey [93.8 (3.4)] for the total population, nor did it differ among the sites including Shimane, in which a 20-fold increase in serum concentrations was observed in the same population1). In consideration of the significant increases in the serum concentration, inhalation may be more important than food ingestion as the route of human exposure to PBDEs.
Neal, R M
2000-01-01
Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal `slice' defined by the current vertical position, or more generally, with some update that leaves the uniform distribution over this slice invariant. Variations on such `slice sampling' methods are easily implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and more efficient than simple Metropolis updates, due to the ability of slice sampling to adaptively choose the magnitude of changes made. It is therefore attractive f...
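A minimal sketch of the univariate procedure described above, with "stepping out" to bracket the slice and shrinkage on rejection. It assumes an unnormalized positive density f and is a simplified rendering, not Neal's full algorithm:

```python
import math
import random

# Univariate slice sampler with stepping out and shrinkage, targeting an
# unnormalized positive density f. The width w only affects efficiency,
# not the stationary distribution.

def slice_sample(f, x0, n, w=1.0, rng=random):
    xs, x = [], x0
    for _ in range(n):
        y = rng.random() * f(x)      # auxiliary height under the density
        left = x - rng.random() * w  # randomly position an initial bracket
        right = left + w
        while f(left) > y:           # step out until the slice is bracketed
            left -= w
        while f(right) > y:
            right += w
        while True:                  # shrink the bracket until acceptance
            x1 = left + rng.random() * (right - left)
            if f(x1) > y:
                x = x1
                break
            if x1 < x:
                left = x1
            else:
                right = x1
        xs.append(x)
    return xs

random.seed(0)
draws = slice_sample(lambda t: math.exp(-0.5 * t * t), 0.0, 5000)
mean = sum(draws) / len(draws)
print(round(mean, 2))  # should be near 0 for a standard normal target
```

Note how the bracket adapts to the local scale of the density at every step, which is the property the abstract credits for slice sampling's efficiency over fixed-scale Metropolis updates.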
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced ...
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Kim, Hojin; Li, Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing, Lei
2012-07-01
A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the same in both cases. For the
史海芳; 李树有; 姬永刚
2008-01-01
For two normal populations with unknown means μ1, μ2 and variances σ1², σ2² > 0, assume that there is a semi-order restriction between the ratios of means to standard deviations, and that the sample sizes of the two populations differ. A procedure for obtaining the maximum likelihood estimators of the μi and σi under the semi-order restriction is proposed. For the i = 3 case, some related results and simulations are given.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element, defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. For a quenching duration of 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
Dall'Osto, M.; Querol, X.; Amato, F.; Karanasiou, A.; Lucarelli, F.; Nava, S.; Calzolai, G.; Chiari, M.
2013-04-01
Hourly-resolved aerosol chemical speciation data can be a highly powerful tool to determine the source origin of atmospheric pollutants in urban environments. Aerosol mass concentrations of seventeen elements (Na, Mg, Al, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Sr and Pb) were obtained from time- (1 h) and size- (PM2.5, particulate matter < 2.5 μm) resolved aerosol samples analysed by Particle Induced X-ray Emission (PIXE) measurements. In the Marie Curie European Union framework of SAPUSS (Solving Aerosol Problems by Using Synergistic Strategies), the approach used was simultaneous sampling at two monitoring sites in Barcelona (Spain) during September-October 2010: an urban background site (UB) and a street canyon traffic road site (RS). Elements related to primary non-exhaust traffic emission (Fe, Cu), dust resuspension (Ca) and anthropogenic Cl were found enhanced at the RS, whereas industry-related trace metals (Zn, Pb, Mn) were found at higher concentrations at the more ventilated UB site. When receptor modelling was performed with positive matrix factorization (PMF), nine different aerosol sources were identified at both sites: three types of regional aerosols (regional sulphate (S), 27%; biomass burning (K), 5%; sea salt (Na-Mg), 17%), three types of dust aerosols (soil dust (Al-Ti), 17%; urban crustal dust (Ca), 6%; primary traffic non-exhaust brake dust (Fe-Cu), 7%), and three types of industrial plume-like aerosol events (shipping oil combustion (V-Ni), 17%; industrial smelters (Zn-Mn), 3%; industrial combustion (Pb-Cl), 5%); the percentages given are average source contributions to the total elemental mass measured. The validity of the PMF solution of the PIXE data is supported by very good correlations with external single particle mass spectrometry measurements. Some important conclusions can be drawn about the PM2.5 mass fraction simultaneously measured at the UB and RS sites: (1) the regional aerosol sources impact both
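The receptor-modelling step can be illustrated with a toy factorization. A sketch using plain NMF multiplicative updates (Lee-Seung) on a synthetic elements-by-hours matrix; note that PMF proper additionally weights each residual by its measurement uncertainty, which this sketch omits, and all matrix sizes and seeds are invented:

```python
import numpy as np

# Toy receptor modelling: factor a nonnegative elements-by-hours matrix X
# into source profiles W (17 x k) and time contributions H (k x hours)
# using Lee-Seung multiplicative updates, which preserve nonnegativity.

rng = np.random.default_rng(0)
true_W = rng.random((17, 3))   # 17 elements, 3 hidden "sources"
true_H = rng.random((3, 240))  # 240 hourly samples
X = true_W @ true_H            # exactly rank-3 synthetic data

k = 3
W = rng.random((17, k)) + 0.1
H = rng.random((k, 240)) + 0.1
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(rel_err < 0.1)  # the factorization should closely reconstruct X
```

In the study, the columns of W would be interpreted as elemental source profiles (e.g. Fe-Cu brake dust) and the rows of H as their hourly contributions.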
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
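The least-biased distribution the review refers to can be computed directly in simple cases. A sketch, assuming a finite state space and a single mean constraint, where the MaxEnt solution takes the exponential-family form p_i ∝ exp(λx_i) and λ is found by bisection (the function name and all numbers are invented):

```python
import math

# Among all distributions over states xs with a fixed mean, the maximum
# entropy distribution is the Gibbs form p_i proportional to exp(lam*x_i).
# lam is found by bisection on the monotone map lam -> mean.

def maxent_given_mean(xs, target_mean, lo=-50.0, hi=50.0):
    def mean_at(lam):
        ws = [math.exp(lam * x) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_at(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

# A loaded "die" whose mean face value is constrained to be 4.5: the
# MaxEnt solution tilts probability smoothly toward the higher faces.
p = maxent_given_mean([1, 2, 3, 4, 5, 6], 4.5)
print([round(q, 3) for q in p])
```

Any distribution matching the constraint with lower entropy would encode extra, unjustified structure; that is the "least bias" property the review invokes.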
Nielsen, Søren R. K.; Köyüoglu, H. U.; Cakmak, A. S.
The maximum softening concept is based on the variation of the vibrational periods of a structure during a seismic event. Maximum softening damage indicators, which measure the maximum relative stiffness reduction caused by stiffness and strength deterioration of the actual structure, are calculated ...
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data recast into modern form, together with guidelines and recent data collected by the author, provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension Fmax = c^4/4G, represented by the entropic force, can be abolished. Among them are varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
A Digital Coreless Maximum Power Point Tracking Circuit for Thermoelectric Generators
Kim, Shiho; Cho, Sungkyu; Kim, Namjae; Baatar, Nyambayar; Kwon, Jangwoo
2011-05-01
This paper describes a maximum power point tracking (MPPT) circuit for thermoelectric generators (TEG) without a digital controller unit. The proposed method uses an analog tracking circuit that samples the half point of the open-circuit voltage without a digital signal processor (DSP) or microcontroller unit for calculating the peak power point using iterative methods. The simulation results revealed that the MPPT circuit, which employs a boost-cascaded-with-buck converter, handled rapid variation of temperature and abrupt changes of load current; this method enables stable operation with high power transfer efficiency. The proposed MPPT technique is a useful analog MPPT solution for thermoelectric generators.
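The half-of-open-circuit-voltage rule the circuit implements follows from modelling the TEG as a Thevenin source. A numerical sketch with invented values (V_OC, R_INT and the sweep grid are illustrative), showing that output power peaks at V_oc/2:

```python
# A TEG behaves approximately as a Thevenin source with open-circuit
# voltage V_OC and internal resistance R_INT, so the power delivered at
# operating voltage v is v * (V_OC - v) / R_INT, maximized at v = V_OC/2.
# That is the operating point the analog circuit samples and holds.

V_OC = 4.0   # open-circuit voltage at the current temperature gradient [V]
R_INT = 2.0  # internal resistance of the TEG [ohm]

def output_power(v):
    return v * (V_OC - v) / R_INT

vs = [0.01 * k for k in range(1, 400)]  # sweep 0.01 V .. 3.99 V
best_v = max(vs, key=output_power)
print(round(best_v, 2))  # the peak sits at V_OC / 2
```

Because the optimum is simply half of a sampled quantity, no iterative perturb-and-observe loop (and hence no DSP) is required, which is the point of the analog design.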
董丹宏; 黄刚
2015-01-01
Based on daily maximum and minimum temperature data from 740 homogenized surface meteorological stations, the present study investigates the regional characteristics of the temperature trend and the dependence of maximum and minimum temperature and diurnal temperature range changes on altitude during the period 1963-2012. It is found that the magnitude of the minimum temperature increase is larger than that of the maximum temperature increase. The significantly warming areas are located at high altitude, and all increase remarkably in size during the study period. The maximum and minimum temperature and diurnal temperature range trends increase with altitude, except in spring. The correlation coefficients between the maximum temperature trend and altitude are the highest. At the same altitude, the amplitudes of maximum and minimum temperature changes are inconsistent: they exhibit increasing trends in the 1990s, with significant change at low altitude; they change minimally in the 1980s; and at high altitudes (above 2000 m), the magnitudes of their changes are weak before the 1990s but stronger in the last 10 years of the study period. The seasonal variability of the diurnal temperature range is large above 2000 m, decreasing in summer but increasing in winter. Before the 1990s, there is no significant relationship between maximum and minimum temperature trends and altitude; however, their trends almost all decrease and then increase with altitude in the last 20 years. Additionally, the climate response in highland areas is more sensitive than that in lowland areas.
Rao, P.P.S.
From the lipid fraction of frozen samples of Sargassum johnstonii, the unsaponifiable part was extracted with diethyl ether to isolate total sterols. Sterols were extracted from samples collected over a period of nine months and tested against test bacteria...
A. V. Chernyshev
2017-01-01
In eddy-current thickness measurement of two-layer conductive objects, one of the interfering factors is variation in the electrical conductivity of the upper-layer (coating) material from point to point on the surface of the test object, or from one test object to another. The aim of this work is to evaluate the accuracy of determining the thickness of a conductive coating on a conducting ferromagnetic base using the phase method of eddy-current testing, when the source of error is variation in the electrical conductivity of the coating material. The error is determined from calculations using known analytical expressions for a loop carrying a sinusoidal current placed over an infinite half-space covered by a thin layer. The electromagnetic parameters of the coating and substrate chosen for the calculations approximately correspond to a chromium layer on a nickel base. Calculations are performed for different frequencies of the current passed through the coil. It is shown that reducing the frequency of the current through the coil reduces the error. The lowest possible operating frequency of the excitation current is determined by the condition that variations in the thickness of the base have no influence on the phase of the emf induced in the superimposed transducer. To reduce the indicated error, it is proposed first to determine the conductivity of the coating material using the phase method at a relatively high excitation frequency, and then to determine the coating thickness at a low excitation frequency, again by the phase method, taking into account the previously determined coating conductivity. Ways to improve the accuracy of phase measurements in the MHz region of the excitation frequency are also discussed.
GUAN Hsin; WANG Bo; LU Pingping; XU Liang
2014-01-01
The identification of the maximum road friction coefficient and optimal slip ratio is crucial to vehicle dynamics and control. However, it is not easy to identify the maximum road friction coefficient with high robustness and good adaptability to various vehicle operating conditions, and existing investigations of robust identification remain unsatisfactory. In this paper, an identification approach based on road type recognition is proposed for robust identification of the maximum road friction coefficient and optimal slip ratio. The instantaneous road friction coefficient is estimated by recursive least squares with a forgetting factor, based on the single-wheel model, and the estimated road friction coefficients and slip ratios are grouped into a set of samples from a small time interval before the current time, updated as time progresses. The current road type is recognized by comparing the samples of the estimated road friction coefficient with the standard road friction coefficient of each typical road, using minimum statistical error as the recognition principle to improve identification robustness. Once the road type is recognized, the maximum road friction coefficient and optimal slip ratio are determined. Numerical simulation tests are conducted on two typical road friction conditions (single-friction and joint-friction) using CarSim software. The test results show little identification error between the identified maximum road friction coefficient and the value pre-set in CarSim. The proposed identification method is robust to external disturbances, adapts well to various vehicle operating conditions and road variations, and its results can be used for the adjustment of vehicle active safety control strategies.
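The estimator named above, recursive least squares with a forgetting factor, reduces to a few update equations. A single-parameter sketch with synthetic data (the regressors, noise level and "true" friction value are invented; the real method uses the single-wheel model to form the regressor):

```python
import random

# Scalar recursive least squares with forgetting factor lam, tracking a
# friction-like parameter mu from noisy measurements y_k = phi_k*mu + noise.

def rls_forgetting(phis, ys, lam=0.95, theta0=0.0, p0=100.0):
    theta, p = theta0, p0
    for phi, y in zip(phis, ys):
        k = p * phi / (lam + phi * p * phi)    # Kalman-like gain
        theta = theta + k * (y - phi * theta)  # correct the estimate
        p = (p - k * phi * p) / lam            # discount old information
    return theta

random.seed(1)
true_mu = 0.85                     # e.g. a dry-asphalt-like value
phis = [1.0] * 200                 # unit regressor for simplicity
ys = [true_mu + random.gauss(0, 0.05) for _ in phis]
est = rls_forgetting(phis, ys)
print(round(est, 2))
```

The forgetting factor (here 0.95) weights recent samples more heavily, which is what lets the estimate track a friction coefficient that changes when the road surface changes.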
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true; for 3-regular graphs, however, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given, and some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given compact metric space. For a given argument $f$, it coincides with the set of all probability measures concentrated on the set of points maximizing $f$ on the compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine whether there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large, untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. We suggest that, in future, PSHA modelers be candid about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
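One way to make such a distribution concrete is the classic combination of a Poisson occurrence model with a truncated Gutenberg-Richter magnitude law, from which the probability that the maximum magnitude in a future window stays below m follows directly. This is a generic extreme-value sketch with illustrative parameter values, not the construction used in the study:

```python
import math

def prob_max_below(m, rate, years, b=1.0, m_min=4.0, m_max=8.0):
    """P(largest magnitude observed in `years` is <= m), assuming
    magnitudes follow a truncated Gutenberg-Richter law with b-value
    `b` on [m_min, m_max] and event times are Poisson with `rate`
    events per year. All parameter values are illustrative."""
    if m >= m_max:
        return 1.0
    beta = b * math.log(10.0)
    # CDF of the truncated exponential magnitude distribution
    F = (1.0 - math.exp(-beta * (m - m_min))) / \
        (1.0 - math.exp(-beta * (m_max - m_min)))
    # no event exceeding m in the window
    return math.exp(-rate * years * (1.0 - F))
```

The strong dependence on the assumed rate and window length is what makes testing a bare M estimate, reported without its own distribution, so difficult.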
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions: p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We call the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: in the first step, the optimization problem is solved without the equal-margin-posterior constraint from the two views; the constraint is then imposed in the second step. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion $(B_t)_{t\ge 0}$ and the equation of motion $dX_t = v_t\,dt + 2\,dB_t$, we set $S_t = \max_{0\le s\le t} X_s$ and consider the optimal control problem $\sup_v E(S_\tau - C\tau)$, where $C>0$ and the supremum is taken over all admissible controls $v$ satisfying $v_t \in [\mu_0, \mu_1]$ for all $t$ up to $\tau = \inf\{t>0 : X_t \notin (\ell_0, \ell_1)\}$. The optimal control switches between $\mu_0$ and $\mu_1$ at the curve $X_t = g_*(S_t)$, where $s \mapsto g_*(s)$ is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Performance of penalized maximum likelihood in estimation of genetic covariance matrices
Meyer Karin
2011-11-01
Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation of estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well as if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should …
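The "shrink the genetic towards the phenotypic correlation matrix" idea can be illustrated with a minimal linear shrinkage; this is a stand-in for the penalized REML estimator, with the tuning factor treated as given rather than estimated by cross-validation:

```python
import numpy as np

def shrink_genetic_correlations(R_g, R_p, tuning):
    """Shrink the genetic correlation matrix R_g towards the phenotypic
    correlation matrix R_p. `tuning` in [0, 1] plays the role of a
    tuning factor (0 = no penalty, 1 = full shrinkage). A simple linear
    shrinkage sketch, not the paper's likelihood-based algorithm."""
    t = float(tuning)
    R = (1.0 - t) * np.asarray(R_g, dtype=float) \
        + t * np.asarray(R_p, dtype=float)
    np.fill_diagonal(R, 1.0)  # a correlation matrix has unit diagonal
    return R
```

Because genetic correlations are estimated far less precisely than phenotypic ones, even a mild pull towards R_p can reduce sampling variance substantially at the cost of a small bias.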
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
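In simplified form, the decoder correlates the received samples against each hypothesized signal and picks the hypothesis with the largest statistic. The sketch below replaces the MAP phase estimator with a plain matched-filter magnitude (no phase-perturbation prior), so it only illustrates the hypothesis-selection step:

```python
import numpy as np

def classify_phase_coded(received, hypotheses):
    """Return (best_index, statistics): the magnitude of the complex
    correlation of `received` with each hypothesized phase-coded
    signal, and the index of the largest statistic. Using |.| makes
    the decision invariant to an unknown global phase offset."""
    stats = [abs(np.vdot(h, received)) for h in hypotheses]
    return int(np.argmax(stats)), stats
```

The patent's generalized estimator-correlator additionally weights these correlations by MAP phase estimates, which matters when the perturbations are more than a common offset.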
Loveday, J; Baldry, I K; Bland-Hawthorn, J; Brough, S; Brown, M J I; Driver, S P; Kelvin, L S; Phillipps, S
2015-01-01
We describe modifications to the joint stepwise maximum likelihood method of Cole (2011) in order to simultaneously fit the GAMA-II galaxy luminosity function (LF), corrected for radial density variations, and its evolution with redshift. The whole sample is reasonably well fit with luminosity (Qe) and density (Pe) evolution parameters (Qe, Pe) = (1.0, 1.0), but with significant degeneracies characterized by Qe = 1.4 - 0.4Pe. Blue galaxies exhibit larger luminosity density evolution than red galaxies, as expected. We present the evolution-corrected r-band LF for the whole sample and for blue and red sub-samples, using both Petrosian and Sersic magnitudes. Petrosian magnitudes miss a substantial fraction of the flux of de Vaucouleurs profile galaxies: the Sersic LF is substantially higher than the Petrosian LF at the bright end.
Arain, Mariam S; Afridi, Hassan Imran; Kazi, Tasneem Gul; Kazi, Atif; Naeemullah; Ali, Jamshed; Arain, Salma Aslam; Panhwar, Abdul Haleem
2015-11-01
There is very limited information available on the role of trace elements in psychiatric disorders (PSD). Considerable evidence supports the idea that exposure to trace and toxic metals, such as aluminum (Al) and manganese (Mn), may be factors or cofactors in the etiopathogenesis of a variety of psychiatric disorders. The aim of our study was to assess Al and Mn in scalp hair samples of 102 male patients, aged 45-60 years, with different types of PSD, together with 120 referent subjects. The elements in scalp hair samples were determined by flame atomic absorption spectrophotometry after microwave-assisted acid digestion. The validity of the methodology was checked against certified human hair reference material (NCS ZC81002); recoveries of the studied elements were in the range of 98.1-99.2% of the certified values. The results showed that the mean values of Al and Mn were significantly higher in scalp hair samples of all types of PSD than in referent subjects. The data indicate a significant increase in Mn and Al contents in scalp hair of psychiatric patients relative to controls, which may provide a prognostic tool for the diagnosis of mental disorders. However, further work is needed to examine the exact correlation between trace element levels and the degree of disorder.
Rogan, Joanne C.; Keselman, H. J.
1977-01-01
The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
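The phenomenon is easy to reproduce by Monte Carlo: simulate the F statistic with all true means equal, compare heterogeneous-variance groups against a critical value calibrated under homogeneity, and watch the Type I error drift from the nominal level. A self-contained sketch with illustrative group sizes and variances (pairing the larger variance with the smaller group, a known worst case):

```python
import random

def f_stat(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n_total
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ssb / (k - 1)) / (ssw / (n_total - k))

def type1_error_rate(ns, sds, reps=2000, alpha=0.05, seed=1):
    """Monte Carlo Type I error of the ANOVA F-test when group standard
    deviations are `sds` (all true means equal). The critical value is
    taken from a matching homogeneous-variance simulation, so only the
    heterogeneity drives any departure from `alpha`."""
    rng = random.Random(seed)
    null = sorted(f_stat([[rng.gauss(0, 1) for _ in range(n)] for n in ns])
                  for _ in range(reps))
    crit = null[int((1 - alpha) * reps)]
    hits = sum(f_stat([[rng.gauss(0, sd) for _ in range(n)]
                       for n, sd in zip(ns, sds)]) > crit
               for _ in range(reps))
    return hits / reps
```

With n = (5, 20) and sd = (3, 1) the empirical rate is well above the nominal 5%, illustrating the non-robustness the abstract reports (which extends, in their results, even to some equal-n designs).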
Devour, Brian M.; Bell, Eric F.
2016-06-01
We study the relative dust attenuation-inclination relation in 78 721 nearby galaxies using the axis ratio dependence of optical-near-IR colour, as measured by the Sloan Digital Sky Survey, the Two Micron All Sky Survey, and the Wide-field Infrared Survey Explorer. To avoid attenuation-driven biases as far as possible, we carefully select galaxies using dust attenuation-independent near- and mid-IR luminosities and colours. Relative u-band attenuation between face-on and edge-on disc galaxies along the star-forming main sequence varies from ˜0.55 mag up to ˜1.55 mag. The strength of the relative attenuation varies strongly with both specific star formation rate and galaxy luminosity (or stellar mass). The dependence of relative attenuation on luminosity is not monotonic, but rather peaks at M3.4 μm ≈ -21.5, corresponding to M* ≈ 3 × 10^10 M⊙. This behaviour seemingly stands in contrast to some older studies; we show that older works failed to probe reliably to higher luminosities and were insensitive to the decrease in attenuation with increasing luminosity for the brightest star-forming discs. Back-of-the-envelope scaling relations predict the strong variation of dust optical depth with specific star formation rate and stellar mass. More in-depth comparisons using the scaling relations to model the relative attenuation require the inclusion of star-dust geometry to reproduce the details of these variations (especially at high luminosities), highlighting the importance of these geometrical effects.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies, which cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with the limited knowledge we have about the processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
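The maximum entropy step can be illustrated for a toy superposition model: among all species fractions consistent with an observed mean depth of shower maximum, MaxEnt selects the exponential-family solution. This sketch constrains only the first moment for brevity (the paper uses the two lowest moments), and the depth values are illustrative:

```python
import math

def maxent_fractions(depths, target_mean, lam_lo=-0.2, lam_hi=0.2, iters=100):
    """Maximum-entropy species fractions subject to the single
    constraint sum(p_i * depths[i]) = target_mean, where depths[i] is
    the mean shower-maximum depth of species i. MaxEnt gives the form
    p_i proportional to exp(lam * depths[i]); lam is found by bisection
    (depths are centred to keep exp() well behaved)."""
    d0 = sum(depths) / len(depths)

    def fractions(lam):
        w = [math.exp(lam * (d - d0)) for d in depths]
        s = sum(w)
        return [wi / s for wi in w]

    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        mean = sum(p * d for p, d in zip(fractions(lam), depths))
        if mean < target_mean:   # the constrained mean increases with lam
            lam_lo = lam
        else:
            lam_hi = lam
    return fractions(0.5 * (lam_lo + lam_hi))
```

Adding the second-moment constraint introduces a second Lagrange multiplier but leaves the exponential-family structure, and with it the "least committed" character of the inferred composition, unchanged.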
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
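For intuition, the Kirchhoff index of any small graph can be computed directly from the Laplacian spectrum via the identity Kf(G) = n * sum of 1/mu over the non-zero Laplacian eigenvalues, which is equivalent to summing all pairwise resistance distances. A sketch for 0/1 adjacency matrices of connected graphs:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index of a connected graph given as a 0/1 adjacency
    matrix: n times the sum of reciprocals of the non-zero Laplacian
    eigenvalues (equals the sum of resistance distances over pairs)."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    eig = np.linalg.eigvalsh(L)             # symmetric eigenvalues
    nonzero = eig[eig > 1e-9]               # drop the zero eigenvalue
    return len(A) * float(np.sum(1.0 / nonzero))
```

For the triangle C3 every pair sits at resistance 2/3 (a 1-ohm edge in parallel with a 2-ohm path), giving Kf = 2; for the path P3 the pairwise resistances are 1, 1 and 2, giving Kf = 4.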
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
2014-09-30
a new method using pressurized fluid extraction (PFE) and gel permeation chromatography (GPC) coupled to liquid chromatography tandem mass spectrometry (LC-MS/MS) ... of lipid extraction from tissue using PFE and GPC. PFE is better able to extract total lipid from solid tissues than other extraction methods ... analysis using bottlenose dolphin samples are merited. Cortisol concentrations in the blubber using the PFE-LC-MS/MS method are lower than would be
2014-09-30
in the process of developing and validating a new method using pressurized fluid extraction (PFE) and gel permeation chromatography (GPC) coupled to liquid ... contaminants in blubber has shown the effectiveness of lipid extraction from tissue using PFE and GPC. PFE is better able to extract total lipid from ... bottlenose dolphin samples are merited. Cortisol concentrations in the blubber using the PFE-LC-MS/MS method are lower than would be expected
Sharpe, Emma; Wallis, Deborah J; Ridout, Nathan
2016-06-30
This study aimed to: (i) determine if the attention bias towards angry faces reported in eating disorders generalises to a non-clinical sample varying in eating disorder-related symptoms; (ii) examine if the bias occurs during initial orientation or later strategic processing; and (iii) confirm previous findings of impaired facial emotion recognition in non-clinical disordered eating. Fifty-two females viewed a series of face-pairs (happy or angry paired with neutral) whilst their attentional deployment was continuously monitored using an eye-tracker. They subsequently identified the emotion portrayed in a separate series of faces. The highest (n=18) and lowest scorers (n=17) on the Eating Disorders Inventory (EDI) were compared on the attention and facial emotion recognition tasks. Those with relatively high scores exhibited impaired facial emotion recognition, confirming previous findings in similar non-clinical samples. They also displayed biased attention away from emotional faces during later strategic processing, which is consistent with previously observed impairments in clinical samples. These differences were related to drive-for-thinness. Although we found no evidence of a bias towards angry faces, it is plausible that the observed impairments in emotion recognition and avoidance of emotional faces could disrupt social functioning and act as a risk factor for the development of eating disorders.
Polivka, Karl; Bennett, Rita L. [USDA Forest Service, Pacific Northwest Research Station, Wenatchee, WA
2009-03-31
We studied variation in productivity in headwater reaches of the Wenatchee subbasin over multiple field seasons, with the objectives of developing methods for monitoring headwater stream conditions at the subcatchment and stream levels, assigning a landscape-scale context via the effects of geoclimatic parameters on biological productivity (macroinvertebrates and fish), and using this information to identify how variability in productivity measured in fishless headwaters is transmitted to fish communities in downstream habitats. In 2008, we addressed this final objective. In collaboration with the University of Alaska Fairbanks, we found some broad differences in the production of aquatic macroinvertebrates and in fish abundance across categories that combine the effects of climate and management intensity within the subbasin (ecoregions). From a monitoring standpoint, production of benthic macroinvertebrates was not a good predictor of drifting macroinvertebrates and therefore might be a poor predictor of food resources available to fish. There is occasionally a correlation between drifting macroinvertebrate abundance and fish abundance, which suggests that headwater-derived resources are important. However, fish in the headwaters appeared to be strongly food-limited, and there was no evidence that fishless headwaters provided a consistent subsidy to fish in reaches downstream. Fish abundance and population dynamics in first-order headwaters may be linked with similar metrics further down the watershed. The relative strength of local dynamics and inputs into productivity may be constrained or augmented by large-scale biogeoclimatic control. Headwater streams are nested within watersheds, which are in turn nested within ecological subregions; thus, we hypothesized that local effects would not necessarily be mutually exclusive of large-scale influence. To test this, we examined the density of primarily salmonid fishes at several spatial and temporal scales.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One respect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency of systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued: a commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Eduardo Morteo
2012-06-01
Mark-recapture techniques are fundamental for assessing marine mammal population dynamics and individual temporal patterns. Since the biases imposed by field conditions are generally unknown, we simulated variations in sampling effort (m) and maximum individual catchability (rmax) to analyze their effects on residency levels measured through the number of recaptures (occurrence, O), duration of stay (permanence, P), and average recurrence (periodicity, I), relative to a reference level of exhaustive daily sampling frequency. The number of recorded individuals (Dr) was also used to assess the performance of the simulations. Results for standardized (s) parameters showed that occurrences (Os) were proportional to m and were not influenced by rmax. Individual permanence (Ps) and individual periodicity (Is) were 8-49% and 3-11.74 times lower than expected, respectively, depending on m and rmax. Also, Os, Ps, and Is were not influenced by study duration, so inter-study comparisons are feasible if m and rmax are similar. Dr was 68-92% (rmax = 0.01) and 1-8% (rmax = 1.0) lower than expected, depending on m. Longer studies were more accurate, but greater effort did not significantly increase Dr estimates. Bimonthly sampling frequencies (m = 0.07) were barely accurate, and predictions for incomplete datasets were poor. Survey field data were also analyzed from 14 published studies on 4 dolphin species and compared to daily sampling frequencies; the resulting values for Os, Ps, and Dr were 62.4-93.3%, 11.6-66.4%, and 2.4-33.8% lower than expected, respectively; Is was 2.3-7.3 times lower than expected. The model produced Dr values similar to population estimates from empirical data, and bias was smaller than 15% in 87.5% of cases, so simulation accuracy was deemed acceptable.
M. Dall'Osto
2013-04-01
Hourly-resolved aerosol chemical speciation data can be a highly powerful tool for determining the source origin of atmospheric pollutants in urban environments. Aerosol mass concentrations of seventeen elements (Na, Mg, Al, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Sr and Pb) were obtained at hourly time resolution in the PM2.5 size fraction (particulate matter finer than 2.5 μm), measured simultaneously at the UB and RS sites: (1) the regional aerosol sources impact both monitoring sites at similar concentrations regardless of their different ventilation conditions; (2) by contrast, local industrial aerosol plumes associated with shipping oil combustion and smelter activities have a higher impact on the more ventilated UB site; (3) a unique source of Pb-Cl (associated with combustion emissions) is found to be the major (82%) source of fine Cl in the urban agglomerate; (4) the mean diurnal variation of PM2.5 primary traffic non-exhaust brake dust (Fe-Cu) suggests that this source is mainly freshly emitted rather than resuspended, whereas PM2.5 urban dust (Ca) is mainly resuspended by both the traffic vortex and the sea breeze; (5) urban dust (Ca) is the aerosol source most affected by land wetness, being reduced by a factor of eight during rainy days, suggesting that wetting roads may be a means of reducing urban dust concentrations.
Tiago Neves Pereira Valente
2011-07-01
The objective of this study was to evaluate the efficiency of using nylon textile (50 μm), F57 (Ankom®) and non-woven textile (NWT, 100 g/m²) bags in the laboratory evaluation of neutral detergent fiber (NDF), using quantitative filter paper as a purified cellulose standard and simulating different sample compositions by additions of corn starch, pectin, casein and soybean oil. The quantitative filter paper was processed in a knife mill with a 1-mm screen sieve, and the NDF analyses were performed in a fiber analyzer (Ankom220®). Four experiments were carried out with additions of different ingredients to the filter paper: corn starch at levels of 15 or 50%; pectin at 15 or 50%; casein at 10 or 30%; and soybean oil at 0, 5, 10, 15, 25 or 50% of dry matter, respectively. A ratio of 20 mg of dry matter per cm² of surface was used. Where relevant to the evaluated treatments, heat-stable α-amylase was used. The use of F57 and NWT resulted in accurate estimates of NDF contents, whereas nylon textile caused loss of insoluble fibrous particles, compromising the accuracy of the results. For samples containing starch, use of heat-stable α-amylase is recommended in the evaluation of NDF contents. Pectin and casein are completely solubilized by neutral detergent solution. Oil levels higher than 10% cause overestimation of NDF contents.
Decomposition of spectra using maximum autocorrelation factors
Larsen, Rasmus
2001-01-01
This paper addresses the problem of generating a low dimensional representation of the variation present in a set of spectra, e.g. reflection spectra recorded from a series of objects. The resulting low dimensional description may subsequently be input through variable selection schemes into classification or regression type analyses. A featured method for low dimensional representation of multivariate datasets is Hotelling's principal components transform. We extend the use of principal components analysis by incorporating new information into the algorithm. This new information consists ... Fourier decomposition, these new variables are located in frequency as well as wavelength. The proposed algorithm is tested on 100 samples of NIR spectra of wheat.
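A minimal version of the transform: autocorrelation along the sample ordering is captured by comparing the covariance of first differences with the data covariance, and the factors come from the corresponding generalized eigenproblem. A sketch assuming rows are consecutively ordered observations; the data below are synthetic, not the wheat NIR spectra:

```python
import numpy as np

def maf(X):
    """Maximum autocorrelation factors of X (rows = ordered samples,
    columns = variables), sorted so the first factor has the highest
    lag-one autocorrelation. Solves S_d w = lam * S w, where S is the
    data covariance and S_d the covariance of first differences; a
    small lam corresponds to a high autocorrelation."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    S_d = np.cov(np.diff(Xc, axis=0), rowvar=False)
    vals, vecs = np.linalg.eig(np.linalg.solve(S, S_d))
    order = np.argsort(vals.real)            # ascending lam
    return Xc @ vecs[:, order].real
```

Unlike plain principal components, which order directions by variance alone, MAF separates a slowly varying source from white noise even when both carry similar variance.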
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
徐志伟; 鹿化煜; 弋双文; 周亚利; Joseph A Mason; 王晓勇; 陈英勇; 朱芳莹; 张瀚之
2013-01-01
Relics of buried paleo-dune deposits and paleosols provide evidence for desert landscape evolution during the Late Quaternary. The Mu Us dune field, located in the semi-arid region of North China near the northern limit of significant summer monsoon rainfall, is sensitive to climate changes. On the basis of extensive field investigations and previous studies in the Mu Us dune field, we found paleo-dune and sand sheet layers interbedded within thick loess deposits in the loess-desert transition zone and northern Loess Plateau, south and east of the dune field, indicating intensive dune migration to the south and east during arid and cold periods. Relics of paleosols are distributed in the central and northern dune field, implying enhanced vegetation and soil formation under relatively warm and wet climate conditions. Fourteen representative aeolian sequences were chosen for OSL dating in an attempt to reconstruct spatial and temporal changes of the desert borders. Our preliminary results indicate that dunes migrated south and east into the Loess Plateau at around 26-16 ka, with a maximum extension of about 30-50 km relative to the present border; the area of mobile dunes extended at least 10,000 km², accounting for about 25% of the modern Mu Us dune field area. During the Holocene Optimum (around 9-5 ka), most of the mobile dunes in the dune field were stabilized by vegetation and soil. Spatial variations of the Mu Us dune field during the Last Glacial Maximum and Holocene Optimum are interpreted as direct responses of the sand dune surface to climate changes.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Schleicher, Nina; Norra, Stefan; Chai, Fahe; Chen, Yizhen; Wang, Shulan; Stüben, Doris
2010-02-01
Weekly samples of total suspended particles in air (TSP) were taken in south-east Beijing continuously over a two-year period from August 2005 to August 2007. Mass concentrations varied between 76 and 1028 μg/m³, with an average concentration of 370 μg/m³ for the whole period. The chemical composition and mass concentration of aerosols, in combination with meteorological data, reflect the specific influences of distinct aerosol sources on the pollution of Beijing's atmosphere. Lead (Pb), titanium (Ti), zinc (Zn) and copper (Cu) concentrations were chosen as indicator elements for different sources; their amounts varied considerably over the course of the year. Element ratios, such as Pb/Ti, supported the distinction between periods of predominantly geogenic or anthropogenic pollution. However, the interactions between aerosols from different sources are numerous, and aerosol pollution remains a major and complex challenge for the sustainable development of Beijing.
Brito, Barbara P; Gardner, Ian A; Hietala, Sharon K; Crossley, Beate M
2011-07-01
Bluetongue is a vector-borne viral disease that affects domestic and wild ruminants. The epidemiology of this disease has recently changed, with occurrence in new geographic areas. Various real-time quantitative reverse transcription polymerase chain reaction (real-time qRT-PCR) assays are used to detect Bluetongue virus (BTV); however, the impact on PCR efficiency of biologic differences between New World camelid samples and the domestic ruminant samples for which the BTV real-time qRT-PCR was initially validated is unknown. New World camelids are known to have important biologic differences in whole blood composition, including hemoglobin concentration, which can alter PCR performance. In the present study, sheep, cattle, and alpaca blood were spiked with BTV serotypes 10, 11, 13, and 17 and analyzed in 10-fold dilutions by real-time qRT-PCR to determine whether species affected nucleic acid recovery and assay performance. A separate experiment used spiked alpaca blood subsequently diluted in a 10-fold series in sheep blood to assess the influence of alpaca blood on the efficiency of the BTV real-time qRT-PCR assay. Results showed that BTV-specific nucleic acid detection from alpaca blood was consistently 1-2 logs lower than from sheep and cattle blood, and results were similar for each of the 4 BTV serotypes analyzed.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Yobany Quijano-Blanco
2012-03-01
regarding coronary disease and heart disease in general. One of the more variable fundamental aspects, having the greatest clinical impact, concerns the origin and course of the arteries irrigating the sino-atrial node (SAN). Objective. Determining the origin, course, and distribution of arteries supplying the SAN in a sample of the Colombian population. Materials and methods. 60 cardiopulmonary and digestive blocks were taken by convenience sampling. Conventional dissection of the genitalia determined gender; the coronary arteries were then dissected, specifically the SAN artery, to establish origin and route. Results. It was found that 75% of the SAN artery's blood supply came from the right coronary artery (RCA), 15% from the circumflex artery, and 10% was co-dominant. 86.6% of courses were linear; 13.4% were Y-, double-Y-, or trident-shaped. Conclusions. The prevalence of SAN artery origin in the RCA in this study was consistent with similar research findings, regardless of geographical and racial origin. However, this study reports some courses not previously described in the literature, such as Y-, double Y-, inverted K-, and trident-shaped forms.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
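The abstract reports fitting a two-component normal mixture by maximum likelihood but gives no algorithmic details; a minimal numpy sketch of the standard EM iteration for this model (function name and initialization choices are illustrative, not the authors' code) might look like:

```python
import numpy as np

def fit_two_normal_mixture(x, n_iter=200, tol=1e-8):
    """ML fit of a two-component univariate normal mixture via EM (sketch)."""
    x = np.asarray(x, dtype=float)
    # crude initialization: lower/upper quartiles as starting means
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: component densities and responsibilities
        dens = (pi / (sigma * np.sqrt(2 * np.pi)) *
                np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        ll = np.log(dens.sum(axis=1)).sum()          # current log-likelihood
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates of weights, means, standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        if abs(ll - prev_ll) < tol:                  # converged
            break
        prev_ll = ll
    return pi, mu, sigma, ll
```

EM monotonically increases the likelihood, which is why it is the usual workhorse for ML estimation of mixtures.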
赵拥军; 赵勇胜; 赵闯
2016-01-01
This paper investigates the joint estimation of Time Difference Of Arrival (TDOA) and Frequency Difference Of Arrival (FDOA) in a passive location system where the true value of the reference signal is unknown. A novel Maximum Likelihood (ML) estimator of TDOA and FDOA is constructed, and a Markov Chain Monte Carlo (MCMC) method is applied to find the global maximum of the likelihood function by generating realizations of TDOA and FDOA. Unlike the Cross Ambiguity Function (CAF) algorithm or the Expectation Maximization (EM) algorithm, the proposed algorithm can also estimate TDOA and FDOA values that are non-integer multiples of the sampling interval and has no dependence on an initial estimate. The Cramér-Rao Lower Bound (CRLB) is also derived. Simulation results show that the proposed algorithm outperforms the CAF and EM algorithms under different SNR conditions, with higher accuracy and lower computational complexity.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Böhmer, L; Hildebrandt, G
1998-01-01
In contrast to the prevailing automated chemical analytical methods, classical microbiological techniques are subject to considerable material- and human-dependent sources of error. These effects must be considered objectively when assessing the reliability and representativeness of a test result. As an example of error analysis, the deviation of bacterial counts and the influence of the time of testing, the bacterial species involved (total bacterial count, coliform count), and the detection method used (pour-/spread-plate) were determined by repeated testing of parallel samples of pasteurized (stored for 8 days at 10 degrees C) and raw (stored for 3 days at 6 degrees C) milk. Separate characterization of the deviation components, namely the unavoidable random sampling error as well as the methodical error and the variation between parallel samples, was made possible by a test design to which variance analysis was applied. Based on the results of the study, the following conclusions can be drawn: 1. Immediately after filling, the total count deviation in milk mainly followed the Poisson distribution model and allowed a reliable hygiene evaluation of lots even with few samples; consequently, regardless of the examination procedure used, setting up parallel dilution series can be dispensed with. 2. With increasing storage period, bacterial multiplication, especially of psychrotrophs, leads to unpredictable changes in the bacterial profile and density. With the increase in error between samples, it is common to find packages of acceptable microbiological quality alongside packages already spoiled by the labeled expiry date. As a consequence, a uniform acceptance or rejection of the batch is seldom possible. 3. Because the contamination level of coliforms in certified raw milk mostly lies near the detection limit, coliform counts with high relative deviation are to be expected in milk directly after filling. Since no bacterial multiplication takes place
Marumoto, Kohji; Sudo, Yasuaki; Nagamatsu, Yoshizumi
2017-07-01
During 2014-2016, the Aso volcano, located in the center of the Kyushu Islands, Japan, erupted and emitted large amounts of volcanic gases and ash. Two eruptive episodes were observed: first, Strombolian magmatic eruptions from 25 November 2014 to the middle of May 2015, and second, phreatomagmatic and phreatic eruptions from September 2015 to February 2016. Bulk chemical analyses of total mercury (Hg) and of major ions in the water-soluble fraction of volcanic ash fall samples were conducted. During the Strombolian magmatic episode, total Hg concentrations averaged 1.69 ± 0.87 ng g-1 (N = 33), with a range from 0.47 to 3.8 ng g-1. In addition, the temporal variation of total Hg concentrations in volcanic ash tracked the amplitude changes of seismic signals. At the Aso volcano, volcanic tremors are observed during both eruptive stages and quiet interludes, and tremor amplitudes increase during eruptive stages, so the temporal variation of total Hg concentrations could provide an indication of the level of volcanic activity. During the phreatomagmatic and phreatic episode, on the other hand, total Hg concentrations in the volcanic ash fall samples averaged 220 ± 88 ng g-1 (N = 5), about 100 times higher than during the Strombolian episode. It is therefore possible that total Hg concentrations in volcanic ash samples vary widely depending on the eruptive type. In addition, the ash fall amounts also differed markedly between the two episodes, which may be another factor controlling Hg concentrations in volcanic ash.
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10 samples. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
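The Burg recursion underlying the abstract's MEM estimate is standard: it fits an autoregressive model by minimizing the summed forward and backward prediction-error power. A compact numpy sketch (identifiers illustrative; the paper's interferometer pipeline is not reproduced) is:

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients a_1..a_p minimizing forward+backward
    prediction-error power; reflection coefficients stay below 1 in magnitude."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    f = x.copy()                      # forward prediction errors
    b = x.copy()                      # backward prediction errors
    a = np.zeros(order)               # AR coefficients
    E = np.dot(x, x) / n              # zeroth-order error power
    for m in range(order):
        ef = f[m + 1:]                # forward errors on the valid range
        eb = b[m:n - 1]               # backward errors, delayed by one sample
        k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
        a[:m], a[m] = a[:m] + k * a[:m][::-1], k      # Levinson-style update
        f[m + 1:], b[m + 1:] = ef + k * eb, eb + k * ef
        E *= 1.0 - k * k              # updated error power
    return a, E

def burg_psd(a, E, nfreq=512):
    """Maximum entropy (AR-model) PSD on normalized frequencies [0, pi]."""
    w = np.linspace(0.0, np.pi, nfreq)
    phase = np.exp(-1j * np.outer(w, np.arange(1, len(a) + 1)))
    return E / np.abs(1.0 + phase @ a) ** 2
```

The resulting all-pole spectrum is what gives MEM its resolution advantage over the FFT for short records.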
Lind, Ida; Grøn, Peter
1996-01-01
Vertical porosity variations in chalk are generally assumed to result either from a vaguely defined combination of primary sedimentary and diagenetic processes or solely from diagenetic processes. In this study, image analysis of backscatter electron images of polished samples and geochemical microprobe mapping were applied to measure the porosity variation in a limited number of chalk samples. Microscope data indicate that in all cases the chalk has been subjected to diagenetic processes, but our data suggest that the variations in porosity originate in primary sedimentary differences.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM) and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, both important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
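MALCOM itself is not described here in enough detail to reproduce, but the N-gram analysis the abstract mentions as a baseline is easy to sketch. The following Laplace-smoothed bigram likelihood scorer for categorical sequences (all names hypothetical, not the author's code) illustrates the anomaly-scoring idea: sequences unlike the training set get a low length-normalized log-likelihood.

```python
from collections import Counter, defaultdict
import math

def train_bigram(sequences, alpha=1.0):
    """Fit a Laplace-smoothed bigram model over categorical sequences
    (e.g. procedure codes) and return a log-likelihood scorer."""
    counts = defaultdict(Counter)     # counts[prev][next]
    vocab = set()
    for seq in sequences:
        for a, b in zip(("<s>",) + tuple(seq), tuple(seq)):
            counts[a][b] += 1
            vocab.add(b)
    V = len(vocab)

    def logprob(seq):
        """Length-normalized log-likelihood of a sequence under the model."""
        lp = 0.0
        for a, b in zip(("<s>",) + tuple(seq), tuple(seq)):
            c = counts[a]
            lp += math.log((c[b] + alpha) / (sum(c.values()) + alpha * V))
        return lp / max(len(seq), 1)

    return logprob
```

Histories scoring far below the training-set average would be flagged for review, analogous to the physician-screening experiment described above.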
钟堃; 王薇; 何法霖; 王治国
2015-01-01
The aim of this article is to specify quality control requirements for preanalytical factors in the determination of lead in samples of human origin, and thereby reduce the influence of preanalytical variation on test results. Based on Clinical and Laboratory Standards Institute documents, guidance on the control of preanalytical variation in trace element determinations, analytical procedures for the determination of lead in blood and urine, and other related references, a quality control scheme for lead determination was developed. It covers: the factors to be considered in sample collection, transport, and handling for lead testing; patient preparation before sampling; the elements laboratory staff must attend to in sample collection, transport, receipt, storage, and processing; contamination control; and a quality assurance plan.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems arising in differential geometry, for example the minimal submanifolds problem and the harmonic maps problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 outlines some scientific domains in which multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach to multitime variational calculus. Section 5 (Section 6) proves that minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that, of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
陈超; 孟昭为
2016-01-01
The problem of parameter estimation for the Gamma distribution occupies an important position in mathematics. In this paper, starting from moment estimation and using the independence properties of the sample coefficient of variation, we construct new estimators of the shape parameter and scale parameter of the Gamma distribution and assess them by comparing deviations.
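The moment estimators the abstract builds on follow from E[X] = kθ and Var[X] = kθ² for Gamma(shape k, scale θ), so the squared coefficient of variation equals 1/k. A small numpy sketch (function name illustrative; the paper's refined estimators are not reproduced):

```python
import numpy as np

def gamma_moment_estimates(x):
    """Moment estimators of Gamma(shape, scale) via the sample
    coefficient of variation: k_hat = 1/CV^2, theta_hat = s^2 / x_bar."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    var = x.var(ddof=1)
    cv = np.sqrt(var) / mean          # sample coefficient of variation
    shape = 1.0 / cv ** 2             # shape from CV alone (scale-free)
    scale = var / mean                # scale from variance-to-mean ratio
    return shape, scale
```

Note that the shape estimate depends only on the scale-free CV, which is what makes the coefficient of variation a natural starting point for such constructions.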
吴普; 王丽丽; 邵雪梅
2008-01-01
Having analyzed the tree-ring width and maximum latewood density of Pinus densata from west Sichuan, we obtained different climate information from the tree-ring width and maximum latewood density chronologies. Tree-ring width responded principally to precipitation in current May, which might be influenced by the activity of the southwest monsoon, whereas the maximum latewood density reflected summer temperature (June-September). Based on this correlation, a transfer function was used to reconstruct summer temperature for the study area. The explained variance of the reconstruction is 51% (F = 52.099, p < 0.0001). In the reconstructed series, the climate was relatively cold before the 1930s and relatively warm from 1930 to 1960; this trend accords with the cold and warm periods of the last 100 years in west Sichuan. Compared with Chengdu, the warming break point in west Sichuan is 3 years earlier, indicating that the Tibetan Plateau was more sensitive to temperature change. There was an evident summer warming signal after 1983. Although the 100-year running average of summer temperature peaked in the 1990s, the running average of the early 1990s was below the average line, with cold summers, while summer drought occurred in the late 1990s.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equations and the Levinson algorithm are used to calculate the iterative formula of the prediction-error filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy treatment of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
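The Toeplitz/Levinson step described above is standard. A short numpy sketch of the Levinson-Durbin recursion for the prediction-error filter (a generic implementation, not the authors' seismological code) is:

```python
import numpy as np

def levinson_durbin(r):
    """Solve the Toeplitz normal equations for a prediction-error filter.

    r: autocorrelation sequence r[0..p]. Returns the filter a (a[0] = 1),
    the reflection coefficients, and the final prediction-error power."""
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0
    E = r[0]                          # zeroth-order error power
    ks = []
    for m in range(1, p + 1):
        # accumulate r[m] + sum_{j=1}^{m-1} a_j * r[m-j]
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / E                  # reflection coefficient at order m
        ks.append(k)
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]
        a[m] = k
        E *= 1.0 - k * k              # updated error power
    return a, np.array(ks), E
```

The recursion runs in O(p²) rather than the O(p³) of a general linear solve, and the |k| < 1 property noted in the abstract is what guarantees stability of the resulting filter.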
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, each quantity (voltage at maximum power, current at maximum power, and maximum power itself) is plotted as a function of the time of day.
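The differentiation step can be sketched numerically for an idealized single-diode I-V curve; all parameter values and names below are illustrative assumptions, not data from the project.

```python
import numpy as np

def max_power_point(i_sc=3.0, i_0=1e-9, n_vt=0.05 * 36):
    """Maximum-power point of an idealized panel I-V curve.

    Model: I(V) = I_sc - I_0*(exp(V/nVT) - 1), P(V) = V*I(V).
    We locate the root of dP/dV = I(V) + V*dI/dV by bisection."""
    def current(v):
        return i_sc - i_0 * (np.exp(v / n_vt) - 1.0)

    def dpdv(v):
        di_dv = -(i_0 / n_vt) * np.exp(v / n_vt)
        return current(v) + v * di_dv

    # open-circuit voltage bounds the search: dP/dV > 0 at 0, < 0 at V_oc
    v_oc = n_vt * np.log(i_sc / i_0 + 1.0)
    lo, hi = 0.0, v_oc
    for _ in range(100):              # bisection on dP/dV
        mid = 0.5 * (lo + hi)
        if dpdv(mid) > 0:
            lo = mid
        else:
            hi = mid
    v_mp = 0.5 * (lo + hi)
    return v_mp, current(v_mp), v_mp * current(v_mp)
```

Repeating this for the I-V curve measured at each time of day gives the plotted voltage, current, and power of maximum power described above.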
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits; dispersal syndrome and growth form together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations
Lee, Tai-Sung; Radak, Brian K.; Pabis, Anna; York, Darrin M.
2013-01-01
A novel variational method for the construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g., from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfaces: the need for overlap in the re-weighting procedure and the problem of data representation. Test cases demonstrate that VFEP outperforms other methods in terms of the amount and sparsity of the data needed to construct the overall free energy profiles. For typical chemical reactions, only ~5 windows and ~20-35 independent data points per window are sufficient to obtain an overall qualitatively correct free energy profile with sampling errors an order of magnitude smaller than the free energy barrier. The proposed approach thus provides a feasible mechanism to quickly construct the global free energy profile and identify free energy barriers and basins in free energy simulations via a robust, variational procedure that determines an analytic representation of the free energy profile without the requirement of numerically unstable histograms or binning procedures. It can serve as a new framework for biased simulations and is suitable for use together with other methods to tackle the free energy estimation problem. PMID:23457427
MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR
SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the est
Variation in commercial smoking mixtures containing third-generation synthetic cannabinoids.
Frinculescu, Anca; Lyall, Catherine L; Ramsey, John; Miserez, Bram
2017-02-01
Variation in ingredients (qualitative variation) and in the quantity of active compounds (quantitative variation) in herbal smoking mixtures containing synthetic cannabinoids has been shown for older products. This can be dangerous to the user, as accurate and reproducible dosing is impossible. In this study, 69 packages containing third-generation cannabinoids, covering seven brands on the UK market in 2014, were analyzed both qualitatively and quantitatively for variation. When comparing the labels to the active ingredients actually identified in the sample, only one brand was shown to be correctly labelled. The other six brands contained fewer, more, or different ingredients than those listed on the label. Only two brands were inconsistent, containing different active ingredients in different samples. Quantitative variation was assessed both within one package and between several packages. Within-package variation was within a 10% range for five of the seven brands, but two brands showed larger variation, up to 25% (Relative Standard Deviation). Variation between packages was significantly higher, with variation up to 38% and a maximum concentration up to 2.7 times the minimum concentration. Both qualitative and quantitative variation are common in smoking mixtures and endanger the user, as it is impossible to estimate the dose or to know the compound consumed when smoking commercial mixtures. Copyright © 2016 John Wiley & Sons, Ltd.
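The Relative Standard Deviation (RSD) used above to quantify within- and between-package dose variation is simply the sample standard deviation as a percentage of the mean:

```python
import numpy as np

def relative_std(x):
    """Relative standard deviation (RSD, %) of a set of measured
    concentrations: 100 * s / x_bar, with the n-1 (sample) estimator."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()
```

Because RSD is scale-free, it lets within-package and between-package variation be compared directly even when brands differ widely in absolute cannabinoid concentration.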
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem, which can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm that uses two maximum dynamic flow computations is then proposed to solve the problem.
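The algorithm above reduces IMDF to a constrained minimum dynamic cut and invokes two maximum dynamic flow computations. As a hedged illustration of the static building block only (a standard Edmonds-Karp maximum flow, not the paper's dynamic-network algorithm), one might write:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.

    capacity: dict of dicts, capacity[u][v] = arc capacity.
    Returns the value of a maximum s-t flow.
    """
    # Residual network: forward arcs plus zero-capacity reverse arcs.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(capacity):
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # Breadth-first search for an augmenting path.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Recover the path, find its bottleneck, and push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

For example, `max_flow({'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}, 's', 't')` returns 5, the capacity of the minimum cut {a->t, b->t}.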
Maximum likelihood characterization of rotationally symmetric distributions on the sphere
Duerinckx, Mitia; Ley, Christophe
2012-01-01
A classical characterization result, which can be traced back to Gauss, states that the maximum likelihood estimator (MLE) of the location parameter equals the sample mean for any possible univariate samples of any possible sizes n if and only if the samples are drawn from a Gaussian population. A similar result, in the two-dimensional case, is given in von Mises (1918) for the Fisher-von Mises-Langevin (FVML) distribution, the equivalent of the Gaussian law on the unit circle. Half a century...
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Optimal Control of Polymer Flooding Based on Maximum Principle
Yang Lei
2012-01-01
Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which involves the performance index as the maximum of the profit, the governing equations as the fluid flow equations of polymer flooding, and the inequality constraint as the polymer concentration limitation. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
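Penalized estimation of this kind alternates a likelihood step with a thresholding (proximal) step determined by the penalty. As a minimal, hedged illustration, here is the proximal operator of the simpler L1 (lasso) penalty; the smoothly clipped absolute deviation (SCAD) penalty emphasized in the abstract has its own closed-form thresholding rule, which this stand-in does not reproduce:

```python
import math

def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty lam * |b|:
    the minimizer of 0.5 * (b - z)**2 + lam * abs(b).
    Shrinks z toward zero by lam and sets small values exactly to zero,
    which is how the penalty performs variable selection.
    """
    return math.copysign(max(abs(z) - lam, 0.0), z)
```

For example, `soft_threshold(3.0, 1.0)` gives 2.0, while `soft_threshold(0.4, 1.0)` gives 0.0: coefficients with weak evidence are removed from the model.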
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
Techniques for multivariate sample design
Williamson, M.A.
1990-04-01
In this report we consider sampling methods applicable to the multi-product Annual Fuel Oil and Kerosene Sales Report (Form EIA-821) Survey. For years prior to 1989, the purpose of the survey was to produce state-level estimates of total sales volumes for each of five target variables: residential No. 2 distillate, other retail No. 2 distillate, wholesale No. 2 distillate, retail residual, and wholesale residual. For the year 1989, the other retail No. 2 distillate and wholesale No. 2 distillate variables were replaced by a new variable defined to be the maximum of the two. The strata for this variable were crossed with the strata for the residential No. 2 distillate variable, resulting in a single stratified No. 2 distillate variable. Estimation for 1989 focused on the single No. 2 distillate variable and the two residual variables. Sampling accuracy requirements for each product were specified in terms of the coefficients of variation (CVs) for the various estimates based on data taken from recent surveys. The target population for the Form EIA-821 survey includes companies that deliver or sell fuel oil or kerosene to end-users. The Petroleum Product Sales Identification Survey (Form EIA-863) data base and numerous state and commercial lists provide the basis of the sampling frame, which is updated as new data become available. In addition, company/state-level volumes for distillates fuel oil, residual fuel oil, and motor gasoline are added to aid the design and selection process. 30 refs., 50 figs., 10 tabs.
GA-BASED MAXIMUM POWER DISSIPATION ESTIMATION OF VLSI SEQUENTIAL CIRCUITS OF ARBITRARY DELAY MODELS
Lu Junming; Lin Zhenghui
2002-01-01
In this paper, the glitching activity and process variations in the maximum power dissipation estimation of CMOS circuits are introduced. Given a circuit and the gate library,a new Genetic Algorithm (GA)-based technique is developed to determine the maximum power dissipation from a statistical point of view. The simulation on ISCAS-89 benchmarks shows that the ratio of the maximum power dissipation with glitching activity over the maximum power under zero-delay model ranges from 1.18 to 4.02. Compared with the traditional Monte Carlo-based technique, the new approach presented in this paper is more effective.
MB Distribution and its application using maximum entropy approach
Bhadra Suman
2016-01-01
The Maxwell-Boltzmann distribution with a maximum entropy approach has been used to study the variation of political temperature and heat in a locality. We have observed that the political temperature rises without generating any political heat when political parties increase their attractiveness by intense publicity but voters do not shift their loyalties. It has also been shown that political heat is generated and political entropy increases, with political temperature remaining constant, when parties do not change their attractiveness but voters shift their loyalties (to more attractive parties).
Kobayashi, Sofie; Berge, Maria; Grout, Brian William Wilson
2017-01-01
This study contributes towards a better understanding of learning dynamics in doctoral supervision by analysing how learning opportunities are created in the interaction between supervisors and PhD students, using the notion of experiencing variation as a key to learning. Empirically, we have based...... were discussed, created more complex patterns of variation. Both PhD students and supervisors can learn from this. Understanding of this mechanism that creates learning opportunities can help supervisors develop their competences in supervisory pedagogy....
Sample size calculation in metabolic phenotyping studies.
Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J
2015-09-01
The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
2016-01-26
This software provides a means for computing only the largest few entries of the product of two matrices, both exactly and approximately (using randomized sampling techniques). The purpose of the code is to demonstrate both the time it takes to solve the problem as well as the accuracy of the approximate approach. It is also meant to serve as a foundation to test the applicability of the sampling technique to related problems in data mining, including maximum inner product search, nearest neighbor search, and maximum cosine similarity.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Bittel, R.; Mancel, J. [Commissariat a l' Energie Atomique, 92 - Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires, departement de la protection sanitaire
1968-10-01
The most important carriers of radioactive contamination of man are foodstuffs as a whole, and not only ingested water or inhaled air. That is why, in accordance with the spirit of the recent recommendations of the ICRP, it is proposed to substitute the notion of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels in the case of two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements, and waste-disposal formulae are considered in the same spirit, taking care to give the greatest possible weight to local situations. (authors)
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to finding the distribution functions of physical quantities. MENT naturally takes into consideration the demand of maximum entropy, the characteristics of the system, and the connection conditions. It can be applied to the statistical description of closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and non-equilibrium states, as well as states far from thermodynamic equilibrium.
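A minimal illustration of the core MENT step, maximizing entropy subject to constraints, is the classic loaded-die problem (an illustrative example, not taken from the article): on support {1, ..., 6} with a prescribed mean, the maximum-entropy distribution is exponential in the face value, p_i proportional to exp(lam * x_i), and the multiplier lam can be found by bisection because the constrained mean is monotone in lam:

```python
import math

def maxent_distribution(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution on `values` with a fixed mean.

    The solution has the exponential-family form p_i ~ exp(lam * x_i);
    bisection on lam works because the resulting mean is strictly
    increasing in lam.
    """
    def mean(lam):
        weights = [math.exp(lam * x) for x in values]
        total = sum(weights)
        return sum(w * x for w, x in zip(weights, values)) / total

    lo, hi = -10.0, 10.0  # assumes the target mean is attainable in this range
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    weights = [math.exp(lam * x) for x in values]
    total = sum(weights)
    return [w / total for w in weights]
```

For a die constrained to mean 4.5, the resulting probabilities increase monotonically from face 1 to face 6, reproducing Jaynes's well-known solution.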
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
OPTIMAL CONTROL PROBLEM FOR PARABOLIC VARIATIONAL INEQUALITIES
汪更生
2001-01-01
This paper deals with optimal control problems for systems governed by a parabolic variational inequality coupled with a semilinear parabolic differential equation. The maximum principle and a form of approximate controllability are studied.
A strong test of a maximum entropy model of trait-based community assembly.
Shipley, Bill; Laughlin, Daniel C; Sonnier, Grégory; Otfinowski, Rafael
2011-02-01
We evaluate the predictive power and generality of Shipley's maximum entropy (maxent) model of community assembly in the context of 96 quadrats over a 120-km² area having a large (79) species pool and strong gradients. Quadrats were sampled in the herbaceous understory of ponderosa pine forests in the Coconino National Forest, Arizona, U.S.A. The maxent model accurately predicted species relative abundances when observed community-weighted mean trait values were used as model constraints. Although only 53% of the variation in observed relative abundances was associated with a combination of 12 environmental variables, the maxent model based only on the environmental variables provided highly significant predictive ability, accounting for 72% of the variation that was possible given these environmental variables. This predictive ability largely surpassed that of nonmetric multidimensional scaling (NMDS) or detrended correspondence analysis (DCA) ordinations. Using cross-validation with 1000 independent runs, the median correlation between observed and predicted relative abundances was 0.560 (the 2.5% and 97.5% quantiles were 0.045 and 0.825). The qualitative predictions of the model were also noteworthy: dominant species were correctly identified in 53% of the quadrats, 83% of rare species were correctly predicted to have a relative abundance of < 0.05, and the median predicted relative abundance of species actually absent from a quadrat was 5 × 10⁻⁵.
Validation of a sampling plan to generate food composition data.
Sammán, N C; Gimenez, M A; Bassett, N; Lobo, M O; Marcoleri, M E
2016-02-15
A methodology to develop systematic plans for food sampling was proposed. Long-life whole and skimmed milk, and sunflower oil, were selected to validate the methodology in Argentina. The fatty acid profile in all foods, proximal composition, and calcium content in milk were determined with AOAC methods. The number of samples (n) was calculated by applying Cochran's formula with coefficients of variation ⩽12% and a maximum permissible estimate error (r) ⩽5% for calcium content in milks and unsaturated fatty acids in oil. The calculated values of n were 9, 11, and 21 for long-life whole milk, long-life skimmed milk, and sunflower oil, respectively. Sample units were randomly collected from production sites and sent to labs. The value of r calculated from the experimental data was ⩽10%, indicating high accuracy in the determination of the analyte content of greatest variability and confirming the reliability of the proposed sampling plan. The methodology is an adequate and useful tool for developing sampling plans for food composition analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
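Cochran's formula referred to above determines n from the coefficient of variation (CV) of the analyte and the maximum permissible relative error r. The sketch below is illustrative: the z value for roughly 95% confidence is an assumption, since the abstract does not report the exact per-product inputs used:

```python
import math

def cochran_sample_size(cv, rel_error, z=1.96):
    """Cochran's formula for estimating a mean to within a relative
    error `rel_error` at ~95% confidence (z = 1.96), for a population
    whose coefficient of variation is `cv`:

        n = (z * cv / rel_error) ** 2, rounded up.
    """
    return math.ceil((z * cv / rel_error) ** 2)
```

With the limiting values quoted in the abstract (CV = 12%, r = 5%), `cochran_sample_size(0.12, 0.05)` returns 23; the smaller n values reported for milk presumably reflect the lower variability of those analytes.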
Marin-Garcia Pablo
2010-05-01
Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.
IDENTIFICATION OF IDEOTYPES BY CANONICAL ANALYSIS IN Panicum maximum
Janaina Azevedo Martuscello
2015-04-01
Grouping of genotypes by canonical variable analysis is an important tool in breeding. It allows the grouping of individuals with similar characteristics that are associated with superior agronomic performance and may indicate the ideal profile of a plant for the region. The objective of the present study was to define, by canonical analysis, the agronomic profile of Panicum maximum plants adapted to the Agreste region. The experiment was conducted in a completely randomized design with 28 treatments, 22 genotypes of Panicum maximum, and cultivars Mombasa, Tanzania, Massai, Milenio, BRS Zuri, and BRS Tamani, in triplicate in 4-m² plots. Plots were harvested five times and the following traits were evaluated: plant height; total, leaf, stem, and dead dry matter yields; leaf:stem ratio; leaf percentage; and volumetric density of forage. The analysis of canonical variables was performed based on the phenotypic means of the evaluated traits and on the residual variance and covariance matrix. Genotype PM34 showed the highest mean leaf dry matter yield under the conditions of the Agreste of Alagoas (on average 53% higher than cultivars Mombasa, Tanzania, Milenio, and Massai). It was possible to summarize the variation observed in eight agronomic characteristics in only two canonical variables, accounting for 81.44% of the data variation. The ideotype plant adapted to the conditions of the Agreste should be tall and present high leaf yield, leaf percentage, and leaf:stem ratio, and intermediate values of volumetric density of forage.
Moiseiwitsch, B L
2004-01-01
This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. A standard input are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L that all trees in T, when restricted to X, are consistent with.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
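The bound described above, maximum seismic moment equal to the injected volume times the modulus of rigidity, translates directly into a magnitude estimate through the standard Hanks-Kanamori moment-magnitude relation. The rigidity value below is a typical crustal figure assumed for illustration, not a number taken from the article:

```python
import math

def max_seismic_moment(injected_volume_m3, shear_modulus_pa=3.0e10):
    """McGarr's upper bound M0_max = G * dV (in N*m), with G the modulus
    of rigidity and dV the total injected fluid volume."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_newton_meters):
    """Hanks-Kanamori relation Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)
```

Injecting 10^5 m^3 thus bounds the moment at 3 x 10^15 N*m, a maximum magnitude of about Mw 4.25, consistent with the magnitude-5 range the abstract associates with the much larger volumes involved in wastewater disposal.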
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial-time approximation algorithm is proposed.
Shi, Keliang; Hou, Xiaolin; Roos, Per
2013-01-01
The concentration of 99Tc was determined in archived time-series seaweed samples collected at Klint (Denmark). The results demonstrate a significant seasonal variation of 99Tc concentrations in Fucus vesiculosus, with maximum values in winter and minimum values in summer. The mechanism driving t… of (1.9 ± 0.5) × 10^5 L/kg were obtained. This indicates that F. vesiculosus can be used as a reliable bioindicator to monitor the 99Tc concentration in seawater…
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used… algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model and may thus have high systematic uncertainties in regions of strong and uncertain background, such as the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
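The contrast between maximum-likelihood and maximum-entropy decoding described above can be sketched on a toy Ising chain. The couplings, fields, and temperature below are arbitrary illustrative values, not from the paper: ML decoding returns the ground state of the cost function, while maximum-entropy decoding thresholds each spin's Boltzmann marginal over all states.

```python
import itertools, math

# Tiny 3-spin Ising chain with nearest-neighbour couplings J and local
# fields h (hypothetical values chosen only for illustration).
J = [0.5, -0.3]
h = [0.2, -0.1, 0.4]

def energy(s):
    """Ising energy: E(s) = -sum_i h_i s_i - sum_i J_i s_i s_{i+1}."""
    e = -sum(h[i] * s[i] for i in range(3))
    e -= sum(J[i] * s[i] * s[i + 1] for i in range(2))
    return e

states = list(itertools.product([-1, 1], repeat=3))

# Maximum-likelihood decoding: pick the ground state (minimum energy).
ml = min(states, key=energy)

# Maximum-entropy decoding at temperature T: compute the Boltzmann
# distribution over ALL states (ground and excited) and take the sign
# of each spin's marginal magnetisation.
T = 1.0
w = {s: math.exp(-energy(s) / T) for s in states}
Z = sum(w.values())
marginals = [sum(w[s] * s[i] for s in states) / Z for i in range(3)]
me = tuple(1 if m >= 0 else -1 for m in marginals)
```

Because the marginals average over excited states, the bit-by-bit maximum-entropy answer can differ from the ground state, which is exactly the extra information the annealer's excited-state samples carry.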
Man, E. A.; Sera, D.; Mathe, L.; Schaltz, E.; Rosendahl, L.
2016-03-01
Characterization of thermoelectric generators (TEG) is widely discussed and equipment has been built that can perform such analysis. One method is often used to perform such characterization: constant temperature with variable thermal power input. Maximum power point tracking (MPPT) methods for TEG systems are mostly tested under steady-state conditions for different constant input temperatures. However, for most TEG applications, the input temperature gradient changes, exposing the MPPT to variable tracking conditions. An example is the exhaust pipe on hybrid vehicles, for which, because of the intermittent operation of the internal combustion engine, the TEG and its MPPT controller are exposed to a cyclic temperature profile. Furthermore, there are no guidelines on how fast the MPPT must be under such dynamic conditions. In the work discussed in this paper, temperature gradients for TEG integrated in several applications were evaluated; the results showed temperature variation up to 5°C/s for TEG systems. Electrical characterization of a calcium-manganese oxide TEG was performed at steady-state for different input temperatures and a maximum temperature of 401°C. By using electrical data from characterization of the oxide module, a solar array simulator was emulated to perform as a TEG. A trapezoidal temperature profile with different gradients was used on the TEG simulator to evaluate the dynamic MPPT efficiency. It is known that the perturb and observe (P&O) algorithm may have difficulty accurately tracking under rapidly changing conditions. To solve this problem, a compromise must be found between the magnitude of the increment and the sampling frequency of the control algorithm. The standard P&O performance was evaluated experimentally by using different temperature gradients for different MPPT sampling frequencies, and efficiency values are provided for all cases. The results showed that a tracking speed of 2.5 Hz can be successfully implemented on a TEG
Elena Zubieta
2008-12-01
From a psychosocial view, work can be understood as a set of values and beliefs about work which individuals and social groups develop before and during the process of socialization at work. It is a flexible set of cognitions influenced by personal experiences and contextual changes (Salanova, Gracia & Peiró, 1996). Taking socialization at work as a starting point, and with the aim of exploring sources of variation in terms of sociodemographic, contextual and psychosocial variables, a descriptive group-differences study was carried out based on an intentional, non-probabilistic quota sample of 290 working participants from Buenos Aires city and its surroundings. Results show the presence of beliefs associated with the Protestant Work Ethic and competitiveness, values of openness to change and self-transcendence, and particular configurations that emerge when introducing variables such as sex, age, education level, and aspects of work trajectory such as years of work, tenure in the organization and position, interruptions in working activity, and work modality.
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics, with many practical applications in population genetics, whole-genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree-size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
Anonymous
2011-01-01
[Objective] The research aimed to analyze the temporal and spatial variation characteristics of temperature in Shangqiu City during 1961-2010. [Method] Based on temperature data from eight meteorological stations in Shangqiu during 1961-2010, and using the trend analysis method, the temporal and spatial evolution characteristics of annual average temperature, annual average maximum and minimum temperatures, annual extreme maximum and minimum temperatures, and the daily range of annual average temperature in Shangqiu City were analy…
Inverse feasibility problems of the inverse maximum flow problems
Adrian Deaconu; Eleonor Ciurea
2013-04-01
A linear-time method to decide whether an inverse maximum flow problem (denoted the General Inverse Maximum Flow problem, IMFG) has a solution is deduced. If IMFG does not have a solution, methods to transform IMFG into a feasible problem are presented. The methods consist of modifying, as little as possible, the restrictions on the variation of the bounds of the flow. New inverse combinatorial optimization problems are introduced and solved.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
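A minimal perturb-and-observe (P&O) tracking loop, the algorithm family these MPPT abstracts discuss, can be sketched against a toy source model. The quadratic power-voltage curve below (hypothetical open-circuit voltage and internal resistance) is a stand-in, not a real PV characteristic, but the tracker logic is the same: perturb the operating voltage and reverse direction whenever extracted power drops.

```python
# Toy concave power-voltage curve (hypothetical source): P(V) = V*(VOC - V)/RINT,
# maximised at V = VOC/2 = 4.0 V.
VOC, RINT = 8.0, 2.0

def power(v):
    return v * (VOC - v) / RINT

def perturb_and_observe(v0, step, iterations):
    """Basic P&O: step the voltage, keep the direction while power rises,
    reverse it when power falls. Steady state oscillates around the MPP."""
    v, p_prev, direction = v0, power(v0), 1
    for _ in range(iterations):
        v += direction * step
        p = power(v)
        if p < p_prev:        # power dropped: the last perturbation was wrong
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe(v0=1.0, step=0.05, iterations=500)
```

The steady-state oscillation within one step of the true MPP illustrates the trade-off the TEG abstract above describes: a smaller step reduces the oscillation but slows tracking under fast temperature gradients, which is why step size and sampling frequency must be chosen together.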
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
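As a rough numerical sanity check of the quoted relation v_h ~ T_BBN^2 / (M_pl y_e^5): plugging in standard order-of-magnitude values (these inputs are textbook figures assumed here, not taken from the paper) does land near the electroweak scale.

```python
# All quantities in GeV. Input values are standard order-of-magnitude
# estimates, assumed for this check.
T_BBN = 1e-3    # ~1 MeV, temperature at the onset of Big Bang nucleosynthesis
M_pl = 1.2e19   # Planck mass
y_e = 2.9e-6    # electron Yukawa coupling, roughly sqrt(2)*m_e / v_h

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")   # a few hundred GeV
```

The result falls in the few-hundred-GeV range, consistent with the paper's v_h = O(300 GeV), though the fifth power of y_e makes the estimate extremely sensitive to the assumed Yukawa value.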
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson's equation.
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of at most 6 weeks, three video recordings were made of subjects' five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
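The way the reported reliability grows with the number of trials and days is consistent with the Spearman-Brown prophecy formula for averaging parallel measurements. The sketch below is an assumption on my part (the abstract does not name the aggregation formula behind its coefficients), but it reproduces the reported values closely.

```python
def spearman_brown(r_single, k):
    """Predicted reliability of the average of k parallel measurements,
    given the single-measurement reliability r_single."""
    return k * r_single / (1 + (k - 1) * r_single)

r_trial = 0.939   # reported: one trial, one day
r_day = 0.836     # reported: five trials, one day

five_trials = spearman_brown(r_trial, 5)   # reported value: 0.987
two_days = spearman_brown(r_day, 2)        # reported value: 0.911
three_days = spearman_brown(r_day, 3)      # reported value: 0.935
```

Two of the three predictions match the reported coefficients to three decimals, and the third is within about 0.004, which supports reading the multi-trial and multi-day figures as averages of parallel measurements.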
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum… Yagi-Uda arrays with equal and unequal spacing have also been optimised, with experimental verification…
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
$\\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs
van de Geer, Sara
2012-01-01
We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.
Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs
Desjardins, Guillaume; Bengio, Yoshua
2010-01-01
Restricted Boltzmann Machines (RBM) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic, however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing average return time (a concept first proposed by [Katzgraber et al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show on a synthetic dataset that this results in better likelihood ...
MaxOcc: a web portal for maximum occurrence analysis.
Bertini, Ivano; Ferella, Lucio; Luchinat, Claudio; Parigi, Giacomo; Petoukhov, Maxim V; Ravera, Enrico; Rosato, Antonio; Svergun, Dmitri I
2012-08-01
The MaxOcc web portal is presented for the characterization of the conformational heterogeneity of two-domain proteins, through the calculation of the Maximum Occurrence that each protein conformation can have in agreement with experimental data. Whatever the real ensemble of conformations sampled by a protein, the weight of any conformation cannot exceed the calculated corresponding Maximum Occurrence value. The present portal allows users to compute these values using any combination of restraints like pseudocontact shifts, paramagnetism-based residual dipolar couplings, paramagnetic relaxation enhancements and small angle X-ray scattering profiles, given the 3D structure of the two domains as input. MaxOcc is embedded within the NMR grid services of the WeNMR project and is available via the WeNMR gateway at http://py-enmr.cerm.unifi.it/access/index/maxocc . It can be used freely upon registration to the grid with a digital certificate.
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Effect of consolidation ratios on maximum dynamic shear modulus of sands
Yuan Xiaoming; Sun Jing; Sun Rui
2005-01-01
The dynamic shear modulus (DSM) is the most basic soil parameter in earthquake or other dynamic loading conditions and can be obtained through testing in the field or in the laboratory. The effect of consolidation ratios on the maximum DSM for two types of sand is investigated by using resonant column tests, and an increment formula to obtain the maximum DSM for cases of consolidation ratio kc > 1 is presented. The results indicate that the maximum DSM rises rapidly when kc is near 1 and then slows down, which means that a power function of the consolidation ratio increment kc - 1 can be used to describe the variation of the maximum DSM due to kc > 1. The results also indicate that the increase in the maximum DSM due to kc > 1 is significantly larger than that predicted by Hardin and Black's formula.
Mating system and seed variation of Acacia hybrid (A. mangium x A. auriculiformis).
Ng, Chin-Hong; Lee, Soon-Leong; Ng, Kevin Kit-Siong; Muhammad, Norwati; Ratnam, Wickneswari
2009-04-01
The mating system and seed variation of Acacia hybrid (A. mangium x A. auriculiformis) were studied using allozymes and random amplified polymorphic DNA (RAPD) markers, respectively. Multi-locus outcrossing rate estimations indicated that the hybrid was predominantly outcrossed (mean ± s.e. t_m = 0.86 ± 0.01). Seed variation was investigated using 35 polymorphic RAPD fragments. An analysis of molecular variance (AMOVA) revealed the highest genetic variation among seeds within a pod (66%-70%), followed by among pods within an inflorescence (29%-37%), and the least variation among inflorescences within a tree (< 1%). In addition, two to four RAPD profiles could be detected among seeds within a pod. Therefore, the results suggest that a maximum of four seeds per pod could be sampled for the establishment of a mapping population for further studies.
Rakesh R. Pathak
2012-02-01
Based on the law of large numbers, which is derived from probability theory, we tend to increase the sample size to the maximum. The central limit theorem is another inference from the same probability theory, which approves the largest possible number as sample size for better validity of measuring central tendencies like the mean and median. Sometimes an increase in sample size yields only negligible improvement, or no increase at all in statistical relevance, due to strong dependence or systematic error. If we can afford a little larger sample, statistical power of 0.90 being taken as acceptable with a medium Cohen's d (< 0.5), we can take a sample size of 175 very safely, and considering the problem of attrition, 200 samples would suffice. [Int J Basic Clin Pharmacol 2012; 1(1): 43-44]
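The quoted figure of about 175 subjects can be approximately reproduced with the standard normal-approximation sample-size formula for a two-group comparison at power 0.90, two-sided alpha 0.05, and Cohen's d = 0.5. This reconstruction is an assumption on my part, since the note does not show its calculation.

```python
import math

# Normal-approximation sample size per group for a two-sample comparison:
# n ~= 2 * ((z_{alpha/2} + z_beta) / d)^2
z_alpha = 1.95996   # two-sided alpha = 0.05
z_beta = 1.28155    # power = 0.90
d = 0.5             # medium effect size (Cohen's d)

n_per_group = math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)
n_total = 2 * n_per_group
print(n_per_group, n_total)
```

The formula gives 85 per group, 170 in total, close to the 175 quoted; the note's slightly larger figure plus an attrition allowance leads naturally to its recommendation of 200.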
Atmospheric diurnal variations observed with GPS radio occultation soundings
F. Xie
2010-07-01
The diurnal variation, driven by solar forcing, is a fundamental mode in the Earth's weather and climate system. Radio occultation (RO) measurements from the six COSMIC satellites (Constellation Observing System for Meteorology, Ionosphere and Climate) provide nearly uniform global coverage with high vertical resolution, all-weather and diurnal sampling capability. This paper analyzes the diurnal variations of temperature and refractivity from three years (2007–2009) of COSMIC RO measurements in the troposphere and stratosphere between 30° S and 30° N. The RO observations reveal both propagating and trapped vertical structures of diurnal variations, including transition regions near the tropopause where data with high vertical resolution are critical. In the tropics the diurnal amplitude in refractivity shows a minimum around 14 km and increases to a local maximum around 32 km in the stratosphere. The upward-propagating component of the migrating diurnal tides in the tropics is clearly captured by the GPS RO measurements, which show a downward progression in phase from the stratopause to the upper troposphere with a vertical wavelength of about 25 km. At ~32 km the seasonal variation of the tidal amplitude maximizes on the opposite side of the equator relative to the solar forcing. The vertical structure of tidal amplitude shows strong seasonal variations and becomes asymmetric about the equator, tilted toward the summer hemisphere in the solstice months. Such asymmetry becomes less prominent in equinox months.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
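As a toy illustration of the penalized objective above (not the authors' block coordinate descent or Nesterov-based algorithms), the sketch below profiles the l_1-penalized Gaussian log-likelihood over the off-diagonal entry of a 2×2 precision matrix, with the diagonal held at its unpenalized value. The sample covariance S is an assumed example; the point is only that the penalty shrinks the estimated off-diagonal toward zero.

```python
import numpy as np

S = np.array([[1.0, 0.3], [0.3, 1.0]])   # assumed sample covariance
a = np.linalg.inv(S)[0, 0]               # unpenalized diagonal of the precision

def objective(t, rho):
    """log det(Theta) - tr(S Theta) - rho * ||offdiag(Theta)||_1
    for Theta = [[a, t], [t, a]]."""
    theta = np.array([[a, t], [t, a]])
    sign, logdet = np.linalg.slogdet(theta)
    if sign <= 0:
        return -np.inf                    # outside the positive-definite cone
    return logdet - np.trace(S @ theta) - rho * 2 * abs(t)

grid = np.linspace(-0.9, 0.9, 1801)       # step 0.001, always |t| < a here
t_mle = max(grid, key=lambda t: objective(t, 0.0))   # unpenalized MLE
t_pen = max(grid, key=lambda t: objective(t, 0.2))   # l1-penalized
print(t_mle, t_pen)   # the penalty pulls the off-diagonal toward zero
```

The unpenalized maximizer recovers the off-diagonal of S^{-1} (≈ −0.33); with ρ = 0.2 it shrinks to roughly −0.12.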
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
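The maximum entropy test itself is more involved, but it builds on standard tail fitting. The sketch below shows only the maximum-likelihood (Hill-type) estimate of a Pareto tail exponent above a threshold; the synthetic sample and the x_min value are assumptions for illustration, not from the paper.

```python
import math, random

def pareto_mle_alpha(xs, x_min):
    """MLE (Hill) estimate of the Pareto exponent for the tail above x_min:
    alpha_hat = n / sum(log(x_i / x_min))."""
    tail = [x for x in xs if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)

# Synthetic Pareto(alpha=2.5) sample via inverse-CDF: x = x_min * u^(-1/alpha)
rng = random.Random(42)
alpha, x_min = 2.5, 1.0
sample = [x_min * (1 - rng.random()) ** (-1 / alpha) for _ in range(20000)]
print(pareto_mle_alpha(sample, x_min))   # close to the true alpha = 2.5
```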
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We find the approximation ratios of two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
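To make the two named heuristics concrete, here is a plain First-Fit routine applied to increasingly vs. decreasingly sorted items (unit bin capacity and the item list are assumptions for illustration). In the maximum resource setting, using *more* bins is the goal, so the orderings can rank oppositely to ordinary bin packing.

```python
def first_fit(items, capacity=1.0):
    """First-Fit: place each item into the first open bin where it fits,
    opening a new bin when none fits."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:
                b.append(size)
                break
        else:                      # no existing bin could take the item
            bins.append([size])
    return bins

items = [0.6, 0.5, 0.4, 0.3, 0.2]
ffi = first_fit(sorted(items))                  # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))    # First-Fit-Decreasing
print(len(ffi), len(ffd))   # FFI opens more bins than FFD on this input
```

On this input FFI uses 3 bins and FFD only 2, illustrating why FFI is the better heuristic when the objective is to maximize the number of bins used.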
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
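The single-constraint derivation described above can be written out in a few lines: maximizing the Shannon entropy subject only to normalization and a fixed average of the logarithm of the observable yields a pure power law.

```latex
\begin{align}
  \mathcal{L} &= -\sum_x p(x)\ln p(x)
     \;-\; \mu\Big(\sum_x p(x) - 1\Big)
     \;-\; \lambda\Big(\sum_x p(x)\ln x - \chi\Big),\\
  0 &= \frac{\partial\mathcal{L}}{\partial p(x)}
     = -\ln p(x) - 1 - \mu - \lambda\ln x
  \;\Longrightarrow\;
  p(x) = e^{-1-\mu}\,x^{-\lambda} \;\propto\; x^{-\lambda},
\end{align}
```

with the exponent $\lambda$ fixed by the constraint value $\chi$ and $\mu$ by normalization.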
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and taking into account irradiation effects on the structure of the gas envelope.
Francescon, Paolo, E-mail: paolo.francescon@ulssvicenza.it; Satariano, Ninfa [Department of Radiation Oncology, Ospedale Di Vicenza, Viale Rodolfi, Vicenza 36100 (Italy); Beddar, Sam [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77005 (United States); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, Indiana 46202 (United States)
2014-10-15
Purpose: Evaluate the ability of different dosimeters to correctly measure the dosimetric parameters percentage depth dose (PDD), tissue-maximum ratio (TMR), and off-axis ratio (OAR) in water for small fields. Methods: Monte Carlo (MC) simulations were used to estimate the variation of the small-field correction factor $k_{Q_{\rm clin},Q_{\rm msr}}^{f_{\rm clin},f_{\rm msr}}$ for several types of microdetectors as a function of depth and distance from the central axis for PDD, TMR, and OAR measurements. The variation of $k_{Q_{\rm clin},Q_{\rm msr}}^{f_{\rm clin},f_{\rm msr}}$ enables one to evaluate the ability of a detector to reproduce the PDD, TMR, and OAR in water and consequently determine whether it is necessary to apply correction factors. The correctness of the simulations was verified by assessing the ratios between the PDDs and OARs of 5- and 25-mm circular collimators used with a linear accelerator measured with two different types of dosimeters (the PTW 60012 diode and PTW PinPoint 31014 microchamber) and the PDDs and OARs measured with the Exradin W1 plastic scintillator detector (PSD), and comparing those ratios with the corresponding ratios predicted by the MC simulations. Results: MC simulations reproduced results with acceptable accuracy compared to the experimental results; therefore, MC simulations can be used to successfully predict the behavior of different dosimeters in small fields. The Exradin W1 PSD was the only dosimeter that reproduced the PDDs, TMRs, and OARs in water with high accuracy. With the exception of the EDGE diode, the stereotactic diodes reproduced the PDDs and the TMRs in water with a systematic error of less than 2% at depths of up to 25 cm; however, they produced OAR values that were significantly different from those in water, especially in the tail region (lower than 20% in some cases). The microchambers could be used for PDD
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were about half as long (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-01-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were about half as long (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced ($Y_{X/P}$) and the MIC ($C$) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: $X_{\max} - X_0 = (0.59 \pm 0.02)\cdot Y_{X/P}\cdot C$.
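The fitted equation is straightforward to apply. In the sketch below the fitted coefficient 0.59 comes from the abstract, but the inoculum biomass, yield, and MIC values are hypothetical placeholders, not data from the study.

```python
def predicted_max_biomass(x0, y_xp, mic_lactate, k=0.59):
    """X_max = X_0 + k * Y_X/P * C, with k = 0.59 +/- 0.02 (fitted value
    reported in the abstract). Units must be chosen consistently, e.g.
    x0 in g/L, y_xp in g biomass per g lactate, mic_lactate in g/L."""
    return x0 + k * y_xp * mic_lactate

# Hypothetical inputs: 0.1 g/L inoculum, yield 0.05 g/g, MIC 200 g/L
print(predicted_max_biomass(0.1, 0.05, 200.0))   # ≈ 6.0 g/L
```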
The maximum rate of mammal evolution.
Evans, Alistair R; Jones, David; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Fitzgerald, Erich M G; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Smith, Felisa A; Stephens, Patrick R; Theodor, Jessica M; Uhen, Mark D
2012-03-13
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were about half as long (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Venus atmosphere profile from a maximum entropy principle
L. N. Epele
2007-10-01
The variational method with constraints recently developed by Verkley and Gerkema to describe maximum-entropy atmospheric profiles is generalized to ideal gases with temperature-dependent specific heats. In so doing, an extended, non-standard potential temperature is introduced that is well suited to tackling the problem under consideration. This new formalism is successfully applied to the atmosphere of Venus. Three well-defined regions emerge in this atmosphere up to a height of 100 km from the surface: the lowest, up to about 35 km, is adiabatic; a transition layer is located at the height of the cloud deck; and a third region is practically isothermal.
A strong test of the maximum entropy theory of ecology.
Xiao, Xiao; McGlinn, Daniel J; White, Ethan P
2015-03-01
The maximum entropy theory of ecology (METE) is a unified theory of biodiversity that predicts a large number of macroecological patterns using information on only species richness, total abundance, and total metabolic rate of the community. We evaluated four major predictions of METE simultaneously at an unprecedented scale using data from 60 globally distributed forest communities including more than 300,000 individuals and nearly 2,000 species. METE successfully captured 96% and 89% of the variation in the rank distribution of species abundance and individual size but performed poorly when characterizing the size-density relationship and intraspecific distribution of individual size. Specifically, METE predicted a negative correlation between size and species abundance, which is weak in natural communities. By evaluating multiple predictions with large quantities of data, our study not only identifies a mismatch between abundance and body size in METE but also demonstrates the importance of conducting strong tests of ecological theories.
Maximum likelihood method and Fisher's information in physics and econophysics
Syska, Jacek
2012-01-01
Three steps in the development of the maximum likelihood (ML) method are presented. At first, the application of the ML method and Fisher information notion in the model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, illustrated also by examples of the estimation of the exponential models. At second, the notions of the relative entropy and the information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system is given. At third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the field theory models cl...
Maximum caliber inference and the stochastic Ising model
Cafaro, Carlo; Ali, Sean Alan
2016-11-01
We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete time step trajectories subject to normalization, stationarity, and detailed balance constraints together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.
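The Glauber rule that the perturbative expansion recovers has a simple closed form for the probability of accepting a spin flip with energy change ΔE at inverse temperature β. The sketch below is a standalone illustration of that rule (the ΔE and β values are arbitrary examples); note the flip probability tends to 1/2 as β → 0, matching the high-temperature limit cited above.

```python
import math

def glauber_flip_prob(delta_e, beta):
    """Glauber (heat-bath) acceptance probability for a single spin flip:
    p = 1/2 * (1 - tanh(beta * delta_e / 2)) = 1 / (1 + exp(beta * delta_e))."""
    return 0.5 * (1.0 - math.tanh(beta * delta_e / 2.0))

print(glauber_flip_prob(4.0, 0.0))   # beta -> 0 (very high T): p = 1/2
print(glauber_flip_prob(4.0, 2.0))   # costly flip at low T: p near 0
```

A useful sanity check is the detailed-balance symmetry p(ΔE) + p(−ΔE) = 1, which the tanh form satisfies identically.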
Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D
2006-05-10
In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables, and based on its maximum likelihood estimation, we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval on the within-subject coefficient of variation. The maximum likelihood estimation and sample size estimation based on a pre-specified width of confidence interval are novel contributions to the literature for the binary variable. Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary.
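For a balanced one-way random effects design, a simple moment-based version of the within-subject coefficient of variation is the square root of the pooled within-subject variance divided by the grand mean. The sketch below uses small hypothetical repeated measurements (not data from the paper) and this moment estimator rather than the authors' maximum likelihood machinery.

```python
import statistics

# Hypothetical balanced design: 3 subjects, 3 replicate measurements each
subjects = [
    [10.1, 9.8, 10.3],
    [12.0, 11.6, 12.2],
    [8.9, 9.4, 9.1],
]

# Pooled within-subject variance: mean of the per-subject sample variances
within_var = statistics.mean(statistics.variance(s) for s in subjects)
grand_mean = statistics.mean(x for s in subjects for x in s)
wscv = within_var ** 0.5 / grand_mean   # within-subject CV
print(round(wscv, 4))
```

A within-subject CV of a few percent, as here, would indicate good test-retest reliability relative to the scale of the measurement.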
Use of Maximum Entropy Modeling in Wildlife Research
Roger A. Baldwin
2009-11-01
Maximum entropy (Maxent) modeling has great potential for identifying distributions and habitat selection of wildlife given its reliance on only presence locations. Recent studies indicate Maxent is relatively insensitive to spatial errors associated with location data, requires few locations to construct useful models, and performs better than other presence-only modeling approaches. Further advances are needed to better define model thresholds, to test model significance, and to address model selection. Additionally, development of modeling approaches is needed when using repeated sampling of known individuals to assess habitat selection. These advancements would strengthen the utility of Maxent for wildlife research and management.
A Maximum Entropy Modelling of the Rain Drop Size Distribution
Francisco J. Tapiador
2011-01-01
This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows (1) the use of a physically consistent rationale to select a particular probability density function (pdf), (2) an alternative method for parameter estimation based on expectations of the population instead of sample moments, and (3) a progressive method of modelling by updating the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
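The entropy-maximization rationale behind point (1) can be illustrated on a discrete drop-size support: maximizing entropy subject to a fixed mean yields an exponential-family pmf, p_i proportional to exp(-lam * x_i), whose Lagrange multiplier can be found numerically. A sketch under those assumptions (the function name and bisection solver are illustrative; the paper works with continuous pdfs and expectation constraints):

```python
import numpy as np

def maxent_pmf(support, target_mean, iters=200):
    """Discrete maximum-entropy pmf on `support` subject to a fixed
    mean.  The constrained maximization gives p_i proportional to
    exp(-lam * x_i); the multiplier lam is found by bisection."""
    support = np.asarray(support, dtype=float)

    def mean_for(lam):
        w = np.exp(-lam * support)
        p = w / w.sum()
        return p @ support, p

    lo, hi = -25.0, 25.0            # bracket for the multiplier
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        m, p = mean_for(mid)
        if m > target_mean:
            lo = mid                # mean decreases as lam grows
        else:
            hi = mid
    return p
```

When the target mean sits at the centre of the support, the multiplier goes to zero and the maximum-entropy pmf is uniform, as expected.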
Tillé, Yves
2006-01-01
Important progress in sampling methods has been achieved. This book draws up an inventory of methods that can be useful for selecting samples. Forty-six sampling methods are described within the framework of general theory. The book is suitable for experienced statisticians who are familiar with the theory of survey sampling.
Brus, D.J.
2015-01-01
In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling
Ross, Kenneth N.
1987-01-01
This article considers various kinds of probability and non-probability samples in both experimental and survey studies. Throughout, how a sample is chosen is stressed. Size alone is not the determining consideration in sample selection. Good samples do not occur by accident; they are the result of a careful design. (Author/JAZ)
Tests of maximum oxygen intake. A critical review.
Shephard, R J
1984-01-01
The determinants of endurance effort vary, depending upon the extent of the muscle mass that is activated. Large muscle work, such as treadmill running, is halted by impending circulatory failure; lack of venous return may compound the basic problem of an excessive cardiac work-load. If the task calls for use of a smaller muscle mass, there is ultimately difficulty in perfusing the active muscles, and glycolysis is halted by an accumulation of acid metabolites. Simple field tests of endurance, such as Cooper's 12-minute run and the Canadian Home Fitness Test, have some value in the rapid screening of large populations, but like other submaximal tests of human performance they lack the precision needed to advise the individual. The directly measured maximum oxygen intake (VO2 max) varies with the type of exercise. The highest values are obtained during uphill treadmill running, but well trained athletes often approach these values during performance of sport-specific tasks. Limitations of methodology and wide interindividual variations of constitutional potential limit the interpretation of maximum oxygen intake data in terms of personal fitness, exercise prescription and the monitoring of training responses. The main practical value of VO2 max measurement is in the functional assessment of patients with cardiorespiratory disease, since changes are then large relative to the precision of the test.
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle, as its variation reflects the temporal evolution of the dynamic process of solar magnetic activities from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (βa) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses, with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on βa is found to be within estimation at the 90% level of confidence, and the relative prediction error will be less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% level of confidence.
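The precursor idea here (regressing the coming maximum on the early rising rate over past cycles) reduces to ordinary least squares. A minimal sketch with fitted rather than published coefficients (the function name and inputs are illustrative, not the paper's data):

```python
import numpy as np

def predict_rmax(beta_a, rmax, beta_new):
    """Precursor-style prediction of the solar maximum: fit
    Rmax = a + b * beta_a over past cycles by ordinary least
    squares, then evaluate at the new cycle's rising rate.
    np.polyfit returns coefficients highest degree first."""
    b, a = np.polyfit(beta_a, rmax, 1)
    return a + b * beta_new
```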
Robust stochastic maximum principle: Complete proof and discussions
Poznyak Alex S.
2002-01-01
This paper develops a version of the Robust Stochastic Maximum Principle (RSMP) applied to the Minimax Mayer Problem formulated for stochastic differential equations with a control-dependent diffusion term. The parametric families of first and second order adjoint stochastic processes are introduced to construct the corresponding Hamiltonian formalism. The Hamiltonian function used for the construction of the robust optimal control is shown to be equal to the Lebesgue integral over a parametric set of the standard stochastic Hamiltonians corresponding to a fixed value of the uncertain parameter. The paper deals with a cost function given at finite horizon and containing the mathematical expectation of a terminal term. A terminal condition, covered by a vector function, is also considered. The optimal control strategies, adapted for available information, for the wide class of uncertain systems given by a stochastic differential equation with unknown parameters from a given compact set, are constructed. This problem belongs to the class of minimax stochastic optimization problems. The proof is based on the recent results obtained for the Minimax Mayer Problem with a finite uncertainty set [14,43-45] as well as on the variation results of [53] derived for the Stochastic Maximum Principle for nonlinear stochastic systems under complete information. The corresponding discussion of the obtained results concludes this study.
Measurement and relevance of maximum metabolic rate in fishes.
Norin, T; Clark, T D
2016-01-01
Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (M˙O2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting MMR of fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10 fold) between species with different lifestyles (i.e., interspecific variation), and to a lesser extent within species (i.e., intraspecific variation). Because MMR sets the upper limit of aerobic scope, interest in measuring this trait has spread across disciplines in attempts to predict effects of climate change on fish populations. Here, various techniques used to elicit and measure MMR in different fish species with contrasting lifestyles are outlined and the relevance of MMR to the ecology, fitness and climate change resilience of fishes is discussed.
Maximum hydrogen production from genetically modified microalgae biomass
Vargas, Jose; Kava, Vanessa; Ordonez, Juan
A transient mathematical model for managing microalgae-derived H2 production as a source of renewable energy is developed for a well-stirred photobioreactor, PBR. The model allows for the determination of microalgae and H2 mass fractions produced by the PBR over time. A Michaelis-Menten expression is proposed for modeling the rate of H2 production, which introduces an expression to calculate the resulting effect on the H2 production rate after genetically modifying the microalgae. The indirect biophotolysis process was used. Therefore, an opportunity was found to optimize the aerobic-to-anaerobic stage time ratio of the cycle for maximum H2 production rate, i.e., the process rhythm. A system thermodynamic optimization is conducted with the model equations to accurately find the optimal system operating rhythm for maximum H2 production rate, and how wild and genetically modified species compare to each other. The maxima found are sharp, showing up to a ~60% variation in hydrogen production rate within 2 days around the optimal rhythm, which highlights the importance of operating the system at that condition. Therefore, the model is expected to be useful for the design, control and optimization of H2 production. Brazilian National Council of Scientific and Technological Development, CNPq (project 482336/2012-9).
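The Michaelis-Menten form mentioned for the H2 production rate is the standard saturating kinetics v = Vmax · S / (Km + S). A minimal sketch (parameter names are generic; the paper's model adds a genetic-modification factor and embeds this rate in a transient PBR balance):

```python
def mm_rate(substrate, vmax, km):
    """Michaelis-Menten rate: grows linearly for substrate << km,
    saturates at vmax for substrate >> km, and equals exactly
    vmax / 2 when substrate == km."""
    return vmax * substrate / (km + substrate)
```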
Ferrer Polo, J.; Ardiles Lopez, K. L. (CEDEX, Ministerio de Obras Publicas, Transportes y Medio ambiente, Madrid (Spain))
1994-01-01
Work on the statistical modelling of maximum daily rainfalls is presented, with a view to estimating the quantiles for different return periods. An index flood approach has been adopted, in which the local quantiles result from rescaling a regional law using the mean of each series of values as a local scale factor. The annual maximum series have been taken from 1,545 meteorological stations over a 30 year period, and these have been classified into 26 regions defined according to meteorological criteria, the homogeneity of which has been checked by means of a statistical analysis of the coefficients of variation of the samples. An estimation has been made of the parameters for the following four distribution models: Two Component Extreme Value (TCEV); General Extreme Value (GEV); Log-Pearson III (LP3); and SQRT-Exponential Type Distribution of Maximum. The analysis of the quantiles obtained reveals slight differences in the results, thus detracting from the importance of the model selection. The last of the above-mentioned distributions has been finally chosen, on the basis of the following: it is defined with fewer parameters; it is the only one that was proposed specifically for the analysis of daily rainfall maxima; it yields more conservative results than the traditional Gumbel distribution for the high return periods; and it is capable of providing a good description of the main sampling statistics concerning the right-hand tail of the distribution, a fact that has been checked with Monte Carlo simulation techniques. The choice of a distribution model with only two parameters has led to the selection of the regional coefficient of variation as the only determining parameter for the regional quantiles. This has permitted the elimination of the quantile discontinuity of the classical regional approach, thus smoothing the values of that coefficient by means of an isoline map on a national scale.
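The index-flood mechanics can be sketched empirically: rescale each station's annual maxima by its mean (the local scale factor), pool the dimensionless values into a regional sample, take a regional quantile, and rescale back. This is a sketch only; the paper fits parametric regional laws (TCEV, GEV, LP3, SQRT-ET max), for which the empirical quantile below is a stand-in:

```python
import numpy as np

def index_flood_quantiles(stations, prob):
    """Index-flood sketch: pool mean-rescaled annual maxima across
    stations, take the empirical regional quantile (the growth
    factor), then rescale by each station's local mean."""
    scaled = np.concatenate([np.asarray(s) / np.mean(s) for s in stations])
    growth = np.quantile(scaled, prob)        # regional growth factor
    return {i: growth * np.mean(s) for i, s in enumerate(stations)}
```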
FROG - Fingerprinting Genomic Variation Ontology.
E Abinaya
Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity in establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search and analyze these data. A semantic approach, FROG: "FingeRprinting Ontology of Genomic variations", is implemented to label variation data based on their location, function and interactions. FROG has six levels to describe the variation annotation, namely, chromosome, DNA, RNA, protein, variations and interactions. Each level is a conceptual aggregation of logically connected attributes, each of which comprises various properties for the variant. For example, at the chromosome level, one of the attributes is the location of the variation, which has two properties, allosomes or autosomes. Another attribute is the variation kind, which has four properties, namely, indel, deletion, insertion and substitution. Likewise, there are 48 attributes and 278 properties to capture the variation annotation across the six levels. Each property is then assigned a bit score, which in turn leads to the generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for the purpose of labeling the entire variation data generated to date for efficient storage, search and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance for non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among those most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
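As a flavour of the formulas being compared, the widely quoted simplified Barrass approximation gives maximum squat in metres from the block coefficient and speed in knots. The constants below are the commonly cited simplified values, not taken from this paper, which compares fuller formulations (Barrass, Millward, Eryuzlu, ICORELS):

```python
def barrass_squat(cb, speed_knots, confined=True):
    """Simplified Barrass approximation for maximum squat (metres):
    Cb * V^2.08 / 20 in confined channels, / 30 in open water.
    A sketch using the commonly quoted simplified constants."""
    divisor = 20.0 if confined else 30.0
    return cb * speed_knots ** 2.08 / divisor
```

By construction the confined-water estimate exceeds the open-water one by a factor of 1.5 at any speed.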
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model.
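The core of such an estimator can be sketched in one channel: under white Gaussian noise, the ML pitch is the candidate fundamental whose harmonics capture the most signal energy. A single-channel sketch with a grid search (function name and grid are illustrative; the paper's estimator combines several channels with per-channel amplitudes, phases and noise):

```python
import numpy as np

def ml_pitch(x, fs, f0_grid, n_harm=3):
    """Approximate single-channel ML pitch estimate: for each
    candidate f0, project the signal onto its first n_harm
    harmonics and keep the candidate capturing the most energy."""
    t = np.arange(len(x)) / fs
    best_f0, best_e = None, -1.0
    for f0 in f0_grid:
        e = 0.0
        for h in range(1, n_harm + 1):
            c = np.exp(-2j * np.pi * h * f0 * t)
            e += abs(np.dot(c, x)) ** 2   # energy captured by harmonic h
        if e > best_e:
            best_f0, best_e = f0, e
    return best_f0
```

Because the criterion sums over harmonics, a candidate at the true fundamental beats its octave, which only explains part of the spectrum.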
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
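The fixed-point idea for Poisson ML line fluxes can be sketched for a model m_i = b_i + A·p_i (background plus amplitude times line profile): setting the derivative of the Poisson log-likelihood to zero suggests a multiplicative update on A. This mirrors the kind of fixed-point equation the abstract mentions but is a generic sketch, not CORA's actual code:

```python
def fit_line_flux(counts, profile, background, iters=500):
    """Poisson ML estimate of a line amplitude A in the model
    m_i = b_i + A * p_i, via the multiplicative fixed point
    A <- A * sum(p_i * n_i / m_i) / sum(p_i)."""
    a = max(sum(counts), 1.0)   # positive starting value
    for _ in range(iters):
        num = sum(p * n / (b + a * p)
                  for n, p, b in zip(counts, profile, background) if p > 0)
        a *= num / sum(profile)
    return a
```

With zero background and a normalized profile, the ML amplitude is simply the total number of counts, which the iteration reproduces.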
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
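For context, the quantity the ILPs optimize over trees is the parsimony score, which for a single fixed tree is computable by Fitch's small-parsimony algorithm. A sketch that scores one candidate tree on one character (the data structures are illustrative; the paper's methods search over trees, which is the hard part):

```python
def fitch_score(tree, leaf_states):
    """Fitch small parsimony on one character: given a rooted tree
    as nested 2-tuples of leaf names and each leaf's state, count
    the minimum number of state changes on that tree."""
    changes = 0

    def walk(node):
        nonlocal changes
        if isinstance(node, str):
            return {leaf_states[node]}
        left, right = (walk(child) for child in node)
        inter = left & right
        if inter:
            return inter        # intersection step: no mutation needed
        changes += 1            # union step costs one mutation
        return left | right

    walk(tree)
    return changes
```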
A method to predict amplitude and date of maximum sunspot number
Anonymous
2000-01-01
A method to predict the amplitude and date of the maximum sunspot number is introduced. Regression analysis of the relationship between the variation rate of monthly sunspot numbers in the initial stage of solar cycles and both the maximum and the length of the ascending period of the cycle showed that they are closely correlated. In general, the maximum will be larger and the ascending period shorter when the rate is larger. The rate of sunspot numbers in the initial 2 years of the 23rd cycle is analyzed on these grounds and the maximum of the cycle is predicted. For the smoothed monthly sunspot numbers, the maximum will be about 139.2 ± 18.8 and the length of the ascending period about 3.31 ± 0.42 years; that is to say, the maximum will appear around the spring of the year 2000. For the mean monthly ones, the maximum will be near 170.1 ± 22.9 and the length of the ascending period about 3.42 ± 0.46 years; that is to say, the date of the maximum will be later.
Mesfin Dema
2014-05-01
We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects' relevancy, hierarchical and contextual constraints in a unified model. This model is formulated by a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in terms of objects' existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations) and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes that are composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
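The PCA step can be sketched on a plain coordinate ensemble: center the structures, form the coordinate covariance, and take its leading eigenvectors. A minimal sample-covariance sketch (the paper's method instead uses a maximum likelihood covariance estimate from its superposition procedure, which is the key difference):

```python
import numpy as np

def dominant_modes(ensemble, n_modes=1):
    """PCA of an ensemble of structures (n_structures x n_coords):
    eigenvectors of the sample coordinate covariance matrix,
    largest eigenvalues first."""
    x = np.asarray(ensemble, dtype=float)
    x = x - x.mean(axis=0)               # remove the mean structure
    cov = x.T @ x / (len(x) - 1)
    vals, vecs = np.linalg.eigh(cov)     # eigh returns ascending order
    order = np.argsort(vals)[::-1]
    return vals[order][:n_modes], vecs[:, order][:, :n_modes]
```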
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to (I - A)/T, where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
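The per-event probability assignment described here is the usual mixture-likelihood step: multiply each process's probability density values over the event's observables, weight by the process fraction, and normalize. A generic sketch (process names, pdfs and priors below are illustrative placeholders, not PEN's actual distributions):

```python
def classify_event(observables, pdfs, priors):
    """Posterior process probabilities for one event: for each
    process, take the product of its pdf values over the event's
    observables, weight by the process prior, then normalize."""
    scores = {}
    for name, pdf in pdfs.items():
        like = 1.0
        for obs_name, value in observables.items():
            like *= pdf[obs_name](value)
        scores[name] = priors[name] * like
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}
```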
A Study in HRT Resolution: Seeking Maximum Sensitivity Among Variations in Sensing Element Material
Morales, Jeremy M.
2005-01-01
The EXACT (Experiments Along Coexistence near Tricriticality) project endeavors to perform the most rigorous test to date of Renormalization Group theory. In most cases the theory gives only approximate solutions, but it offers exact predictions in the case of the He-3-He-4 tricritical point. Currently, the project is focused on maximizing the performance of the low-temperature system's HRT (high-resolution thermometer) near the tricritical point. The HRT uses a PdMn sensing element, whose properties change with its Mn concentration and with whether or not it is annealed. All sensing-element combinations will be catalogued, and from the data the optimum configuration will be reported.
Spectrum characteristics of geoelectric field variation
YE Qing; DU Xue-bin; ZHOU Ke-chang; LI Ning; MA Zhan-hu
2007-01-01
The spectrum characteristics of geoelectric diurnal variation and geoelectric storms have been identified by the maximum entropy method, based on geoelectric data from seven stations in the Chinese mainland, including Jiayuguan, Changli and Chongming. The study shows that, in geoelectric diurnal variation, the amplitude of the 12 h semidiurnal wave is the largest, followed in turn by the 24-25 h diurnal wave and the 8 h periodic wave. Geoelectric storms usually occur over a large spatial scale, and their spectrum values are higher than those of the geoelectric diurnal variation by 2-3 orders of magnitude. A preliminary interpretation is presented for the generative mechanism of the predominant waves in geoelectric field variation.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Macias B, L.R.; Garcia C, R.M.; De Ita de la Torre, A.; Chavez R, A. [Instituto Nacional de Investigaciones Nucleares, A.P. 18-1027, 11801 Mexico D.F. (Mexico)
2000-07-01
In this work, the presence of elements in the known compound ZrSiO₄ under different pressure conditions was determined using X-ray diffraction and fluorescence techniques. In preparing the samples, pressures from 1600 to 350 kN/m² were applied, and apparent variations in the concentrations of the Zr and Si elements were detected. (Author)
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum, if it exists. Thus, we introduced the alternate concept of mp(T), the probable maximum magnitude within a time interval T. mp(T) can be computed from theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs, with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000-year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, the rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc.
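The idea of a probable maximum magnitude within a time window can be sketched numerically. The sketch below assumes hypothetical rate and TGR parameters, and takes mp(T), for illustration only, as the magnitude with exactly one expected exceedance in T years; the paper's actual definition and its Cascadia parameter estimates may differ.

```python
import math

def tgr_survival(m, m_t, m_c, beta):
    """Tapered Gutenberg-Richter CCDF P(magnitude >= m), written in seismic moment.
    Moment from moment magnitude via M0 = 10**(1.5*m + 9.05) N*m (Hanks-Kanamori)."""
    M, Mt, Mc = (10.0 ** (1.5 * x + 9.05) for x in (m, m_t, m_c))
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)

def probable_max_magnitude(T, rate, m_t=5.0, m_c=8.8, beta=0.65):
    """Bisect for the magnitude m satisfying rate * S(m) * T == 1, i.e. one
    expected exceedance of m in T years (rate = events/yr above threshold m_t).
    All parameter values here are illustrative, not the paper's estimates."""
    lo, hi = m_t, 10.5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rate * tgr_survival(mid, m_t, m_c, beta) * T > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# mp(T) grows with the window length and saturates near the corner magnitude:
for T in (100, 1000, 10000):
    print(T, round(probable_max_magnitude(T, rate=0.2), 2))
```

The taper term exp((Mt - M)/Mc) is what keeps mp(T) from growing without bound as T increases, in contrast to an untapered Gutenberg-Richter law.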
Estimating the exceedance probability of extreme rainfalls up to the probable maximum precipitation
Nathan, Rory; Jordan, Phillip; Scorah, Matthew; Lang, Simon; Kuczera, George; Schaefer, Melvin; Weinmann, Erwin
2016-12-01
If risk-based criteria are used in the design of high-hazard structures (such as dam spillways and nuclear power stations), then it is necessary to estimate the annual exceedance probability (AEP) of extreme rainfalls up to and including the Probable Maximum Precipitation (PMP). This paper describes the development and application of two largely independent methods to estimate the frequencies of such extreme rainfalls. One method is based on stochastic storm transposition (SST), which combines the "arrival" and "transposition" probabilities of an extreme storm using the total probability theorem. The second method, based on "stochastic storm regression" (SSR), combines frequency curves of point rainfalls with regression estimates of local and transposed areal rainfalls; rainfall maxima are generated by stochastically sampling the independent variates, where the required exceedance probabilities are obtained using the total probability theorem. The methods are applied to two large catchments (with areas of 3550 km² and 15,280 km²) located in inland southern Australia. Both methods were found to provide similar estimates of the frequency of extreme areal rainfalls for the two study catchments. The best estimates of the AEP of the PMP for the smaller and larger of the catchments were found to be 10⁻⁷ and 10⁻⁶, respectively, but the uncertainty of these estimates spans one to two orders of magnitude. Additionally, the SST method was applied to a range of locations within a meteorologically homogeneous region to investigate the nature of the relationship between the AEP of PMP and catchment area.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
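The MAXENT selection step can be illustrated on a minimal discrete example, Jaynes's biased-die problem, assuming a single mean-value constraint. The maximizing distribution is then the exponential family p_i ∝ exp(λ v_i), with the multiplier λ fixed by the constraint; the paper's constraints on volume fractions are analogous but richer.

```python
import math

def maxent_distribution(values, target_mean, iters=200):
    """Maximize entropy over a finite set subject to a fixed mean.
    The solution is p_i proportional to exp(lam * v_i); the multiplier lam
    is found by bisection, since the constrained mean increases with lam."""
    def mean_at(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(wi * vi for wi, vi in zip(w, values)) / z

    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_at(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# A die constrained to average 4.5 instead of the uniform 3.5: the probabilities
# tilt exponentially toward the higher faces, the least-biased way to meet the
# constraint while remaining maximally disordered otherwise.
p = maxent_distribution([1, 2, 3, 4, 5, 6], 4.5)
print([round(pi, 4) for pi in p])
```

Adding a further constraint, as the abstract proposes for texture prediction, simply adds another Lagrange multiplier to the exponent.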
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity-flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different transfer laws affect the model through the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
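The linear-time solution mentioned above is commonly known as Kadane's algorithm. A small sketch, in Python rather than the paper's functional setting, with the cubic-time specification kept as a cross-check:

```python
def mss_spec(xs):
    """Cubic-time specification: try every contiguous segment (empty included)."""
    n = len(xs)
    return max(sum(xs[i:j]) for i in range(n + 1) for j in range(i, n + 1))

def mss_linear(xs):
    """Kadane's linear-time algorithm: 'ending' tracks the best sum of a
    segment ending at the current element; a negative running sum is reset
    to the empty segment, whose sum is 0."""
    best = ending = 0
    for x in xs:
        ending = max(0, ending + x)
        best = max(best, ending)
    return best

data = [31, -41, 59, 26, -53, 58, 97, -93, -23, 84]
print(mss_linear(data))           # 59 + 26 - 53 + 58 + 97 = 187
assert mss_linear(data) == mss_spec(data)
```

The calculational derivation the abstract alludes to shows how the specification is transformed into the one-pass fold by fusing the maximum with the segment generation.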
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
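The spectrally-imposed limit can be sketched numerically by weighting a Planck spectrum with the photopic sensitivity over a chosen bandpass. A crude Gaussian stand-in for the CIE V(λ) curve is used here (an assumption for brevity; the paper uses the real photopic function and explicit truncation and color-rendering criteria).

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(lam_nm, T):
    """Blackbody spectral radiance at wavelength lam_nm (nm), temperature T (K)."""
    lam = lam_nm * 1e-9
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def v_photopic(lam_nm):
    """Rough Gaussian approximation to the CIE photopic V(lambda), peak at 555 nm."""
    return math.exp(-0.5 * ((lam_nm - 555.0) / 41.0) ** 2)

def spectral_efficacy(T, lo_nm, hi_nm, n=2000):
    """Luminous efficacy (lm/W) of a blackbody truncated to [lo_nm, hi_nm]:
    683 lm/W at 555 nm times the V-weighted fraction of the in-band power."""
    num = den = 0.0
    for i in range(n):
        lam = lo_nm + (hi_nm - lo_nm) * (i + 0.5) / n
        b = planck(lam, T)
        num += v_photopic(lam) * b
        den += b
    return 683.0 * num / den

# Narrower bandpasses concentrate the power where the eye is sensitive,
# raising the efficacy toward the 683 lm/W monochromatic limit.
print(round(spectral_efficacy(5800.0, 400.0, 700.0), 1))
print(round(spectral_efficacy(5800.0, 475.0, 650.0), 1))
```

The trade-off the paper quantifies is visible even in this sketch: shrinking the bandpass raises efficacy but degrades color rendering.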
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p_μ in quantum theory to construct a momentum-space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur at high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only discriminate moving objects by background subtraction, although the objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of plier grip spans on total grip force, individual finger forces, and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, the 50-mm grip span yielded significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas muscle activities were higher at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: 30.3%, 31.3%, and 41.3% for grip spans of 50 mm, 65 mm, and 80 mm, respectively. Thus, a 50-mm grip span for pliers might be recommended, as it allows maximum exertion in gripping tasks as well as lower cutting-force-to-maximum-grip-strength ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
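The flavor of noise bias and its analytic removal can be conveyed by a much simpler toy than galaxy ellipticities: the maximum likelihood estimator of a Gaussian variance is biased low by a factor (n-1)/n, and the form of the likelihood itself supplies the correction, with no external calibration. The lensing case is analogous, though there the bias enters at second order in the pixel noise; all numbers below are for the toy only.

```python
import random

random.seed(1)

n, trials, true_var = 5, 20000, 4.0
ml_estimates, corrected = [], []
for _ in range(trials):
    xs = [random.gauss(0.0, 2.0) for _ in range(n)]
    mean = sum(xs) / n
    s2_ml = sum((x - mean) ** 2 for x in xs) / n   # ML estimate: biased low
    ml_estimates.append(s2_ml)
    corrected.append(s2_ml * n / (n - 1))          # analytic bias correction

avg_ml = sum(ml_estimates) / trials
avg_corr = sum(corrected) / trials
print(round(avg_ml, 2), round(avg_corr, 2))   # roughly 3.2 vs 4.0 (true value)
```

Just as here the correction factor follows from the known sampling distribution of the estimator, the paper derives the leading-order ellipticity bias from the likelihood expansion and subtracts it.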
Maximum mass of a barotropic spherical star
Fujisawa, Atsuhito; Yoo, Chul-Moon; Nambu, Yasusada
2015-01-01
The ratio of the total mass $M$ to the surface radius $R$ of a spherical perfect fluid ball has an upper bound, $M/R < B$. Buchdahl obtained $B = 4/9$ under the assumptions of non-increasing mass density in the outward direction and a barotropic equation of state. Barraco and Hamity lowered Buchdahl's bound to $B = 3/8$ $(< 4/9)$ by adding the dominant energy condition to Buchdahl's assumptions. In this paper, we further lower the Barraco-Hamity bound to $B \\simeq 0.3636403$ $(< 3/8)$ by adding the subluminal (slower-than-light) condition on the sound speed. In our analysis, we solve the Tolman-Oppenheimer-Volkoff equations numerically, and the mass-to-radius ratio is maximized by variation of the mass, radius, and pressure inside the fluid ball as functions of mass density.
Shape Modelling Using Maximum Autocorrelation Factors
Larsen, Rasmus
2001-01-01
This paper addresses the problem of generating a low-dimensional representation of the shape variation present in a training set after alignment using Procrustes analysis and projection into shape tangent space. We extend the use of principal components analysis in the original formulation of Active Shape Models by Timothy Cootes and Christopher Taylor by building new information into the model. This new information consists of two types of prior knowledge. First, in many situations we will be given an ordering of the shapes of the training set. This situation occurs when the shapes of the training set are in reality a time series, e.g. snapshots of a beating heart during the cardiac cycle, or when the shapes are slices of a 3D structure, e.g. the spinal cord. Second, in almost all applications a natural order of the landmark points along the contour of the shape is introduced.
The effect of natural selection on the performance of maximum parsimony
Ofria Charles
2007-06-01
Background: Maximum parsimony is one of the most commonly used and extensively studied phylogeny reconstruction methods. While current evaluation methodologies such as computer simulations provide insight into how well maximum parsimony reconstructs phylogenies, they tell us little about how well maximum parsimony performs on taxa drawn from populations of organisms that evolved subject to natural selection in addition to the random factors of drift and mutation. It is clear that natural selection has a significant impact on among-site rate variation (ASRV) and the rate of accepted substitutions; that is, accepted mutations do not occur with uniform probability along the genome, and some substitutions are more likely to occur than others. However, little is known about how ASRV and non-uniform character substitutions impact the performance of reconstruction methods such as maximum parsimony. To gain insight into these issues, we study how well maximum parsimony performs with data generated by Avida, a digital life platform where populations of digital organisms evolve subject to natural selective pressures. Results: We first identify conditions where natural selection does affect maximum parsimony's reconstruction accuracy. In general, as we increase the probability that a significant adaptation will occur in an intermediate ancestor, the performance of maximum parsimony improves. In fact, maximum parsimony can correctly reconstruct small 4-taxon trees on data that have received surprisingly many mutations if the intermediate ancestor has received a significant adaptation. We demonstrate that this improved performance of maximum parsimony is attributable more to ASRV than to non-uniform character substitutions. Conclusion: Maximum parsimony, as well as most other phylogeny reconstruction methods, may perform significantly better on actual biological data than is currently suggested by computer simulation studies because of natural selection.
Leadership Criteria under Maximum Performance Conditions
2011-03-01
idealized influence (Northouse, 2001). Of the "Four I's" of transformational leadership, idealized influence is described as the charismatic and role-model ... an ability characteristic that is trait-based (Northouse, 2001). Taken together, conscientiousness, honesty, idealized influence, integrity ... tools (Northouse, 2001). However, when using a military sample, the possibility exists that external validity can potentially be compromised ...
Rijkhoff, Jan; Bakker, Dik
1998-01-01
This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We will also demonstrate how our sampling method is used with different genetic classifications (Voegelin & Voegelin 1977, Ruhlen 1987, Grimes ed. 1997) and argue that, on the whole, our sampling technique compares favourably with other methods, especially in the case of exploratory research.
Influence of vertical flows in wells on groundwater sampling.
McMillan, Lindsay A; Rivett, Michael O; Tellam, John H; Dumble, Peter; Sharp, Helen
2014-11-15
Pumped groundwater sampling evaluations often assume that horizontal head gradients predominate and the sample comprises an average of water quality variation over the well screen interval weighted towards contributing zones of higher hydraulic conductivity (a permeability-weighted sample). However, the pumping rate used during sampling may not always be sufficient to overcome vertical flows in wells driven by ambient vertical head gradients. Such flows are reported in wells with screens between 3 and 10m in length where lower pumping rates are more likely to be used during sampling. Here, numerical flow and particle transport modeling is used to provide insight into the origin of samples under ambient vertical head gradients and under a range of pumping rates. When vertical gradients are present, sample provenance is sensitive to pump intake position, pumping rate and pumping duration. The sample may not be drawn from the whole screen interval even with extended pumping times. Sample bias is present even when the ambient vertical flow in the wellbore is less than the pumping rate. Knowledge of the maximum ambient vertical flow in the well does, however, allow estimation of the pumping rate that will yield a permeability-weighted sample. This rate may be much greater than that recommended for low-flow sampling. In practice at monitored sites, the sampling bias introduced by ambient vertical flows in wells may often be unrecognized or underestimated when drawing conclusions from sampling results. It follows that care should be taken in the interpretation of sampling data if supporting flow investigations have not been undertaken.
On Global Magnetic "Monopoly" Near Solar Cycle Maximums
Kryvodubskyj, V.
During the last maxima of solar activity, both poles of the polar magnetic field had the same polarity. Since in the turbulent αΩ-dynamo model the excitation thresholds of the periodic dipole and quadrupole modes of the poloidal magnetic field (PMF) are rather close [Parker E. N.: 1971, Ap.J. V. 164, p. 491], it is possible that the quadrupole mode may be excited due to variations of physical parameters in some regions of the solar convection zone (SCZ). The pattern of the excited modes (dipole, quadrupole, octupole, etc.) is determined by the values of the wave number of Parker's dynamo-wave. We calculated these values for the SCZ model by Stix (1989) [Stix M.: 1989, The Sun. Berlin, p. 200] in the vicinity of the solar tachocline (a region of strong shear of the angular velocity at the base of the SCZ), using our estimate of the helical turbulence parameter [Krivodubskij V. N.: 1998, Astron. Reports V. 42, No 1, p. 122] and values of the radial gradient of the angular velocity obtained from newer helioseismic measurements (during the rising phase of the 23rd solar cycle: 1995-1999) [Howe R., Christensen-Dalsgaard J., Hill F. et al.: 2000, Science. V. 287, p. 2456]. It is found that at low latitudes the dynamo mechanism produces rather the dipole (wave number ≈ -7), the main mode of the PMF antisymmetric relative to the equatorial plane, while at latitudes higher than 50° the conditions are more favourable for exciting the quadrupole (wave number ≈ +8), the lowest symmetric mode. The resulting north-south asymmetry of the magnetic structure offers an opportunity to explain the space magnetic anomaly of the PMF ("monopoly") observed near solar cycle maxima.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 Employees' Benefits, Creditable Railroad Compensation, § 211.14 Maximum creditable compensation (2010-04-01). ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable ...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 Transportation, Allowable Stress, § 230.24 Maximum allowable stress (2010-10-01). (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate ...
Solar cycle variations in the solar wind
Freeman, John W.; Lopez, Ramon E.
1986-01-01
The solar cycle variations of various solar wind parameters are reviewed. It is shown that there is a gradual decrease in the duration of high-speed streams from the declining phase of solar cycle 20 through the ascending phase of cycle 21, and a corresponding decrease in the annual average of the proton speed toward solar maximum. Beta, the ratio of the proton thermal pressure to the magnetic pressure, undergoes a significant solar cycle variation, as expected from the variation in the IMF. Individual hourly averages of beta often exceed unity, with 20 cases exceeding 10 and one case as high as 25. The Alfven Mach number shows a solar cycle variation similar to that of beta, being lower around solar maximum. High-speed streams can be seen clearly in epsilon and in the y component of the interplanetary magnetic field.
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements, with uranium as metal, alloy, and oxide, were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less well-known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle, and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results, such as: (1) entropy does not necessarily increase in non-isolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function, such as shoot or canopy photosynthesis or growth rate, which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified, and there remains some uncertainty about the most appropriate choice of objective function. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness: the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution, 'survival of the likeliest', which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
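The interpolation step mentioned in this abstract — expanding discrete bearing properties into continuous variables so a gradient-based search can be used — can be illustrated with a minimal sketch. The catalog values below (bore diameter vs. dynamic load capacity) are hypothetical, not data from the paper, and Lagrange interpolation stands in for whatever polynomial form the authors used.

```python
def lagrange_poly(xs, ys):
    """Return a function interpolating the points (xs[i], ys[i]) exactly."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# Hypothetical bearing catalog: bore diameter (mm) vs. dynamic capacity (kN).
bores = [20.0, 25.0, 30.0, 35.0]
capacities = [12.7, 14.0, 19.5, 25.1]

# Continuous capacity(bore) function suitable for gradient optimization.
capacity = lagrange_poly(bores, capacities)
```

The interpolant reproduces the catalog entries exactly and supplies smooth values (and hence gradients) between them, which is what allows a discrete bearing selection to participate in a continuous feasible-directions search.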
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has been used successfully in several publications. CORA applies Poisson statistics rigorously: from the assumption of Poissonian noise we derive the probability that a model of the emission line spectrum represents the measured spectrum. The likelihood function serves as the criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that yields line fluxes efficiently. As an example we demonstrate the functionality of the program on an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory, analyzing the Ne IX triplet around 13.5 Å.
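The fixed-point idea behind this kind of fit can be sketched in a few lines. For Poisson counts n_i and a model m_i = b + A·g_i (constant background b plus a line profile g_i scaled by flux A), setting the derivative of the log-likelihood to zero gives Σ n_i g_i / m_i = Σ g_i, which rearranges into the update below. This is a generic illustration with synthetic data, not CORA's actual implementation.

```python
import math

def fit_line_flux(counts, profile, background, a0=1.0, tol=1e-9, max_iter=500):
    """Maximum-likelihood line flux under Poisson noise via fixed-point iteration.

    Model: m_i = background + A * profile_i.  The stationarity condition
    sum(n_i * g_i / m_i) = sum(g_i) is rearranged as A <- F(A) below.
    """
    g_sum = sum(profile)
    a = a0
    for _ in range(max_iter):
        a_new = sum(n * g * a / (background + a * g)
                    for n, g in zip(counts, profile)) / g_sum
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    return a

# Synthetic spectrum: Gaussian line of true flux 100 on a flat background of 3.
profile = [math.exp(-(i - 10) ** 2 / 8.0) for i in range(21)]
counts = [round(3 + 100 * g) for g in profile]
flux = fit_line_flux(counts, profile, background=3.0)
```

Because each update only evaluates the model once per bin, the iteration recovers the flux without a general-purpose optimizer, which is the practical appeal of the fixed-point formulation for low-count spectra.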
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since the DC coefficient is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of space is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
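The run-length/level pairing that defines the search space can be sketched as follows. In JPEG baseline coding, each nonzero AC coefficient becomes a (run, size) symbol, where run counts the preceding zeros (a run longer than 15 emits a ZRL symbol (15, 0)) and size is the bit length of the coefficient's magnitude; trailing zeros collapse into an end-of-block symbol (0, 0). The Huffman table itself is omitted here; the sketch only produces the symbol sequence whose code lengths the paper's branch-and-bound search bounds.

```python
def magnitude_category(level):
    """JPEG size category: number of bits needed for |level| (0 for level 0)."""
    return abs(level).bit_length()

def ac_symbols(ac_coeffs):
    """Zigzag-ordered AC coefficients -> list of (run, size) Huffman symbols."""
    symbols = []
    run = 0
    for c in ac_coeffs:
        if c == 0:
            run += 1
            continue
        while run > 15:          # a run of 16 zeros becomes a ZRL symbol
            symbols.append((15, 0))
            run -= 16
        symbols.append((run, magnitude_category(c)))
        run = 0
    if run > 0:                  # trailing zeros collapse into end-of-block
        symbols.append((0, 0))
    return symbols

# 63 AC coefficients of one 8x8 block in zigzag order (illustrative values).
symbols = ac_symbols([5, 0, 0, -3] + [0] * 59)
```

The buffered bit count for a block is then the sum, over these symbols, of the Huffman code length for each (run, size) symbol plus size extra bits for the level itself — the quantity the paper bounds between 346 and 433 bits.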
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a recently introduced group-theoretic approach to modelling inversions. This MLE functions as a corrected distance; in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect of this work tackles the problem of accounting for the symmetries of circular arrangements. Whereas computations are generally made with respect to a fixed frame of reference, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. This philosophy of accounting for symmetries can be applied to any existing correction method, and examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
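The dihedral-symmetry accounting can be illustrated with a minimal sketch (a simplification, not the paper's group-theoretic machinery): a circular gene arrangement is reduced to a canonical representative over all rotations and reflections, so any quantity computed between canonical forms is independent of the frame of reference chosen to write the genomes down.

```python
def dihedral_canonical(genome):
    """Canonical representative of a circular gene arrangement under the
    dihedral group: the lexicographically smallest among all rotations of
    the sequence and all rotations of its reversal."""
    seq = tuple(genome)
    n = len(seq)
    candidates = []
    for s in (seq, seq[::-1]):       # identity and reflection
        for k in range(n):           # all n rotations of each
            candidates.append(s[k:] + s[:k])
    return min(candidates)
```

Two genomes that differ only by how the circle was cut open and oriented map to the same representative, so a distance estimator applied to canonical forms cannot depend on an arbitrary starting point or reading direction.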
Understanding the Role of Reservoir Size on Probable Maximum Precipitation
Woldemichael, A. T.; Hossain, F.
2011-12-01
This study addresses the question: does the surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin? The motivation was the notion that the stationarity assumption implicit in PMP estimates for dam design can be undermined in the post-dam era by an enhancement of extreme precipitation patterns due to the artificial reservoir. In addition, the study lays the foundation for using regional atmospheric models as one way to perform life-cycle assessment for planned or existing dams and to formulate best management practices. The American River Watershed (ARW), with the Folsom dam at the confluence of the American River, was selected as the study region, and the Dec 1996-Jan 1997 storm event was selected as the study period. The numerical atmospheric model used was the Regional Atmospheric Modeling System (RAMS). First, RAMS was calibrated and validated against selected station data and spatially interpolated precipitation data, and the best combinations of parameterization schemes were selected accordingly. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity in the model was raised to 100% from the ground up to the 500 mb level. The resulting model-based maximum 72-hr precipitation values were termed extreme precipitation (EP) to distinguish them from PMPs obtained by standard methods. Third, six hypothetical reservoir size scenarios, ranging from no dam (all dry) to a reservoir submerging half of the basin, were established to test the influence of reservoir size on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity implicit in the traditional estimation of PMP can be rendered invalid in large part due to the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give indication of the