WorldWideScience

Sample records for correlated sampling methods

  1. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    When a small detector lies inside a bulk medium, two problems arise in calculating the detector's correction factors: the detector is too small for enough particles to reach it and collide in it, and the ratio of the two computed quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, solves both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to compute a simple model of the detector correction factors. The results show that, although every variance reduction technique combined with correlated sampling improves the calculation efficiency, the combination of modified particle-collision auto-importance sampling with correlated sampling is the most efficient. (authors)
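The variance advantage that correlated sampling buys can be seen in a toy sketch (illustrative only, not the authors' MCNP-4C implementation): the same random history is reused in the reference and perturbed calculations, so most of the noise cancels in their difference.

```python
import random
import statistics

def response(shift, u):
    # Toy stand-in for a detector response; the perturbed system
    # differs from the unperturbed one by a small parameter shift.
    return (u + shift) ** 2

def estimate_change(n, correlated, seed=1):
    """Estimate the response change, either with independent histories
    or with correlated histories (the same random numbers reused)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n):
        u_pert = rng.random()
        u_ref = u_pert if correlated else rng.random()
        diffs.append(response(0.01, u_pert) - response(0.0, u_ref))
    return statistics.mean(diffs), statistics.stdev(diffs)

mean_c, sd_c = estimate_change(10_000, correlated=True)
mean_i, sd_i = estimate_change(10_000, correlated=False)
print(sd_c, sd_i)  # the correlated estimate has a far smaller spread
```

Both estimators target the same small change, but the correlated one estimates it from near-cancelling pairs rather than from the difference of two noisy independent means.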

  2. Monte Carlo burnup codes acceleration using the correlated sampling method

    International Nuclear Information System (INIS)

    Dieudonne, C.

    2013-01-01

    For several years, Monte Carlo burnup/depletion codes have coupled Monte Carlo codes, which simulate the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine three-dimensional effects and to avoid the multi-group approximations made by deterministic solvers. The drawback is the prohibitive calculation time caused by the Monte Carlo solver being called at each time step. In this document we present an original methodology that avoids these repetitive and expensive Monte Carlo simulations and replaces them with perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. First, we present this method and give details on the perturbative technique used, namely correlated sampling. Second, we develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Third, we discuss the implementation of this method in the TRIPOLI-4 code, as well as the precise calculation scheme used to obtain a substantial speed-up of the depletion calculation. We begin by validating and optimizing the perturbed depletion scheme on the depletion of a PWR-like fuel cell. The technique is then used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr

  3. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    Science.gov (United States)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.

  4. Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)

    2015-01-15

    Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated, inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, this weighted method is efficient only for a limited number of correlated parameters.

  5. Inverse Ising inference with correlated samples

    International Nuclear Information System (INIS)

    Obermayer, Benedikt; Levine, Erel

    2014-01-01

    Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem. (paper)

  6. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.

  7. Improvement of correlated sampling Monte Carlo methods for reactivity calculations

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Asaoka, Takumi

    1978-01-01

    Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate the reactivity perturbation up to its second-order change. Secondary fission neutrons produced by neutrons that have passed through perturbed regions are followed, in both the unperturbed and perturbed systems, in a way that maintains a strong correlation between the secondary neutrons of the two systems. These techniques are incorporated into the general-purpose Monte Carlo code MORSE so that the statistical error of the calculated reactivity change can also be estimated. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has proved more useful than the similar flight path method for the analysis of the control rod worth. (auth.)

  8. Exact sampling of graphs with prescribed degree correlations

    Science.gov (United States)

    Bassler, Kevin E.; Del Genio, Charo I.; Erdős, Péter L.; Miklós, István; Toroczkai, Zoltán

    2015-08-01

    Many real-world networks exhibit correlations between the node degrees. For instance, in social networks nodes tend to connect to nodes of similar degree and conversely, in biological and technological networks, high-degree nodes tend to be linked with low-degree nodes. Degree correlations also affect the dynamics of processes supported by a network structure, such as the spread of opinions or epidemics. The proper modelling of these systems, i.e., without uncontrolled biases, requires the sampling of networks with a specified set of constraints. We present a solution to the sampling problem when the constraints imposed are the degree correlations. In particular, we develop an exact method to construct and sample graphs with a specified joint-degree matrix, which is a matrix providing the number of edges between all the sets of nodes of a given degree, for all degrees, thus completely specifying all pairwise degree correlations, and additionally, the degree sequence itself. Our algorithm always produces independent samples without backtracking. The complexity of the graph construction algorithm is O(NM), where N is the number of nodes and M is the number of edges.

  9. Sampling networks with prescribed degree correlations

    Science.gov (United States)

    Del Genio, Charo; Bassler, Kevin; Erdős, Péter; Miklós, István; Toroczkai, Zoltán

    2014-03-01

    A feature of a network known to affect its structural and dynamical properties is the presence of correlations amongst the node degrees. Degree correlations are a measure of how much the connectivity of a node influences the connectivity of its neighbours, and they are fundamental in the study of processes such as the spreading of information or epidemics, the cascading failures of damaged systems and the evolution of social relations. We introduce a method, based on novel mathematical results, that allows the exact sampling of networks where the number of connections between nodes of any given connectivity is specified. Our algorithm provides a weight associated with each sample, thereby allowing network observables to be measured according to any desired distribution, and it is guaranteed to always terminate successfully in polynomial time. Thus, our new approach provides a preferred tool for scientists to model complex systems of current relevance, and enables researchers to precisely study correlated networks with broad societal importance. CIDG acknowledges support by the European Commission's FP7 through grant No. 288021. KEB acknowledges support from the NSF through grant DMR-1206839. KEB, PE, IM and ZT acknowledge support from AFOSR and DARPA through grant FA9550-12-1-0405.

  10. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Full Text Available Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique with different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as the Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. In the analysis of highly irregular time series, we find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method than for the linear interpolation scheme. For the cross-correlation function (CCF) the RMSE is then lower by 60 %. The Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. In particular, the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performance of the interpolation and Gaussian kernel methods by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross-correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages over other techniques, and it is suitable for large-scale application to paleo-data.
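The kernel-based estimator compared here can be sketched as follows (a simplified illustration of a Gaussian-kernel cross-correlation for irregular sampling, not the benchmark code): every pair of observations contributes to the correlation at lag τ with a weight that decays with the mismatch between its time separation and τ.

```python
import math
import random

def gauss_kernel_xcorr(tx, x, ty, y, lag, h):
    """Gaussian-kernel cross-correlation at one lag for irregularly
    sampled series: each observation pair is weighted by how closely
    its time separation matches the requested lag (bandwidth h)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / len(x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / len(y))
    num = wsum = 0.0
    for ti, xi in zip(tx, x):
        for tj, yj in zip(ty, y):
            w = math.exp(-((tj - ti - lag) ** 2) / (2.0 * h * h))
            num += w * (xi - mx) * (yj - my)
            wsum += w
    return num / (wsum * sx * sy)

# Two different irregular samplings of the same sinusoid: strong
# correlation at lag 0, anticorrelation half a period later.
rng = random.Random(5)
tx = sorted(rng.uniform(0.0, 100.0) for _ in range(150))
ty = sorted(rng.uniform(0.0, 100.0) for _ in range(150))
x = [math.sin(0.3 * t) for t in tx]
y = [math.sin(0.3 * t) for t in ty]
c0 = gauss_kernel_xcorr(tx, x, ty, y, lag=0.0, h=1.0)
c_half = gauss_kernel_xcorr(tx, x, ty, y, lag=math.pi / 0.3, h=1.0)
print(round(c0, 2), round(c_half, 2))
```

No interpolation onto a regular grid is needed, which is what avoids the bias the benchmark attributes to interpolation for very irregular sampling.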

  11. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding becoming trapped in a local region of phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
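The general principle of importance sampling, drawing from a biased proposal and reweighting by a likelihood ratio so that rare but important contributions are sampled often, can be illustrated with a generic toy example far simpler than the SC-IVR setting (estimating a Gaussian tail probability; all choices here are illustrative).

```python
import math
import random

TRUE_TAIL = 3.167e-05  # P(Z > 4) for a standard normal, for reference

def naive_tail(n, rng):
    """Plain Monte Carlo: almost no samples ever land in the tail."""
    return sum(rng.gauss(0.0, 1.0) > 4.0 for _ in range(n)) / n

def is_tail(n, shift, rng):
    """Importance sampling: draw from N(shift, 1), which hits the tail
    routinely, and reweight each hit by the likelihood ratio
    N(0,1)/N(shift,1) = exp(-shift*z + shift^2/2)."""
    total = 0.0
    for _ in range(n):
        z = rng.gauss(shift, 1.0)
        if z > 4.0:
            total += math.exp(-shift * z + 0.5 * shift * shift)
    return total / n

rng = random.Random(9)
naive = naive_tail(50_000, rng)
est = is_tail(50_000, 4.0, rng)
print(naive, est)
```

With the proposal centred on the rare region, almost every draw contributes, and the reweighting keeps the estimator unbiased.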

  12. Comparison between correlated sampling and the perturbation technique of MCNP5 for fixed-source problems

    International Nuclear Information System (INIS)

    He Tao; Su Bingjing

    2011-01-01

    Highlights: → The performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. → In terms of precision, the MCNP perturbation technique outperforms correlated sampling for one type of problem but performs comparably with or even under-performs correlated sampling for the other two types of problems. → In terms of accuracy, the MCNP perturbation calculations may predict inaccurate results for some of the test problems. However, the accuracy can be improved if the midpoint correction technique is used. - Abstract: Correlated sampling and the differential operator perturbation technique are two methods that enable MCNP (Monte Carlo N-Particle) to simulate small response change between an original system and a perturbed system. In this work the performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. In terms of precision of predicted response changes, the MCNP perturbation technique outperforms correlated sampling for the problem involving variation of nuclide concentrations in the same direction but performs comparably with or even underperforms correlated sampling for the other two types of problems that involve void or variation of nuclide concentrations in opposite directions. In terms of accuracy, the MCNP differential operator perturbation calculations may predict inaccurate results that deviate from the benchmarks well beyond their uncertainty ranges for some of the test problems. However, the accuracy of the MCNP differential operator perturbation can be improved if the midpoint correction technique is used.

  13. Angular correlation methods

    International Nuclear Information System (INIS)

    Ferguson, A.J.

    1974-01-01

    An outline of the theory of angular correlations is presented, and the difference between the modern density matrix method and the traditional wave function method is stressed. Comments are offered on particular angular correlation theoretical techniques. A brief discussion is given of recent studies of gamma ray angular correlations of reaction products recoiling with high velocity into vacuum. Two methods for optimization to obtain the most accurate expansion coefficients of the correlation are discussed. (1 figure, 53 references) (U.S.)

  14. Characteristics and correlation of various radiation measuring methods in spatial radiation measurement

    International Nuclear Information System (INIS)

    Yoneda, Kazuhiro; Tonouchi, Shigemasa

    1992-01-01

    In a survey of the natural radiation distribution, carried out to identify the most useful measuring method, the γ-ray dose rates obtained by the survey meter method, the in-situ measuring method, and the soil sampling method were compared. Between the in-situ measuring method and the survey meter method, the correlation Y=0.986X+5.73 (r=0.903, n=18, P<0.01) was obtained, a high correlation with a slope of nearly 1. Between the survey meter method and the soil sampling method, the correlation Y=1.297X-10.30 (r=0.966, n=20, P<0.01) was obtained, also a high correlation, but disparities in the dose rate contribution of 36% for the U series, 6% for the Th series, and 20% for K-40 were observed. For surveys of the natural radiation distribution, combining the survey meter method with either the in-situ measuring method or the soil sampling method is suitable. (author)

  15. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    Science.gov (United States)

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
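The biserial estimator the article analyzes can be sketched with the classical textbook formula (simulated data stand in for a real study; the latent product-moment correlation is set to 0.5 here for illustration).

```python
import math
import random
import statistics

def biserial(y, d):
    """Biserial correlation between a continuous variable y and a 0/1
    variable d, under the classical model that d arises from cutting
    an underlying normal variable at a threshold."""
    n = len(y)
    p = sum(d) / n                            # proportion in group 1
    q = 1.0 - p
    m1 = statistics.mean(v for v, g in zip(y, d) if g == 1)
    m0 = statistics.mean(v for v, g in zip(y, d) if g == 0)
    s = statistics.pstdev(y)
    z = statistics.NormalDist().inv_cdf(p)    # cutpoint for proportion p
    phi = statistics.NormalDist().pdf(z)      # normal ordinate at the cut
    return (m1 - m0) / s * (p * q / phi)

# Simulate a bivariate normal pair with correlation 0.5 and
# dichotomize the second variable at its median.
rng = random.Random(11)
rho = 0.5
ys, ds = [], []
for _ in range(50_000):
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    ys.append(z1)
    ds.append(1 if z2 > 0.0 else 0)

r = biserial(ys, ds)
print(round(r, 2))
```

Unlike the point-biserial coefficient, the estimate recovers the latent product-moment correlation, which is what makes it poolable with product-moment coefficients in a meta-analysis.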

  16. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    Science.gov (United States)

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling-site distribution, and accuracy and precision of the measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites by row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs of different shapes and sizes by four sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate by row and column was infinite, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.

  17. Analysis method of high-order collective-flow correlations based on the concept of correlative degree

    International Nuclear Information System (INIS)

    Zhang Weigang

    2000-01-01

    Based on the concept of correlative degree, a new method of high-order collective-flow measurement is constructed, with which azimuthal correlations, correlations of the final-state transverse-momentum magnitude, and transverse correlations can be inspected separately. Using the new method, the contributions of the azimuthal correlations of the particle distribution and of the correlations of the transverse-momentum magnitude of final-state particles to high-order collective-flow correlations are analyzed with 4π experimental events for 1.2 A GeV Ar + BaI2 collisions at the Bevalac streamer chamber. Compared with the correlations of the transverse-momentum magnitude, the azimuthal correlations of the final-state particle distribution dominate the high-order collective-flow correlations in the experimental samples. The contributions of the correlations of the transverse-momentum magnitude of final-state particles not only enhance the strength of the high-order correlations of the particle group, but also provide important information for measuring the collectivity of collective flow within the more constrained region

  18. Turbidity threshold sampling: Methods and instrumentation

    Science.gov (United States)

    Rand Eads; Jack Lewis

    2001-01-01

    Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...

  19. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    Science.gov (United States)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Because of the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.

  20. Simulating quantum correlations as a distributed sampling problem

    International Nuclear Information System (INIS)

    Degorre, Julien; Laplante, Sophie; Roland, Jeremie

    2005-01-01

    It is known that the quantum correlations exhibited by a maximally entangled qubit pair can be simulated with the help of shared randomness, supplemented with additional resources such as communication, postselection or nonlocal boxes. For instance, in the case of projective measurements, it is possible to solve this problem with protocols using one bit of communication or making one use of a nonlocal box. We show that this problem reduces to a distributed sampling problem. We give a new method to obtain samples from a biased distribution, starting with shared random variables following a uniform distribution, and use it to build distributed sampling protocols. This approach allows us to derive, in a simpler and unified way, many existing protocols for projective measurements, and to extend them to positive-operator-valued measurements. Moreover, this approach naturally leads to a local hidden variable model for Werner states

  1. Development of digital image correlation method to analyse crack ...

    Indian Academy of Sciences (India)

    samples were performed to verify the performance of the digital image correlation method. ... development cannot be measured accurately. ..... Mendelson A 1983 Plasticity: Theory and application (USA: Krieger Publishing company Malabar,.

  2. A Method to Correlate mRNA Expression Datasets Obtained from Fresh Frozen and Formalin-Fixed, Paraffin-Embedded Tissue Samples: A Matter of Thresholds.

    Directory of Open Access Journals (Sweden)

    Dana A M Mustafa

    Full Text Available Gene expression profiling of tumors is a successful tool for the discovery of new cancer biomarkers and potential targets for the development of new therapeutic strategies. Reliable profiling is preferably performed on fresh frozen (FF) tissues, in which the quality of nucleic acids is better preserved than in formalin-fixed, paraffin-embedded (FFPE) material. However, since snap-freezing of biopsy material is often not part of the daily routine in pathology laboratories, one may have to rely on archival FFPE material. Procedures to retrieve the RNAs from FFPE material have been developed; therefore, datasets obtained from FFPE and FF materials need to be made compatible to ensure that reliable comparisons are possible. To develop an efficient method to compare gene expression profiles obtained from FFPE and FF samples using the same platform. Twenty-six FFPE-FF sample pairs of the same tumors representing various cancer types, and two FFPE-FF sample pairs of breast cancer cell lines, were included. Total RNA was extracted and gene expression profiling was carried out using Illumina's Whole-Genome cDNA-mediated Annealing, Selection, extension and Ligation (WG-DASL V3) arrays, enabling the simultaneous detection of 24,526 mRNA transcripts. A sample exclusion criterion was created based on the expression of 11 stably expressed reference genes. Pearson correlation at the probe level was calculated for paired FFPE-FF samples, and three cut-off values were chosen. Spearman correlation coefficients between the matched FFPE and FF samples were calculated for three probe lists with varying levels of significance and compared to the correlation based on all measured probes. Unsupervised hierarchical cluster analysis was performed to verify the performance of the included probe lists in comparing matched FFPE-FF samples. Twenty-seven FFPE-FF pairs passed the sample exclusion criterion. From the profiles of 27 FFPE and FF matched samples, the best correlating probes were identified

  3. Quantum superposition of the state discrete spectrum of mathematical correlation molecule for small samples of biometric data

    Directory of Open Access Journals (Sweden)

    Vladimir I. Volchikhin

    2017-06-01

    Full Text Available Introduction: The study aims to decrease the number of errors in calculating the correlation coefficient for small test samples. Materials and Methods: We used a simulation tool for the distribution functions of the density of values of the correlation coefficient in small samples. A method for quantization of the data allows obtaining a discrete spectrum of states of one of the varieties of the correlation functional. This allows us to consider the proposed structure as a mathematical correlation molecule, described by some analogue of the continuous-quantum Schrödinger equation. Results: The chi-squared Pearson's molecule on small samples enhances the power of the classical chi-squared test by up to 20 times. The mathematical correlation molecule described in the article has similar properties. It should in the future allow reducing the calculation errors of the classical correlation coefficients in small samples. Discussion and Conclusions: The authors suggest that there are infinitely many mathematical molecules similar in their properties to actual physical molecules. Schrödinger equations are not unique; their analogues can be constructed for each mathematical molecule. One can expect a mathematical synthesis of molecules for a large number of known statistical tests and statistical moments. All this should make it possible to reduce calculation errors due to quantum effects that occur in small test samples.

  4. Uncertainty management in stratigraphic well correlation and stratigraphic architectures: A training-based method

    Science.gov (United States)

    Edwards, Jonathan; Lallier, Florent; Caumon, Guillaume; Carpentier, Cédric

    2018-02-01

    We discuss the sampling and the volumetric impact of stratigraphic correlation uncertainties in basins and reservoirs. From an input set of wells, we evaluate the probability for two stratigraphic units to be associated using an analog stratigraphic model. In the presence of multiple wells, this method sequentially updates a stratigraphic column defining the stratigraphic layering for each possible set of realizations. The resulting correlations are then used to create stratigraphic grids in three dimensions. We apply this method on a set of synthetic wells sampling a forward stratigraphic model built with Dionisos. To perform cross-validation of the method, we introduce a distance comparing the relative geological time of two models for each geographic position, and we compare the models in terms of volumes. Results show the ability of the method to automatically generate stratigraphic correlation scenarios, and also highlight some challenges when sampling stratigraphic uncertainties from multiple wells.

  5. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.
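
    Several of the listed procedures are easy to prototype. As a hedged sketch (not the survey's code), the rank transformation underlying Spearman-type sensitivity measures: rank both variables, then apply the ordinary Pearson correlation, which makes the measure robust to monotone nonlinearity between an input and an output.

```python
def ranks(xs):
    # Average ranks (ties share the mean of their rank positions; 1-based).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    # Rank-transform both inputs, then apply the Pearson correlation.
    return pearson(ranks(x), ranks(y))

# A monotone but nonlinear input/output relation: the rank transform sees
# it perfectly, while the raw linear correlation does not.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [v ** 3 for v in x]
print(spearman(x, y))  # 1.0 for any strictly monotone relation
print(pearson(x, y))   # < 1.0 because the relation is nonlinear
```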

  6. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD. (.; .); Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  7. The Correlation between Obsessive Compulsive Features and Dimensions of Pathological Eating Attitudes in Non-clinical Samples

    Directory of Open Access Journals (Sweden)

    Ali Mohammadzadeh

    2017-01-01

    Full Text Available Background and Objectives: Obsessive compulsive symptoms are prevalent at clinical level in individuals with eating disorders. The purpose of this study was to investigate the correlation between obsessive compulsive features and pathological eating attitudes. Methods: This research is a correlational study. A sample of 790 university students was selected using the stratified random sampling method and investigated with the Obsessive Compulsive Inventory-Revised (OCI-R and Eating Attitudes Test (EAT-26 questionnaires. Data were analyzed using multivariate regression analysis. Results: There was a correlation between obsessive-compulsive features and pathological eating attitudes (p<0.001, r=0.38. The results showed that obsessive-compulsive features can predict 15% of the variance in pathological eating attitudes (p<0.001, r2=0.15. Conclusion: The identified correlation is possibly related to common components between obsessive compulsive and eating disorders.
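
    As a quick arithmetic check of the reported figures (using only the values quoted in the abstract, not the study data), the squared correlation coefficient is the share of variance in eating-attitude scores predictable from obsessive-compulsive scores:

```python
# Values reported in the abstract; this checks only the arithmetic link
# between a correlation coefficient and the variance it explains.
r = 0.38
variance_explained = r ** 2  # coefficient of determination, r^2
print(variance_explained)    # ~0.144, i.e. ~15% once the rounding of r is considered
```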

  8. Structured sparse canonical correlation analysis for brain imaging genetics: an improved GraphNet method.

    Science.gov (United States)

    Du, Lei; Huang, Heng; Yan, Jingwen; Kim, Sungeun; Risacher, Shannon L; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2016-05-15

    Structured sparse canonical correlation analysis (SCCA) models have been used to identify imaging genetic associations. These models use either the group lasso or the graph-guided fused lasso to conduct feature selection and feature grouping simultaneously. The group lasso based methods require prior knowledge to define the groups, which limits their capability when prior knowledge is incomplete or unavailable. The graph-guided methods overcome this drawback by using the sample correlation to define the constraint. However, they are sensitive to the sign of the sample correlation, which could introduce undesirable bias if the sign is wrongly estimated. We introduce a novel SCCA model with a new penalty, and develop an efficient optimization algorithm. Our method has a strong upper bound for the grouping effect for both positively and negatively correlated features. We show that our method performs better than or comparably to three competing SCCA models on both synthetic and real data. In particular, our method identifies stronger canonical correlations and better canonical loading patterns, showing its promise for revealing interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/angscca/. Contact: shenli@iu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor-which is computationally expensive, especially for large systems-is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.
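
    The core idea of importance sampling, drawing from a convenient distribution and reweighting, can be shown in a toy scalar setting. This sketch assumes nothing about the SC-IVR machinery (which samples trajectories, not scalars); it estimates a moment of a standard normal by sampling a broader proposal and weighting by the density ratio:

```python
import math
import random

def importance_estimate(f, n, sigma_q, seed=0):
    # Estimate E_p[f(X)] for p = N(0, 1) by drawing from a broader normal
    # proposal q = N(0, sigma_q^2) and reweighting each draw by p(x)/q(x).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma_q)
        # p(x)/q(x) = sigma_q * exp(-x^2/2 + x^2/(2 sigma_q^2))
        w = sigma_q * math.exp(-x * x / 2.0 + x * x / (2.0 * sigma_q ** 2))
        total += w * f(x)
    return total / n

# The second moment of N(0, 1) is exactly 1; the weighted average recovers it.
est = importance_estimate(lambda x: x * x, 200_000, sigma_q=2.0)
print(est)
```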

  10. A general assignment method for oriented sample (OS) solid-state NMR of proteins based on the correlation of resonances through heteronuclear dipolar couplings in samples aligned parallel and perpendicular to the magnetic field.

    Science.gov (United States)

    Lu, George J; Son, Woo Sung; Opella, Stanley J

    2011-04-01

    A general method for assigning oriented sample (OS) solid-state NMR spectra of proteins is demonstrated. In principle, this method requires only a single sample of a uniformly ¹⁵N-labeled membrane protein in magnetically aligned bilayers, and a previously assigned isotropic chemical shift spectrum obtained either from solution NMR on micelle or isotropic bicelle samples or from magic angle spinning (MAS) solid-state NMR on unoriented proteoliposomes. The sequential isotropic resonance assignments are transferred to the OS solid-state NMR spectra of aligned samples by correlating signals from the same residue observed in protein-containing bilayers aligned with their normals parallel and perpendicular to the magnetic field. The underlying principle is that the resonances from the same residue have heteronuclear dipolar couplings that differ by exactly a factor of two between parallel and perpendicular alignments. The method is demonstrated on the membrane-bound form of Pf1 coat protein in phospholipid bilayers, whose assignments have been previously made using an earlier generation of methods that relied on the preparation of many selectively labeled (by residue type) samples. The new method provides the correct resonance assignments using only a single uniformly ¹⁵N-labeled sample, two solid-state NMR spectra, and a previously assigned isotropic spectrum. Significantly, this approach is equally applicable to residues in alpha helices, beta sheets, loops, and any other elements of tertiary structure. Moreover, the strategy bridges between OS solid-state NMR of aligned samples and solution NMR or MAS solid-state NMR of unoriented samples. In combination with the development of complementary experimental methods, it provides a step towards unifying these apparently different NMR approaches. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald and score tests using generalized estimating equations (GEE), and the F-test using a linear mixed effects model (LMM). We compared type I error rates and statistical power, and demonstrated the analysis approaches on two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (using the average of the two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of the two eyes performs best when the two eyes are in the same comparison group. When selecting the appropriate analysis approach, the study design should be considered.
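
    The advantage of exploiting inter-eye correlation through pairing can be seen on a tiny hypothetical dataset (all values invented for illustration, not from the study):

```python
import math

def paired_t(x, y):
    # Paired t statistic on within-subject differences (two eyes of the
    # same subjects), which removes the between-subject variability.
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    m = sum(d) / n
    var = sum((v - m) ** 2 for v in d) / (n - 1)
    return m / math.sqrt(var / n)

def two_sample_t(x, y):
    # Pooled two-sample t statistic, which ignores the pairing.
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    pooled = (ssx + ssy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(pooled * (1.0 / nx + 1.0 / ny))

# Hypothetical paired readings for 6 subjects: a small, consistent ~2-unit
# effect riding on large between-subject spread.
treated = [17.0, 21.0, 15.0, 19.0, 23.0, 18.0]
control = [15.0, 19.5, 13.0, 17.0, 20.5, 16.5]
print(paired_t(treated, control))      # ~12.5: pairing removes subject noise
print(two_sample_t(treated, control))  # ~1.2: the same effect looks weak unpaired
```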

  12. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
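
    A minimal 1-D sketch of the reference sample compensation idea, with invented numbers and a degree-1 polynomial standing in for the paper's parametric polynomial model:

```python
def fit_line(z, u):
    # Least-squares fit u ≈ a + b*z: a degree-1 stand-in for the parametric
    # polynomial fitted to the reference sample's displacement field.
    n = len(z)
    mz, mu = sum(z) / n, sum(u) / n
    b = sum((zi - mz) * (ui - mu) for zi, ui in zip(z, u)) / \
        sum((zi - mz) ** 2 for zi in z)
    a = mu - b * mz
    return a, b

# Hypothetical 1-D artificial displacement on the stationary reference
# sample: a 0.4-voxel rigid shift plus a 0.002 dilatation from self-heating.
z_ref = [0.0, 100.0, 200.0, 300.0, 400.0]
u_ref = [0.4 + 0.002 * z for z in z_ref]
a, b = fit_line(z_ref, u_ref)

# Subtract the fitted artificial field from the test sample's measured
# displacements; what remains is the true deformation (1.5 voxels here).
z_test = [50.0, 150.0, 250.0]
u_meas = [1.5 + 0.4 + 0.002 * z for z in z_test]
u_true = [u - (a + b * z) for z, u in zip(z_test, u_meas)]
print(u_true)
```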

  13. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.; Pan, B.; Lubineau, Gilles

    2017-01-01

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.

  14. Extension of Latin hypercube samples with correlated variables

    Energy Technology Data Exchange (ETDEWEB)

    Sallaberry, C.J. [Sandia National Laboratories, Department 6784, MS 0776, Albuquerque, NM 87185-0776 (United States); Helton, J.C. [Department of Mathematics and Statistics, Arizona State University, Tempe, AZ 85287-1804 (United States)], E-mail: jchelto@sandia.gov; Hora, S.C. [University of Hawaii at Hilo, Hilo, HI 96720-4091 (United States)

    2008-07-15

    A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.
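
    The stratification property that the extension procedure must preserve is easy to state in code. The following is a generic LHS constructor (a sketch, not the authors' extension algorithm): each variable receives exactly one point per equal-probability stratum.

```python
import random

def latin_hypercube(m, k, seed=1):
    # Basic LHS: for each of k variables, place exactly one point in each
    # of m equal-probability strata of (0, 1), in an independent random
    # stratum order per variable. (The record's procedure doubles m while
    # preserving this property and the rank correlation structure.)
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        perm = list(range(m))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / m for p in perm])
    return [[cols[j][i] for j in range(k)] for i in range(m)]

lhs = latin_hypercube(m=10, k=3)
# Each variable has exactly one value in each stratum [i/m, (i+1)/m).
for j in range(3):
    print(sorted(int(row[j] * 10) for row in lhs) == list(range(10)))
```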

  15. Extension of Latin hypercube samples with correlated variables

    International Nuclear Information System (INIS)

    Sallaberry, C.J.; Helton, J.C.; Hora, S.C.

    2008-01-01

    A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations

  16. Extension of latin hypercube samples with correlated variables.

    Energy Technology Data Exchange (ETDEWEB)

    Hora, Stephen Curtis (University of Hawaii at Hilo, HI); Helton, Jon Craig (Arizona State University, Tempe, AZ); Sallaberry, Cedric J. PhD. (.; .)

    2006-11-01

    A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.

  17. Comparability of river suspended-sediment sampling and laboratory analysis methods

    Science.gov (United States)

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference in laboratory analysis methods was slightly greater than field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate there is less of a difference among samples collected with grab field sampling and analyzed for TSS and concentration of fines in SSC. Even though differences are present, the presence of strong correlations between SSC and TSS concentrations provides the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.

  18. Toward cost-efficient sampling methods

    Science.gov (United States)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has been paid much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small fraction of vertices with high node degree can possess most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks (a scale-free network, a random network, and a small-world network) and in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
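
    The idea of over-representing high-degree vertices can be sketched as degree-proportional sampling without replacement (a generic illustration on an invented toy graph, not the paper's exact SRS/SBS variants):

```python
import random

def degree_biased_sample(adj, n, seed=7):
    # Draw n distinct nodes with probability proportional to their degree,
    # by successive weighted draws without replacement.
    rng = random.Random(seed)
    remaining = dict(adj)
    chosen = []
    for _ in range(n):
        nodes = list(remaining)
        weights = [len(remaining[v]) for v in nodes]
        pick = rng.choices(nodes, weights=weights, k=1)[0]
        chosen.append(pick)
        del remaining[pick]
    return chosen

# Toy "hub and spokes" graph: node 0 connects to every other node, so on
# the first draw it is 5x more likely to be picked than any spoke.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
sample = degree_biased_sample(adj, n=2)
print(sample)
```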

  19. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  20. L-Band Polarimetric Correlation Radiometer with Subharmonic Sampling

    DEFF Research Database (Denmark)

    Rotbøll, Jesper; Søbjærg, Sten Schmidl; Skou, Niels

    2001-01-01

    A novel L-band radiometer trading analog complexity for digital ditto has been designed and built. It is a fully polarimetric radiometer of the correlation type and it is based on the sub-harmonic sampling principle in which the L-band signal is directly sampled by a fast A to D converter...

  1. Quantitating morphological changes in biological samples during scanning electron microscopy sample preparation with correlative super-resolution microscopy.

    Science.gov (United States)

    Zhang, Ying; Huang, Tao; Jorgens, Danielle M; Nickerson, Andrew; Lin, Li-Jung; Pelz, Joshua; Gray, Joe W; López, Claudia S; Nan, Xiaolin

    2017-01-01

    Sample preparation is critical to biological electron microscopy (EM), and there have been continuous efforts on optimizing the procedures to best preserve structures of interest in the sample. However, a quantitative characterization of the morphological changes associated with each step in EM sample preparation is currently lacking. Using correlative EM and superresolution microscopy (SRM), we have examined the effects of different drying methods as well as osmium tetroxide (OsO4) post-fixation on cell morphology during scanning electron microscopy (SEM) sample preparation. Here, SRM images of the sample acquired under hydrated conditions were used as a baseline for evaluating morphological changes as the sample went through SEM sample processing. We found that both chemical drying and critical point drying lead to a mild cellular boundary retraction of ~60 nm. Post-fixation by OsO4 causes at least 40 nm additional boundary retraction. We also found that coating coverslips with adhesion molecules such as fibronectin prior to cell plating helps reduce cell distortion from OsO4 post-fixation. These quantitative measurements offer useful information for identifying causes of cell distortions in SEM sample preparation and improving current procedures.

  2. Comparison of glomerular filtration rate measured by plasma sample technique, Cockcroft-Gault method and Gates’ method in voluntary kidney donors and renal transplant recipients

    International Nuclear Information System (INIS)

    Hephzibah, Julie; Shanthly, Nylla; Oommen, Regi

    2013-01-01

    There are numerous methods for calculating the glomerular filtration rate (GFR), a crucial measurement for identifying patients with renal disease. The aim of this study was to compare four different methods of GFR calculation. Clinical setup, prospective study. Data were collected from routine renal scans done for voluntary kidney donors (VKD) or for renal transplant recipients 6 months after transplantation. Following technetium-99m diethylene triamine penta-acetic acid injection, venous blood samples were collected from the contralateral arm at 120, 180, and 240 min through an indwelling venous cannula and direct collection by syringe. A total volume of 1 ml of plasma from each sample and standards were counted in an automatic gamma counter for 1 min. Blood samples taken at 120 min and 240 min were used for the double plasma sample method (DPSM) and a sample taken at 180 min for the single plasma sample method (SPSM). Russell's formulae for SPSM and DPSM were used for GFR estimation. Gates’ method GFR was calculated by vendor-provided software. Correlation analysis was performed using Pearson's correlation test. SPSM correlated well with DPSM. The GFR value in healthy potential kidney donors has a significant role in the selection of donors. The mean GFR (± standard deviation) in VKD using SPSM, DPSM, the camera depth method and the Cockcroft-Gault method was 134.6 (25.9), 137.5 (42.4), 98.6 (15.9), and 83.5 (21.1), respectively. Gates’ GFR calculation did not correlate well with the plasma sampling methods. Calculation of GFR plays a vital role in the management of renal patients; hence it was noted that Gates’ GFR may not be a reliable method of calculation. SPSM was more reliable. DPSM is reliable but cumbersome. It is difficult to accurately calculate GFR without a gold standard
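
    For reference, the Cockcroft-Gault estimate compared in the record is a closed-form formula for creatinine clearance; the input values below are invented for illustration, not the study's data:

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    # Creatinine clearance (mL/min) by the standard Cockcroft-Gault formula.
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical 40-year-old, 70 kg male donor with serum creatinine 1.0 mg/dL.
print(round(cockcroft_gault(40, 70, 1.0), 1))  # 97.2
```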

  3. Libraries for spectrum identification: Method of normalized coordinates versus linear correlation

    International Nuclear Information System (INIS)

    Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.

    2008-01-01

    In this work an easy solution based directly on linear algebra is proposed to obtain the relation between a spectrum and a spectrum base. This solution is based on the algebraic determination of the coordinates of an unknown spectrum with respect to a spectral library base. The identification capacities of this algebraic method and of the linear correlation method have been compared using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method allows the existence of a mixture of several substances in a sample to be detected quantitatively and, consequently, impurities to be borne in mind when improving the identification
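
    The algebraic determination of coordinates amounts to solving the normal equations of a least-squares problem. A minimal sketch with two invented library spectra, showing how a mixture is resolved into its components (something a single correlation coefficient cannot do):

```python
def mixture_coordinates(s, b1, b2):
    # Coordinates (a1, a2) of spectrum s in the base {b1, b2}: the
    # least-squares solution of s ≈ a1*b1 + a2*b2 via the 2x2 normal
    # equations, solved by Cramer's rule.
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    g11, g12, g22 = dot(b1, b1), dot(b1, b2), dot(b2, b2)
    c1, c2 = dot(s, b1), dot(s, b2)
    det = g11 * g22 - g12 * g12
    a1 = (c1 * g22 - c2 * g12) / det
    a2 = (g11 * c2 - g12 * c1) / det
    return a1, a2

# Two hypothetical library spectra and a 30/70 mixture of them.
b1 = [1.0, 0.0, 2.0, 1.0]
b2 = [0.0, 1.0, 1.0, 3.0]
s = [0.3 * x + 0.7 * y for x, y in zip(b1, b2)]
print(mixture_coordinates(s, b1, b2))  # ≈ (0.3, 0.7)
```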

  4. Towards Cost-efficient Sampling Methods

    OpenAIRE

    Peng, Luo; Yongli, Li; Chong, Wu

    2014-01-01

    The sampling method has been paid much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small fraction of vertices with high node degree can possess most of the structural information of a network. The two proposed sampling methods are efficient in sampling nodes with high degree. The first new sampling method improves on the widely used stratified random sampling method and...

  5. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  6. Coordinate transformation based cryo-correlative methods for electron tomography and focused ion beam milling

    International Nuclear Information System (INIS)

    Fukuda, Yoshiyuki; Schrod, Nikolas; Schaffer, Miroslava; Feng, Li Rebekah; Baumeister, Wolfgang; Lucic, Vladan

    2014-01-01

    Correlative microscopy allows imaging of the same feature over multiple length scales, combining light microscopy with high resolution information provided by electron microscopy. We demonstrate two procedures for coordinate transformation based correlative microscopy of vitrified biological samples applicable to different imaging modes. The first procedure aims at navigating cryo-electron tomography to cellular regions identified by fluorescent labels. The second procedure, allowing navigation of focused ion beam milling to fluorescently labeled molecules, is based on the introduction of an intermediate scanning electron microscopy imaging step to overcome the large difference between cryo-light microscopy and focused ion beam imaging modes. These methods make it possible to image fluorescently labeled macromolecular complexes in their natural environments by cryo-electron tomography, while minimizing exposure to the electron beam during the search for features of interest. - Highlights: • Correlative light microscopy and focused ion beam milling of vitrified samples. • Coordinate transformation based cryo-correlative method. • Improved correlative light microscopy and cryo-electron tomography

  7. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137Cs and other fallout radionuclides, such as excess 210Pb and 7Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137Cs in erosion studies has been widely developed, while the application of fallout 210Pb and 7Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137Cs. However, fallout 210Pb and 7Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites as well as their total inventories

  8. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    Science.gov (United States)

    Davis, C.; Rozo, E.; Roodman, A.; Alarcon, A.; Cawthon, R.; Gatti, M.; Lin, H.; Miquel, R.; Rykoff, E. S.; Troxel, M. A.; Vielzeuf, P.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Drlica-Wagner, A.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gaztanaga, E.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jeltema, T.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.

    2018-06-01

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogues with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can, in principle, control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a programme to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.

  9. Comparisons of methods for generating conditional Poisson samples and Sampford samples

    OpenAIRE

    Grafström, Anton

    2005-01-01

    Methods for conditional Poisson sampling (CP-sampling) and Sampford sampling are compared and the focus is on the efficiency of the methods. The efficiency is investigated by simulation in different sampling situations. It was of interest to compare methods since new methods for both CP-sampling and Sampford sampling were introduced by Bondesson, Traat & Lundqvist in 2004. The new methods are acceptance rejection methods that use the efficient Pareto sampling method. They are found to be ...
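The basic rejective scheme underlying CP-sampling can be sketched as follows. This is the plain acceptance-rejection form (repeat independent Bernoulli draws and accept only outcomes of the target size), not the more efficient Pareto-based methods of Bondesson et al. that the abstract refers to; the function name and fixed seed are illustrative.

```python
import random

def conditional_poisson_sample(p, n, seed=1):
    """Rejective (conditional) Poisson sampling: draw independent
    Bernoulli(p_i) inclusion indicators and accept the draw only
    when exactly n units are included."""
    rng = random.Random(seed)
    while True:
        s = [i for i, pi in enumerate(p) if rng.random() < pi]
        if len(s) == n:
            return s
```

The acceptance rate falls quickly as the population grows, which is exactly why the Pareto-based acceptance-rejection methods compared in the thesis were introduced.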

  10. Regional cerebral blood flow measurements by a noninvasive microsphere method using 123I-IMP. Comparison with the modified fractional uptake method and the continuous arterial blood sampling method

    International Nuclear Information System (INIS)

    Nakano, Seigo; Matsuda, Hiroshi; Tanizaki, Hiroshi; Ogawa, Masafumi; Miyazaki, Yoshiharu; Yonekura, Yoshiharu

    1998-01-01

    A noninvasive microsphere method using N-isopropyl-p-( 123 I)iodoamphetamine ( 123 I-IMP), developed by Yonekura et al., was performed in 10 patients with neurological diseases to quantify regional cerebral blood flow (rCBF). Regional CBF values by this method were compared with rCBF values simultaneously estimated from both the modified fractional uptake (FU) method using cardiac output developed by Miyazaki et al. and the conventional method with continuous arterial blood sampling. For the comparison, we designated the factor which converts raw SPECT voxel counts to rCBF values as a CBF factor. A highly significant correlation (r=0.962, p<0.001) was obtained in the CBF factors between the present method and the continuous arterial blood sampling method. The CBF factors by the present method were only 2.7% higher on average than those by the continuous arterial blood sampling method. There were significant correlations (r=0.811 and r=0.798, p<0.001) in the CBF factors between the modified FU method (thresholds for estimating total brain SPECT counts of 10% and 30%, respectively) and the continuous arterial blood sampling method. However, the CBF factors of the modified FU method were 31.4% and 62.3% higher on average (thresholds of 10% and 30%, respectively) than those by the continuous arterial blood sampling method. In conclusion, this newly developed method for rCBF measurements was considered useful for routine clinical studies without any blood sampling. (author)

  11. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    The Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to estimate the Sobol' index directly from available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of the model inputs, instead of at an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable of computing the first-order index when only input–output samples are available and the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric for ranking model inputs but current methods can only handle independent model inputs, the proposed method helps to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimates the index directly from input–output samples. • Computational cost is not proportional to the number of model inputs. • Handles both uncorrelated and correlated model inputs.
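The given-data idea can be sketched with a simple binning estimator: sort the samples by one input, estimate the conditional mean of the output within each bin, and take the variance of those means over the total variance. This is a generic variant for illustration, not necessarily the authors' exact algorithm; bin count and test model are arbitrary choices.

```python
import numpy as np

def first_order_sobol(x, y, n_bins=20):
    """Estimate S1 = Var_x[E[y|x]] / Var[y] for one input from
    (x, y) samples only, by binning the samples along x."""
    y_sorted = np.asarray(y)[np.argsort(x)]
    bins = np.array_split(y_sorted, n_bins)
    weights = np.array([len(b) for b in bins]) / len(y_sorted)
    cond_means = np.array([b.mean() for b in bins])
    var_cond = np.sum(weights * (cond_means - y_sorted.mean()) ** 2)
    return var_cond / y_sorted.var()
```

Because each input is treated separately against the same sample set, the cost grows with the sample size, not with the number of inputs, which mirrors the modularity claimed in the abstract.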

  12. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    Science.gov (United States)

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high-excitation-flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  13. A Rapid Identification Method for Calamine Using Near-Infrared Spectroscopy Based on Multi-Reference Correlation Coefficient Method and Back Propagation Artificial Neural Network.

    Science.gov (United States)

    Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli

    2017-07-01

    As a mineral, the traditional Chinese medicine calamine has a shape similar to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples including crude products, counterfeits and processed products were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; in addition, the model, which took multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of the samples into the BP-ANN, a BP-ANN model of qualitative identification was established, whose accuracy rate increased to 95%. The MRCC method can be used as an NIR-based method in the process of BP-ANN modeling.
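The MRCC feature-extraction step can be sketched as follows, assuming it amounts to computing Pearson correlation coefficients of a sample spectrum against several reference spectra; the resulting vector would then be fed to a classifier such as a BP-ANN. Function and variable names are illustrative.

```python
import numpy as np

def mrcc_features(spectrum, references):
    """Multi-reference correlation coefficients: correlate one NIR
    spectrum against each reference spectrum and return the vector
    of Pearson coefficients as the sample's feature representation."""
    return np.array([np.corrcoef(spectrum, ref)[0, 1] for ref in references])
```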

  14. Total focusing method with correlation processing of antenna array signals

    Science.gov (United States)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors, with and without correlation processing, are presented in the article. The software 'IDealSystem3D' by IDeal-Technologies was used for the experiments. Copper wires of different diameters located in a water bath were used as reflectors. The use of correlation processing makes it possible to obtain a more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program, which allows varying the parameters of the antenna array and the sampling frequency.

  15. Distance correlation methods for discovering associations in large astrophysical databases

    International Nuclear Information System (INIS)

    Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P.

    2014-01-01

    High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
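A minimal implementation of the distance correlation coefficient (Székely's dCor, computed from double-centered pairwise distance matrices) might look like this; it is a sketch for illustration, not the code used in the paper, and it is quadratic in the sample size.

```python
import numpy as np

def distance_correlation(x, y):
    """Distance correlation: zero only when x and y are independent,
    and sensitive to nonlinear association that Pearson's r misses."""
    x = np.asarray(x, float).reshape(len(x), -1)
    y = np.asarray(y, float).reshape(len(y), -1)

    def centered(z):
        # Pairwise Euclidean distances, double-centered.
        d = np.linalg.norm(z[:, None] - z[None, :], axis=-1)
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0
```

For a symmetric quadratic relation such as y = x^2, Pearson's r is near zero while the distance correlation remains clearly positive, which is the property the abstract highlights.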

  16. SAMPLING IN EXTERNAL AUDIT - THE MONETARY UNIT SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    E. Dascalu

    2016-12-01

    Full Text Available This article approaches the general issue of diminishing the evidence investigation space in audit activities by means of sampling techniques, given that in the instance of a significant data volume an exhaustive examination of the assessed population is not possible and/or effective. The general perspective of the presentation involves dealing with sampling risk, in essence the risk that a selected sample may not be representative of the overall population, in correlation with the audit risk model and with the component parts of this model (inherent risk, control risk and non-detection risk), and highlights the interdependencies between these two models.
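The fixed-interval selection at the heart of monetary unit sampling (the method named in the record title) can be sketched as follows. This is a simplified illustration: the random start is fixed for reproducibility, and items larger than the sampling interval may be selected more than once.

```python
def mus_select(book_values, sample_size, start_fraction=0.5):
    """Monetary unit sampling by fixed-interval selection: each item's
    chance of selection is proportional to its book value."""
    interval = sum(book_values) / sample_size
    point = start_fraction * interval  # the random start, fixed here
    hits, cumulative = [], 0.0
    for idx, value in enumerate(book_values):
        cumulative += value
        # An item is hit once per selection point falling inside its
        # cumulative monetary range.
        while point < cumulative and len(hits) < sample_size:
            hits.append(idx)
            point += interval
    return hits
```

With book values [100, 50, 850] and two selection points, both points land in the large third item, illustrating how high-value items dominate the sample.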

  17. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the sample size required for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The variables observed in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for the other, with an initial sample size of 10 plants and the others obtained by adding five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined as the size at which the amplitude of the 95% confidence interval was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation; accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato, with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250m² greenhouse, and 200 plants in a 200m² greenhouse.
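The bootstrap criterion described above (repeated re-samplings with replacement, then checking whether the 95% confidence interval amplitude is at most 0.4) can be sketched like this; the data, number of resamples, and seed are illustrative, not the study's.

```python
import numpy as np

def bootstrap_ci_width(x, y, n_boot=2000, alpha=0.05, seed=1):
    """Amplitude of the bootstrap percentile confidence interval for
    Pearson's r, the quantity compared against 0.4 to fix the
    required sample size."""
    rng = np.random.default_rng(seed)
    n = len(x)
    rs = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, n)  # resample pairs with replacement
        rs[k] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.quantile(rs, [alpha / 2, 1 - alpha / 2])
    return hi - lo
```

In practice one would increase the sample size until this width first drops to 0.4 or below, separately for each variable pair.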

  18. Damage evolution analysis of coal samples under cyclic loading based on single-link cluster method

    Science.gov (United States)

    Zhang, Zhibo; Wang, Enyuan; Li, Nan; Li, Xuelong; Wang, Xiaoran; Li, Zhonghui

    2018-05-01

    In this paper, the acoustic emission (AE) response of coal samples under cyclic loading is measured. The results show that there is a good positive relation between AE parameters and stress. The AE signal of coal samples under cyclic loading exhibits an obvious Kaiser effect. The single-link cluster (SLC) method is applied to analyze the spatial evolution characteristics of AE events and the damage evolution process of coal samples. It is found that the subset scale of the SLC structure becomes smaller and smaller as the number of loading cycles increases, and that there is a negative linear relationship between the subset scale and the degree of damage. The spatial correlation length ξ of the SLC structure is calculated. The results show that ξ fluctuates around a certain value from the second to the fifth cyclic loading process, but increases clearly in the sixth loading process. Based on the criterion of microcrack density, the coal sample failure process is a transformation from small-scale damage to large-scale damage, which explains the changes in the spatial correlation length. Through this systematic analysis, the SLC method proves to be an effective way to research the damage evolution process of coal samples under cyclic loading, and will provide important reference values for studying coal bursts.
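Single-link clustering of AE event locations can be sketched with a union-find over all event pairs closer than a linking distance, so that clusters grow transitively through chains of close events. This is a generic illustration, not the authors' implementation; the linking distance is a free parameter.

```python
import numpy as np

def single_link_clusters(points, link_dist):
    """Single-link clustering: any two events within link_dist are
    joined, and clusters merge transitively (union-find)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= link_dist:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri
    return [find(i) for i in range(n)]
```

The subset scale discussed in the abstract would correspond to the sizes of the resulting clusters as damage localizes.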

  19. Distance sampling methods and applications

    CERN Document Server

    Buckland, S T; Marques, T A; Oedekoven, C S

    2015-01-01

    In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website.  Some of the case studies use the software Distance, while others use R code. The book is in three parts.  The first part addresses basic methods, the ...

  20. Detection of protozoa in water samples by formalin/ether concentration method.

    Science.gov (United States)

    Lora-Suarez, Fabiana; Rivera, Raul; Triviño-Valencia, Jessica; Gomez-Marin, Jorge E

    2016-09-01

    Methods to detect protozoa in water samples are expensive and laborious. We evaluated the formalin/ether concentration method to detect Giardia sp., Cryptosporidium sp. and Toxoplasma in water. In order to test the properties of the method, we spiked water samples with different amounts of each protozoan (0, 10 and 50 cysts or oocysts) in a volume of 10 L of water. An immunofluorescence assay was used for detection of Giardia and Cryptosporidium; Toxoplasma oocysts were identified by morphology. The mean percent recovery in 10 repetitions of the entire method, in 10 samples spiked with ten parasites and read by three different observers, was 71.3 ± 12 for Cryptosporidium, 63 ± 10 for Giardia and 91.6 ± 9 for Toxoplasma, and the relative standard deviation of the method was 17.5, 17.2 and 9.8, respectively. Intraobserver variation, as measured by the intraclass correlation coefficient, was fair for Toxoplasma, moderate for Cryptosporidium and almost perfect for Giardia. The method was then applied to 77 samples of raw and drinkable water from three different water treatment plants. Cryptosporidium was found in 28 of 77 samples (36%) and Giardia in 31 of 77 samples (40%). These results identified significant differences in the treatment process to reduce the presence of Giardia and Cryptosporidium. In conclusion, the formalin/ether method to concentrate protozoa in water is a new alternative for low-resource countries, where there is an urgent need to monitor and follow the presence of these protozoa in drinkable water. Copyright © 2016 Elsevier Ltd. All rights reserved.
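The summary statistics reported above (mean percent recovery and relative standard deviation over repeated spiked samples) can be reproduced with a few lines; the counts below are hypothetical, chosen only to show the calculation.

```python
import statistics

def recovery_stats(counts, spiked):
    """Mean percent recovery and relative standard deviation (RSD, %)
    across repeated counts from samples spiked with a known number
    of cysts or oocysts."""
    recoveries = [100.0 * c / spiked for c in counts]
    mean_r = statistics.mean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_r
    return mean_r, rsd
```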

  1. Experimental study on deformation field evolution in rock sample with en echelon faults using digital speckle correlation method

    Science.gov (United States)

    Ma, S.; Ma, J.; Liu, L.; Liu, P.

    2007-12-01

    The digital speckle correlation method (DSCM) is a photomechanical deformation measurement technique. DSCM obtains a continuous deformation field without contact, simply by capturing speckle images of the specimen surface, and is therefore well suited to observing high-spatial-resolution deformation fields in tectonophysical experiments. However, in a typical DSCM experiment the inspected surface of the specimen must be painted to bear speckle grains in order to obtain high-quality speckle images, which also interferes with other measurement techniques. In this study, an improved DSCM system is developed and used to measure the deformation field of a rock specimen without surface painting. Granodiorite, with high-contrast natural grains, is chosen for the specimen, and a specially designed DSCM algorithm is developed to analyze this kind of natural speckle image. Verification and calibration experiments show that the system can capture a continuous (about 15 Hz) high-resolution displacement field (with a resolution of 5 μm) and strain field (with a resolution of 50 μɛ) without any preparation of the rock specimen. It can therefore be conveniently used to study the failure of rock structures. Samples with compressive en echelon faults and extensional en echelon faults are studied on a two-direction servo-control test machine, and the failure process of the samples is discussed on the basis of the DSCM results. The experimental results show that: 1) The contours of the displacement field clearly indicate the activities of faults and new cracks; the displacement gradient adjacent to active faults and cracks is much greater than in other areas. 2) Before failure of the samples, the mean strain of the jog area is largest for the compressive en echelon fault and smallest for the extensional en echelon fault. This is consistent with the understanding that the jog area of a compressive fault is subjected to compression and that of an extensional fault is subjected to

  2. Extreme eigenvalues of sample covariance and correlation matrices

    DEFF Research Database (Denmark)

    Heiny, Johannes

    This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index ... eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high dimension...

  3. Do sampling methods differ in their utility for ecological monitoring? Comparison of line-point intercept, grid-point intercept, and ocular estimate methods

    Science.gov (United States)

    This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...

  4. Efficiency of cleaning and disinfection of surfaces: correlation between assessment methods

    OpenAIRE

    Frota, Oleci Pereira; Ferreira, Adriano Menis; Guerra, Odanir Garcia; Rigotti, Marcelo Alessandro; Andrade, Denise de; Borges, Najla Moreira Amaral; Almeida, Margarete Teresa Gottardo de

    2017-01-01

    ABSTRACT Objective: to assess the correlation among the ATP-bioluminescence assay, visual inspection and microbiological culture in monitoring the efficiency of cleaning and disinfection (C&D) of high-touch clinical surfaces (HTCS) in a walk-in emergency care unit. Method: a prospective and comparative study was carried out from March to June 2015, in which five HTCS were sampled before and after C&D by means of the three methods. The HTCS were considered dirty when dust, waste, humidity an...

  5. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
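The variance-reduction intuition behind correlating the two sampled points can be illustrated with a toy Monte Carlo example (ordinary sampling, not SC-IVR itself; the perturbation size, distribution, and test function are arbitrary choices made for the sketch).

```python
import random
import statistics

def diff_estimates(n=2000, seed=7):
    """Estimate E[f(x') - f(x)], with x' a small shift of x, two ways:
    independent draws for x and x', versus a shared (correlated) base
    sample. Returns the per-sample variance of each estimator."""
    rng = random.Random(seed)
    f = lambda x: x * x
    # Independent sampling of the two points.
    indep = [f(rng.gauss(0.1, 1)) - f(rng.gauss(0, 1)) for _ in range(n)]
    # Correlated sampling: both points derived from one base draw.
    corr = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        corr.append(f(x + 0.1) - f(x))
    return statistics.variance(indep), statistics.variance(corr)
```

The correlated estimator's variance is orders of magnitude smaller because the shared fluctuations cancel in the difference, the same mechanism exploited when the two SC-IVR phase points share their bath degrees of freedom.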

  6. Independent random sampling methods

    CERN Document Server

    Martino, Luca; Míguez, Joaquín

    2018-01-01

    This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...

  7. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples created with this method will reflect optimally the diversity of the languages of the world. On the basis of the internal structure of each genetic language tree a measure is computed that reflects the linguistic diversity in the language families represented by these trees. This measure is used to determine how many languages from each phylum should be selected, given any required sample size.

  8. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. 
Given that most previous
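The method-of-moments (Matheron) estimator that the study evaluates can be sketched as follows: for each lag bin, average half the squared differences over all point pairs whose separation falls in the bin. Coordinates, values, and bin edges below are illustrative.

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Method-of-moments semivariogram: gamma(h) is the mean of
    0.5 * (z_i - z_j)^2 over pairs whose separation lies in each
    lag bin; returns one gamma per bin (NaN for empty bins)."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    diff = coords[:, None, :] - coords[None, :, :]
    h = np.sqrt((diff ** 2).sum(-1))          # pairwise separations
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)    # each pair once
    h, sq = h[iu], sq[iu]
    gammas = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (h >= lo) & (h < hi)
        gammas.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gammas)
```

The study's point is that with heavy-tailed throughfall data this estimator needs considerably more sampling points (200 or more) than Gaussian guidelines suggest.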

  9. Image correlation method for DNA sequence alignment.

    Science.gov (United States)

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques; among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a fixed gray-intensity pixel. Query and known database sequences are coded into their pixel representations, and sequence alignment is handled as an object-recognition-in-a-scene problem: query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper presents an initial research stage in which results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes, each represented by a 100 x 100 image (in total, a one million base pair database), were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, digital correlation processes were a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator. By doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
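A one-dimensional analogue of the paper's approach, mapping bases to fixed intensities and sliding the query over the scene to find the best-matching offset, can be sketched as follows. The 2D image encoding and optical correlator are not reproduced, and the intensity values are arbitrary.

```python
import numpy as np

def best_offset(query, scene):
    """Slide the encoded query over the encoded scene and return the
    offset with the highest count of matching positions (a simple
    1D stand-in for the paper's 2D image correlation)."""
    code = {'A': 1.0, 'C': 2.0, 'G': 3.0, 'T': 4.0}
    q = np.array([code[b] for b in query])
    s = np.array([code[b] for b in scene])
    scores = [np.sum(q == s[i:i + len(q)])
              for i in range(len(s) - len(q) + 1)]
    return int(np.argmax(scores))
```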

  10. A novel PFIB sample preparation protocol for correlative 3D X-ray CNT and FIB-TOF-SIMS tomography

    Energy Technology Data Exchange (ETDEWEB)

    Priebe, Agnieszka, E-mail: agnieszka.priebe@gmail.com [Univ. Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France); Audoit, Guillaume; Barnes, Jean-Paul [Univ. Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France)

    2017-02-15

    We present a novel sample preparation method that allows correlative 3D X-ray Computed Nano-Tomography (CNT) and Focused Ion Beam Time-Of-Flight Secondary Ion Mass Spectrometry (FIB-TOF-SIMS) tomography to be performed on the same sample. In addition, our invention ensures that samples stay structurally and chemically unmodified between the subsequent experiments. The main principle is based on modifying the topography of the X-ray CNT experimental setup before FIB-TOF-SIMS measurements by incorporating a square washer around the sample. This affects the distribution of extraction field lines and therefore influences the trajectories of secondary ions, which are now guided more efficiently towards the detector. As a result, secondary ion detection is significantly improved and higher, i.e. statistically better, signals are obtained. - Highlights: • Novel sample preparation for correlative 3D X-ray CNT and FIB-TOF-SIMS is presented. • Two experiments are conducted on exactly the same sample without any modifications. • Introduction of a square washer around the sample leads to increased ion detection.

  11. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

    Full Text Available Audit sampling involves the application of audit procedures to less than 100% of items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As an audit technique in wide use, in both its statistical and nonstatistical forms, the method is very important for auditors. It should be applied correctly for a fair view of financial statements, to satisfy the needs of all financial users. To be applied correctly, the method must be understood by all its users, and mainly by auditors; otherwise, incorrect application risks loss of reputation and discredit, litigation and even imprisonment. Since there is no unitary practice and methodology for applying the technique, the risk of applying it incorrectly is quite high. SWOT analysis is a technique that shows the advantages, disadvantages, threats and opportunities of a method. We applied SWOT analysis to the study of the sampling method from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying and understanding it. Being widely used as an audit method and being a factor in a correct audit opinion, the sampling method's advantages, disadvantages, threats and opportunities must be understood by auditors.

  12. Sampling system and method

    Science.gov (United States)

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2013-04-16

    The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

  13. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples...... to determine how many languages from each phylum should be selected, given any required sample size....

  14. Efficiency of cleaning and disinfection of surfaces: correlation between assessment methods

    Directory of Open Access Journals (Sweden)

    Oleci Pereira Frota

    Full Text Available ABSTRACT Objective: to assess the correlation among the ATP-bioluminescence assay, visual inspection and microbiological culture in monitoring the efficiency of cleaning and disinfection (C&D) of high-touch clinical surfaces (HTCS) in a walk-in emergency care unit. Method: a prospective and comparative study was carried out from March to June 2015, in which five HTCS were sampled before and after C&D by means of the three methods. The HTCS were considered dirty when dust, waste, humidity or stains were detected by visual inspection; when ≥2.5 colony-forming units per cm2 were found in culture; or when ≥5 relative light units per cm2 were found in the ATP-bioluminescence assay. Results: 720 analyses were performed, 240 per method. The overall rates of clean surfaces per visual inspection, culture and ATP-bioluminescence assay were 8.3%, 20.8% and 44.2% before C&D, and 92.5%, 50% and 84.2% after C&D, respectively (p<0.001). There were only occasional statistically significant relationships between methods. Conclusion: the methods did not present a good correlation, either quantitatively or qualitatively.

  15. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  16. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription, and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum-variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum-variance estimation for a sufficient number of histories and aggregates.

  17. Comparison of the acetyl bromide spectrophotometric method with other analytical lignin methods for determining lignin concentration in forage samples.

    Science.gov (United States)

    Fukushima, Romualdo S; Hatfield, Ronald D

    2004-06-16

    Present analytical methods to quantify lignin in herbaceous plants are not totally satisfactory. A spectrophotometric method, acetyl bromide soluble lignin (ABSL), has been employed to determine lignin concentration in a range of plant materials. In this work, lignin extracted with acidic dioxane was used to develop standard curves and to calculate the derived linear regression equation (slope equals absorptivity value or extinction coefficient) for determining the lignin concentration of respective cell wall samples. This procedure yielded lignin values that were different from those obtained with Klason lignin, acid detergent acid insoluble lignin, or permanganate lignin procedures. Correlations with in vitro dry matter or cell wall digestibility of samples were highest with data from the spectrophotometric technique. The ABSL method employing as standard lignin extracted with acidic dioxane has the potential to be employed as an analytical method to determine lignin concentration in a range of forage materials. It may be useful in developing a quick and easy method to predict in vitro digestibility on the basis of the total lignin content of a sample.

  18. Fast methods for spatially correlated multilevel functional data

    KAUST Repository

    Staicu, A.-M.

    2010-01-19

    We propose a new methodological framework for the analysis of hierarchical functional data when the functions at the lowest level of the hierarchy are correlated. For small data sets, our methodology leads to a computational algorithm that is orders of magnitude more efficient than its closest competitor (seconds versus hours). For large data sets, our algorithm remains fast and has no current competitors. Thus, in contrast to published methods, we can now conduct routine simulations, leave-one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where the objects of inference are functions or images that remain dependent even after conditioning on the subject on which they are measured. Supplementary materials are available at Biostatistics online.

  19. Subrandom methods for multidimensional nonuniform sampling.

    Science.gov (United States)

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
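
As a concrete illustration of seed-independent scheduling, the sketch below contrasts a seeded pseudorandom schedule with a deterministic subrandom one on a 1D grid. The golden-ratio additive recurrence used here is one common low-discrepancy sequence, not necessarily one of the three subrandom methods the paper compares.

```python
import numpy as np

def pseudorandom_schedule(n_points, grid_size, seed):
    """Pseudorandom schedule: the selected grid points depend on the seed."""
    rng = np.random.default_rng(seed)
    return sorted(rng.choice(grid_size, size=n_points, replace=False))

def subrandom_schedule(n_points, grid_size):
    """Additive-recurrence (golden-ratio) subrandom schedule: no seed needed."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0          # fractional part of the golden ratio
    picked, x, used = [], 0.0, set()
    while len(picked) < n_points:
        x = (x + phi) % 1.0                   # low-discrepancy sequence in [0, 1)
        idx = int(x * grid_size)
        if idx not in used:                   # skip collisions on the discrete grid
            used.add(idx)
            picked.append(idx)
    return sorted(picked)

print(subrandom_schedule(8, 64))              # identical on every run, no seed
```

Because the subrandom sequence is deterministic, two independent acquisitions of the same schedule parameters always sample the same grid points, which is exactly the seed-independence property the paper formalizes.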

  20. Some connections between importance sampling and enhanced sampling methods in molecular dynamics.

    Science.gov (United States)

    Lie, H C; Quer, J

    2017-11-21

    In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
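
The importance-sampling connection can be made concrete with a toy rare-event estimate (a standard statistics example, not the Hartmann-Schütte or Valsson-Parrinello schemes themselves): direct sampling almost never sees the event, while a tilted proposal with likelihood-ratio reweighting estimates it accurately.

```python
import numpy as np

rng = np.random.default_rng(0)

def rare_event_naive(n, threshold=4.0):
    """Direct Monte Carlo estimate of P(X > threshold) for X ~ N(0, 1)."""
    return float(np.mean(rng.standard_normal(n) > threshold))

def rare_event_importance(n, threshold=4.0):
    """Importance sampling: draw from N(threshold, 1) and reweight."""
    y = rng.standard_normal(n) + threshold              # tilted proposal
    # Likelihood ratio N(0,1)/N(threshold,1) evaluated at y:
    weights = np.exp(-threshold * y + threshold**2 / 2.0)
    return float(np.mean((y > threshold) * weights))

exact = 3.167e-5                                        # Phi(-4), for reference
print(rare_event_naive(100_000), rare_event_importance(100_000))
```

The enhanced sampling methods discussed in the paper play the role of the tilted proposal: they bias the dynamics toward the rare event and then undo the bias with exactly this kind of reweighting.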

  1. Acute and chronic alcohol use correlated with methods of suicide in a Swiss national sample.

    Science.gov (United States)

    Pfeifer, P; Bartsch, C; Hemmer, A; Reisch, T

    2017-09-01

    Chronic and acute alcohol use are highly associated risk factors for suicide worldwide. Therefore, we examined suicide cases with and without alcohol use disorder (AUD) using data from the SNSF project "Suicide in Switzerland: A detailed national survey". Our investigation focuses on correlations between acute and chronic alcohol use with reference to suicide, and on potential interactions with the methods of suicide. We used data from the SNSF project, in which all registered completed suicides in Switzerland reported to any of the seven Swiss institutes of legal and forensic medicine between 2000 and 2010 were collected. We extracted the cases that were tested for blood alcohol for use in our analysis. We compared clinical characteristics, blood alcohol concentrations, and methods of suicide in cases with and without AUD. Out of 6497 cases, 2946 subjects were tested for acute alcohol use and included in our analysis. Of the latter, 366 (12.4%) persons had a medical history of AUD. Subjects with AUD had significantly higher blood alcohol concentrations and were more often in medical treatment before suicide. Drug intoxication as a method of suicide was more frequent in cases with AUD than in those without (NAUD). Overall, we found a high incidence of acute alcohol use at the time of death in chronic alcohol misusers (AUD). The five methods of suicide most commonly used in Switzerland differed considerably between individuals with and without AUD. Blood alcohol concentrations varied across the different methods of suicide independently of medical history in both groups. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris

    Science.gov (United States)

    Michael S. Williams; Jeffrey H. Gove

    2003-01-01

    Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...

  3. Sampling method of environmental radioactivity monitoring

    International Nuclear Information System (INIS)

    1984-01-01

    This manual provides sampling methods for environmental samples of airborne dust, precipitated dust, precipitated water (rain or snow), fresh water, soil, river or lake sediment, discharged water from a nuclear facility, grains, tea, milk, pasture grass, limnetic organisms, daily diet, index organisms, sea water, marine sediment, and marine organisms, as well as sampling for tritium and radioiodine determination, for radiation monitoring of radioactive fallout or radioactivity released by nuclear facilities. The manual aims to present standard sampling procedures for environmental radioactivity monitoring regardless of the monitoring objective, and describes preservation methods for environmental samples acquired at the sampling point for radiation counting (for all sample types except the human body). The sampling techniques adopted in this manual were chosen on the criteria that they are suitable for routine monitoring and require no special skill. Based on the above-mentioned principle, this manual presents the outline and aims of sampling, the sampling position or object, sampling quantity, apparatus, equipment or vessel for sampling, sampling location, sampling procedures, pretreatment and preparation procedures of a sample for radiation counting, necessary recording items for sampling, and sample transportation procedures. Special attention is given in the chapter on tritium and radioiodine, because these radionuclides might be lost under the above-mentioned sample preservation methods, which are intended for radiation counting of radionuclides less volatile than tritium or radioiodine. (Takagi, S.)

  4. Comparison of Two Different Methods Used for Semen Evaluation: Analysis of Semen Samples from 1,055 Men.

    Science.gov (United States)

    Dinçer, Murat; Kucukdurmaz, Faruk; Salabas, Emre; Ortac, Mazhar; Aktan, Gulsan; Kadioglu, Ates

    2017-01-01

    The aim of this study was to evaluate whether there is a difference between gravimetrically and volumetrically measured semen samples, and to assess the impact of semen volume, density, and sperm count on the discrepancy between the gravimetric and volumetric methods. This study was designed in an andrology laboratory setting and performed on semen samples of 1,055 men receiving infertility treatment. Semen volume was calculated by gravimetric and volumetric methods. The total sperm count, semen density and sperm viability were also examined according to the recent version of the World Health Organization manual. The median values for the gravimetric and volumetric measurements were 3.44 g and 2.96 ml, respectively; the numeric difference in semen volume between the 2 methods was 0.48. The mean density of the samples was 1.01 ± 0.46 g/ml (range 0.90-2.0 g/ml). The numeric difference between the 2 methods grows as semen volume increases. Gravimetric and volumetric semen volume measurements were strongly correlated for all samples and for each subgroup of semen volume, semen density and sperm count, with a minimum correlation coefficient of 0.895. However, further studies are needed to support the use of the gravimetric method, which was thought to minimize laboratory errors, particularly for high-volume semen samples. © 2016 S. Karger AG, Basel.

  5. Finite element formulation for a digital image correlation method

    International Nuclear Information System (INIS)

    Sun Yaofeng; Pang, John H. L.; Wong, Chee Khuen; Su Fei

    2005-01-01

    A finite element formulation for a digital image correlation method is presented that determines directly the complete, two-dimensional displacement field during the image correlation process on digital images. The entire image area of interest is discretized into finite elements that are involved in the common image correlation process by use of our algorithms. This image correlation method with finite element formulation has an advantage over subset-based image correlation methods because it satisfies the requirements of displacement continuity and derivative continuity among elements on images. Numerical studies and a real experiment are used to verify the proposed formulation. Results have shown that the image correlation with the finite element formulation is computationally efficient, accurate, and robust.

  6. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    Science.gov (United States)

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
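
The temperature-space random walk shared by the RE-based methods above hinges on a Metropolis criterion for swapping configurations between neighboring temperatures. A minimal sketch of that acceptance rule, with illustrative energies and reduced units (k_B = 1):

```python
import math

def swap_probability(e_i, e_j, t_i, t_j, k_b=1.0):
    """Metropolis acceptance probability for swapping the configurations of
    replicas i and j held at temperatures t_i and t_j."""
    beta_i, beta_j = 1.0 / (k_b * t_i), 1.0 / (k_b * t_j)
    delta = (beta_i - beta_j) * (e_j - e_i)
    return min(1.0, math.exp(-delta))

# Moving the lower-energy configuration to the lower temperature is always
# accepted; the reverse swap is accepted with Boltzmann-weighted probability.
print(swap_probability(e_i=-5.0, e_j=-9.0, t_i=1.0, t_j=2.0))  # 1.0
print(swap_probability(e_i=-9.0, e_j=-5.0, t_i=1.0, t_j=2.0))
```

ST-based methods apply an analogous criterion to a single replica changing its own temperature, which is why both families perform a random walk in temperature space.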

  7. Isotope correlations for safeguards surveillance and accountancy methods

    International Nuclear Information System (INIS)

    Persiani, P.J.; Kalimullah.

    1982-01-01

    Isotope correlations corroborated by experiments, coupled with measurement methods for nuclear material in the fuel cycle, have the potential to serve as a safeguards surveillance and accountancy system. The isotope correlation technique (ICT) allows the verification of: the fabricator's uranium and plutonium content specifications, shipper/receiver differences between fabricator output and reactor input, reactor plant inventory changes, reprocessing batch specifications, and shipper/receiver differences between reactor output and reprocessing plant input. The investigation indicates that there exist predictable functional relationships (i.e. correlations) between isotopic concentrations over a range of burnup. Several cross-correlations serve to establish the initial fuel assembly-averaged compositions. The selection of the more effective correlations will depend not only on the level of reliability of the ICT for verification, but also on the capability, accuracy and difficulty of developing measurement methods. The propagation of measurement errors through the correlations has been examined to identify the sensitivity of the isotope correlations to measurement errors, and to establish criteria for measurement accuracy in the development and selection of measurement methods. 6 figures, 3 tables

  8. Comparison of fine particle measurements from a direct-reading instrument and a gravimetric sampling method.

    Science.gov (United States)

    Kim, Jee Young; Magari, Shannon R; Herrick, Robert F; Smith, Thomas J; Christiani, David C

    2004-11-01

    Particulate air pollution, specifically the fine particle fraction (PM2.5), has been associated with increased cardiopulmonary morbidity and mortality in general population studies. Occupational exposure to fine particulate matter can exceed ambient levels by a large factor. Due to increased interest in the health effects of particulate matter, many particle sampling methods have been developed. In this study, two such measurement methods were used simultaneously and compared. PM2.5 was sampled using a filter-based gravimetric sampling method and a direct-reading instrument, the TSI Inc. model 8520 DUSTTRAK aerosol monitor. Both sampling methods were used to determine the PM2.5 exposure in a group of boilermakers exposed to welding fumes and residual fuel oil ash. The geometric mean PM2.5 concentration was 0.30 mg/m3 (GSD 3.25) and 0.31 mg/m3 (GSD 2.90) from the DUSTTRAK and the gravimetric method, respectively. The Spearman rank correlation coefficient for the gravimetric and DUSTTRAK PM2.5 concentrations was 0.68. Linear regression models indicated that loge DUSTTRAK PM2.5 concentrations significantly predicted loge gravimetric PM2.5 concentrations. The relationship between the DUSTTRAK and gravimetric PM2.5 concentrations was found to be modified by surrogate measures for seasonal variation and type of aerosol. PM2.5 measurements from the DUSTTRAK are well correlated with and highly predictive of measurements from the gravimetric sampling method for the aerosols in these work environments. However, results from this study suggest that aerosol particle characteristics may affect the relationship between the gravimetric and DUSTTRAK PM2.5 measurements. Recalibration of the DUSTTRAK for the specific aerosol, as recommended by the manufacturer, may be necessary to produce valid measures of airborne particulate matter.
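
The Spearman rank correlation reported above is simply the Pearson correlation of ranks. A small sketch with made-up paired readings (illustrative values, not the study's data; ties are not handled):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors.
    Assumes no tied values (a tie-aware version would average tied ranks)."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)   # rank 1 = smallest value
        return r
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    return float(np.corrcoef(rx, ry)[0, 1])

# Paired PM2.5 readings (mg/m^3) from the two instruments; values illustrative.
dusttrak    = [0.12, 0.45, 0.30, 0.80, 0.22]
gravimetric = [0.10, 0.28, 0.50, 0.75, 0.25]
print(round(spearman(dusttrak, gravimetric), 2))  # 0.9
```

Because it operates on ranks, the statistic is insensitive to the monotone calibration differences between instruments that the study's regression analysis addresses separately.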

  9. A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases

    Science.gov (United States)

    Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.

    2013-01-01

    How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have documented the uncertainty such analyses carry due to sampling-effort bias, and the need to quantify it. Although a number of methods are available for that purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent capability to discriminate between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses. PMID:23326357

  10. An improved sampling method of complex network

    Science.gov (United States)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling a subnet is an important topic of complex network research, and the sampling method influences the structure and characteristics of the resulting subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can preserve the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.
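
For orientation, the snowball component that RMSC builds on can be sketched as follows (hypothetical parameters; this is plain snowball sampling on a toy ring graph, not the full RMSC process): start from random seed nodes and repeatedly sample up to k unvisited neighbors.

```python
import random
from collections import deque

random.seed(0)

def snowball_sample(adj, n_seeds=2, k=2, max_nodes=6):
    """Snowball sampling sketch: random seeds, then breadth-first expansion
    that follows at most k unvisited neighbors per node."""
    sampled = set(random.sample(list(adj), n_seeds))
    frontier = deque(sampled)
    while frontier and len(sampled) < max_nodes:
        node = frontier.popleft()
        neighbors = [v for v in adj[node] if v not in sampled]
        for v in random.sample(neighbors, min(k, len(neighbors))):
            if len(sampled) < max_nodes:
                sampled.add(v)
                frontier.append(v)
    return sampled

# A 10-node ring: each node is connected to its two neighbors.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(sorted(snowball_sample(ring)))
```

RMSC's refinement, as described above, is to restart such snowballs from multiple random seeds so that the global structure is explored rather than a single local neighborhood.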

  11. Comparison of sampling methods for animal manure

    NARCIS (Netherlands)

    Derikx, P.J.L.; Ogink, N.W.M.; Hoeksma, P.

    1997-01-01

    Currently available and recently developed sampling methods for slurry and solid manure were tested for bias and reproducibility in the determination of total phosphorus and nitrogen content of samples. Sampling methods were based on techniques in which samples were taken either during loading from

  12. Correlations of Sc, rare earths and other elements in selected rock samples from Arrua-i

    Energy Technology Data Exchange (ETDEWEB)

    Facetti, J F; Prats, M [Asuncion Nacional Univ. (Paraguay). Inst. de Ciencias

    1972-01-01

    The Sc and Eu contents in selected rock samples from the stock of Arrua-i have been determined, and correlations established with other elements and with the relative amounts of some rare earths. These correlations suggest metasomatic phenomena in the formation of the rock samples.

  13. Correlations of Sc, rare earths and other elements in selected rock samples from Arrua-i

    International Nuclear Information System (INIS)

    Facetti, J.F.; Prats, M.

    1972-01-01

    The Sc and Eu contents in selected rock samples from the stock of Arrua-i have been determined, and correlations established with other elements and with the relative amounts of some rare earths. These correlations suggest metasomatic phenomena in the formation of the rock samples.

  14. Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.

    Science.gov (United States)

    Heikal, A A; Wachowicz, K; Fallone, B G

    2016-10-01

    To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist-sampled (NS) 32 × 32 MRSI, and of four-times undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm in which the sampling ratios are constrained to adhere to the PDFs. Strong visual correlation as well as high R2 was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.

  15. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    Science.gov (United States)

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
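
A univariate sketch of the resampling idea described above (illustrative only; the paper's setting is multivariate, with robust moment estimates): standardized residuals share second moments under the null hypothesis, so bootstrap groups drawn from them calibrate the homogeneity statistic.

```python
import numpy as np

rng = np.random.default_rng(42)

def bartlett_stat(groups):
    """Bartlett's statistic for homogeneity of variances (univariate sketch)."""
    n = np.array([len(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    pooled = np.sum((n - 1) * v) / np.sum(n - 1)
    return np.sum(n - 1) * np.log(pooled) - np.sum((n - 1) * np.log(v))

def resampling_pvalue(groups, n_boot=2000):
    """Resample *standardized* residuals, which share second moments under H0."""
    std_resid = np.concatenate(
        [(g - np.mean(g)) / np.std(g, ddof=1) for g in groups])
    observed = bartlett_stat(groups)
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_boot):
        draw = rng.choice(std_resid, size=sum(sizes), replace=True)
        boot_groups = np.split(draw, np.cumsum(sizes)[:-1])
        count += bartlett_stat(boot_groups) >= observed
    return (count + 1) / (n_boot + 1)

a = rng.normal(0.0, 1.0, 40)
b = rng.normal(0.0, 1.0, 40)   # same variance: expect a non-small p-value
print(resampling_pvalue([a, b]))
```

Resampling raw, group-centered residuals instead would mix groups with different second moments whenever the null is false, which is exactly the difficulty in multiple-testing settings that motivates the standardized-residual approach.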

  16. rCBF measurement by one-point venous sampling with the ARG method

    International Nuclear Information System (INIS)

    Yoshida, Nobuhiro; Okamoto, Toshiaki; Takahashi, Hidekado; Hattori, Teruo

    1997-01-01

    We investigated the possibility of using venous blood sampling instead of arterial blood sampling for the current ARG (autoradiography) method used to determine regional cerebral blood flow (rCBF) on the basis of one session of arterial blood sampling and SPECT. For this purpose, the ratio of the arterial blood radioactivity count to the venous blood radioactivity count, the coefficient of variation, and the correlation and differences between arterial blood-based rCBF and venous blood-based rCBF were analyzed. The coefficient of variation was lowest (4.1%) for blood sampled at the dorsum manus 20 minutes after injection. When the relationship between venous and arterial blood counts was analyzed, arterial blood counts correlated well with venous blood counts collected at the dorsum manus 20 or 30 minutes after intravenous injection and with venous blood counts collected at the wrist 20 minutes after intravenous injection (r=0.97 or higher). The difference from rCBF determined on the basis of arterial blood was smallest (0.7) for rCBF determined on the basis of venous blood collected at the dorsum manus 20 minutes after intravenous injection. (author)

  17. Correlation Coefficients Between Different Methods of Expressing Bacterial Quantification Using Real Time PCR

    Directory of Open Access Journals (Sweden)

    Bahman Navidshad

    2012-02-01

    Full Text Available The applications of conventional culture-dependent assays to quantify bacterial populations are limited by their dependence on the inconsistent success of the different culture steps involved. In addition, some bacteria can be pathogenic or a source of endotoxins and pose a health risk to researchers. Bacterial quantification based on the real-time PCR method can overcome the above-mentioned problems. However, quantification using this approach is commonly expressed in absolute quantities even though the composition of samples (like those of digesta) can vary widely; thus, the final results may be affected if the samples are not properly homogenized, especially when multiple samples are to be pooled together before DNA extraction. The objective of this study was to determine the correlation coefficients between four different methods of expressing the output data of real-time PCR-based bacterial quantification. The four methods were: (i) the common absolute method, expressed as the cell number of specific bacteria per gram of digesta; (ii) the Livak and Schmittgen ΔΔCt method; (iii) the Pfaffl equation; and (iv) a simple relative method based on the ratio of the cell number of specific bacteria to the total bacterial cells. Because of the effect of the total bacteria population on the results obtained using ΔCt-based methods (ΔΔCt and Pfaffl), these methods lack the consistency required to serve as valid and reliable methods in real-time PCR-based bacterial quantification studies. On the other hand, because of the variable composition of digesta samples, a simple ratio of the cell number of specific bacteria to the corresponding total bacterial cells of the same sample can be a more accurate method of quantifying the population.
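
The Livak-Schmittgen method mentioned in (ii) computes a relative fold change as 2^-ΔΔCt from threshold-cycle (Ct) values. A minimal sketch with illustrative Ct values (assumes roughly 100% amplification efficiency, as the method itself does):

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Livak-Schmittgen 2^-ddCt relative quantification."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                  # compare to control sample
    return 2.0 ** (-dd_ct)

# Target amplifies 2 cycles earlier in the treated sample -> 4-fold increase.
print(fold_change_ddct(22.0, 16.0, 24.0, 16.0))  # 4.0
```

Note how the result depends on the reference (here, total-bacteria) Ct in both samples, which is the sensitivity to the total bacterial population that the study identifies as a weakness of ΔCt-based methods for digesta samples.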

  18. Verification of spectrophotometric method for nitrate analysis in water samples

    Science.gov (United States)

    Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu

    2017-12-01

    The aim of this research was to verify a spectrophotometric method for analyzing nitrate in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters used were linearity, method detection limit, limit of quantitation, level of linearity, accuracy, and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the linear regression of the calibration curve was 0.9981. The method detection limit (MDL) was determined to be 0.1294 mg/L and the limit of quantitation (LOQ) 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and the response was linear for nitrate concentrations of 10 to 50 mg/L at a 99% level of confidence. Accuracy, determined as recovery, was 109.1907%. Precision, expressed as the % relative standard deviation (%RSD) of repeatability, was 1.0886%. The tested performance criteria showed that the methodology was verified under the laboratory conditions.
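    The verification parameters named above follow standard formulas. The sketch below applies them to hypothetical replicate data; the replicate values, the spike concentration, and the EPA-style t-based MDL convention are all assumptions, not the paper's procedure:

```python
import statistics

# Hypothetical replicate measurements (mg/L) of a low-level nitrate spike.
replicates = [5.02, 5.10, 4.95, 5.08, 5.01, 4.98, 5.05]
true_value = 4.60   # assumed spike concentration

mean = statistics.mean(replicates)
s = statistics.stdev(replicates)          # sample standard deviation (n-1)

# Method detection limit: Student t (one-tailed 99%, n-1 df) times the
# replicate standard deviation, as in the common EPA-style MDL procedure.
t_99 = 3.143                              # t value for 6 degrees of freedom
mdl = t_99 * s
loq = 10 * s                              # one common convention for the LOQ

recovery = 100 * mean / true_value        # accuracy expressed as % recovery
rsd = 100 * s / mean                      # precision as % relative std dev

print(f"MDL = {mdl:.4f} mg/L, LOQ = {loq:.4f} mg/L")
print(f"Recovery = {recovery:.2f} %, RSD = {rsd:.4f} %")
```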

  19. 7 CFR 29.110 - Method of sampling.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Method of sampling. 29.110 Section 29.110 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Inspectors, Samplers, and Weighers § 29.110 Method of sampling. In sampling tobacco...

  20. Sampling Transition Pathways in Highly Correlated Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Chandler, David

    2004-10-20

    This research grant supported my group's efforts to apply and extend the method of transition path sampling that we invented during the late 1990s. This methodology is based upon a statistical mechanics of trajectory space. Traditional statistical mechanics focuses on state space, and with it, one can use Monte Carlo methods to facilitate importance sampling of states. With our formulation of a statistical mechanics of trajectory space, we have succeeded at creating algorithms by which importance sampling can be done for dynamical processes. In particular, we are able to study rare but important events without prior knowledge of transition states or mechanisms. In perhaps the most impressive application of transition path sampling, my group combined forces with Michele Parrinello and his coworkers to unravel the dynamics of autoionization of water [5]. This dynamics is the fundamental kinetic step of pH. Other applications concern the nature of dynamics far from equilibrium [1, 7], nucleation processes [2], cluster isomerization, melting and dissociation [3, 6], and molecular motors [10]. Research groups throughout the world are adopting transition path sampling. In part this has been the result of our efforts to provide pedagogical presentations of the technique [4, 8, 9], as well as providing new procedures for interpreting trajectories of complex systems [11].

  1. Measurement of regional cerebral blood flow using one-point venous blood sampling and causality model. Evaluation by comparing with conventional continuous arterial blood sampling method

    International Nuclear Information System (INIS)

    Mimura, Hiroaki; Sone, Teruki; Takahashi, Yoshitake

    2008-01-01

    Optimal setting of the input function is essential for the measurement of regional cerebral blood flow (rCBF) based on the microsphere model using N-isopropyl-4-[123I]iodoamphetamine (123I-IMP), and usually the arterial 123I-IMP concentration (integral value) in the initial 5 min is used for this purpose. We have developed a new convenient method in which the 123I-IMP concentration in an arterial blood sample is estimated from that in a venous blood sample. Brain perfusion single photon emission computed tomography (SPECT) with 123I-IMP was performed in 110 cases of central nervous system disorders. The causality was analyzed between the various parameters of the SPECT data and the ratio of the octanol-extracted arterial radioactivity concentration during the first 5 min (Caoct) to the octanol-extracted venous radioactivity concentration at 27 min after intravenous injection of 123I-IMP (Cvoct). A high correlation was observed between the measured and estimated values of Caoct/Cvoct (r=0.856) when the following five parameters were included in the regression formula: radioactivity concentration in venous blood sampled at 27 min (Cv), Cvoct, Cvoct/Cv, and total brain radioactivity counts that were measured by a four-head gamma camera 5 min and 28 min after 123I-IMP injection. Furthermore, the rCBF values obtained using the input parameters estimated by this method were also highly correlated with the rCBF values measured using the continuous arterial blood sampling method (r=0.912). These results suggest that this method could serve as a new, convenient and less invasive method of rCBF measurement in the clinical setting. (author)

  2. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    Science.gov (United States)

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50). When the
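    As a minimal illustration of the distinction drawn above between a model-based variance and a robust (sandwich-type) variance, consider an inverse-variance weighted mean. This is not the Riley working model itself; the study estimates and within-study variances below are invented:

```python
# Assumed study-level estimates y_i and within-study variances v_i.
y = [0.32, 0.45, 0.28, 0.51, 0.39, 0.44, 0.30, 0.48]
v = [0.010, 0.015, 0.012, 0.020, 0.011, 0.018, 0.009, 0.016]

w = [1.0 / vi for vi in v]                 # inverse-variance weights
theta = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)   # pooled estimate

# Model-based variance: correct only if the v_i (and the working model
# behind them) are correct.
var_model = 1.0 / sum(w)

# Robust sandwich variance: sum w_i^2 (y_i - theta)^2 / (sum w_i)^2,
# which remains asymptotically valid even if the working model is wrong.
var_robust = sum(wi ** 2 * (yi - theta) ** 2
                 for wi, yi in zip(w, y)) / sum(w) ** 2

print(f"pooled = {theta:.4f}")
print(f"model-based SE = {var_model ** 0.5:.4f}, "
      f"robust SE = {var_robust ** 0.5:.4f}")
```

The sandwich form uses the observed spread of the estimates around the pooled value rather than trusting the assumed variances, which is the same idea the paper applies to the Riley working model.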

  3. Air sampling procedures to evaluate microbial contamination: a comparison between active and passive methods in operating theatres

    Directory of Open Access Journals (Sweden)

    Napoli Christian

    2012-08-01

    Background: Since air can play a central role as a reservoir for microorganisms, in controlled environments such as operating theatres regular microbial monitoring is useful to measure air quality and identify critical situations. The aim of this study is to assess microbial contamination levels in operating theatres using both an active and a passive sampling method, and then to assess whether there is a correlation between the results of the two different sampling methods. Methods: The study was performed in 32 turbulent air flow operating theatres of a University Hospital in Southern Italy. Active sampling was carried out using the Surface Air System and passive sampling with settle plates, in accordance with ISO 14698. The Total Viable Count (TVC) was evaluated at rest (in the morning before the beginning of surgical activity) and in operational (during surgery). Results: The mean TVC at rest was 12.4 CFU/m3 and 722.5 CFU/m2/h for active and passive samplings respectively. The mean in operational TVC was 93.8 CFU/m3 (SD = 52.69; range = 22-256) and 10496.5 CFU/m2/h (SD = 7460.5; range = 1415.5-25479.7) for active and passive samplings respectively. Statistical analysis confirmed that the two methods correlate in a comparable way with the quality of air. Conclusion: It is possible to conclude that both methods can be used for general monitoring of air contamination, such as in routine surveillance programs. However, the choice must be made between one or the other to obtain specific information.

  4. Tailored two-photon correlation and fair-sampling: a cautionary tale

    Science.gov (United States)

    Romero, J.; Giovannini, D.; Tasca, D. S.; Barnett, S. M.; Padgett, M. J.

    2013-08-01

    We demonstrate an experimental test of the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality which seemingly exhibits correlations beyond the limits imposed by quantum mechanics. Inspired by the idea of Fourier synthesis, we design analysers that measure specific superpositions of orbital angular momentum (OAM) states, such that when one analyser is rotated with respect to the other, the resulting coincidence curves are similar to a square wave. Calculating the CHSH Bell parameter, S, from these curves results in values beyond the Tsirelson bound of S_QM = 2√2. We obtain S = 3.99 ± 0.02, implying almost perfect nonlocal Popescu-Rohrlich correlations. These 'super-quantum' values of S are possible only because our experiment, subtly, does not comply with fair sampling. The way our Bell test fails fair sampling is not immediately obvious and requires knowledge of the states being measured. Our experiment highlights the caution needed in Bell-type experiments based on measurements within high-dimensional state spaces such as that of OAM, especially with the advent of device-independent quantum protocols.
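    The arithmetic behind the bounds quoted in this record can be checked directly. The sketch below evaluates the CHSH combination for the ideal quantum correlation curve and for an idealized square-wave coincidence curve (the exact square-wave form is an assumption standing in for the tailored-analyser curves of the paper):

```python
import math

def E_quantum(delta_deg):
    """Ideal two-photon correlation as a function of relative analyser angle."""
    return math.cos(2 * math.radians(delta_deg))

def E_square(delta_deg):
    """Idealized square-wave coincidence curve (assumed form): +1 when the
    relative angle, folded into [0, 90], is below 45 degrees, else -1."""
    d = abs(delta_deg) % 180
    d = min(d, 180 - d)
    return 1.0 if d < 45 else -1.0

def chsh(E, a=0.0, ap=45.0, b=22.5, bp=67.5):
    # Standard CHSH combination for analyser settings a, a', b, b'.
    return E(a - b) + E(ap - b) + E(ap - bp) - E(a - bp)

print(chsh(E_quantum))  # Tsirelson bound 2*sqrt(2) ≈ 2.828
print(chsh(E_square))   # 4.0, the Popescu-Rohrlich (super-quantum) limit
```

A square-wave curve pushes every term in the CHSH sum to ±1 simultaneously, which is exactly why the measured S ≈ 4 signals a fair-sampling loophole rather than genuine super-quantum correlations.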

  5. Air sampling procedures to evaluate microbial contamination: a comparison between active and passive methods in operating theatres.

    Science.gov (United States)

    Napoli, Christian; Marcotrigiano, Vincenzo; Montagna, Maria Teresa

    2012-08-02

    Since air can play a central role as a reservoir for microorganisms, in controlled environments such as operating theatres regular microbial monitoring is useful to measure air quality and identify critical situations. The aim of this study is to assess microbial contamination levels in operating theatres using both an active and a passive sampling method and then to assess if there is a correlation between the results of the two different sampling methods. The study was performed in 32 turbulent air flow operating theatres of a University Hospital in Southern Italy. Active sampling was carried out using the Surface Air System and passive sampling with settle plates, in accordance with ISO 14698. The Total Viable Count (TVC) was evaluated at rest (in the morning before the beginning of surgical activity) and in operational (during surgery). The mean TVC at rest was 12.4 CFU/m3 and 722.5 CFU/m2/h for active and passive samplings respectively. The mean in operational TVC was 93.8 CFU/m3 (SD = 52.69; range = 22-256) and 10496.5 CFU/m2/h (SD = 7460.5; range = 1415.5-25479.7) for active and passive samplings respectively. Statistical analysis confirmed that the two methods correlate in a comparable way with the quality of air. It is possible to conclude that both methods can be used for general monitoring of air contamination, such as routine surveillance programs. However, the choice must be made between one or the other to obtain specific information.

  6. Validity studies among hierarchical methods of cluster analysis using cophenetic correlation coefficient

    International Nuclear Information System (INIS)

    Carvalho, Priscilla R.; Munita, Casimiro S.; Lapolli, André L.

    2017-01-01

    The literature presents many methods for partitioning a data base, and it is difficult to choose the most suitable one, since the various combinations of methods based on different measures of dissimilarity can lead to different grouping patterns and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. Thus, the objective of this work is to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. For this, the study was carried out using a data base of the Archaeometric Studies Group from IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods used for this study were: single linkage, complete linkage, average linkage, centroid, and Ward. The validation was done using the cophenetic correlation coefficient; comparing these values, the average linkage method obtained the best results. A script with some functions was created in the statistical program R to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method to be used on the data base. (author)
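    The cophenetic correlation used for validation is simply the Pearson correlation between the original pairwise distances and the heights at which each pair first merges in the dendrogram. The sketch below implements a naive agglomerative clustering on a 1-D toy data set (standing in for the 13-element INAA data, which is not available here) and compares three linkage methods:

```python
import math
from itertools import combinations

def linkage_cophenetic(points, method="average"):
    """Naive agglomerative clustering returning (original distances,
    cophenetic distances): the merge height at which each pair joins."""
    d = {(i, j): abs(points[i] - points[j])
         for i, j in combinations(range(len(points)), 2)}
    clusters = [{i} for i in range(len(points))]
    coph = {}
    def dist(a, b):
        vals = [d[tuple(sorted((i, j)))] for i in a for j in b]
        if method == "single":   return min(vals)
        if method == "complete": return max(vals)
        return sum(vals) / len(vals)          # average linkage (UPGMA)
    while len(clusters) > 1:
        (ia, ib), h = min(
            (((x, y), dist(clusters[x], clusters[y]))
             for x, y in combinations(range(len(clusters)), 2)),
            key=lambda t: t[1])
        for i in clusters[ia]:
            for j in clusters[ib]:
                coph[tuple(sorted((i, j)))] = h
        clusters[ia] |= clusters[ib]
        del clusters[ib]
    return d, coph

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

# Hypothetical 1-D "concentrations" for 6 ceramic fragments.
samples = [0.0, 0.4, 1.0, 5.0, 5.3, 9.0]
for method in ("single", "complete", "average"):
    d, coph = linkage_cophenetic(samples, method)
    keys = sorted(d)
    c = pearson([d[k] for k in keys], [coph[k] for k in keys])
    print(f"{method:8s} cophenetic correlation = {c:.4f}")
```

The linkage whose cophenetic distances track the original distances most faithfully (highest correlation) is the one the paper's criterion would select.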

  7. Validity studies among hierarchical methods of cluster analysis using cophenetic correlation coefficient

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Priscilla R.; Munita, Casimiro S.; Lapolli, André L., E-mail: prii.ramos@gmail.com, E-mail: camunita@ipen.br, E-mail: alapolli@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The literature presents many methods for partitioning a data base, and it is difficult to choose the most suitable one, since the various combinations of methods based on different measures of dissimilarity can lead to different grouping patterns and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. Thus, the objective of this work is to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. For this, the study was carried out using a data base of the Archaeometric Studies Group from IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods used for this study were: single linkage, complete linkage, average linkage, centroid, and Ward. The validation was done using the cophenetic correlation coefficient; comparing these values, the average linkage method obtained the best results. A script with some functions was created in the statistical program R to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method to be used on the data base. (author)

  8. 19 CFR 151.83 - Method of sampling.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Method of sampling. 151.83 Section 151.83 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Cotton § 151.83 Method of sampling. For...

  9. ALARA ASSESSMENT OF SETTLER SLUDGE SAMPLING METHODS

    International Nuclear Information System (INIS)

    Nelsen, L.A.

    2009-01-01

    The purpose of this assessment is to compare underwater and above-water settler sludge sampling methods to determine if the added cost of underwater sampling for the sole purpose of worker dose reduction is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP sludge underwater to achieve worker dose reductions. Additionally, initial plans were to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision for underwater sampling of settler sludge needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation will compare and contrast the present method of above-water sampling with the underwater method planned by the Sludge Treatment Project (STP) and determine whether settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining doses to workers As Low As Reasonably Achievable (ALARA), eliminating the need for costly redesigns, testing and personnel retraining.

  10. ALARA ASSESSMENT OF SETTLER SLUDGE SAMPLING METHODS

    Energy Technology Data Exchange (ETDEWEB)

    NELSEN LA

    2009-01-30

    The purpose of this assessment is to compare underwater and above-water settler sludge sampling methods to determine if the added cost of underwater sampling for the sole purpose of worker dose reduction is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP sludge underwater to achieve worker dose reductions. Additionally, initial plans were to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision for underwater sampling of settler sludge needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation will compare and contrast the present method of above-water sampling with the underwater method planned by the Sludge Treatment Project (STP) and determine whether settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining doses to workers As Low As Reasonably Achievable (ALARA), eliminating the need for costly redesigns, testing and personnel retraining.

  11. A method to determine density in wood samples using attenuation of 59.5 keV gamma radiation

    International Nuclear Information System (INIS)

    Dinator, M.I.; Morales, J.R.; Aliaga, N.; Karsulovic, J.T.; Sanchez, J.; Leon, L.A.

    1996-01-01

    A nondestructive method to determine the density of wood samples is presented. The photon mass attenuation coefficient in samples of radiata pine (Pinus radiata) was measured at 59.5 keV with a radioactive source of Am-241. The value of 0.192 ± 0.002 cm²/g was obtained with a gamma spectroscopy system and later used in the determination of the mass density of sixteen samples of the same species. Comparison of these results with those of the gravimetric method through a linear regression showed a slope of 1.001 and a correlation coefficient of 0.94. (author)
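    Given the measured mass attenuation coefficient, the density follows from inverting the Beer-Lambert law. In the sketch below, the coefficient is the paper's value, but the sample thickness and the count rates with and without the sample are assumed for illustration:

```python
import math

mu_mass = 0.192          # cm^2/g, measured value for radiata pine at 59.5 keV
thickness = 4.0          # cm, assumed sample thickness
I0, I = 12000.0, 8350.0  # counts without / with the sample (assumed)

# Beer-Lambert law: I = I0 * exp(-mu_mass * rho * t)  =>  solve for rho.
rho = math.log(I0 / I) / (mu_mass * thickness)
print(f"density = {rho:.3f} g/cm^3")
```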

  12. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.

  13. Method of measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1977-01-01

    A method of measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample by counting at least two differently quenched versions of the sample is described. In each counting operation the sample is counted in the presence of and in the absence of a standard radioactive source. A pulse height (PH) corresponding to a unique point on the pulse height spectrum generated in the presence of the standard is determined. A zero-threshold sample count rate (CPM) is derived by counting the sample once in a counting window having a zero-threshold lower limit. Normalized values of the measured pulse heights (PH) are developed and correlated with the corresponding pulse counts (CPM) to determine the pulse count for a normalized pulse height value of zero, and hence the sample disintegration rate.

  14. Measurement of GFR by Tc-99m DTPA: Comparison of 5 plasma sample and 2 plasma sample methods in North Indian population

    International Nuclear Information System (INIS)

    Mittal, B.R.; Bhattacharya, A.; Singh, B.; Jha, V.; Sarika, Kumar R.

    2007-01-01

    Assessment of glomerular filtration rate (GFR) has a significant impact on both prognosis and treatment of patients with renal disease. In this study we compared the two-plasma-sample method (G2S), using an MS Excel spreadsheet-based program, with a manual five-plasma-sample method (GS) used to measure GFR by determining Tc-99m-diethylenetriamine penta-acetic acid (Tc-99m DTPA) clearance in patients with chronic kidney disease (CKD) and healthy renal donors. The study was conducted in 148 subjects (64 men and 84 women; age range 14 to 70 yr): 59 patients with CKD and 89 prospective healthy kidney donors. Tc-99m DTPA (74-100 MBq) was injected intravenously and thereafter blood samples were obtained at 60, 90, 120, 150 and 180 min via the patent venflon. Radioactivity in the injection syringe and plasma was measured by means of a multi-well gamma counter. The correlation coefficient between the 2 methods was 0.9453, with a slope of 0.90 and an intercept of 14.72 mL/min. A Bland-Altman plot of disagreement showed that G2S underestimated the GFR values by 9.0 ml/min, 11.3 ml/min and 6.9 ml/min in the entire study, CKD and healthy donor groups respectively. Our results indicate that in spite of the good correlation between the GS and G2S methods, the G2S method consistently underestimated GFR in our study population. However, a regression equation may be applied to the GFR values estimated by the G2S method to match the GFR determined by the GS method. (author)
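    The correction the authors suggest amounts to applying the reported regression line to the two-sample values. The sketch below uses the slope and intercept quoted in the record; which quantity is the dependent variable is an assumption here, and the example G2S values are invented:

```python
# Reported regression between the methods: slope 0.90, intercept 14.72 mL/min
# (assumed here to map G2S onto the five-sample GS scale).
SLOPE, INTERCEPT = 0.90, 14.72

def corrected_gfr(g2s_ml_min):
    """Map a two-sample (G2S) Tc-99m DTPA clearance onto the
    five-sample (GS) scale using the reported regression equation."""
    return SLOPE * g2s_ml_min + INTERCEPT

for g2s in (30.0, 60.0, 90.0):   # hypothetical G2S clearances, mL/min
    print(f"G2S = {g2s:5.1f} -> GS-equivalent = {corrected_gfr(g2s):6.2f} mL/min")
```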

  15. Dark Energy Survey Year 1 results: cross-correlation redshifts - methods and systematics characterization

    Science.gov (United States)

    Gatti, M.; Vielzeuf, P.; Davis, C.; Cawthon, R.; Rau, M. M.; DeRose, J.; De Vicente, J.; Alarcon, A.; Rozo, E.; Gaztanaga, E.; Hoyle, B.; Miquel, R.; Bernstein, G. M.; Bonnett, C.; Carnero Rosell, A.; Castander, F. J.; Chang, C.; da Costa, L. N.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Roodman, A.; Sevilla-Noarbe, I.; Troxel, M. A.; Wechsler, R. H.; Asorey, J.; Davis, T. M.; Glazebrook, K.; Hinton, S. R.; Lewis, G.; Lidman, C.; Macaulay, E.; Möller, A.; O'Neill, C. R.; Sommer, N. E.; Uddin, S. A.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Allam, S.; Annis, J.; Bechtol, K.; Brooks, D.; Burke, D. L.; Carollo, D.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; DePoy, D. L.; Desai, S.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Hoormann, J. K.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Li, T. S.; Lima, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Reil, K.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sheldon, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, B. E.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.

    2018-06-01

    We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing source galaxies from the Dark Energy Survey Year 1 sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We apply the method to two photo-z codes run in our simulated data: Bayesian Photometric Redshift and Directional Neighbourhood Fitting. We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering versus photo-zs. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.

  16. Application of the digital image correlation method in the study of cohesive coarse soil deformations

    Science.gov (United States)

    Kogut, Janusz P.; Tekieli, Marcin

    2018-04-01

    Non-contact video measurement methods are used to extend the capabilities of standard measurement systems, based on strain gauges or accelerometers. In most cases, they are able to provide more accurate information about the material or construction being tested than traditional sensors, while maintaining a high resolution and measurement stability. With the use of optical methods, it is possible to generate a full field of displacement on the surface of the test sample. The displacement value is the basic (primary) value determined using optical methods, and it is possible to determine the size of the derivative in the form of a sample deformation. This paper presents the application of a non-contact optical method to investigate the deformation of coarse soil material. For this type of soil, it is particularly difficult to obtain basic strength parameters. The use of a non-contact optical method, followed by a digital image correlation (DIC) study of the sample obtained during the tests, effectively completes the description of the behaviour of this type of material.

  17. Nuclear spin measurement using the angular correlation method

    International Nuclear Information System (INIS)

    Schapira, J.-P.

    The double angular correlation method is defined by a semi-classical approach (Biedenharn). The equivalent quantum-mechanical formulas are discussed for coherent and incoherent angular momentum mixing; the correlations are described in terms of the density and efficiency matrices (Fano). The ambiguities in double angular correlations can sometimes be removed (emission of particles with a high orbital momentum l) by using triple correlations between levels with well-defined spin and parity. Triple correlations are applied to the case where the direction of linear polarization of γ-rays is detected. [fr]

  18. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
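    A bootstrap interval for a ratio-type quantity like the efficiency gain can be sketched in a few lines. The data below are synthetic lognormal stand-ins for per-history quantities from a correlated run and a conventional run, and a simple percentile interval is used instead of the paper's shortest-interval variant:

```python
import random
import statistics

random.seed(42)

# Synthetic per-history quantities from a correlated-sampling run and a
# conventional run; the gain is a ratio of means, so its sampling
# distribution can be strongly non-normal.
corr = [random.lognormvariate(0.0, 0.6) for _ in range(200)]
conv = [random.lognormvariate(1.2, 0.6) for _ in range(200)]

def gain(a, b):
    return statistics.mean(b) / statistics.mean(a)

# Bootstrap: resample paired histories and recompute the gain each time.
boot = []
n = len(corr)
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(gain([corr[i] for i in idx], [conv[i] for i in idx]))
boot.sort()
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"gain = {gain(corr, conv):.2f}, 95% CI ~ [{lo:.2f}, {hi:.2f}]")
```

A few resamples that happen to include the heaviest-weight histories dominate the upper tail, which is the same mechanism by which a handful of high-weight photons widened the intervals in the study.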

  19. Standard methods for sampling and sample preparation for gamma spectroscopy

    International Nuclear Information System (INIS)

    Taskaeva, M.; Taskaev, E.; Nikolov, P.

    1993-01-01

    The strategy for sampling and sample preparation is outlined: the necessary number of samples; the analysis and treatment of the results received; the quantity of analysed material according to the radionuclide concentrations and analytical methods; and the minimal quantity and kind of data needed for drawing final conclusions and decisions on the basis of the results received. This strategy was tested in gamma spectroscopic analysis of the radionuclide contamination of the region of the Eleshnitsa Uranium Mines. The water samples were taken and stored according to ASTM D 3370-82. The general sampling procedures were in conformity with the recommendations of ISO 5667. The radionuclides were concentrated by coprecipitation with iron hydroxide and ion exchange. The sampling of soil samples complied with the rules of ASTM C 998, and their preparation with ASTM C 999. After preparation the samples were sealed hermetically and measured. (author)

  20. Sampling bee communities using pan traps: alternative methods increase sample size

    Science.gov (United States)

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  1. New adaptive sampling method in particle image velocimetry

    International Nuclear Information System (INIS)

    Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei

    2015-01-01

    This study proposes a new adaptive method to enable the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to become self-adapted according to the seeding density. The proposed method can relax the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with that of the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)
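    The spring-force redistribution idea can be sketched in one dimension. This is a minimal illustration under our own assumptions (rest lengths proportional to the inverse seeding density, normalized to the domain, relaxed by explicit iteration), not the authors' PIV implementation:

```python
def relax_sampling_points(points, density, n_iter=200, step=0.2):
    # Move each interior point under "spring" forces from its two neighbours.
    # Each spring's rest length is inversely proportional to the seeding
    # density at its midpoint, so sampling points concentrate where the
    # density is high. Endpoints stay fixed.
    pts = sorted(points)
    span = pts[-1] - pts[0]
    for _ in range(n_iter):
        mids = [0.5 * (pts[i] + pts[i + 1]) for i in range(len(pts) - 1)]
        rests = [1.0 / density(m) for m in mids]
        scale = span / sum(rests)          # normalise rest lengths to the domain
        rests = [r * scale for r in rests]
        new = pts[:]
        for i in range(1, len(pts) - 1):
            f_left = (pts[i] - pts[i - 1]) - rests[i - 1]   # left spring extension
            f_right = (pts[i + 1] - pts[i]) - rests[i]      # right spring extension
            new[i] = pts[i] + step * (f_right - f_left)
        pts = new
    return pts
```

    At equilibrium the gaps match the rest lengths, i.e. the local sampling rate is proportional to the seeding density, which is the behaviour the note describes.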

  2. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
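    The symmetrized correlation function discussed above is conventionally written with a complex time argument; in the notation standard for this family of methods (our transcription, with $\hat H$ the Hamiltonian, $Z$ the partition function and $\beta = 1/k_B T$):

```latex
G_{AB}(t) \;=\; \frac{1}{Z}\,\mathrm{Tr}\!\left[\hat{A}\,
  e^{\,i\hat{H}t_c^{*}/\hbar}\,\hat{B}\,e^{-i\hat{H}t_c/\hbar}\right],
\qquad t_c = t - \frac{i\beta\hbar}{2}.
```

    Its Fourier transform relates to that of the standard correlation function through $\tilde G_{AB}(\omega) = e^{-\beta\hbar\omega/2}\,\tilde C_{AB}(\omega)$, which is why the symmetrized version carries the same physical information while being better suited to imaginary-time sampling.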

  4. MRI-determined liver proton density fat fraction, with MRS validation: Comparison of regions of interest sampling methods in patients with type 2 diabetes.

    Science.gov (United States)

    Vu, Kim-Nhien; Gilbert, Guillaume; Chalut, Marianne; Chagnon, Miguel; Chartrand, Gabriel; Tang, An

    2016-05-01

    To assess the agreement between published magnetic resonance imaging (MRI)-based regions of interest (ROI) sampling methods using liver mean proton density fat fraction (PDFF) as the reference standard. This retrospective, institutional review board-approved study was conducted in 35 patients with type 2 diabetes. Liver PDFF was measured by magnetic resonance spectroscopy (MRS) using a stimulated-echo acquisition mode sequence and MRI using a multiecho spoiled gradient-recalled echo sequence at 3.0T. ROI sampling methods reported in the literature were reproduced and liver mean PDFF obtained by whole-liver segmentation was used as the reference standard. Intraclass correlation coefficients (ICCs), Bland-Altman analysis, repeated-measures analysis of variance (ANOVA), and paired t-tests were performed. ICC between MRS and MRI-PDFF was 0.916. Bland-Altman analysis showed excellent intermethod agreement with a bias of -1.5 ± 2.8%. The repeated-measures ANOVA found no systematic variation of PDFF among the nine liver segments. The correlation between liver mean PDFF and ROI sampling methods was very good to excellent (0.873 to 0.975). Paired t-tests revealed significant differences (P < 0.05) for sampling methods that exclusively or predominantly sampled the right lobe. Significant correlations with mean PDFF were found with sampling methods that included a higher number of segments, a total area equal to or larger than 5 cm(2), or sampled both lobes (P = 0.001, 0.023, and 0.002, respectively). MRI-PDFF quantification methods should sample each liver segment in both lobes and include a total surface area equal to or larger than 5 cm(2) to provide a close estimate of the liver mean PDFF. © 2015 Wiley Periodicals, Inc.
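    A Bland-Altman analysis like the one reported reduces to the bias (mean difference) and the 95% limits of agreement. A generic sketch, not the study's code:

```python
import statistics

def bland_altman(method_a, method_b):
    # Bias = mean of the paired differences;
    # 95% limits of agreement = bias +/- 1.96 * SD of the differences.
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```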

  5. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing sample normalization. We show that several methods are now available and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. Copyright © 2015 Elsevier B.V. All rights reserved.
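    Total-sum normalization, one of the simplest of the methods reviewed, can be sketched as follows (function name and target value are ours, for illustration):

```python
def total_sum_normalize(intensities, target=100.0):
    # Scale each sample so its summed metabolite signal equals a common
    # target, removing differences in total sample amount before
    # individual metabolites are compared across samples.
    total = sum(intensities.values())
    return {m: v * target / total for m, v in intensities.items()}
```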

  6. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith
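    The one-way ANOVA estimator of the intraclass correlation coefficient around which these intervals are built can be sketched for equal cluster sizes. This is the textbook formula, not the paper's code:

```python
import statistics

def anova_icc(clusters):
    # One-way ANOVA estimator of the ICC for k clusters of equal size m:
    # icc = (MSB - MSW) / (MSB + (m - 1) * MSW)
    k = len(clusters)
    m = len(clusters[0])
    grand = statistics.fmean(y for c in clusters for y in c)
    means = [statistics.fmean(c) for c in clusters]
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    msw = sum((y - mu) ** 2 for c, mu in zip(clusters, means) for y in c) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)
```

    With a binary outcome the cluster entries are simply 0/1, which is the setting the study examines; note the estimator can go negative when within-cluster variability dominates.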

  7. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...
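    The block-circulant structure mentioned above is what lets a correlation filter solve its ridge regression independently per frequency. A minimal sketch of that core step under our own naming (naive O(n²) DFT to stay self-contained; real trackers use an FFT and 2-D patches):

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (O(n^2); illustration only).
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * i * k / n) for k in range(n)) / n
            for i in range(n)]

def correlation_filter(x, y, lam=1e-3):
    # Ridge regression over all cyclic shifts of the base sample x:
    # the circulant data matrix diagonalises in the Fourier basis, so
    # w_hat = conj(x_hat) * y_hat / (|x_hat|^2 + lambda), per frequency.
    xh, yh = dft(x), dft(y)
    wh = [xc.conjugate() * yc / (abs(xc) ** 2 + lam) for xc, yc in zip(xh, yh)]
    return [w.real for w in idft(wh)]  # filter is real for real x, y
```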

  8. Radioactive air sampling methods

    CERN Document Server

    Maiello, Mark L

    2010-01-01

    Although the field of radioactive air sampling has matured and evolved over decades, it has lacked a single resource that assimilates technical and background information on its many facets. Edited by experts and with contributions from top practitioners and researchers, Radioactive Air Sampling Methods provides authoritative guidance on measuring airborne radioactivity from industrial, research, and nuclear power operations, as well as naturally occurring radioactivity in the environment. Designed for industrial hygienists, air quality experts, and health physicists, the book delves into the applied research advancing and transforming practice with improvements to measurement equipment, human dose modeling of inhaled radioactivity, and radiation safety regulations. To present a wide picture of the field, it covers the international and national standards that guide the quality of air sampling measurements and equipment. It discusses emergency response issues, including radioactive fallout and the assets used ...

  9. The MIXR sample or: how I learned to stop worrying and love multiwavelength catalogue cross-correlations

    Science.gov (United States)

    Mingo, Beatriz; Watson, Mike; Stewart, Gordon; Rosen, Simon; Blain, Andrew; Hardcastle, Martin; Mateos, Silvia; Carrera, Francisco; Ruiz, Angel; Pineau, Francois-Xavier

    2016-08-01

    We cross-match 3XMM, WISE and FIRST/NVSS to create the largest-to-date mid-IR, X-ray, and radio (MIXR) sample of galaxies and AGN. We use MIXR to triage sources and efficiently and accurately pre-classify them as star-forming galaxies or AGN, and to highlight bias and shortcomings in current AGN sample selection methods, paving the way for the next generation of instruments. Our results highlight key questions in AGN science, such as the need for a re-definition of the radio-loud/radio-quiet classification, and our observed lack of correlation between the kinetic (jet) and radiative (luminosity) output in AGN, which has dramatic potential consequences on our current understanding of AGN accretion, variability and feedback.

  10. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
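    The published LMC algorithm uses constrained optimization; as a hedged toy version of the same underlying idea (coefficient profiles that are similar across instruments), the sketch below assumes the slave coefficients are an affine rescaling of the master's, b_slave ≈ a·b_master + c, and fits the two scalars by least squares from a few slave-measured spectra. The function name and the two-parameter model are our simplification, not the paper's method:

```python
def transfer_coefficients(master_coefs, slave_spectra, slave_y):
    # Model: y = X_slave @ (a * b_master + c * 1); fit scalars a, c by
    # ordinary least squares on the two derived regressors.
    u = [sum(bi * xi for bi, xi in zip(master_coefs, x)) for x in slave_spectra]  # X @ b_master
    v = [sum(x) for x in slave_spectra]                                           # X @ 1
    suu = sum(ui * ui for ui in u)
    svv = sum(vi * vi for vi in v)
    suv = sum(ui * vi for ui, vi in zip(u, v))
    suy = sum(ui * yi for ui, yi in zip(u, slave_y))
    svy = sum(vi * yi for vi, yi in zip(v, slave_y))
    det = suu * svv - suv * suv          # 2x2 normal equations
    a = (svv * suy - suv * svy) / det
    c = (suu * svy - suv * suy) / det
    return [a * b + c for b in master_coefs]
```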

  11. Method validation for control determination of mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry.

    Science.gov (United States)

    Torres, Daiane Placido; Martins-Teixeira, Maristela Braga; Cadore, Solange; Queiroz, Helena Müller

    2015-01-01

    A method for the determination of total mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS) has been validated following international foodstuff protocols in order to fulfill the Brazilian National Residue Control Plan. The experimental parameters have been previously studied and optimized according to specific legislation on validation and inorganic contaminants in foodstuff. Linearity, sensitivity, specificity, detection and quantification limits, precision (repeatability and within-laboratory reproducibility), robustness as well as accuracy of the method have been evaluated. Linearity of response was satisfactory for the two concentration ranges available on the TDA AAS equipment, between approximately 25.0 and 200.0 μg kg(-1) (square regression) and 250.0 and 2000.0 μg kg(-1) (linear regression) of mercury. The residues for both ranges were homoscedastic and independent, with normal distribution. Correlation coefficients obtained for these ranges were higher than 0.995. Limits of quantification (LOQ) and of detection of the method (LDM), based on signal standard deviation (SD) for a low-in-mercury sample, were 3.0 and 1.0 μg kg(-1), respectively. Repeatability of the method was better than 4%. Within-laboratory reproducibility achieved a relative SD better than 6%. Robustness of the current method was evaluated and identified sample mass as a significant factor. Accuracy (assessed as the analyte recovery) was calculated on the basis of the repeatability, and ranged from 89% to 99%. The obtained results showed the suitability of the present method for direct mercury measurement in fresh fish and shrimp samples and the importance of monitoring the analysis conditions for food control purposes. Additionally, the competence of this method was recognized by accreditation under the standard ISO/IEC 17025.

  12. A nanosilver-based spectrophotometric method for determination of malachite green in surface water samples.

    Science.gov (United States)

    Sahraei, R; Farmany, A; Mortazavi, S S; Noorizadeh, H

    2013-07-01

    A new spectrophotometric method is reported for the determination of nanomolar levels of malachite green in surface water samples. The method is based on the catalytic effect of silver nanoparticles on the oxidation of malachite green by hexacyanoferrate (III) in acetate-acetic acid medium. The absorbance is measured at 610 nm with the fixed-time method. Under the optimum conditions, the linear range was 8.0 × 10(-9)-2.0 × 10(-7) mol L(-1) malachite green with a correlation coefficient of 0.996. The limit of detection (S/N = 3) was 2.0 × 10(-9) mol L(-1). The relative standard deviation for ten replicate determinations of 1.0 × 10(-8) mol L(-1) malachite green was 1.86%. The method features good accuracy and reproducibility for malachite green determination in surface water samples without any pre-concentration and separation step.
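    The figures of merit quoted above, a calibration slope and a detection limit at S/N = 3 (LOD = 3·SD(blank)/slope), can be computed with a generic sketch like the following (not tied to the authors' instrument; numbers in the usage are illustrative):

```python
import statistics

def fit_line(x, y):
    # Ordinary least-squares slope and intercept of a calibration line.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def limit_of_detection(blank_signals, slope):
    # LOD at S/N = 3, expressed in concentration units: 3 * SD(blank) / slope.
    return 3 * statistics.stdev(blank_signals) / slope
```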

  13. Fluidics platform and method for sample preparation

    Science.gov (United States)

    Benner, Henry W.; Dzenitis, John M.

    2016-06-21

    Provided herein are fluidics platforms and related methods for performing integrated sample collection and solid-phase extraction of a target component of the sample all in one tube. The fluidics platform comprises a pump, particles for solid-phase extraction and a particle-holding means. The method comprises contacting the sample with one or more reagents in a pump, coupling a particle-holding means to the pump and expelling the waste out of the pump while the particle-holding means retains the particles inside the pump. The fluidics platform and methods herein described allow solid-phase extraction without pipetting and centrifugation.

  14. AN EMPIRICAL INVESTIGATION OF THE EFFECTS OF NONNORMALITY UPON THE SAMPLING DISTRIBUTION OF THE PRODUCT MOMENT CORRELATION COEFFICIENT.

    Science.gov (United States)

    HJELM, HOWARD; NORRIS, RAYMOND C.

    THE STUDY EMPIRICALLY DETERMINED THE EFFECTS OF NONNORMALITY UPON SOME SAMPLING DISTRIBUTIONS OF THE PRODUCT MOMENT CORRELATION COEFFICIENT (PMCC). SAMPLING DISTRIBUTIONS OF THE PMCC WERE OBTAINED BY DRAWING NUMEROUS SAMPLES FROM CONTROL AND EXPERIMENTAL POPULATIONS HAVING VARIOUS DEGREES OF NONNORMALITY AND BY CALCULATING CORRELATION COEFFICIENTS…

  15. Method for measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample

    International Nuclear Information System (INIS)

    1977-01-01

    A method of measuring the disintegration rate of a beta-emitting radionuclide in a liquid sample by counting at least two differently quenched versions of the sample. In each counting operation the sample is counted in the presence of and in the absence of a standard radioactive source. A pulse height (PH) corresponding to a unique point on the pulse height spectrum generated in the presence of the standard is determined. A zero threshold sample count rate (CPM) is derived by counting the sample once in a counting window having a zero threshold lower limit. Normalized values of the measured pulse heights (PH) are developed and correlated with the corresponding counts (CPM) to determine the pulse count for a normalized pulse height value of zero and hence the sample disintegration rate
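    The final extrapolation step can be illustrated as a straight-line fit of count rate against normalized pulse height, evaluated at zero. The numbers below are invented, and the sketch omits the standard-source counting used in the patent to obtain the quench-indicating pulse heights:

```python
def disintegration_rate(norm_pulse_heights, count_rates):
    # Fit count rate (CPM) against the normalised quench-indicating pulse
    # height across differently quenched versions of the sample, then
    # extrapolate to pulse height zero; the intercept estimates the
    # disintegration rate (DPM).
    n = len(norm_pulse_heights)
    mx = sum(norm_pulse_heights) / n
    my = sum(count_rates) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(norm_pulse_heights, count_rates))
             / sum((x - mx) ** 2 for x in norm_pulse_heights))
    return my - slope * mx  # CPM at PH = 0
```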

  16. Influence of the operating parameters and of the sample introduction system on time correlation of line intensities using an axially viewed CCD-based ICP-AES system

    Energy Technology Data Exchange (ETDEWEB)

    Grotti, Marco, E-mail: grotti@chimica.unige.i [Dipartimento di Chimica e Chimica Industriale, Via Dodecaneso 31, 16146 Genova (Italy); Todoli, Jose Luis [Departamento de Quimica Analitica, Nutricion y Bromatologia, Universidad de Alicante, 03080, Alicante (Spain); Mermet, Jean Michel [Spectroscopy Forever, 01390 Tramoyes (France)

    2010-02-15

    The influence of the acquisition and operating parameters on time correlation between emission line intensities was investigated using axially viewed inductively coupled plasma-multichannel-based emission spectrometry and various sample introduction systems. It was found that to obtain flicker-noise limited signals, necessary to compensate for time-correlated signal fluctuations by internal standardization, the flicker-noise magnitude of the sample introduction system, the integration time and the emission line intensity had to be considered. The highest correlation between lines was observed for ultrasonic nebulization with desolvation, the noisiest system among those considered, for which the contribution of the uncorrelated shot-noise was negligible. In contrast, for sample introduction systems characterized by lower flicker-noise levels, shot-noise led to high, non-correlated RSD values, making the internal standard method much less efficient. To minimize shot-noise, time correlation was improved by increasing the emission line intensities and the integration time. Improvement in repeatability did not depend only on time correlation, but also on the ratio between the relative standard deviations of the analytical and reference lines. The best signal compensation was obtained when RSD values of the reference and analytical lines were similar, which is usually obtained when the system is flicker-noise limited, while departure from similarity can lead to a degradation of repeatability when using the internal standard method. Moreover, the use of so-called robust plasma conditions, i.e. a high power (1500 W) along with a low carrier gas flow rate (0.8 L/min), also improved the compensation. Finally, high correlation and consequent improvement in repeatability by internal standardization were also observed in the presence of complex matrices (sediment and soil samples), although a matrix-induced degradation of the correlation between lines was generally
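    The compensation mechanism can be demonstrated with a toy simulation: a multiplicative flicker term common to both lines plus independent shot noise per line. Ratioing the analyte line to the reference line cancels the correlated flicker component, so the ratio's RSD is governed by the uncorrelated shot noise alone. All magnitudes here are invented for illustration:

```python
import random
import statistics

def rsd(values):
    # Relative standard deviation of a series of replicate intensities.
    return statistics.stdev(values) / statistics.fmean(values)

def simulate(n=4000, flicker=0.05, shot=0.005, seed=7):
    rng = random.Random(seed)
    analyte, reference = [], []
    for _ in range(n):
        f = 1.0 + rng.gauss(0.0, flicker)               # flicker, common to both lines
        analyte.append(100.0 * f * (1.0 + rng.gauss(0.0, shot)))   # analyte line + shot noise
        reference.append(50.0 * f * (1.0 + rng.gauss(0.0, shot)))  # reference line + shot noise
    return analyte, reference

analyte, reference = simulate()
ratio = [a / r for a, r in zip(analyte, reference)]
# Internal standardisation cancels the correlated flicker component,
# so rsd(ratio) is far below rsd(analyte) in this flicker-limited regime.
```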

  17. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    Science.gov (United States)

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from the cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
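    Three of the probability designs listed above can be sketched directly (illustrative only; the population and strata are stand-ins for a participant registry):

```python
import random

def simple_random_sample(population, n, rng):
    # Every element has an equal and independent chance of selection.
    return rng.sample(population, n)

def systematic_sample(population, n, rng):
    # Random start, then every k-th element through the sampling frame.
    k = len(population) // n
    start = rng.randrange(k)
    return [population[start + i * k] for i in range(n)]

def stratified_sample(strata, n_per_stratum, rng):
    # Independent simple random samples drawn within each stratum.
    return {name: rng.sample(members, n_per_stratum)
            for name, members in strata.items()}
```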

  18. Correlation between Cervical Vertebral Maturation Stages and Dental Maturation in a Saudi Sample

    Directory of Open Access Journals (Sweden)

    Nayef H Felemban

    2017-01-01

    Background: The aim of the present study was to compare the cervical vertebral maturation stages method and dental maturity using tooth calcification stages. Methods: The current study comprised 405 subjects selected from orthodontic patients of Saudi origin attending the clinics of the specialized dental centers in the western region of Saudi Arabia. Dental age was assessed according to the developmental stages of the upper and lower third molars, and skeletal maturation according to the cervical vertebrae maturation stage method. Statistical analysis was done using the Kruskal-Wallis H, Mann-Whitney U and Chi-Square tests, the t-test, and the Spearman correlation coefficient for inter-group comparison. Results: The females were younger than the males in all cervical stages. CS1-CS2 represent the period before the growth peak, CS3-CS5 the pubertal growth spurt, and CS6 the period after the growth peak. The mean ages and standard deviations for cervical stages CS2, CS3 and CS4 were 12.09 ±1.72, 13.19 ±1.62 and 14.88 ±1.52 years, respectively. The Spearman correlation coefficients between cervical vertebrae and dental maturation were between 0.166 and 0.612, and between 0.243 and 0.832, for both sexes for the upper and lower third molars. All coefficients were significant at the 0.01 and 0.05 levels. Conclusion: The results of this study showed that skeletal maturity increased with dental age for both genders. An earlier skeletal maturation stage was observed in females. This study needs further analysis using a larger sample covering the entire dentition.
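    The Spearman rank correlation reported here is just the Pearson correlation of rank-transformed data and needs no statistics library; a self-contained sketch (helper names ours), with ties receiving their average rank:

```python
def ranks(values):
    # Average (mid) ranks, 1-based, with ties sharing their mean rank.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation computed on the rank-transformed data.
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```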

  19. Total lead (Pb) concentration in oil shale ash samples based on correlation to isotope Pb-210 gamma-spectrometric measurements

    Energy Technology Data Exchange (ETDEWEB)

    Vaasma, T.; Kiisk, M.; Tkaczyk, A.H. [University of Tartu (Estonia); Bitjukova, L. [Tallinn University of Technology (Estonia)

    2014-07-01

    (PF) and circulating fluidized bed (CFB) firing technology. These samples were analyzed to determine macro and trace elements by the ICP-MS method. The same samples were also measured with a high-purity germanium detector (planar BSI GPD-50400) to determine the activity concentrations of natural radionuclides. The lead concentrations and Pb-210 activity concentrations were determined, and the correlation between the corresponding values was analyzed. Initial results demonstrate a strong positive linear relationship between these values, with the coefficient of determination (R{sup 2}) over 0.94. The correlation coefficient (Pearson's r) had a value over 0.95. Both Pb and Pb-210 values had an increasing trend from the bottom ash towards electrostatic precipitator (ESP) ashes. The strong linear correlation between Pb concentrations and Pb-210 activity concentrations gives a credible indication that lead can be measured in ash samples using its radioactive isotope Pb-210. Especially in situations where there are higher concentrations of Pb, for example in the case of wastes from the metallurgic and energy industries, this method could be used to detect the lead concentration quickly and with no chemical processing of the sample. Document available in abstract form only. (authors)

  20. On-capillary sample cleanup method for the electrophoretic determination of carbohydrates in juice samples.

    Science.gov (United States)

    Morales-Cid, Gabriel; Simonet, Bartolomé M; Cárdenas, Soledad; Valcárcel, Miguel

    2007-05-01

    On many occasions, sample treatment is a critical step in electrophoretic analysis. As an alternative to batch procedures, in this work, a new strategy is presented with a view to develop an on-capillary sample cleanup method. This strategy is based on the partial filling of the capillary with carboxylated single-walled carbon nanotube (c-SWNT). The nanoparticles retain interferences from the matrix allowing the determination and quantification of carbohydrates (viz glucose, maltose and fructose). The precision of the method for the analysis of real samples ranged from 5.3 to 6.4%. The proposed method was compared with a method based on a batch filtration of the juice sample through diatomaceous earth and further electrophoretic determination. This method was also validated in this work. The RSD for this other method ranged from 5.1 to 6%. The results obtained by both methods were statistically comparable demonstrating the accuracy of the proposed methods and their effectiveness. Electrophoretic separation of carbohydrates was achieved using 200 mM borate solution as a buffer at pH 9.5 and applying 15 kV. During separation, the capillary temperature was kept constant at 40 degrees C. For the on-capillary cleanup method, a solution containing 50 mg/L of c-SWNTs prepared in 300 mM borate solution at pH 9.5 was introduced for 60 s into the capillary just before sample introduction. For the electrophoretic analysis of samples cleaned in batch with diatomaceous earth, it is also recommended to introduce into the capillary, just before the sample, a 300 mM borate solution as it enhances the sensitivity and electrophoretic resolution.

  1. Present status of NMCC and sample preparation method for bio-samples

    International Nuclear Information System (INIS)

    Futatsugawa, S.; Hatakeyama, S.; Saitou, S.; Sera, K.

    1993-01-01

    In NMCC (Nishina Memorial Cyclotron Center) we are conducting research on PET (Positron Emission Computed Tomography) in nuclear medicine and PIXE (Particle Induced X-ray Emission) analysis using a compactly designed small cyclotron. The NMCC facilities have been opened to researchers of other institutions since April 1993. The present status of NMCC is described. Bio-samples (medical samples, plants, animals and environmental samples) have mainly been analyzed by PIXE in NMCC. Small amounts of bio-samples for PIXE are decomposed quickly and easily in a sealed PTFE (polytetrafluoroethylene) vessel with a microwave oven. This sample preparation method for bio-samples is also described. (author)

  2. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-perform pan traps.

    Science.gov (United States)

    Popic, Tony J; Davila, Yvonne C; Wardle, Glenda M

    2013-01-01

    Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km(2) area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

  3. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    Science.gov (United States)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
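
The bootstrap step this record relies on (re-identify one model per resampled dataset, then pool the estimates for clustering) can be sketched generically. This is an illustrative stand-in using a simple mean estimator on made-up data, not the paper's subspace identification:

```python
import random
import statistics

def bootstrap(data, estimator, n_boot, rng):
    """Apply an estimator to n_boot datasets resampled with replacement,
    mirroring how one model is identified per bootstrap subset."""
    n = len(data)
    return [estimator([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]

rng = random.Random(7)
data = [rng.gauss(10.0, 2.0) for _ in range(100)]   # stand-in measurements
boots = bootstrap(data, statistics.fmean, 500, rng)

# The spread of the bootstrap estimates approximates the estimator's
# standard error (here roughly 2 / sqrt(100) = 0.2). Estimates that recur
# stably across resamples are the ones a clustering step would keep.
spread = statistics.stdev(boots)
```

In the paper's setting, `estimator` would be the subspace identification and the clustering by correlation would replace the simple spread statistic.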

  4. Comparison of glomerular filtration rates by dynamic renal scintigraphy and dual-plasma sample clearance method in diabetic nephropathy

    International Nuclear Information System (INIS)

    Xie Peng; Huang Jianmin; Pan Liping; Liu Xiaomei; Wei Lingge; Gao Jianqing

    2010-01-01

    Objective: To evaluate the accuracy of dynamic renal scintigraphy for the estimation of the glomerular filtration rate (dGFR) in patients with diabetic nephropathy, as compared to the conventional dual-plasma sample clearance method (pscGFR). Methods: Forty-six patients with diabetic nephropathy underwent both dynamic renal scintigraphy and dual-plasma sample measurement after 99Tcm-DTPA injection. A paired Student's t-test and correlation analysis were performed to compare dGFR and pscGFR (normalized to a body surface area of 1.73 m²). Results: The mean dGFR was higher than the mean pscGFR ((51.08±26.78) ml·min⁻¹ vs (44.06±29.43) ml·min⁻¹, t=4.209, P=0.000). The dGFR correlated linearly with pscGFR (r=0.923, P=0.000; regression equation: pscGFR = 1.015 × dGFR - 7.773, F=254.656, P=0.000). Conclusions: dGFR correlated well with pscGFR. Although it could not absolutely replace the latter in patients with diabetic nephropathy, dGFR could reasonably evaluate the filtration function for these patients. (authors)
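
The statistics behind this comparison (a paired t-test on the two GFR estimates plus Pearson correlation) can be reproduced on synthetic data. The numbers below are illustrative stand-ins mimicking the reported relationship, not the study's measurements:

```python
import math
import random
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def paired_t(x, y):
    """t statistic for the mean of the paired differences x - y."""
    d = [a - b for a, b in zip(x, y)]
    return statistics.fmean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

random.seed(0)
# Synthetic pairs assuming pscGFR is about dGFR - 8 with scatter.
dgfr = [random.uniform(15.0, 100.0) for _ in range(46)]
pscgfr = [g - 8.0 + random.gauss(0.0, 5.0) for g in dgfr]

r = pearson_r(dgfr, pscgfr)   # strong linear agreement
t = paired_t(dgfr, pscgfr)    # positive: dGFR systematically higher
```

A positive paired t with a high r is exactly the pattern the record reports: a systematic offset between methods despite close linear agreement.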

  5. Quantum Correlations in Nonlocal Boson Sampling.

    Science.gov (United States)

    Shahandeh, Farid; Lund, Austin P; Ralph, Timothy C

    2017-09-22

    Determination of the quantum nature of correlations between two spatially separated systems plays a crucial role in quantum information science. Of particular interest are the questions of whether and how these correlations enable quantum information protocols to be more powerful. Here, we report on a distributed quantum computation protocol in which the input and output quantum states are considered to be classically correlated in quantum informatics. Nevertheless, we show that the correlations between the outcomes of the measurements on the output state cannot be efficiently simulated using classical algorithms. Crucially, at the same time, local measurement outcomes can be efficiently simulated on classical computers. We show that the only known classicality criterion violated by the input and output states in our protocol is the one used in quantum optics, namely, phase-space nonclassicality. As a result, we argue that the global phase-space nonclassicality inherent within the output state of our protocol represents true quantum correlations.

  6. Direct sampling methods for inverse elastic scattering problems

    Science.gov (United States)

    Ji, Xia; Liu, Xiaodong; Xi, Yingxia

    2018-03-01

    We consider the inverse elastic scattering of incident plane compressional and shear waves from the knowledge of the far field patterns. Specifically, three direct sampling methods for location and shape reconstruction are proposed, using different components of the far field patterns. Only inner products are involved in the computation, so the novel sampling methods are very simple and fast to implement. With the help of the factorization of the far field operator, we give a lower bound of the proposed indicator functionals for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functionals decay like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functionals depend continuously on the far field patterns, which further implies that the novel sampling methods are extremely stable with respect to data error. For the case when the observation directions are restricted to a limited aperture, we first introduce some data retrieval techniques to obtain the data that cannot be measured directly and then use the proposed direct sampling methods for location and shape reconstruction. Finally, some numerical simulations in two dimensions are conducted with noisy data, and the results further verify the effectiveness and robustness of the proposed sampling methods, even for multiple multiscale scatterers and limited-aperture problems.

  7. Surface Sampling Collection and Culture Methods for Escherichia coli in Household Environments with High Fecal Contamination.

    Science.gov (United States)

    Exum, Natalie G; Kosek, Margaret N; Davis, Meghan F; Schwab, Kellogg J

    2017-08-22

    Empiric quantification of environmental fecal contamination is an important step toward understanding the impact that water, sanitation, and hygiene interventions have on reducing enteric infections. There is a need to standardize the methods used for surface sampling in field studies that examine fecal contamination in low-income settings. The dry cloth method presented in this manuscript improves upon the more commonly used swabbing technique that has been shown in the literature to have a low sampling efficiency. The recovery efficiency of a dry electrostatic cloth sampling method was evaluated using Escherichia coli and then applied to household surfaces in Iquitos, Peru, where there is high fecal contamination and enteric infection. Side-by-side measurements were taken from various floor locations within a household at the same time over a three-month period to compare for consistency of quantification of E. coli bacteria. The dry cloth sampling method in the laboratory setting showed 105% (95% Confidence Interval: 98%, 113%) E. coli recovery efficiency off of the cloths. The field application demonstrated strong agreement of side-by-side results (Pearson correlation coefficient 0.83 for dirt surfaces and 0.53 for the remaining floor samples). The method can be utilized in households with high bacterial loads using either continuous (quantitative) or categorical (semi-quantitative) data. The standardization of this low-cost, dry electrostatic cloth sampling method can be used to measure differences between households in intervention and non-intervention arms of randomized trials.

  8. Comparison of mucosal lining fluid sampling methods and influenza-specific IgA detection assays for use in human studies of influenza immunity.

    Science.gov (United States)

    de Silva, Thushan I; Gould, Victoria; Mohammed, Nuredin I; Cope, Alethea; Meijer, Adam; Zutt, Ilse; Reimerink, Johan; Kampmann, Beate; Hoschler, Katja; Zambon, Maria; Tregoning, John S

    2017-10-01

    We need greater understanding of the mechanisms underlying protection against influenza virus to develop more effective vaccines. To do this, we need better, more reproducible methods of sampling the nasal mucosa. The aim of the current study was to compare levels of influenza virus A subtype-specific IgA collected using three different methods of nasal sampling. Samples were collected from healthy adult volunteers before and after LAIV immunization by nasal wash, flocked swabs and Synthetic Absorptive Matrix (SAM) strips. Influenza A virus subtype-specific IgA levels were measured by haemagglutinin binding ELISA or haemagglutinin binding microarray, and the functional response was assessed by microneutralization. Nasosorption using SAM strips led to the recovery of a more concentrated sample of material, with a significantly higher level of total and influenza H1-specific IgA. However, an equivalent percentage of specific IgA was observed with all sampling methods when normalized to the total IgA. Responses measured using a recently developed antibody microarray platform, which allows evaluation of binding to multiple influenza strains simultaneously with small sample volumes, were compared to ELISA. There was a good correlation between ELISA and microarray values. Material recovered from SAM strips was weakly neutralizing when used in an in vitro assay, with a modest correlation between the level of IgA measured by ELISA and neutralization, but a greater correlation between microarray-measured IgA and neutralizing activity. In conclusion, we have tested three different methods of nasal sampling and show that flocked swabs and novel SAM strips are appropriate alternatives to traditional nasal washes for assessment of mucosal influenza humoral immunity. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. A rapid method of radium-226 analysis in water samples using an alpha spectroscopic technique

    International Nuclear Information System (INIS)

    Lim, T.P.

    1981-01-01

    A fast, reliable and accurate method for radium-226 determination in environmental water samples has been devised, using an alpha spectroscopic technique. The correlation between barium-133 and radium-226 in the barium-radium sulphate precipitation mechanism was studied and in the limited experimental recovery range, the coefficient of correlation was r = 0.986. A self-absorption study for various barium carrier concentrations was also undertaken to obtain the least broadening of alpha energy line widths. An optimum value of 0.3 mg barium carrier was obtained for chemical recovery in the range of 85 percent. (auth)

  10. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
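
The "simulation of urn experiments" generator mentioned for Wallenius' distribution can be sketched directly: balls are drawn one at a time without replacement, with the odds of taking a weighted (red) ball scaled by ω. This is an illustrative stand-in, not Fog's reference implementation:

```python
import random

def wallenius_urn(m1, m2, n, omega, rng):
    """One Wallenius noncentral hypergeometric variate: draw n balls
    sequentially from an urn with m1 red and m2 white balls, where a red
    ball is omega times as likely to be taken as a white one. (Fisher's
    variant instead conditions independent binomial draws on their total,
    so the two distributions differ whenever omega != 1.)"""
    x = 0                          # red balls drawn so far
    for i in range(n):
        red = m1 - x
        white = m2 - (i - x)
        if rng.random() < omega * red / (omega * red + white):
            x += 1
    return x

rng = random.Random(1)
neutral = [wallenius_urn(50, 50, 10, 1.0, rng) for _ in range(20_000)]
biased = [wallenius_urn(50, 50, 10, 5.0, rng) for _ in range(20_000)]
mean_neutral = sum(neutral) / len(neutral)  # ordinary hypergeometric: ~5
mean_biased = sum(biased) / len(biased)     # weight shifts the mean upward
```

Urn simulation is exact but slow for large urns, which is why the record also lists inversion and rejection methods.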

  11. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-perform pan traps.

    Directory of Open Access Journals (Sweden)

    Tony J Popic

    Full Text Available Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km(2) area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

  12. DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1993-03-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by the US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

  13. DOE methods for evaluating environmental and waste management samples.

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S C; McCulloch, M; Thomas, B L; Riley, R G; Sklarew, D S; Mong, G M; Fadeff, S K [eds.; Pacific Northwest Lab., Richland, WA (United States)

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by the US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

  14. Cadmium and lead determination by ICPMS: Method optimization and application in carabao milk samples

    Directory of Open Access Journals (Sweden)

    Riza A. Magbitang

    2012-06-01

    Full Text Available A method utilizing inductively coupled plasma mass spectrometry (ICPMS) as the element-selective detector with microwave-assisted nitric acid digestion as the sample pre-treatment technique was developed for the simultaneous determination of cadmium (Cd) and lead (Pb) in milk samples. The estimated detection limits were 0.09 µg kg⁻¹ and 0.33 µg kg⁻¹ for Cd and Pb, respectively. The method was linear in the concentration range 0.01 to 500 µg kg⁻¹ with correlation coefficients of 0.999 for both analytes. The method was validated using certified reference material BCR 150, and the determined values for Cd and Pb were 18.24 ± 0.18 µg kg⁻¹ and 807.57 ± 7.07 µg kg⁻¹, respectively. Further validation using another certified reference material, NIST 1643e, resulted in determined concentrations of 6.48 ± 0.10 µg L⁻¹ for Cd and 21.96 ± 0.87 µg L⁻¹ for Pb. These determined values agree well with the certified values of the reference materials. The method was applied to processed and raw carabao milk samples collected in Nueva Ecija, Philippines. The Cd levels determined in the samples were in the range 0.11 ± 0.07 to 5.17 ± 0.13 µg kg⁻¹ for the processed milk samples, and 0.11 ± 0.07 to 0.45 ± 0.09 µg kg⁻¹ for the raw milk samples. The concentrations of Pb were in the range 0.49 ± 0.21 to 5.82 ± 0.17 µg kg⁻¹ for the processed milk samples, and 0.72 ± 0.18 to 6.79 ± 0.20 µg kg⁻¹ for the raw milk samples.
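
The linearity and detection-limit figures in this record come from a standard external-calibration workflow: fit detector response against concentration by least squares, check the correlation coefficient, and estimate the detection limit from blank noise. The calibration points and blank standard deviation below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def pearson_r(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical calibration standards (µg/kg) and detector counts.
conc = [0.01, 0.1, 1.0, 10.0, 100.0, 500.0]
counts = [0.8, 7.9, 80.5, 802.0, 8010.0, 40050.0]

slope, intercept = fit_line(conc, counts)
r = pearson_r(conc, counts)      # linearity check across the working range

sd_blank = 2.4                   # assumed std. dev. of the blank signal
lod = 3.0 * sd_blank / slope     # common 3-sigma detection-limit estimate
```

The 3σ/slope convention is one common detection-limit estimator; the paper does not state which convention it used.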

  15. Isotope correlations for safeguards surveillance and accountancy methods

    International Nuclear Information System (INIS)

    Persiani, P.J.; Kalimullah.

    1983-01-01

    Isotope correlations corroborated by experiments, coupled with measurement methods for nuclear material in the fuel cycle, have the potential as a safeguards surveillance and accountancy system. The US/DOE/OSS Isotope Correlations for Surveillance and Accountancy Methods (ICSAM) program has been structured into three phases: (1) the analytical development of the Isotope Correlation Technique (ICT) for actual power reactor fuel cycles; (2) the development of a dedicated portable ICT computer system for in-field implementation; and (3) the experimental program for measurement of U and Pu isotopics in representative spent fuel-rods of the initial 3 or 4 burnup cycles of the Commonwealth Edison Zion-1 and -2 PWR power plants. Since any particular correlation could generate different curves depending upon the type and positioning of the fuel assembly, a 3-D reactor model and 2-group cross section depletion calculation for the first cycle of Zion-2 was performed with each fuel assembly as a depletion block. It is found that, for a given PWR, all assemblies with a unique combination of enrichment zone and number of burnable poison rods (BPRs) generate one coincident curve. Some correlations are found to generate a single curve for assemblies of all enrichments and numbers of BPRs. The 8 axial segments of the 3-D calculation generate one coincident curve for each correlation. For some correlations the curve for the full assembly homogenized over core-height deviates from the curve for the 8 axial segments, and for other correlations coincides with the curve for the segments. The former behavior is primarily based on the transmutation lag between the end segments and the middle segments. The experimental implication is that the isotope correlations exhibiting this behavior can be determined by dissolving a full assembly but not by dissolving only an axial segment or pellets.

  16. Some new results on correlation-preserving factor scores prediction methods

    NARCIS (Netherlands)

    Ten Berge, J.M.F.; Krijnen, W.P.; Wansbeek, T.J.; Shapiro, A.

    1999-01-01

    Anderson and Rubin and McDonald have proposed a correlation-preserving method of factor scores prediction which minimizes the trace of a residual covariance matrix for variables. Green has proposed a correlation-preserving method which minimizes the trace of a residual covariance matrix for factors.

  17. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    Science.gov (United States)

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line-base normalization. To assess performance, the mean square error and mean absolute error, the statistical relation factor R², and the average deviation were examined. The results show that the CCSNM was the best of the compared normalization methods for estimating the effect of the trainer.
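
The CCSNM formula itself is not given in this record, but three of the baseline normalizations it is compared against are standard and easy to state (line-base normalization, the fourth baseline, is omitted here). The EMG amplitudes below are made-up illustrative values:

```python
import math
import statistics

def min_max(x):
    """Scale values linearly onto [0, 1]."""
    lo, hi = min(x), max(x)
    return [(v - lo) / (hi - lo) for v in x]

def z_score(x):
    """Shift to zero mean and unit (population) standard deviation."""
    mu, sd = statistics.fmean(x), statistics.pstdev(x)
    return [(v - mu) / sd for v in x]

def decimal_scaling(x):
    """Divide by the smallest power of 10 mapping all values into [-1, 1]."""
    j = math.ceil(math.log10(max(abs(v) for v in x)))
    return [v / 10 ** j for v in x]

emg = [120.0, 85.0, 260.0, 40.0, 190.0]   # illustrative amplitudes (µV)
mm, z, ds = min_max(emg), z_score(emg), decimal_scaling(emg)
```

The choice among these matters because each preserves different structure (range, moments, magnitude), which is exactly the axis on which the paper's correlation/covariance-supported variant is claimed to help.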

  18. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    Science.gov (United States)

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods; sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of

  19. Sensitive spectrophotometric methods for determination of some organophosphorus pesticides in vegetable samples

    Directory of Open Access Journals (Sweden)

    MAGDA A. AKL

    2010-03-01

    Full Text Available Three rapid, simple, reproducible and sensitive spectrophotometric methods (A, B and C) are described for the determination of two organophosphorus pesticides, malathion and dimethoate, in formulations and vegetable samples. Methods A and B involve the addition of an excess of Ce⁴⁺ in sulphuric acid medium and the determination of the unreacted oxidant from the decrease of the red color of chromotrope 2R (C2R) at λmax = 528 nm for method A, or the decrease of the orange-pink color of rhodamine 6G (Rh6G) at λmax = 525 nm for method B. Method C is based on the oxidation of malathion or dimethoate with a slight excess of N-bromosuccinimide (NBS) and the determination of the unreacted oxidant by reacting it with amaranth dye (AM) in hydrochloric acid medium at λmax = 520 nm. A regression analysis of Beer-Lambert plots showed a good correlation in the concentration range of 0.1-4.2 μg mL⁻¹. The apparent molar absorptivity, Sandell sensitivity, and the detection and quantification limits were calculated. For more accurate analysis, the Ringbom optimum concentration range is 0.25-4.0 μg mL⁻¹. The developed methods were successfully applied to the determination of malathion and dimethoate in their formulations and environmental vegetable samples.

  20. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
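
The payoff of the alternating-directions idea in this record is that a separable correlation matrix factors into a Kronecker product of small 1-D matrices, which are far cheaper to store and decompose than the full matrix. A minimal 2-D stand-in (Gaussian correlation on a regular grid, hypothetical length scales) shows the construction:

```python
import math

def corr_1d(n, length_scale):
    """Gaussian correlation matrix on a 1-D grid of n points (illustrative)."""
    return [[math.exp(-((i - j) / length_scale) ** 2) for j in range(n)]
            for i in range(n)]

def kron(a, b):
    """Kronecker product of two square matrices stored as lists of lists:
    element ((i,k),(j,l)) of the result equals a[i][j] * b[k][l]."""
    return [[a[i][j] * b[k][l] for j in range(len(a)) for l in range(len(b))]
            for i in range(len(a)) for k in range(len(b))]

cx = corr_1d(3, 2.0)   # x-direction factor (3 grid points)
cy = corr_1d(4, 1.5)   # y-direction factor (4 grid points)
c = kron(cx, cy)       # full 12 x 12 correlation matrix, never needed at once
```

Decomposing `cx` and `cy` separately and combining the results, as the paper proposes (with an added 1-D spline interpolation to a finer grid), avoids ever factorizing the large product matrix directly.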

  1. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an important sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
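
The Kriging surrogate in this record is beyond a short sketch, but the importance-sampling estimator it feeds is compact: sample from a density shifted toward the limit state and reweight each failure sample by the ratio of nominal to importance densities. The limit-state function and densities below are a textbook stand-in, not the paper's:

```python
import math
import random

def g(x):
    """Toy limit state: failure when g(x) < 0, i.e. when x > 4."""
    return 4.0 - x

def normal_pdf(x, mu):
    """Standard-deviation-1 normal density centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

random.seed(42)
N = 200_000
mu_nominal, mu_importance = 0.0, 4.0   # shift sampling toward the limit state

acc = 0.0
for _ in range(N):
    x = random.gauss(mu_importance, 1.0)
    if g(x) < 0.0:
        # The likelihood ratio corrects for sampling from the shifted density.
        acc += normal_pdf(x, mu_nominal) / normal_pdf(x, mu_importance)
pf = acc / N   # estimates P(X > 4) = 3.17e-5 for X ~ N(0, 1)
```

Crude Monte Carlo would need hundreds of millions of samples to resolve a probability this small; centering the sampling density near the limit state (as the paper's kernel density built from Markov chain samples does) makes a few hundred thousand suffice. The paper additionally evaluates `g` on a Kriging metamodel rather than directly, which is omitted here.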

  2. Fast electronic structure methods for strongly correlated molecular systems

    International Nuclear Information System (INIS)

    Head-Gordon, Martin; Beran, Gregory J O; Sodt, Alex; Jung, Yousung

    2005-01-01

    A short review is given of newly developed fast electronic structure methods that are designed to treat molecular systems with strong electron correlations, such as diradicaloid molecules, for which standard electronic structure methods such as density functional theory are inadequate. These new local correlation methods are based on coupled cluster theory within a perfect pairing active space, containing either a linear or quadratic number of pair correlation amplitudes, to yield the perfect pairing (PP) and imperfect pairing (IP) models. This reduces the scaling of the coupled cluster iterations to no worse than cubic, relative to the sixth power dependence of the usual (untruncated) coupled cluster doubles model. A second order perturbation correction, PP(2), to treat the neglected (weaker) correlations is formulated for the PP model. To ensure minimal prefactors, in addition to favorable size-scaling, highly efficient implementations of PP, IP and PP(2) have been completed, using auxiliary basis expansions. This yields speedups of almost an order of magnitude over the best alternatives using 4-center 2-electron integrals. A short discussion of the scope of accessible chemical applications is given

  3. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    Science.gov (United States)

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)

  4. Local Field Response Method Phenomenologically Introducing Spin Correlations

    Science.gov (United States)

    Tomaru, Tatsuya

    2018-03-01

    The local field response (LFR) method is a way of searching for the ground state in a similar manner to quantum annealing. However, the LFR method operates on a classical machine, and quantum effects are introduced through a priori information and through phenomenological means reflecting the states during the computations. The LFR method has been treated with a one-body approximation, and therefore, the effect of entanglement has not been sufficiently taken into account. In this report, spin correlations are phenomenologically introduced as one of the effects of entanglement, by which multiple tunneling at anticrossing points is taken into account. As a result, the accuracy of solutions for a 128-bit system increases by 31% compared with that without spin correlations.

  5. Equilibrium Molecular Thermodynamics from Kirkwood Sampling

    OpenAIRE

    Somani, Sandeep; Okamoto, Yuko; Ballard, Andrew J.; Wales, David J.

    2015-01-01

    We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, wher...

  6. 40 CFR Appendix I to Part 261 - Representative Sampling Methods

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Representative Sampling Methods I...—Representative Sampling Methods The methods and equipment used for sampling waste materials will vary with the form and consistency of the waste materials to be sampled. Samples collected using the sampling...

  7. Flow velocity measurement by using zero-crossing polarity cross correlation method

    International Nuclear Information System (INIS)

    Xu Chengji; Lu Jinming; Xia Hong

    1993-01-01

    Using the designed correlation metering system and a highly accurate hot-wire anemometer as a calibration device, an experimental study of the correlation method in a tunnel was carried out. Velocity measurement of gas flow using the zero-crossing polarity cross-correlation method was realized, and the experimental results have been analysed.

  8. Population models and simulation methods: The case of the Spearman rank correlation.

    Science.gov (United States)

    Astivia, Oscar L Olvera; Zumbo, Bruno D

    2017-11-01

    The purpose of this paper is to highlight the importance of a population model in guiding the design and interpretation of simulation studies used to investigate the Spearman rank correlation. The Spearman rank correlation has been known for over a hundred years to applied researchers and methodologists alike and is one of the most widely used non-parametric statistics. Still, certain misconceptions can be found, either explicitly or implicitly, in the published literature because a population definition for this statistic is rarely discussed within the social and behavioural sciences. By relying on copula distribution theory, a population model is presented for the Spearman rank correlation, and its properties are explored both theoretically and in a simulation study. Through the use of the Iman-Conover algorithm (which allows the user to specify the rank correlation as a population parameter), simulation studies from previously published articles are explored, and it is found that many of the conclusions purported in them regarding the nature of the Spearman correlation would change if the data-generation mechanism better matched the simulation design. More specifically, issues such as small sample bias and lack of power of the t-test and r-to-z Fisher transformation disappear when the rank correlation is calculated from data sampled where the rank correlation is the population parameter. A proof for the consistency of the sample estimate of the rank correlation is shown as well as the flexibility of the copula model to encompass results previously published in the mathematical literature. © 2017 The British Psychological Society.
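The population model discussed here can be illustrated with a small simulation: under a bivariate normal (Gaussian-copula) model with Pearson parameter ρ, the population Spearman correlation is (6/π)·arcsin(ρ/2), and the sample estimate converges to it rather than to ρ itself, consistent with the consistency result the abstract mentions. The sketch below assumes this standard formula; it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def spearman(x, y):
    # Sample Spearman correlation: Pearson correlation of the ranks.
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Gaussian copula with Pearson parameter rho: the population
# Spearman correlation is (6 / pi) * arcsin(rho / 2).
rho = 0.7
rs_pop = (6 / np.pi) * np.arcsin(rho / 2)

z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200_000)
rs_hat = spearman(z[:, 0], z[:, 1])  # converges to rs_pop, not to rho
```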

  9. Mixed Methods Sampling: A Typology with Examples

    Science.gov (United States)

    Teddlie, Charles; Yu, Fen

    2007-01-01

    This article presents a discussion of mixed methods (MM) sampling techniques. MM sampling involves combining well-established qualitative and quantitative techniques in creative ways to answer research questions posed by MM research designs. Several issues germane to MM sampling are presented including the differences between probability and…

  10. Log sampling methods and software for stand and landscape analyses.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  11. New methods for sampling sparse populations

    Science.gov (United States)

    Anna Ringvall

    2007-01-01

    To improve surveys of sparse objects, methods that use auxiliary information have been suggested. Guided transect sampling uses prior information, e.g., from aerial photographs, for the layout of survey strips. Instead of being laid out straight, the strips will wind between potentially more interesting areas. 3P sampling (probability proportional to prediction) uses...

  12. Statistical sampling method for releasing decontaminated vehicles

    International Nuclear Information System (INIS)

    Lively, J.W.; Ware, J.A.

    1996-01-01

    Earth moving vehicles (e.g., dump trucks, belly dumps) commonly haul radiologically contaminated materials from a site being remediated to a disposal site. Traditionally, each vehicle must be surveyed before being released. The logistical difficulties of implementing the traditional approach on a large scale demand that an alternative be devised. A statistical method (MIL-STD-105E, "Sampling Procedures and Tables for Inspection by Attributes") for assessing product quality from a continuous process was adapted to the vehicle decontamination process. This method produced a sampling scheme that automatically compensates for and accommodates fluctuating batch sizes and changing conditions without the need to modify or rectify the sampling scheme in the field. Vehicles are randomly selected (sampled) upon completion of the decontamination process to be surveyed for residual radioactive surface contamination. The frequency of sampling is based on the expected number of vehicles passing through the decontamination process in a given period and the confidence level desired. This process has been used successfully for 1 year at the former uranium mill site in Monticello, Utah (a CERCLA regulated clean-up site). The method forces improvement in the quality of the decontamination process and results in a lower likelihood that vehicles exceeding the surface contamination standards are offered for survey. Implementation of this statistical sampling method at the Monticello project has resulted in more efficient processing of vehicles through decontamination and radiological release, saved hundreds of hours of processing time, provided a high level of confidence that release limits are met, and improved the radiological cleanliness of vehicles leaving the controlled site.
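The sampling frequency described above, driven by the expected batch size and desired confidence level, can be approximated with the standard zero-acceptance rule, shown here as a hedged sketch; the actual MIL-STD-105E tables differ in detail, and the function name is hypothetical.

```python
import math

def sample_size(defect_rate, confidence):
    # Smallest n such that a batch with the given fraction of
    # non-conforming vehicles yields at least one contaminated
    # vehicle in the sample with the required confidence
    # (zero-acceptance rule): (1 - defect_rate)**n <= 1 - confidence.
    return math.ceil(math.log(1 - confidence) / math.log(1 - defect_rate))

n = sample_size(0.10, 0.95)  # -> 29 vehicles surveyed per batch
```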

  13. 19 CFR 151.70 - Method of sampling by Customs.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Method of sampling by Customs. 151.70 Section 151... THE TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.70 Method of sampling by Customs. A general sample shall be taken from each sampling unit, unless it is not...

  14. Innovative methods for inorganic sample preparation

    Energy Technology Data Exchange (ETDEWEB)

    Essling, A.M.; Huff, E.A.; Graczyk, D.G.

    1992-04-01

    Procedures and guidelines are given for the dissolution of a variety of selected materials using fusion, microwave, and Parr bomb techniques. These materials include germanium glass, corium-concrete mixtures, and zeolites. Emphasis is placed on sample-preparation approaches that produce a single master solution suitable for complete multielement characterization of the sample. In addition, data are presented on the soil microwave digestion method approved by the Environmental Protection Agency (EPA). Advantages and disadvantages of each sample-preparation technique are summarized.

  15. Innovative methods for inorganic sample preparation

    International Nuclear Information System (INIS)

    Essling, A.M.; Huff, E.A.; Graczyk, D.G.

    1992-04-01

    Procedures and guidelines are given for the dissolution of a variety of selected materials using fusion, microwave, and Parr bomb techniques. These materials include germanium glass, corium-concrete mixtures, and zeolites. Emphasis is placed on sample-preparation approaches that produce a single master solution suitable for complete multielement characterization of the sample. In addition, data are presented on the soil microwave digestion method approved by the Environmental Protection Agency (EPA). Advantages and disadvantages of each sample-preparation technique are summarized

  16. Method of vacuum correlation functions: Results and prospects

    International Nuclear Information System (INIS)

    Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.

    2006-01-01

    Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years, in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow), are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, not possessing the drawbacks of conventional perturbation theory and leading to the infrared freezing of the coupling constant α_s.

  17. Strongly Correlated Systems Theoretical Methods

    CERN Document Server

    Avella, Adolfo

    2012-01-01

    The volume presents, for the very first time, an exhaustive collection of those modern theoretical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case studies where the specific technique proved very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, but comprehensive, source or wish to get acquainted, in as painless a way as po...

  18. Strongly correlated systems numerical methods

    CERN Document Server

    Mancini, Ferdinando

    2013-01-01

    This volume presents, for the very first time, an exhaustive collection of those modern numerical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and material science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case studies where the specific technique proved very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, but comprehensive, source or wish to get acquainted, in as painless a way as possi...

  19. Image correlation spectroscopy: mapping correlations in space, time, and reciprocal space.

    Science.gov (United States)

    Wiseman, Paul W

    2013-01-01

    This chapter presents an overview of two recent implementations of image correlation spectroscopy (ICS). The background theory is presented for spatiotemporal image correlation spectroscopy and image cross-correlation spectroscopy (STICS and STICCS, respectively) as well as k-(reciprocal) space image correlation spectroscopy (kICS). An introduction to the background theory is followed by sections outlining procedural aspects for properly implementing STICS, STICCS, and kICS. These include microscopy image collection, sampling in space and time, sample and fluorescent probe requirements, signal to noise, and background considerations that are all required to properly implement the ICS methods. Finally, procedural steps for immobile population removal and actual implementation of the ICS analysis programs to fluorescence microscopy image time stacks are described. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Method for numerical simulation of two-term exponentially correlated colored noise

    International Nuclear Information System (INIS)

    Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.

    2006-01-01

    A method for numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications
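The traditional one-term method that this abstract extends is the exact update for an exponentially correlated (Ornstein-Uhlenbeck) process; a two-term correlated noise can then be sketched as the sum of two independent such terms. This is a generic illustration of that idea with invented parameters, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def ou_trajectory(tau, var, dt, n):
    # Exact update for one exponentially correlated term with
    # autocorrelation <x(t) x(t+s)> = var * exp(-|s| / tau).
    a = np.exp(-dt / tau)
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(var))
    for i in range(1, n):
        x[i] = a * x[i - 1] + np.sqrt(var * (1.0 - a * a)) * rng.normal()
    return x

# Two-term noise: C(s) = 1.0*exp(-|s|/1.0) + 0.5*exp(-|s|/0.2).
dt, n = 0.05, 200_000
xi = ou_trajectory(1.0, 1.0, dt, n) + ou_trajectory(0.2, 0.5, dt, n)

# Compare the empirical autocorrelation at lag s = 1.0 with the formula.
k = int(1.0 / dt)
c_emp = np.mean(xi[:-k] * xi[k:])
c_th = 1.0 * np.exp(-1.0 / 1.0) + 0.5 * np.exp(-1.0 / 0.2)
```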

  1. Correlation between different methods of intra- abdominal pressure ...

    African Journals Online (AJOL)

    This study aimed to determine the correlation between transvesical ... circumstances may arise where this method is not viable and alternative methods ..... The polycompartment syndrome: A concise state-of-the- art review. ... hypertension in a mixed population of critically ill patients: A multiple-center epidemiological study.

  2. Chapter 12. Sampling and analytical methods

    International Nuclear Information System (INIS)

    Busenberg, E.; Plummer, L.N.; Cook, P.G.; Solomon, D.K.; Han, L.F.; Groening, M.; Oster, H.

    2006-01-01

    When water samples are taken for the analysis of CFCs, regardless of the sampling method used, contamination of samples by contact with atmospheric air (with its 'high' CFC concentrations) is a major concern. This is because groundwaters usually have lower CFC concentrations than those waters which have been exposed to the modern air. Some groundwaters might not contain CFCs and, therefore, are most sensitive to trace contamination by atmospheric air. Thus, extreme precautions are needed to obtain uncontaminated samples when groundwaters, particularly those with older ages, are sampled. It is recommended at the start of any CFC investigation that samples from a CFC-free source be collected and analysed, as a check upon the sampling equipment and methodology. The CFC-free source might be a deep monitoring well or, alternatively, CFC-free water could be carefully prepared in the laboratory. It is especially important that all tubing, pumps and connection that will be used in the sampling campaign be checked in this manner

  3. Two-Way Gene Interaction From Microarray Data Based on Correlation Methods.

    Science.gov (United States)

    Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh

    2016-06-01

    Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a Gene Co-expression Network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. In the foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes of venous thrombosis disease, made a matrix entry representing the score for the corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with Cytoscape, based on BIND, and Gene Ontology, based on molecular function visual methods; R software version 3.2 and Bioconductor were used to perform these methods. Based on the Pearson and Spearman correlations, the results were the same and were confirmed by Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by visual methods. Some results of the correlation coefficients are not the same with visualization. The reason may be due to the small number of data.
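The two construction steps described above (score all gene pairs, then connect pairs whose score exceeds a threshold) can be sketched with plain Pearson correlation on a made-up expression matrix; the gene layout and threshold value are arbitrary illustrations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

def coexpression_network(expr, threshold=0.8):
    # Step 1: score all gene pairs (Pearson correlation of rows).
    scores = np.corrcoef(expr)
    # Step 2: connect pairs whose |score| exceeds the threshold.
    adj = np.abs(scores) > threshold
    np.fill_diagonal(adj, False)
    return adj

# Toy expression matrix (genes x samples): genes 0 and 1 are
# co-regulated, gene 2 is independent.
base = rng.normal(size=50)
expr = np.vstack([
    base + 0.1 * rng.normal(size=50),
    base + 0.1 * rng.normal(size=50),
    rng.normal(size=50),
])
adj = coexpression_network(expr)  # edge only between genes 0 and 1
```

Replacing `np.corrcoef` with a rank-based score on the same matrix would give the Spearman variant compared in the study.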

  4. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and a test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from the joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach to sample size determination in this case would select the largest sample size required across the endpoints. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for that correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted method and illustrate them with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.

  5. Manure sampling procedures and nutrient estimation by the hydrometer method for gestation pigs.

    Science.gov (United States)

    Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian

    2004-05-01

    Three manure agitation procedures were examined in this study (vertical mixing, horizontal mixing, and no mixing) to determine the efficacy of producing a representative manure sample. The total solids content for manure from gestation pigs was found to be well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.988 and 0.994, respectively. Linear correlations were observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.991 and 0.987, respectively). Therefore, it may be inferred that the nutrients in pig manure can be estimated with reasonable accuracy by measuring the liquid manure specific gravity. A rapid testing method for manure nutrient contents (TN and TP) using a soil hydrometer was also evaluated. The results showed that the estimating error increased from +/-10% to +/-30% with the decrease in TN (from 1000 to 100 ppm) and TP (from 700 to 50 ppm) concentrations in the manure. Data also showed that the hydrometer readings had to be taken within 10 s after mixing to avoid reading drift in specific gravity due to the settling of manure solids.

  6. Sampling methods for low-frequency electromagnetic imaging

    International Nuclear Information System (INIS)

    Gebauer, Bastian; Hanke, Martin; Schneider, Christoph

    2008-01-01

    For the detection of hidden objects by low-frequency electromagnetic imaging the linear sampling method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfils the assumptions for the fully justified variant of the linear sampling method, the so-called factorization method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds

  7. Correlation of Heavy Element in Sea Water and Sediment Samples from Peninsula of Muria

    International Nuclear Information System (INIS)

    Rosidi; Sukirno

    2007-01-01

    The analysis of heavy metals in marine environmental samples from the Muria peninsula in 2004 has been carried out using the neutron activation analysis (NAA) method. The objective of this analysis is to determine the distribution of heavy metals in the sea water and sediment, providing recent environmental data to support the site license for the nuclear power plants (NPP). The results of the analysis show that only 5 observed elements were found in the sea water and sediment: Cd, Co, Cr, Sb and Sc. All heavy metal concentrations in sea water (0.002 mg/l) are clearly lower than the threshold value established by the Environment Minister's decree, Kep LH No 51/2004. Applying the Pearson correlation (r) to the observed data using SPSS version 10 shows a highly significant positive correlation between Co and Sc (r=0.928), a fairly high positive correlation between Cr and Sc (r=0.756), a moderate one between Cr and Cd (r=0.611), while Co and Sb show a significantly low correlation (r=0.429). (author)

  8. General correlation and partial correlation analysis in finding interactions: with Spearman rank correlation and proportion correlation as correlation measures

    OpenAIRE

    WenJun Zhang; Xin Li

    2015-01-01

    Between-taxon interactions can be detected by calculating the sampling data of taxon sample type. In present study, Spearman rank correlation and proportion correlation are chosen as the general correlation measures, and their partial correlations are calculated and compared. The results show that for Spearman rank correlation measure, in all predicted candidate direct interactions by partial correlation, about 16.77% (x, 0-45.4%) of them are not successfully detected by Spearman rank correla...

  9. 3D Rigid Registration by Cylindrical Phase Correlation Method

    Czech Academy of Sciences Publication Activity Database

    Bican, Jakub; Flusser, Jan

    2009-01-01

    Roč. 30, č. 10 (2009), s. 914-921 ISSN 0167-8655 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/1593 Grant - others:GAUK(CZ) 48908 Institutional research plan: CEZ:AV0Z10750506 Keywords : 3D registration * correlation methods * Image registration Subject RIV: BD - Theory of Information Impact factor: 1.303, year: 2009 http://library.utia.cas.cz/separaty/2009/ZOI/bican-3d digit registration by cylindrical phase correlation method.pdf

  10. Sampling of temporal networks: Methods and biases

    Science.gov (United States)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
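Uniform sampling of nodes, the strategy found to perform best here, amounts to keeping a random subset of nodes and discarding every contact whose endpoints did not both survive. A minimal sketch on a toy temporal edge list (the event format and function name are assumptions, not the paper's code):

```python
import random

def uniform_node_sample(events, keep_fraction, seed=0):
    # Keep a uniformly random subset of nodes; retain only those
    # contacts whose two endpoints both survive the sampling.
    nodes = sorted({u for u, v, t in events} | {v for u, v, t in events})
    rng = random.Random(seed)
    kept = set(rng.sample(nodes, int(keep_fraction * len(nodes))))
    return [(u, v, t) for u, v, t in events if u in kept and v in kept]

# Toy temporal network as a list of (node, node, timestamp) contacts.
events = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (0, 3, 4), (1, 3, 5)]
sub = uniform_node_sample(events, 0.5)  # at most 2 surviving nodes
```

Statistics such as link activity or temporal path counts would then be compared between `events` and `sub` to quantify the sampling bias.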

  11. Neonatal blood gas sampling methods | Goenka | South African ...

    African Journals Online (AJOL)

    There is little published guidance that systematically evaluates the different methods of neonatal blood gas sampling, where each method has its individual benefits and risks. This review critically surveys the available evidence to generate a comparison between arterial and capillary blood gas sampling, focusing on their ...

  12. An improved selective sampling method

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Iida, Nobuyuki; Watanabe, Tamaki

    1986-01-01

    The coincidence methods currently used for the accurate activity standardisation of radionuclides require dead-time and resolving-time corrections which become increasingly uncertain as countrates exceed about 10 K. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method using a fast multichannel analyser (50 ns ch⁻¹) for measuring the countrates. It is, in many ways, more convenient and potentially more reliable to replace the MCA with scalers, and a circuit is described employing five scalers, two of them serving to measure the background correction. Results of comparisons between the new method and the coincidence method for measuring the activity of 60Co sources yielded agreement within statistical uncertainties. (author)

  13. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and to map uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 n-sample replicates was adopted as the convergence criterion for the method. An uncertainty of 75 pcm on the reactor k_eff was estimated by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
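The convergence criterion described here, the spread of the propagated uncertainty across replicate runs, can be mimicked on a toy model; the linear stand-in for k_eff as a function of the burnable poison radius and all numbers except the sample size of 93 are invented for illustration, since MCNPX itself cannot be run in a sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

def propagated_sd(model, nominal, input_sd, n_samples):
    # One sampling-based propagation run: sample the uncertain
    # input, push every sample through the model, and report the
    # standard deviation of the output.
    x = rng.normal(nominal, input_sd, n_samples)
    return np.std(model(x), ddof=1)

# Invented linear stand-in for k_eff(radius); a real study would
# call the transport code here.
model = lambda radius: 1.2 - 0.05 * radius

# Convergence check in the spirit of the abstract: repeat the
# propagation with sample size 93 and inspect the spread of the
# propagated uncertainty across the replicates.
reps = [propagated_sd(model, 1.0, 0.1, 93) for _ in range(10)]
spread = np.std(reps, ddof=1)  # analogue of the 5 pcm criterion
```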

  14. DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Laboratory Management Division of the DOE. Methods are prepared for entry into DOE Methods as chapter editors, together with DOE and other participants in this program, identify analytical and sampling method needs. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are of two types: "Draft" or "Verified". "Draft" methods have been reviewed internally and show potential for eventual verification, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations.

  15. Prevalence and correlates of vaping cannabis in a sample of young adults.

    Science.gov (United States)

    Jones, Connor B; Hill, Melanie L; Pardini, Dustin A; Meier, Madeline H

    2016-12-01

    Vaping nicotine (i.e., the use of e-cigarettes and similar devices to inhale nicotine) is becoming increasingly popular among young people. Though some vaporizers are capable of vaporizing cannabis, sparse research has investigated this method of cannabis administration. The present study examines the prevalence and correlates of vaping cannabis in a sample of 482 college students. Participants reported high lifetime rates of vaping nicotine (37%) and cannabis (29%). Men (rs = 0.09, p = .047) and individuals from higher socioeconomic status families (rs = 0.14, p = .003) vaped cannabis more frequently than women and individuals from lower SES families. In addition, those who vaped cannabis more frequently were more open to new experiences (rs = 0.17, p vaping were frequent cannabis use (rs = 0.70, p vaping (rs = 0.46, p vaping cannabis, endorsed by 65% of those who had vaped cannabis, was convenience and discreetness for use in public places. Several correlates distinguished cannabis users who vaped from cannabis users who did not vape, most notably more frequent cannabis use (odds ratios [OR] = 3.68, p vaping (OR = 1.73, p vaping is prevalent among young adults, particularly among those who use other substances frequently and have more favorable attitudes toward smoking cannabis. Research is needed on the antecedents and potential harms and benefits of cannabis vaping in young adulthood.

  16. A Method for Choosing the Best Samples for Mars Sample Return.

    Science.gov (United States)

    Gordon, Peter R; Sephton, Mark A

    2018-05-01

    Success of a future Mars Sample Return mission will depend on the correct choice of samples. Pyrolysis-FTIR can be employed as a triage instrument for Mars Sample Return. The technique can thermally dissociate minerals and organic matter for detection. Identification of certain mineral types can determine the habitability of the depositional environment, past or present, while detection of organic matter may suggest past or present habitation. In Mars' history, the Theiikian era represents an attractive target for life search missions and the acquisition of samples. The acidic and increasingly dry Theiikian may have been habitable and followed a lengthy neutral and wet period in Mars' history during which life could have originated and proliferated to achieve relatively abundant levels of biomass with a wide distribution. Moreover, the sulfate minerals produced in the Theiikian are also known to be good preservers of organic matter. We have used pyrolysis-FTIR and samples from a Mars analog ferrous acid stream with a thriving ecosystem to test the triage concept. Pyrolysis-FTIR identified those samples with the greatest probability of habitability and habitation. A three-tier scoring system was developed based on the detection of (i) organic signals, (ii) carbon dioxide and water, and (iii) sulfur dioxide. The presence of each component was given a score of A, B, or C depending on whether the substance had been detected, tentatively detected, or not detected, respectively. Single-step (for greatest possible sensitivity) or multistep (for more diagnostic data) pyrolysis-FTIR methods informed the assignments. The system allowed the highest-priority samples to be categorized as AAA (or A*AA if the organic signal was complex), while the lowest-priority samples could be categorized as CCC. Our methods provide a mechanism with which to rank samples and identify those that should take the highest priority for return to Earth during a Mars Sample Return mission. 
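The three-tier A/B/C scoring described above maps naturally onto a small lookup. The sketch below is an illustrative reconstruction, assuming detection-status labels of my own choosing; the function and names are not the authors' code:

```python
# Sketch of the three-tier triage scoring described in the abstract.
# Status labels ('detected', 'tentative', 'not detected') are assumptions.

def tier(status: str) -> str:
    """Map a detection status to a score letter:
    detected -> A, tentatively detected -> B, not detected -> C."""
    return {"detected": "A", "tentative": "B", "not detected": "C"}[status]

def triage_score(organics: str, co2_h2o: str, so2: str,
                 complex_organics: bool = False) -> str:
    """Combine the three component scores into a priority label.

    A complex organic signal upgrades a leading 'A' to 'A*',
    giving e.g. 'A*AA' for the highest-priority samples.
    """
    first = tier(organics)
    if first == "A" and complex_organics:
        first = "A*"
    return first + tier(co2_h2o) + tier(so2)

print(triage_score("detected", "detected", "detected", complex_organics=True))  # A*AA
print(triage_score("not detected", "not detected", "not detected"))             # CCC
```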

  17. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type.
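The procedure lends itself to a short numerical sketch. For a Gaussian PSF of width sigma, log|F|^2 falls off linearly in the squared frequency with slope -4*pi^2*sigma^2, so the slope of the log-power plot recovers sigma. The example below uses a white-noise "object" (an assumption so the spectrum is flat; all names are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
obj = rng.standard_normal((n, n))   # white-noise "object": flat power spectrum

# Build a Gaussian PSF with known sigma and blur the object (circular convolution).
sigma = 2.0
x = np.fft.fftfreq(n) * n           # pixel coordinates in FFT order
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
psf /= psf.sum()
img = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

# Log power spectrum vs squared spatial frequency, as in the method above.
F = np.fft.fft2(img)
fx = np.fft.fftfreq(n)
fxx, fyy = np.meshgrid(fx, fx)
f2 = fxx**2 + fyy**2
mask = (f2 > 0) & (f2 < 0.03)       # low-frequency region where the signal dominates
slope, intercept = np.polyfit(f2[mask], np.log(np.abs(F[mask])**2), 1)

# For a Gaussian PSF, log|F|^2 ~ const - 4 pi^2 sigma^2 f^2.
sigma_est = np.sqrt(-slope / (4 * np.pi**2))
mtf = lambda f: np.exp(-2 * np.pi**2 * sigma_est**2 * f**2)
print(f"true sigma = {sigma}, estimated = {sigma_est:.2f}")
```

The recovered sigma then gives the MTF directly as the Gaussian's Fourier transform, without needing an edge target.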

  18. Sample preparation method for scanning force microscopy

    CERN Document Server

    Jankov, I R; Szente, R N; Carreno, M N P; Swart, J W; Landers, R

    2001-01-01

    We present a method of sample preparation for studies of ion implantation on metal surfaces. The method, employing a mechanical mask, is specially adapted for samples analysed by Scanning Force Microscopy. It was successfully tested on polycrystalline copper substrates implanted with phosphorus ions at an acceleration voltage of 39 keV. The changes of the electrical properties of the surface were measured by Kelvin Probe Force Microscopy and the surface composition was analysed by Auger Electron Spectroscopy.

  19. Correlation and agreement of a digital and conventional method to measure arch parameters.

    Science.gov (United States)

    Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika

    2018-01-01

    The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared to conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients was digitized using a blue-light three-dimensional (3D) scanner. Digital and manual measurements of the defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014, 3D Systems, Rock Hill, SC, USA) on the digital models and with a digital calliper (Tuten, Germany) on the plaster study models. Both measurements were repeated twice to validate intraexaminer reliability based on intraclass correlation coefficients (ICCs), using the independent t test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement of the measurements between the digital and plaster models. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972) between digital and manual measurements. Furthermore, a positive agreement between digital and manual measurements of the arch width (90-96%) and of the arch length and space analysis (95-99%) was also found using the Bland-Altman method. These results demonstrate that 3D blue-light scanning and measurement software can precisely produce a 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is thus valid for use in various clinical applications.
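A minimal sketch of the Bland-Altman agreement analysis used above, on synthetic paired measurements (the data, error magnitude, and variable names are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
manual = rng.uniform(30, 45, 111)               # e.g. arch widths in mm (synthetic)
digital = manual + rng.normal(0.0, 0.15, 111)   # digital method with small random error

# Pearson correlation between the two methods
r = np.corrcoef(manual, digital)[0, 1]

# Bland-Altman: bias (mean difference) and 95% limits of agreement
diff = digital - manual
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
within = np.mean((diff >= loa[0]) & (diff <= loa[1]))  # fraction inside the limits

print(f"r = {r:.3f}, bias = {bias:.3f} mm, LoA = ({loa[0]:.2f}, {loa[1]:.2f}) mm")
```

Good agreement shows up as a near-zero bias and a large fraction of differences falling inside the limits of agreement, which is the criterion the study reports as percentage agreement.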

  20. Sampling and analysis methods for geothermal fluids and gases

    Energy Technology Data Exchange (ETDEWEB)

    Watson, J.C.

    1978-07-01

    The sampling procedures for geothermal fluids and gases include: sampling hot springs, fumaroles, etc.; sampling condensed brine and entrained gases; sampling steam-lines; low pressure separator systems; high pressure separator systems; two-phase sampling; downhole samplers; and miscellaneous methods. The recommended analytical methods compiled here cover physical properties, dissolved solids, and dissolved and entrained gases. The sequences of methods listed for each parameter are: wet chemical, gravimetric, colorimetric, electrode, atomic absorption, flame emission, x-ray fluorescence, inductively coupled plasma-atomic emission spectroscopy, ion exchange chromatography, spark source mass spectrometry, neutron activation analysis, and emission spectrometry. Material on correction of brine component concentrations for steam loss during flashing is presented. (MHR)

  1. A flexible fluorescence correlation spectroscopy based method for quantification of the DNA double labeling efficiency with precision control

    International Nuclear Information System (INIS)

    Hou, Sen; Tabaka, Marcin; Sun, Lili; Trochimczyk, Piotr; Kaminski, Tomasz S; Kalwarczyk, Tomasz; Zhang, Xuzhu; Holyst, Robert

    2014-01-01

    We developed a laser-based method to quantify the double labeling efficiency of double-stranded DNA (dsDNA) in a fluorescent dsDNA pool with fluorescence correlation spectroscopy (FCS). Though, for quantitative biochemistry, accurate measurement of this parameter is of critical importance, before our work it was almost impossible to quantify what percentage of DNA is doubly labeled with the same dye. The dsDNA is produced by annealing complementary single-stranded DNA (ssDNA) labeled with the same dye at 5′ end. Due to imperfect ssDNA labeling, the resulting dsDNA is a mixture of doubly labeled dsDNA, singly labeled dsDNA and unlabeled dsDNA. Our method allows the percentage of doubly labeled dsDNA in the total fluorescent dsDNA pool to be measured. In this method, we excite the imperfectly labeled dsDNA sample in a focal volume of <1 fL with a laser beam and correlate the fluctuations of the fluorescence signal to get the FCS autocorrelation curves; we express the amplitudes of the autocorrelation function as a function of the DNA labeling efficiency; we perform a comparative analysis of a dsDNA sample and a reference dsDNA sample, which is prepared by increasing the total dsDNA concentration c (c > 1) times by adding unlabeled ssDNA during the annealing process. The method is flexible in that it allows for the selection of the reference sample and the c value can be adjusted as needed for a specific study. We express the precision of the method as a function of the ssDNA labeling efficiency or the dsDNA double labeling efficiency. The measurement precision can be controlled by changing the c value. (letter)
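Under the simplest idealization that each ssDNA strand carries a label independently with probability p (an assumption for illustration, not the paper's FCS derivation), the composition of the annealed pool follows directly: doubly labeled with probability p^2, singly labeled with 2p(1-p), unlabeled with (1-p)^2.

```python
import numpy as np

def double_fraction(p):
    """Fraction of doubly labeled dsDNA within the *fluorescent* pool,
    assuming each strand is independently labeled with probability p:
    p^2 / (p^2 + 2 p (1 - p)) = p / (2 - p)."""
    return p / (2.0 - p)

# Monte Carlo check: anneal random pairs of strands
rng = np.random.default_rng(2)
p = 0.7
a = rng.random(100_000) < p        # label present on strand 1
b = rng.random(100_000) < p        # label present on strand 2
fluorescent = a | b                # dsDNA with at least one label
double = a & b                     # dsDNA with both labels
mc = double.sum() / fluorescent.sum()
print(f"analytic = {double_fraction(p):.3f}, simulated = {mc:.3f}")
```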

  2. Cross-Correlation-Function-Based Multipath Mitigation Method for Sine-BOC Signals

    Directory of Open Access Journals (Sweden)

    H. H. Chen

    2012-06-01

    Global Navigation Satellite System (GNSS) positioning accuracy in indoor and urban-canyon environments is greatly degraded by multipath, owing to distortions of the autocorrelation function. In this paper, the cross-correlation function between the received sine-phased Binary Offset Carrier (sine-BOC) modulated signal and the local signal is first studied, and a new multipath mitigation method based on this cross-correlation function is proposed for sine-BOC signals. The method creates the cross-correlation function by designing the modulated symbols of the local signal. Theoretical analysis and simulation results indicate that the proposed method exhibits better multipath mitigation performance than traditional Double Delta Correlator (DDC) techniques, especially for medium/long-delay multipath signals, and it is also convenient and flexible to implement, requiring only one correlator, which suits low-cost mass-market receivers.
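The multipath ambiguity stems from the multi-peaked autocorrelation of BOC signals. The sketch below illustrates that shape for a sine-BOC(1,1)-like signal with a random stand-in code (not a real Gold code, and not the authors' receiver design):

```python
import numpy as np

rng = np.random.default_rng(3)
n_chips, spc = 1023, 8                       # chips and samples per chip
code = rng.choice([-1.0, 1.0], n_chips)      # stand-in PRN code (random, not a Gold code)
chips = np.repeat(code, spc)

# Sine-BOC(1,1): square-wave subcarrier, +1 for the first half of each chip, -1 after
t = np.arange(n_chips * spc)
sub = np.where((t % spc) < spc // 2, 1.0, -1.0)
boc = chips * sub

def circ_acf(x, lags):
    """Normalized circular autocorrelation at the given sample lags."""
    x0 = x / np.sqrt(np.dot(x, x))
    return np.array([np.dot(x0, np.roll(x0, k)) for k in lags])

lags = np.arange(-spc, spc + 1)              # within +/- one chip
acf = circ_acf(boc, lags)
print(np.round(acf, 2))   # sharp main peak at lag 0, negative side peaks near half a chip
```

The narrow central peak is what makes BOC tracking precise, while the negative side peaks are what conventional early-late discriminators can lock onto under multipath.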

  3. Improved LC-MS/MS method for the quantification of hepcidin-25 in clinical samples.

    Science.gov (United States)

    Abbas, Ioana M; Hoffmann, Holger; Montes-Bayón, María; Weller, Michael G

    2018-06-01

    Mass spectrometry-based methods play a crucial role in the quantification of the main iron metabolism regulator hepcidin by singling out the bioactive 25-residue peptide from the other naturally occurring N-truncated isoforms (hepcidin-20, -22, -24), which seem to be inactive in iron homeostasis. However, several difficulties arise in the MS analysis of hepcidin due to the "sticky" character of the peptide and the lack of suitable standards. Here, we propose the use of amino- and fluoro-silanized autosampler vials to reduce hepcidin interaction to laboratory glassware surfaces after testing several types of vials for the preparation of stock solutions and serum samples for isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC-MS/MS). Furthermore, we have investigated two sample preparation strategies and two chromatographic separation conditions with the aim of developing a LC-MS/MS method for the sensitive and reliable quantification of hepcidin-25 in serum samples. A chromatographic separation based on usual acidic mobile phases was compared with a novel approach involving the separation of hepcidin-25 with solvents at high pH containing 0.1% of ammonia. Both methods were applied to clinical samples in an intra-laboratory comparison of two LC-MS/MS methods using the same hepcidin-25 calibrators with good correlation of the results. Finally, we recommend a LC-MS/MS-based quantification method with a dynamic range of 0.5-40 μg/L for the assessment of hepcidin-25 in human serum that uses TFA-based mobile phases and silanized glass vials. Graphical abstract Structure of hepcidin-25 (Protein Data Bank, PDB ID 2KEF).

  4. METHODS FOR DETERMINING AGITATOR MIXING REQUIREMENTS FOR A MIXING & SAMPLING FACILITY TO FEED WTP (WASTE TREATMENT PLANT)

    Energy Technology Data Exchange (ETDEWEB)

    GRIFFIN PW

    2009-08-27

    The following report is a summary of work conducted to evaluate the ability of existing correlative techniques and alternative methods to accurately estimate impeller speed and power requirements for mechanical mixers proposed for use in a mixing and sampling facility (MSF). The proposed facility would accept high level waste sludges from Hanford double-shell tanks and feed uniformly mixed high level waste to the Waste Treatment Plant. Numerous methods are evaluated and discussed, and resulting recommendations provided.

  5. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses generating three-dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing methods, are proposed and investigated in detail. The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster has a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail and the other two methods should be considered.

  6. Sample dependent correlation between TL and LM-OSL in Al2O3:C

    International Nuclear Information System (INIS)

    Dallas, G.I.; Polymeris, G.S.; Stefanaki, E.C.; Afouxenidis, D.; Tsirliganis, N.C.; Kitis, G.

    2008-01-01

    Al2O3:C single crystals are known to exhibit different, sample-dependent glow-curve shapes. The relation between the thermoluminescence (TL) traps and the linearly modulated optically stimulated luminescence (LM-OSL) traps is of high importance. In the present work a correlation study is attempted using 23 single crystals with dimensions between 400 and 500 μm. The correlation study involved two steps. In the first step, both TL glow curves and LM-OSL decay curves are deconvoluted and a one-to-one correlation between TL peaks and LM-OSL components is attempted. In the second step the TL glow curves are corrected for thermal quenching, the corrected curves are deconvoluted, and a new correlation between TL and LM-OSL individual components is performed.

  7. Trait correlates of relational aggression in a nonclinical sample: DSM-IV personality disorders and psychopathy.

    Science.gov (United States)

    Schmeelk, Kelly M; Sylvers, Patrick; Lilienfeld, Scott O

    2008-06-01

    The implications of relational aggression in adults for personality pathology are poorly understood. We investigated the association between relational aggression and features of DSM-IV personality disorders and psychopathy in a sample of undergraduates (N = 220). In contrast to the childhood literature, we found no significant difference in relational aggression between men and women. Unlike overt aggression, which correlated about equally highly with features of all three personality disorder clusters, relational aggression correlated significantly more highly with features of Cluster B than Clusters A or C. In addition, even after controlling for overt aggression, relational aggression correlated significantly with features of psychopathy, although only with Factor 2 traits. With the exception of sadistic personality disorder features, gender did not moderate the relationship between relational aggression and personality pathology. Further research on the psychopathological implications of relational aggression in more severely affected samples is warranted.

  8. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
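The kind of simulation described can be sketched in a few lines. The interval length, event counts, and durations below are arbitrary choices for illustration, not the parameters of the study:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 3600                      # observation period in seconds
occupied = np.zeros(T, bool)  # per-second record of whether the target event is occurring
# scatter random events of random duration across the period
for _ in range(60):
    start = rng.integers(0, T - 30)
    occupied[start:start + rng.integers(5, 30)] = True
true_prop = occupied.mean()   # true proportion of time the event occurs

interval = 10                 # interval duration in seconds
chunks = occupied.reshape(-1, interval)
mts = chunks[:, -1].mean()            # momentary time sampling: final instant of interval
pir = chunks.any(axis=1).mean()       # partial-interval: any occurrence scores the interval
wir = chunks.all(axis=1).mean()       # whole-interval: event must fill the entire interval

print(f"true = {true_prop:.3f}  MTS = {mts:.3f}  PIR = {pir:.3f}  WIR = {wir:.3f}")
```

The run reproduces the methods' well-known biases: partial-interval recording overestimates the true proportion, whole-interval recording underestimates it, and momentary time sampling is roughly unbiased but noisier.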

  9. Evaluation of factor for one-point venous blood sampling method based on the causality model

    International Nuclear Information System (INIS)

    Matsutomo, Norikazu; Onishi, Hideo; Kobara, Kouichi; Sasaki, Fumie; Watanabe, Haruo; Nagaki, Akio; Mimura, Hiroaki

    2009-01-01

    The one-point venous blood sampling method (Mimura et al.) can evaluate the regional cerebral blood flow (rCBF) value with a high degree of accuracy. However, the technique is complicated because it requires the octanol value of venous blood, and its accuracy is affected by the factors entering the input function. We therefore evaluated those factors in order to improve the accuracy of the input function and simplify the technique. Input functions were created using the time-dependent brain counts at 5, 15, and 25 minutes after administration, and an input function whose objective variable is the arterial octanol value was created to exclude the venous octanol value. The correlation between these functions and the rCBF value obtained by the microsphere (MS) method was then evaluated. Creation of a high-accuracy input function and simplification of the technique proved possible: the rCBF value obtained from the input function whose factor is the brain count at 5 minutes after administration, and whose objective variable is the arterial octanol value, correlated highly with the MS method (y=0.899x+4.653, r=0.842). (author)

  10. Wet-digestion of environmental sample using silver-mediated electrochemical method

    International Nuclear Information System (INIS)

    Kuwabara, Jun

    2010-01-01

    Silver-mediated electrochemical digestion was applied to environmental samples as an effective digestion method for iodine analysis. The usual digestion method for 129I in many types of environmental sample is combustion in a quartz glass tube, but the chemical yield of iodine from combustion varies with the sample type. The silver-mediated electrochemical method is expected to lose very little iodine. In this study, a dried kombu (Laminaria) sample was digested in an electrochemical cell. For 1 g of sample, digestion was completed in about 24 hours under electrical conditions of <10 V and <2 A. After digestion, the oxidized species of iodine were reduced to iodide by adding sodium sulfite, and a precipitate of silver iodide was obtained. (author)

  11. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards). In addition, the method allows different attribute rejection limits. The new method could help achieve a higher detection probability and enhance inspection effectiveness

  12. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    Directory of Open Access Journals (Sweden)

    Min-Kyu Kim

    2015-12-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
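The one-over-square-root-of-N noise reduction claimed above is averaging statistics. In the quick sketch below, plain averaging of repeated reads stands in for the ADC's multiple-sampling scheme; the signal level and sampling count are assumptions, and only the single-read noise figure is taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(5)
signal = 0.5                                 # pixel output in volts (arbitrary)
read_noise = 848.3e-6                        # single-sample RMS noise from the paper, in V

def sample_pixel(n_samples, trials=200_000):
    """Average n_samples noisy reads per trial; return the RMS error of the average."""
    reads = signal + read_noise * rng.standard_normal((trials, n_samples))
    return reads.mean(axis=1).std()

single = sample_pixel(1)
multi = sample_pixel(9)                      # e.g. 9 samplings -> noise reduced by 3x
print(f"1 sample: {single * 1e6:.0f} uV, 9 samples: {multi * 1e6:.0f} uV")
```

With 9 samplings, 848.3 uV / 3 is about 283 uV, close to the 270.4 uV the fabricated sensor achieves.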

  13. DOE methods for evaluating environmental and waste management samples

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K. [eds.]

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  14. DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  15. Strength and deformability of hollow concrete blocks: correlation of block and cylindrical sample test results

    Directory of Open Access Journals (Sweden)

    C. S. Barbosa

    This paper deals with correlations between the mechanical properties of hollow blocks and those of the concrete used to make them. Concrete hollow blocks and test samples were moulded with plastic-consistency concrete, to assure the same material in all cases, at three different levels of strength (nominally 10 N/mm², 20 N/mm² and 30 N/mm²). The mechanical properties and structural behaviour in axial compression and tension were determined by standard tests on blocks and cylinders. Stress and strain analyses were made based on the concrete's modulus of elasticity obtained in the sample tests as well as on the measured strains in the blocks' face-shells and webs. A particular stress-strain analysis, based on the superposition of effects, provided an estimate of the block load capacity based on its deformations. In addition, a tentative method to predict the block deformability from the concrete's mechanical properties is described and tested. This analysis is part of a broader research effort that aims to support a detailed structural analysis of blocks, prisms and masonry constructions.

  16. Neutron activation analysis of certified samples by the absolute method

    Science.gov (United States)

    Kadem, F.; Belouadah, N.; Idiri, Z.

    2015-07-01

    The nuclear reaction analysis technique is mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for cross sections evaluated from systematic studies, we used the neutron activation analysis (NAA) technique to determine the various constituent concentrations of certified samples of animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, which is essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. Called the absolute method, it allows a measurement as accurate as the relative method, which requires a standard sample for each element to be quantified. The results obtained by the absolute method showed that the values are as precise as those of the relative method.
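The fundamental activation equation mentioned above can be made concrete. In the sketch below, the flux, irradiation and decay times, and measured activity are invented for a hypothetical sodium determination; only the thermal cross section and half-life are standard values for the 23Na(n,gamma)24Na reaction:

```python
import math

# Activation equation solved for the element mass:
# A = (m * NA * theta / M) * sigma * phi * (1 - exp(-lambda t_irr)) * exp(-lambda t_decay)

NA = 6.022e23           # Avogadro's number, mol^-1
sigma = 0.53e-24        # 23Na(n,gamma) thermal cross section, cm^2 (~0.53 b)
phi = 1e13              # assumed thermal neutron flux, n cm^-2 s^-1
half_life = 14.96 * 3600            # 24Na half-life, s
lam = math.log(2) / half_life       # decay constant, s^-1
M, theta = 22.99, 1.0   # molar mass of Na and isotopic abundance of 23Na

t_irr, t_decay = 3600.0, 1800.0     # assumed irradiation and decay times, s
A_measured = 5.0e4                  # hypothetical measured activity, Bq

saturation = 1 - math.exp(-lam * t_irr)   # build-up during irradiation
decay = math.exp(-lam * t_decay)          # decay before counting
mass = A_measured * M / (NA * theta * sigma * phi * saturation * decay)
print(f"sodium mass = {mass * 1e6:.2f} micrograms")
```

With these inputs the equation yields a mass of roughly 8 micrograms of sodium; the point is that no standard sample enters the calculation, only nuclear data and the irradiation conditions.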

  17. THE USE OF RANKING SAMPLING METHOD WITHIN MARKETING RESEARCH

    Directory of Open Access Journals (Sweden)

    CODRUŢA DURA

    2011-01-01

    Marketing and statistical literature available to practitioners provides a wide range of sampling methods that can be implemented in the context of marketing research. The ranking sampling method is based on dividing the general population into several strata, namely several subdivisions which are relatively homogeneous regarding a certain characteristic. The sample is then composed by selecting, from each stratum, a certain number of components (which can be proportional or non-proportional to the size of the stratum) until the pre-established sample size is reached. Using ranking sampling within marketing research requires the determination of some relevant statistical indicators - average, dispersion, sampling error, etc. To that end, the paper contains a case study which illustrates the actual approach used to apply the ranking sampling method within a marketing research study conducted by a company which provides Internet connection services, for a particular category of customers - small and medium enterprises.
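Proportional allocation across strata, as described, reduces to simple arithmetic. The stratum sizes below are invented for illustration and the sampling-error formula is the simplified maximum-error case for a proportion (with finite-population correction); the paper's actual case-study figures are not reproduced here:

```python
import math

# Proportional allocation across strata (illustrative firm-size strata for SME customers)
strata = {"micro": 1200, "small": 600, "medium": 200}   # population per stratum (assumed)
n = 100                                                 # total sample size

N = sum(strata.values())
alloc = {name: round(n * size / N) for name, size in strata.items()}
print(alloc)   # each stratum contributes in proportion to its share of the population

# Maximum sampling error for a proportion (p = 0.5 is the worst case), with
# finite-population correction
p, z = 0.5, 1.96
se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
print(f"max sampling error at 95% confidence: +/-{z * se:.3f}")
```

With proportional allocation and a common p across strata, the stratified standard error coincides with the simple-random-sampling one used here, so this gives the worst-case margin the researcher should quote.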

  18. The quantitative regional cerebral blood flow measurement with autoradiography method using 123I-IMP SPECT. Evaluation of arterialized venous blood sampling as a substitute for arterial blood sampling

    International Nuclear Information System (INIS)

    Ohnishi, Takashi; Yano, Takao; Nakano, Shinichi; Jinnouchi, Seishi; Nagamachi, Shigeki; Flores, L. II; Nakahara, Hiroshi; Watanabe, Katsushi.

    1996-01-01

    The purpose of this study was to validate calibration of the standard input function in the autoradiography (ARG) method by one-point venous blood sampling as a substitute for one-point arterial blood sampling. Ten and 20 minutes after intravenous constant infusion of 123I-IMP, arterialized venous blood samples were taken from a dorsal hand vein in 15 patients with ischemic cerebrovascular disease, and an arterial blood sample was taken from the radial artery 10 min after the 123I-IMP infusion. The mean difference rates of the integrated input function between the standard input function calibrated by arterial blood sampling at 10 min and that calibrated by venous blood sampling were 4.1±3% and 9.3±5.4% at 10 and 20 min after 123I-IMP infusion, respectively. The ratio of venous to arterial blood radioactivity at 10 min after 123I-IMP infusion was 0.96±0.02. There was an excellent correlation between ARG-method CBF values obtained by arterial blood sampling at 10 min and those obtained by arterialized venous blood sampling at 10 min. In conclusion, arterialized venous blood sampling from a dorsal hand vein can substitute for arterial sampling, and the optimal time for arterialized venous blood sampling is 10 min after 123I-IMP infusion. (author)

  19. An adaptive sampling and windowing interrogation method in PIV

    Science.gov (United States)

    Theunissen, R.; Scarano, F.; Riethmuller, M. L.

    2007-01-01

    This study proposes a cross-correlation based PIV image interrogation algorithm that adapts the number of interrogation windows and their size to the image properties and to the flow conditions. The proposed methodology releases the constraint of uniform sampling rate (Cartesian mesh) and spatial resolution (uniform window size) commonly adopted in PIV interrogation. Especially in non-optimal experimental conditions where the flow seeding is inhomogeneous, this leads to a loss of either robustness (too few particles per window) or measurement precision (too large or too coarsely spaced interrogation windows). Two criteria are investigated, namely adaptation to the local signal content in the image and adaptation to the local flow conditions. The implementation of the adaptive criteria within a recursive interrogation method is described. The location and size of the interrogation windows are locally adapted to the image signal (i.e., seeding density), and the local window spacing (commonly set by the overlap factor) is related to the spatial variation of the velocity field. The viability of the method is illustrated in two experimental cases where the limitations of a uniform interrogation approach appear clearly: a shock-wave/boundary-layer interaction and an aircraft vortex wake. The examples show that the spatial sampling rate can be adapted to the actual flow features and that the interrogation window size can be arranged so as to follow the spatial distribution of seeding particle images and flow velocity fluctuations. In comparison with the uniform interrogation technique, the spatial resolution is locally enhanced, while in poorly seeded regions the robustness of the analysis (signal-to-noise ratio) is kept almost constant.
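
At the core of any PIV interrogation scheme, adaptive or not, is locating the displacement peak of the cross-correlation between paired interrogation windows. A minimal brute-force Python sketch on synthetic 8x8 windows (the window size, particle position, and search range are illustrative, not taken from the paper):

```python
def cross_correlate(a, b, max_shift):
    """Brute-force spatial cross-correlation of two equal-size 2D windows.
    Returns the (dy, dx) shift that maximizes the correlation sum."""
    n = len(a)
    best, best_shift = float("-inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for i in range(n):
                for j in range(n):
                    ii, jj = i + dy, j + dx
                    if 0 <= ii < n and 0 <= jj < n:
                        s += a[i][j] * b[ii][jj]
            if s > best:
                best, best_shift = s, (dy, dx)
    return best_shift

# Two 8x8 interrogation windows: a particle image at (3, 2) moves to (4, 3),
# i.e. a displacement of one pixel in each direction between exposures.
n = 8
frame_a = [[0.0] * n for _ in range(n)]
frame_b = [[0.0] * n for _ in range(n)]
frame_a[3][2] = 1.0
frame_b[4][3] = 1.0

displacement = cross_correlate(frame_a, frame_b, max_shift=3)  # (1, 1)
```

Adaptive schemes additionally vary the window size and spacing with seeding density and velocity gradients; this fixed-window sketch shows only the correlation peak search that all of them share.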

  20. Acute Pain Perception During Different Sampling Methods for Respiratory Culture in Cystic Fibrosis Patients.

    Science.gov (United States)

    Eyns, Hanneke; De Wachter, Elke; Malfroot, Anne; Vaes, Peter

    2018-03-01

    Reliable identification of lower respiratory tract pathogens is crucial in the management of cystic fibrosis (CF). The multitude of treatments and clinical procedures is a considerable burden and can provoke pain. As part of another study (NCT02363764) investigating the bacterial yield of three sampling methods, nasal swabs (NSs), cough swabs (CSs), and (induced) sputum samples ([I]SSs), in both expectorating patients (EPs) and non-expectorating patients (NEPs) with CF, the present study aimed to explore the prevalence of respiratory culture sampling-related pain as assessed by self-report within a cohort of children and adults. Literate patients with CF (aged six years or older) completed a questionnaire on pain perception related to the three aforementioned sampling methods (No/Yes; visual analogue scale for pain [VAS-Pain] [0-10 cm]). In addition, patients were asked to rank these methods by their own preference without taking into account the presumed bacterial yield. In total, 119 questionnaires were returned. In the EPs group, CS was the method most frequently (n; % reporting pain; mean VAS-Pain if pain [range]) reported as painful: overall (n = 101; 12.9%; 1.8 [0.2-4.8]), children (n = 41; 22.0%; 1.4 [0.2-2.7]), and adults (n = 60; 6.7%; 2.5 [0.5-4.8]). The highest pain intensity scores were observed with NS overall (3.0%; 2.4 [0.3-6.2]) and in children (4.9%; 3.3 [0.3-6.2]), but not in adults (1.7%; 0.6 [-]). NEP children (n = 17) reported ISS most frequently and as the most painful sampling method (17.6%; 2.0 [1.0-4.0]). The only NEP adult did not perceive pain. NEPs preferred NS > CS > ISS (61.1%, 33.3%, 5.6%, respectively [P = 0.001]) as the primary sampling method, whereas EPs preferred SS > NS > CS (65.7%, 26.3%, 8.1%, respectively). The preferred method was inversely correlated with pain perception and intensity in EPs (φ = -0.155 [P = 0.007] and ρ = -0.926 [P = 0.008], respectively), but not in NEPs (φ = -0.226 [P = 0.097] and ρ = -0.135 [P = 0

  1. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
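
For context, the classical Hill estimator that the proposed bootstrap version builds on can be sketched in a few lines of Python. The Pareto test data, sample size, and choice of k below are arbitrary, and the authors' Monte Carlo/bootstrap variant is not reproduced:

```python
import math
import random

def hill_estimator(data, k):
    """Classical Hill estimator of the extreme-value index gamma = 1/alpha,
    computed from the k largest order statistics."""
    xs = sorted(data, reverse=True)
    threshold = xs[k]  # the (k+1)-th largest value
    return sum(math.log(xs[i] / threshold) for i in range(k)) / k

random.seed(42)
# Pareto(alpha = 2) sample via inverse transform: X = U^(-1/alpha)
alpha = 2.0
sample = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(20000)]

gamma_hat = hill_estimator(sample, k=1000)  # should be near 1/alpha = 0.5
```

For exact Pareto tails the estimate concentrates around 1/alpha as k grows; the bootstrap in the paper addresses the harder problem of choosing and stabilizing such estimates for correlation exponents of ergodic sequences.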

  2. Validation of single-sample doubly labeled water method

    International Nuclear Information System (INIS)

    Webster, M.D.; Weathers, W.W.

    1989-01-01

    We have experimentally validated a single-sample variant of the doubly labeled water method for measuring metabolic rate and water turnover in a very small passerine bird, the verdin (Auriparus flaviceps). We measured CO 2 production using the Haldane gravimetric technique and compared these values with estimates derived from isotopic data. Doubly labeled water results based on the one-sample calculations differed from Haldane values by less than 0.5% on average (range -8.3 to 11.2%, n = 9). Water flux computed by the single-sample method differed by -1.5% on average from results for the same birds based on the standard, two-sample technique (range -13.7 to 2.0%, n = 9)

  3. Correlates of homophobia, transphobia, and internalized homophobia in gay or lesbian and heterosexual samples.

    Science.gov (United States)

    Warriner, Katrina; Nagoshi, Craig T; Nagoshi, Julie L

    2013-01-01

    This research assessed the correlates of homophobia and transphobia in heterosexual and homosexual individuals, based on a theory of different sources of perceived symbolic threat to social status. Compared to 310 heterosexual college students, a sample of 30 gay male and 30 lesbian college students scored lower on homophobia, transphobia, and religious fundamentalism. Mean gender differences were smaller for gay men and lesbians for homophobia, aggressiveness, benevolent sexism, masculinity, and femininity. Fundamentalism, right-wing authoritarianism, and hostile and benevolent sexism were correlated only with homophobia in lesbians, whereas fundamentalism and authoritarianism were correlated only with transphobia in gay men. Correlates of internalized homophobia were different than those found for homophobia and transphobia, which was discussed in terms of gender differences in threats to status based on sexual orientation versus gender identity.

  4. The perturbed angular correlation method - a modern technique in studying solids

    International Nuclear Information System (INIS)

    Unterricker, S.; Hunger, H.J.

    1979-01-01

    Starting from the theoretical fundamentals, the differential perturbed angular correlation method is explained. Using the probe nucleus 111 Cd, the magnetic dipole interaction in Fe(x)Al(1-x) alloys and the electric quadrupole interaction in Cd have been measured. The perturbed angular correlation method is a modern nuclear measuring technique and can be applied to the study of ordering processes, phase transformations and radiation damage in metals, semiconductors and insulators

  5. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim is to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
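
The multiplier point estimate is simply N = M / P; a sketch of that calculation with a delta-method standard error inflated by an assumed design effect (the numbers and the `multiplier_estimate` helper are hypothetical illustrations of the general approach, not the authors' exact procedure):

```python
import math

def multiplier_estimate(M, p_hat, n, deff=1.0):
    """Population size N = M / p with a delta-method standard error.
    M: number of unique objects distributed (or service users),
    p_hat: proportion of survey respondents reporting receipt,
    n: survey sample size, deff: assumed design effect of the RDS survey."""
    N = M / p_hat
    var_p = deff * p_hat * (1.0 - p_hat) / n    # variance of p_hat
    se_N = M * math.sqrt(var_p) / p_hat ** 2    # delta method: |dN/dp| * se(p)
    return N, se_N

# Hypothetical example: 5,000 objects distributed, 25% of 400 respondents
# report receiving one, and a design effect of 2 is assumed for the survey.
N, se_N = multiplier_estimate(M=5000, p_hat=0.25, n=400, deff=2.0)
ci = (N - 1.96 * se_N, N + 1.96 * se_N)  # approximate 95% interval
```

The sketch makes the paper's qualitative points concrete: the standard error scales with 1/p_hat squared, so a small P inflates uncertainty sharply, and a larger assumed design effect widens the interval.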

  6. Correlated sampling added to the specific purpose Monte Carlo code McPNL for neutron lifetime log responses

    International Nuclear Information System (INIS)

    Mickael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    The specific purpose neutron lifetime oil well logging simulation code, McPNL, has been rewritten for greater user-friendliness and faster execution. Correlated sampling has been added to the code to enable studies of relative changes in the tool response caused by environmental changes. The absolute responses calculated by the code have been benchmarked against laboratory test pit data. The relative responses from correlated sampling are not directly benchmarked, but they are validated using experimental and theoretical results
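
The correlated sampling idea running through this and the other Monte Carlo entries on this page is to evaluate the reference and perturbed problems with the same random histories, so that statistical noise largely cancels in their difference. A toy Python sketch on a one-dimensional integral (the integrand and the 1% perturbation are illustrative, not a transport problem):

```python
import math
import random

random.seed(0)
n = 10000
xs = [random.random() for _ in range(n)]        # one shared set of histories

f_ref = [math.exp(-x) for x in xs]              # reference problem
f_pert = [math.exp(-1.01 * x) for x in xs]      # perturbed problem (1% change)

def mean(v):
    return sum(v) / len(v)

def std(v):
    m = mean(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))

# Correlated estimate of the small change in the integral of exp(-a*x)
# over [0, 1] as a goes from 1.00 to 1.01 (true value is about -0.00264):
diff = [p - r for r, p in zip(f_ref, f_pert)]
delta_correlated = mean(diff)

# Because the same samples are used for both problems, the noise of the
# difference is far smaller than the noise of either estimate alone:
noise_ratio = std(diff) / std(f_ref)
```

With independent runs, the statistical error of each estimate would swamp a 1% relative change; sharing the histories is what makes small tool-response or burnup-step differences resolvable.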

  7. a Task-Oriented Disaster Information Correlation Method

    Science.gov (United States)

    Linyao, Q.; Zhiqiang, D.; Qing, Z.

    2015-07-01

    With the rapid development of sensor networks and Earth observation technology, a large quantity of disaster-related data is available, such as remotely sensed data, historic data, case data, simulated data, and disaster products. However, efficient data management and service have become increasingly difficult for current systems because of the variety of tasks and the heterogeneity of the data. For emergency task-oriented applications, data searches rely primarily on human experience applied to simple metadata indices, whose high time consumption and low accuracy cannot satisfy the speed and veracity requirements for disaster products. In this paper, a task-oriented correlation method is proposed for efficient disaster data management and intelligent service, with the objectives of 1) putting forward a disaster task ontology and a data ontology to unify the different semantics of multi-source information, 2) identifying the semantic mapping from emergency tasks to multiple data sources on the basis of the uniform description in 1), and 3) linking task-related data automatically and calculating the correlation between each data set and a certain task. The method goes beyond traditional static management of disaster data and establishes a basis for intelligent retrieval and active dissemination of disaster information. The case study presented in this paper illustrates the use of the method on an example flood emergency relief task.

  8. Extraction methods for determination of Pu and Am contents in soil samples from the Chernobyl' NPP 30-km zone

    International Nuclear Information System (INIS)

    Shvetsov, I.K.; Yakovlev, N.G.; Kalinichenko, B.S.; Kulakov, V.M.; Kulazhko, V.G.; Vlasov, M.M.; Shubko, V.M.; Pchelkin, V.A.; Rodionov, Yu.F.; Lisin, S.K.

    1989-01-01

    The possibilities for decreasing the time of soil sample analysis for Pu, Am and Cm isotope concentrations, while simultaneously increasing the sensitivity and representativity of the analysis, are demonstrated. This is achieved by replacing total sample break-down with oxidizing leaching and replacing the ion-exchange separation procedure with a single extraction using trioctylamine. Experience in applying the method to the analysis of soil samples from the Chernobyl' NPP 30-km zone, aimed at determining correlation coefficients for Pu/Ce-144 and Pu/Am-241, is summarized. 4 refs.; 4 figs.; 1 tab

  9. Semi-Supervised Multi-View Ensemble Learning Based On Extracting Cross-View Correlation

    Directory of Open Access Journals (Sweden)

    ZALL, R.

    2016-05-01

    Full Text Available Correlated information between different views is useful for learning from multi-view data. Canonical correlation analysis (CCA) plays an important role in extracting this information. However, CCA only extracts the correlated information between paired data and cannot preserve correlated information between within-class samples. In this paper, we propose a two-view semi-supervised learning method called semi-supervised random correlation ensemble based on spectral clustering (SS_RCE). SS_RCE uses a multi-view method based on spectral clustering which takes advantage of discriminative information in multiple views to estimate labeling information of unlabeled samples. In order to enhance the discriminative power of CCA features, we incorporate the labeling information of both unlabeled and labeled samples into CCA. Then, we use random correlation between within-class samples across views to extract diverse correlated features for training component classifiers. Furthermore, we extend a general model, namely SSMV_RCE, to construct an ensemble method that tackles semi-supervised learning in the presence of multiple views. Finally, we compare the proposed methods with existing multi-view feature extraction methods using multi-view semi-supervised ensembles. Experimental results on various multi-view data sets are presented to demonstrate the effectiveness of the proposed methods.

  10. Suicidal Behaviors among Adolescents in Puerto Rico: Rates and Correlates in Clinical and Community Samples

    Science.gov (United States)

    Jones, Jennifer; Ramirez, Rafael Roberto; Davies, Mark; Canino, Glorisa; Goodwin, Renee D.

    2008-01-01

    This study examined rates and correlates of suicidal behavior among youth on the island of Puerto Rico. Data were drawn from two probability samples, one clinical (n = 736) and one community-based sample (n = 1,896), of youth ages 12 to 17. Consistent with previous studies in U.S. mainland adolescent populations, our results demonstrate that most…

  11. Spatial and statistical methods for correlating the interaction between groundwater contamination and tap water exposure in karst regions

    Science.gov (United States)

    Padilla, I. Y.; Rivera, V. L.; Macchiavelli, R. E.; Torres Torres, N. I.

    2016-12-01

    Groundwater systems in karst regions are highly vulnerable to contamination and have an enormous capacity to store and rapidly convey pollutants to potential exposure zones over long periods of time. Contaminants in karst aquifers used for drinking water purposes can, therefore, enter distribution lines and the tap water point of use. This study applies spatial and statistical analytical methods to assess potential correlations between contaminants in a karst groundwater system in northern Puerto Rico and exposure in the tap water. It focuses on chlorinated volatile organic compounds (CVOC) and phthalates because of their ubiquitous presence in the environment and their potential public health impacts. The work integrates historical data collected from regulatory agencies and current field measurements involving groundwater and tap water sampling and analysis. Contaminant distribution and cluster analyses are performed with Geographic Information System technology. Correlations between detection frequencies and contaminant concentrations in source groundwater and at the tap water point of use are assessed using Pearson's chi-square and t-test analyses. Although results indicate that correlations are contaminant-specific, detection frequencies are generally higher for total CVOC in groundwater than in tap water samples, but greater for phthalates in tap water than in groundwater samples. Spatial analysis shows widespread distribution of CVOC and phthalates in both groundwater and tap water, suggesting that contamination comes from multiple sources. Spatial correlation analysis indicates that the association between tap water and groundwater contamination depends on the source and type of contaminants, spatial location, and time. A full description of the correlations may, however, need to take into consideration variable anthropogenic interventions.
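
A Pearson chi-square test on detection frequencies, as used here, compares how often a contaminant is detected in groundwater versus tap water samples. A stdlib-only sketch for a 2x2 contingency table (the detection counts are invented for illustration, not from the study):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (no continuity correction)."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: detected / not detected in 100 groundwater
# samples (30/70) versus 100 tap water samples (10/90).
chi2 = chi_square_2x2(30, 70, 10, 90)
# chi2 > 3.84 rejects independence at the 5% level (1 degree of freedom)
```

For this invented table the statistic works out to 12.5, well above the 3.84 critical value, i.e. detection frequency would differ significantly between the two sample types.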

  12. Standard methods for sampling North American freshwater fishes

    Science.gov (United States)

    Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.

    2009-01-01

    This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing cold and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. Provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent transfer of invasive species while sampling.

  13. Analysis of uranium and its correlation with some physico-chemical properties of drinking water samples from Amritsar, Punjab.

    Science.gov (United States)

    Singh, Surinder; Rani, Asha; Mahajan, Rakesh Kumar; Walia, Tejinder Pal Singh

    2003-12-01

    Fission track technique has been used for uranium estimation in drinking water samples collected from some areas of Amritsar District, Punjab, India. The uranium concentration in water samples is found to vary from 3.19 to 45.59 microg l(-1). Some of the physico-chemical properties such as pH, conductance and hardness and the content of calcium, magnesium, total dissolved solids (TDS), sodium, potassium, chloride, nitrate and heavy metals viz. zinc, cadmium, lead and copper have been determined in water samples. An attempt has been made to correlate uranium concentration with these water quality parameters. A positive correlation of conductance, nitrate, chloride, sodium, potassium, magnesium, TDS, calcium and hardness with uranium concentration has been observed. However, no correlation has been observed between the concentration of uranium and the heavy metals analysed.

  14. METHODS FOR DETERMINING AGITATOR MIXING REQUIREMENTS FOR A MIXING AND SAMPLING FACILITY TO FEED WTP (WASTE TREATMENT PLANT)

    International Nuclear Information System (INIS)

    Griffin, P.W.

    2009-01-01

    The following report is a summary of work conducted to evaluate the ability of existing correlative techniques and alternative methods to accurately estimate impeller speed and power requirements for mechanical mixers proposed for use in a mixing and sampling facility (MSF). The proposed facility would accept high level waste sludges from Hanford double-shell tanks and feed uniformly mixed high level waste to the Waste Treatment Plant. Numerous methods are evaluated and discussed, and resulting recommendations provided.

  15. Correlation of basic TL, OSL and IRSL properties of ten K-feldspar samples of various origins

    Energy Technology Data Exchange (ETDEWEB)

    Sfampa, I.K. [Aristotle University of Thessaloniki, Nuclear Physics Laboratory, 54124 Thessaloniki (Greece); Polymeris, G.S. [Institute of Nuclear Sciences, Ankara University, 06100 Besevler, Ankara (Turkey); Pagonis, V. [McDaniel College, Physics Department, Westminster, MD 21157 (United States); Theodosoglou, E. [Department of Mineralogy-Petrology-Economic Geology, School of Geology, Aristotle University of Thessaloniki, 54124 Thessaloniki (Greece); Tsirliganis, N.C. [Laboratory of Radiation Applications and Archaeological Dating, Department of Archaeometry and Physicochemical Measurements, ‘Athena’ R.& I.C., Kimmeria University Campus, GR67100 Xanthi (Greece); Kitis, G., E-mail: gkitis@auth.gr [Aristotle University of Thessaloniki, Nuclear Physics Laboratory, 54124 Thessaloniki (Greece)

    2015-09-15

    Highlights: • OSL and IRSL bleaching behavior of ten K-feldspar samples is presented. • OSL and IRSL decay curves were component resolved using tunneling model. • The growth of integrated OSL and IRSL signals versus time was described by new expression based on tunneling model. • Correlation between TL, OSL and IRSL signals and of all properties with K-feldspar structure was discussed. - Abstract: Feldspars stand among the most widely used minerals in dosimetric methods of dating using thermoluminescence (TL), optically stimulated luminescence (OSL) and infrared stimulated luminescence (IRSL). Having very good dosimetric properties, they can in principle contribute to the dating of every site of archaeological and geological interest. The present work studies basic properties of ten naturally occurring K-feldspar samples belonging to three feldspar species, namely sanidine, orthoclase and microcline. The basic properties studied are (a) the influence of blue light and infrared stimulation on the thermoluminescence glow-curves, (b) the growth of OSL, IRSL, residual TL and TL-loss as a function of OSL and IRSL bleaching time and (c) the correlation between the OSL and IRSL signals and the energy levels responsible for the TL glow-curve. All experimental data were fitted using analytical expressions derived from a recently developed tunneling recombination model. The results show that the analytical expressions provide excellent fits to all experimental results, thus verifying the tunneling recombination mechanism in these materials and providing valuable information about the concentrations of luminescence centers.

  16. Experimental study on reactivity measurement in thermal reactor by polarity correlation method

    International Nuclear Information System (INIS)

    Yasuda, Hideshi

    1977-11-01

    An experimental study of the polarity correlation method for measuring the reactivity of a thermal reactor, especially one possessing a long prompt neutron lifetime such as a graphite- or heavy-water-moderated core, is reported. The techniques of reactor kinetics experiments are briefly reviewed; they fall into two groups, one characterized by artificial disturbance of the reactor and the other by the natural fluctuations inherent in a reactor. The fluctuation phenomena of the neutron count rate are explained using F. de Hoffmann's stochastic method, and correlation functions for the neutron count rate fluctuation are shown. The experimental results of the polarity correlation method applied to β/l measurements in both the graphite-moderated SHE core and the light-water-moderated JMTRC and JRR-4 cores, and also to the measurement of the SHE shutdown reactivity margin, are presented. The measured values were in good agreement with those obtained by a pulsed neutron method in the reactivity range from critical to -12 dollars. Conditional polarity correlation experiments in SHE at -20 cents and -100 cents are demonstrated. The prompt neutron decay constants agreed with those obtained by the polarity correlation experiments. The results of experiments measuring a large negative reactivity of -52 dollars in SHE by pulsed neutron, rod drop and source multiplication methods are given. It is also concluded that the polarity and conditional polarity correlation methods are sufficiently applicable to noise analysis of a low-power thermal reactor with a long prompt neutron lifetime. (Nakai, Y.)
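
The polarity correlation function itself needs only the sign of the fluctuating signal. A toy Python sketch on a synthetic AR(1) "count rate fluctuation" series (the process and its parameters are illustrative, not reactor data):

```python
import random

random.seed(1)

# Synthetic correlated fluctuation signal: AR(1) with coefficient 0.9.
n = 5000
x = [0.0]
for _ in range(n - 1):
    x.append(0.9 * x[-1] + random.gauss(0.0, 1.0))

def sign(v):
    return 1 if v >= 0 else -1

def polarity_corr(series, lag):
    """Polarity (sign) correlation <sgn(x_t) * sgn(x_{t+lag})>."""
    pairs = [(series[t], series[t + lag]) for t in range(len(series) - lag)]
    return sum(sign(a) * sign(b) for a, b in pairs) / len(pairs)

c0, c1, c20 = polarity_corr(x, 0), polarity_corr(x, 1), polarity_corr(x, 20)
# c0 is 1 by construction; the correlation decays with lag, and in the
# reactor application that decay carries the prompt neutron decay constant.
```

Working with signs discards amplitude information but makes the correlator extremely cheap to compute, which is the practical appeal of the polarity method.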

  17. Parcellating an individual subject's cortical and subcortical brain structures using snowball sampling of resting-state correlations.

    Science.gov (United States)

    Wig, Gagan S; Laumann, Timothy O; Cohen, Alexander L; Power, Jonathan D; Nelson, Steven M; Glasser, Matthew F; Miezin, Francis M; Snyder, Abraham Z; Schlaggar, Bradley L; Petersen, Steven E

    2014-08-01

    We describe methods for parcellating an individual subject's cortical and subcortical brain structures using resting-state functional correlations (RSFCs). Inspired by approaches from social network analysis, we first describe the application of snowball sampling on RSFC data (RSFC-Snowballing) to identify the centers of cortical areas, subdivisions of subcortical nuclei, and the cerebellum. RSFC-Snowballing parcellation is then compared with parcellation derived from identifying locations where RSFC maps exhibit abrupt transitions (RSFC-Boundary Mapping). RSFC-Snowballing and RSFC-Boundary Mapping largely complement one another, but also provide unique parcellation information; together, the methods identify independent entities with distinct functional correlations across many cortical and subcortical locations in the brain. RSFC parcellation is relatively reliable within a subject scanned across multiple days, and while the locations of many area centers and boundaries appear to exhibit considerable overlap across subjects, there is also cross-subject variability, reinforcing the motivation to parcellate brains at the level of individuals. Finally, examination of a large meta-analysis of task-evoked functional magnetic resonance imaging data reveals that area centers defined by task-evoked activity exhibit correspondence with area centers defined by RSFC-Snowballing. This observation provides important evidence for the ability of RSFC to parcellate broad expanses of an individual's brain into functionally meaningful units. © The Author 2013. Published by Oxford University Press.

  18. Dynamical correlations in finite nuclei: A simple method to study tensor effects

    International Nuclear Information System (INIS)

    Dellagiacoma, F.; Orlandini, G.; Traini, M.

    1983-01-01

    Dynamical correlations are introduced in finite nuclei by changing the two-body density through a phenomenological method. The role of tensor and short-range correlations in nuclear momentum distribution, electric form factor and two-body density of 4 He is investigated. The importance of induced tensor correlations in the total photonuclear cross section is reinvestigated providing a successful test of the method proposed here. (orig.)

  19. A Comparison of Multivariate and Pre-Processing Methods for Quantitative Laser-Induced Breakdown Spectroscopy of Geologic Samples

    Science.gov (United States)

    Anderson, R. B.; Morris, R. V.; Clegg, S. M.; Bell, J. F., III; Humphries, S. D.; Wiens, R. C.

    2011-01-01

    The ChemCam instrument selected for the Curiosity rover is capable of remote laser-induced breakdown spectroscopy (LIBS).[1] We used a remote LIBS instrument similar to ChemCam to analyze 197 geologic slab samples and 32 pressed-powder geostandards. The slab samples are well-characterized and have been used to validate the calibration of previous instruments on Mars missions, including CRISM [2], OMEGA [3], the MER Pancam [4], Mini-TES [5], and Moessbauer [6] instruments and the Phoenix SSI [7]. The resulting dataset was used to compare multivariate methods for quantitative LIBS and to determine the effect of grain size on calculations. Three multivariate methods - partial least squares (PLS), multilayer perceptron artificial neural networks (MLP ANNs) and cascade correlation (CC) ANNs - were used to generate models and extract the quantitative composition of unknown samples. PLS can be used to predict one element (PLS1) or multiple elements (PLS2) at a time, as can the neural network methods. Although MLP and CC ANNs were successful in some cases, PLS generally produced the most accurate and precise results.

  20. Passive sampling methods for contaminated sediments

    DEFF Research Database (Denmark)

    Peijnenburg, Willie J.G.M.; Teasdale, Peter R.; Reible, Danny

    2014-01-01

    “Dissolved” concentrations of contaminants in sediment porewater (Cfree) provide a more relevant exposure metric for risk assessment than do total concentrations. Passive sampling methods (PSMs) for estimating Cfree offer the potential for cost-efficient and accurate in situ characterization...

  1. Field evaluation of personal sampling methods for multiple bioaerosols.

    Science.gov (United States)

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  2. [Sampling methods for PM2.5 from stationary sources: a review].

    Science.gov (United States)

    Jiang, Jing-Kun; Deng, Jian-Guo; Li, Zhen; Li, Xing-Hua; Duan, Lei; Hao, Ji-Ming

    2014-05-01

    The new China national ambient air quality standard was published in 2012 and will be implemented in 2016. To meet the requirements of this new standard, monitoring and controlling PM2.5 emissions from stationary sources are very important. However, so far there is no national standard method for sampling PM2.5 from stationary sources. Different sampling methods for PM2.5 from stationary sources and the relevant international standards were reviewed in this study, including methods for PM2.5 sampling in flue gas and methods for PM2.5 sampling after dilution. The advantages and disadvantages of these sampling methods were discussed. For environmental management, a method for PM2.5 sampling in flue gas, such as an impactor or virtual impactor, was suggested as a standard to determine filterable PM2.5. To evaluate the environmental and health effects of PM2.5 from stationary sources, a standard dilution method for sampling of total PM2.5 should be established.

  3. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these are based on numerical approximations and are valid only for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three different sampling methods, Latin hypercube sampling (LHS), orthogonal array (OA), and orthogonal-array-based Latin hypercube (OA-LH), have been used to approximate the function over a wide range of argument values. The results of the sampling methods were compared with results obtained with Mathematica software, which was used as a benchmark. All three sampling methods converge to the Mathematica result, at different rates. The orthogonal array (OA) method was found to have the fastest convergence rate compared with LHS and OA-LH, with a root mean square error (RMSE) on the order of 1E-08. This method can be used with any argument value, and can be applied to other integrals in hydrogeology such as the leaky aquifer integral.
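
    The sampling idea above can be illustrated with a minimal sketch (not the paper's implementation): in one dimension, Latin hypercube sampling reduces to stratified sampling, and the substitution t = u/x maps E1(u) onto a finite integral over (0, 1]. The function and variable names are illustrative.

```python
import math
import random

def expint_e1(u, n=20_000, rng=None):
    """Stratified (1-D Latin-hypercube-style) estimate of E1(u).

    The substitution t = u/x maps E1(u) = integral_u^inf e^(-t)/t dt
    onto integral_0^1 e^(-u/x)/x dx; one uniform point is then drawn
    from each of the n equal-width strata.
    """
    rng = rng or random.Random(0)
    total = 0.0
    for i in range(n):
        # one sample in the stratum [i/n, (i+1)/n); guard against x == 0
        x = (i + max(rng.random(), 1e-12)) / n
        total += math.exp(-u / x) / x
    return total / n

est = expint_e1(1.0)   # E1(1) is about 0.2193839
```

    Because each stratum contributes exactly one point, the estimator converges much faster than plain Monte Carlo on this smooth integrand.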

  4. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
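
    The Bayes linear update at the heart of such a method can be sketched for two correlated rates. This is a simplified illustration, not the authors' model: the homogenization factors and the multivariate gamma comparison are omitted, and the moment formulas follow from X_i | lambda_i ~ Poisson(lambda_i * t).

```python
def bayes_linear_rates(m, v, rho, counts, t):
    """Bayes linear adjusted expectations for two correlated Poisson rates.

    m, v   : prior means and variances of the rates (length-2 lists)
    rho    : prior correlation between the two rates
    counts : observed event counts over exposure time t

    With X_i | lambda_i ~ Poisson(lambda_i * t):
      E[X_i]        = t * m_i
      Var(X_i)      = t * m_i + t^2 * v_i
      Cov(X_1, X_2) = t^2 * Cov(lambda_1, lambda_2)
    """
    c12 = rho * (v[0] * v[1]) ** 0.5            # prior covariance of the rates
    a = t * m[0] + t * t * v[0]                 # Var(X) entries
    d = t * m[1] + t * t * v[1]
    b = t * t * c12
    det = a * d - b * b
    inv = [[d / det, -b / det], [-b / det, a / det]]
    cov_lx = [[t * v[0], t * c12], [t * c12, t * v[1]]]   # Cov(lambda, X)
    resid = [counts[0] - t * m[0], counts[1] - t * m[1]]
    adj = []
    for i in range(2):
        gain = [sum(cov_lx[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        adj.append(m[i] + gain[0] * resid[0] + gain[1] * resid[1])
    return adj

# A surplus of events for rate 1 also pulls up the estimate of rate 2,
# because the prior says the two rates are positively correlated.
adj = bayes_linear_rates(m=[1.0, 1.0], v=[0.25, 0.25], rho=0.8,
                         counts=[20, 10], t=10.0)
```

    Ignoring the correlation here would leave the second rate at its prior mean despite the informative surplus on the first, which is the bias the abstract warns about.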

  5. 40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Sampling methods for gasoline and... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES General Provisions § 80.8 Sampling methods for gasoline and diesel fuel. The sampling methods specified in this section shall be used to collect samples...

  6. Characterization of hazardous waste sites: a methods manual. Volume 2. Available sampling methods (second edition)

    International Nuclear Information System (INIS)

    Ford, P.J.; Turina, P.J.; Seely, D.E.

    1984-12-01

    Investigations at hazardous waste sites and sites of chemical spills often require on-site measurements and sampling activities to assess the type and extent of contamination. This document is a compilation of sampling methods and materials suitable to address most needs that arise during routine waste site and hazardous spill investigations. The sampling methods presented in this document are compiled by media, and were selected on the basis of practicality, economics, representativeness, compatibility with analytical considerations, and safety, as well as other criteria. In addition to sampling procedures, sample handling and shipping, chain-of-custody procedures, instrument certification, equipment fabrication, and equipment decontamination procedures are described. Sampling methods for soil, sludges, sediments, and bulk materials cover the solids medium. Ten methods are detailed for surface waters, groundwater and containerized liquids; twelve are presented for ambient air, soil gases and vapors, and headspace gases. A brief discussion of ionizing radiation survey instruments is also provided.

  7. Is meat quality from Longissimus lumborum samples correlated with other cuts in horse meat?

    Science.gov (United States)

    De Palo, Pasquale; Maggiolino, Aristide; Centoducati, Pasquale; Milella, Paola; Calzaretti, Giovanna; Tateo, Alessandra

    2016-03-01

    The present work investigates whether the variation of each parameter in the Longissimus lumborum muscle corresponds to the same or a similar variation of that parameter in other muscles. It presents Pearson's correlations between Longissimus lumborum samples and samples of other muscles, namely Biceps femoris, Rectus femoris, Semimembranosus, Supraspinatus and Semitendinosus, in horse meat. A total of 27 male IHDH (Italian Heavy Draught Horse) breed foals were employed. They were slaughtered at 11 months of age and the above-mentioned muscles were sampled. The Longissimus lumborum muscle proved representative of the other muscles and of the whole carcass for some chemical parameters (moisture, protein and ash) and for some fatty acid profile patterns, such as C12:0, C14:0, total monounsaturated fatty acids and polyunsaturated fatty acids, but poor correlations were recorded for intramuscular fat concentration and for rheological and colorimetric parameters. Although almost all the qualitative parameters of meat are affected by the anatomical site and by the muscle, the Longissimus lumborum is often not representative in horse meat with regard to variations in these parameters. © 2015 Japanese Society of Animal Science.

  8. Analysis of reactor capital costs and correlated sampling of economic input variables - 15342

    International Nuclear Information System (INIS)

    Ganda, F.; Kim, T.K.; Taiwo, T.A.; Wigeland, R.

    2015-01-01

    In this paper we present work aimed at enhancing the capability to perform nuclear fuel cycle cost estimates and evaluations of financial risk. Reactor capital costs are of particular relevance, since they typically comprise about 60% to 70% of the calculated Levelized Cost of Electricity at Equilibrium (LCAE). The work starts with the collection of historical construction costs and construction durations of nuclear plants in the U.S. and France, as well as forecasted costs of nuclear plants currently under construction in the U.S. These data primarily support the introduction of an appropriate framework, illustrated in this paper by two case studies with historical data, which allows the development of solid and defensible assumptions on nuclear reactor capital costs. Work is also presented on enhancing the capability to model the interdependence of cost estimates between facilities and their uncertainties. The correlated sampling capabilities of the nuclear economics code NECOST have been expanded to include partial correlations between input variables, according to a given correlation matrix. Accounting for partial correlations correctly allows a narrowing, where appropriate, of the probability density function of the difference in the LCAE between alternative, but correlated, fuel cycles. It also allows the correct calculation of the standard deviation of the LCAE of multistage systems, which appears smaller than the correct value if correlated input costs are treated as uncorrelated. (authors)
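
    Sampling input variables according to a given correlation matrix is typically done by factoring the matrix. A minimal sketch (not NECOST's implementation; the names are illustrative) using a Cholesky factor:

```python
import random

def cholesky(corr):
    """Lower-triangular Cholesky factor of a correlation matrix."""
    n = len(corr)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (corr[i][i] - s) ** 0.5
            else:
                L[i][j] = (corr[i][j] - s) / L[j][j]
    return L

def correlated_normals(corr, n_draws, seed=0):
    """Draw standard-normal vectors whose correlation matrix is `corr`."""
    rng = random.Random(seed)
    L = cholesky(corr)
    dim = len(corr)
    draws = []
    for _ in range(n_draws):
        z = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        draws.append([sum(L[i][k] * z[k] for k in range(i + 1))
                      for i in range(dim)])
    return draws

# Two cost inputs with a partial correlation of 0.6 between them.
samples = correlated_normals([[1.0, 0.6], [0.6, 1.0]], 50_000)
```

    Mapping each correlated normal through the inverse CDF of the desired marginal then yields correlated cost inputs; summing correlated inputs produces the larger standard deviation that the uncorrelated treatment misses.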

  9. Determination of alcohol and extract concentration in beer samples using a combined method of near-infrared (NIR) spectroscopy and refractometry.

    Science.gov (United States)

    Castritius, Stefan; Kron, Alexander; Schäfer, Thomas; Rädle, Matthias; Harms, Diedrich

    2010-12-22

    A new approach combining near-infrared (NIR) spectroscopy and refractometry was developed in this work to determine the concentration of alcohol and real extract in various beer samples. A partial least-squares (PLS) regression, as a multivariate calibration method, was used to evaluate the correlation between the spectroscopy/refractometry data and the alcohol/extract concentrations. This multivariate combination of spectroscopy and refractometry enhanced the precision of the alcohol determination compared with spectroscopy alone, owing to the effect of high extract concentrations on the spectral data, especially in nonalcoholic beer samples. For the NIR calibration, two mathematical pretreatments (first-order derivative and linear baseline correction) were applied to eliminate light-scattering effects. A sample grouping of the refractometry data was also applied to increase the accuracy of the determined concentrations. The root mean squared errors of validation (RMSEV) for alcohol concentration were 0.23 Mas% (method A), 0.12 Mas% (method B), and 0.19 Mas% (method C); for extract concentration, the RMSEV was 0.11 Mas% for all three methods.
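
    The benefit of combining the two instruments can be sketched with a toy multivariate calibration. This is ordinary least squares on synthetic data standing in for the paper's PLS regression; the sensitivities and noise levels are invented purely for illustration.

```python
import random

rng = random.Random(1)

# Hypothetical calibration set: alcohol and extract vary independently,
# and both sensors respond to both analytes (the cross-sensitivity that
# motivates a multivariate calibration).
alcohol = [rng.uniform(0.0, 6.0) for _ in range(400)]
extract = [rng.uniform(3.0, 12.0) for _ in range(400)]
nir = [a + 0.5 * e + rng.gauss(0, 0.05) for a, e in zip(alcohol, extract)]
ref = [0.2 * a + e + rng.gauss(0, 0.05) for a, e in zip(alcohol, extract)]

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def fit2(x1, x2, y):
    """Least-squares y ~ x1 + x2 on centered data (normal equations)."""
    a11 = sum(u * u for u in x1)
    a12 = sum(u * v for u, v in zip(x1, x2))
    a22 = sum(v * v for v in x2)
    b1 = sum(u * w for u, w in zip(x1, y))
    b2 = sum(v * w for v, w in zip(x2, y))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

xc1, xc2, yc = center(nir), center(ref), center(alcohol)

# NIR-only calibration: extract variation is left in the residual.
s1 = sum(u * w for u, w in zip(xc1, yc)) / sum(u * u for u in xc1)
rmse_nir = (sum((w - s1 * u) ** 2 for u, w in zip(xc1, yc)) / len(yc)) ** 0.5

# Combined NIR + refractometry calibration resolves the interference.
c1, c2 = fit2(xc1, xc2, yc)
rmse_both = (sum((w - c1 * u - c2 * v) ** 2
                 for u, v, w in zip(xc1, xc2, yc)) / len(yc)) ** 0.5
```

    The second sensor lets the regression separate the extract's contribution from the alcohol signal, which is exactly the role refractometry plays in the paper's combined method.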

  10. Adaptive sampling method in deep-penetration particle transport problem

    International Nuclear Information System (INIS)

    Wang Ruihong; Ji Zhicheng; Pei Lucheng

    2012-01-01

    The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a random-walk system for particle transport is built that treats the emission point as a sampling station. An adaptive sampling scheme is then derived that uses the accumulated information to improve the solution. The main advantage of the adaptive scheme is that it chooses the most suitable number of samples at the emission-point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define an importance function based on the particle state and to make the number of emitted samples proportional to that importance function. The numerical results show that the adaptive scheme with the emission point as a station mitigates, to some degree, the underestimation of the result, and the adaptive importance sampling method gives satisfactory results as well. (authors)
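
    The importance-sampling principle invoked here can be illustrated on a one-dimensional toy version of the deep-penetration problem (a sketch, not the paper's adaptive scheme): estimating the probability that a particle's exponential path length exceeds a shield of d mean free paths.

```python
import math
import random

def transmission_is(sigma, d, n=100_000, sigma_b=None, seed=0):
    """Importance-sampled estimate of P(X > d) for X ~ Exp(sigma).

    Analog sampling almost never scores for a thick shield (the true
    answer is exp(-sigma * d)), so the result is badly underestimated.
    Sampling path lengths from a stretched exponential with rate
    sigma_b < sigma and carrying the likelihood-ratio weight recovers
    the answer with manageable variance.
    """
    sigma_b = sigma_b if sigma_b is not None else 1.0 / d
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(sigma_b)        # biased path length
        if x > d:                           # particle penetrates the shield
            # weight = true pdf / biased pdf at x
            total += (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * x)
    return total / n

est = transmission_is(sigma=1.0, d=20.0)   # true value exp(-20), about 2.1e-9
```

    With 10^5 analog histories the expected number of scores for a 20-mean-free-path shield is essentially zero, whereas the biased estimator above reaches a relative error of a few percent; this is the underestimation problem the abstract refers to.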

  11. A unitary correlation operator method

    International Nuclear Information System (INIS)

    Feldmeier, H.; Neff, T.; Roth, R.; Schnack, J.

    1997-09-01

    The short-range repulsion between nucleons is treated by a unitary correlation operator which shifts the nucleons away from each other whenever their uncorrelated positions are within the repulsive core. By formulating the correlation as a transformation of the relative distance between particle pairs, general analytic expressions for the correlated wave functions and correlated operators are given. The decomposition of correlated operators into irreducible n-body operators is discussed. The one- and two-body-irreducible parts are worked out explicitly and the contribution of three-body correlations is estimated to check convergence. Ground-state energies of nuclei up to mass number A=48 are calculated with a spin-isospin-dependent potential and single Slater determinants as uncorrelated states. They show that the deduced energy- and mass-number-independent correlated two-body Hamiltonian reproduces all "exact" many-body calculations surprisingly well. (orig.)

  12. Field evaluation of personal sampling methods for multiple bioaerosols.

    Directory of Open Access Journals (Sweden)

    Chi-Hsun Wang

    Full Text Available Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  13. Evaluation of correlation between physical properties and ultrasonic pulse velocity of fired clay samples.

    Science.gov (United States)

    Özkan, İlker; Yayla, Zeliha

    2016-03-01

    The aim of this study is to establish a correlation between the physical properties and the ultrasonic pulse velocity of clay samples fired at elevated temperatures. Brick-making clay and pottery clay were studied for this purpose. The physical properties of the clay samples were assessed after firing pressed clay samples separately at temperatures of 850, 900, 950, 1000, 1050 and 1100 °C. A commercial ultrasonic testing instrument (Proceq Pundit Lab) was used to measure the ultrasonic pulse velocity of each fired clay sample as a function of temperature. A clear relationship was observed between the physical properties and the ultrasonic pulse velocities of the samples. The results showed that, as densification of the samples increased, the differences between ultrasonic pulse velocities grew with increasing temperature. These findings may facilitate the use of ultrasonic pulse velocity for the estimation of the physical properties of fired clay samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Kolmogorov-Smirnov test for spatially correlated data

    Science.gov (United States)

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

    The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated to varying degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here to model the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of the bootstrap is done by drawing from the empirical sample with replacement, presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the values of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested on two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One was a regular, unbiased sample, while the other was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value in the absence of spatial correlation, in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size. © Springer-Verlag 2008.
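
    The classical building blocks, the statistic D and the traditional i.i.d. bootstrap that the paper generalizes, can be sketched as follows. The spatially correlated resampling via simulated annealing is not reproduced here; in the generalization, the resampling line would be replaced by reading a correlated stochastic realization at the sampling locations.

```python
import random

def ks_d(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D against a hypothesized CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # compare the CDF with the empirical CDF just before and after x
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

def bootstrap_pvalue(sample, cdf, n_boot=500, seed=0):
    """Model the distribution of D by i.i.d. resampling with replacement."""
    rng = random.Random(seed)
    d_obs = ks_d(sample, cdf)
    hits = 0
    for _ in range(n_boot):
        boot = [rng.choice(sample) for _ in sample]   # presumes independence
        if ks_d(boot, cdf) >= d_obs:
            hits += 1
    return hits / n_boot

def uniform_cdf(x):
    return min(max(x, 0.0), 1.0)

rng = random.Random(42)
data = [rng.random() for _ in range(200)]
p = bootstrap_pvalue(data, uniform_cdf)
```

    The paper's result, that correlated resampling gives larger p-values, reflects the reduced effective sample size of spatially correlated data.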

  15. Sample preparation method for ICP-MS measurement of 99Tc in a large amount of environmental samples

    International Nuclear Information System (INIS)

    Kondo, M.; Seki, R.

    2002-01-01

    Sample preparation for the measurement of 99Tc in large amounts of soil and water samples by ICP-MS has been developed using 95mTc as a yield tracer. The method is based on the conventional method for small amounts of soil, using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. Preliminary concentration of Tc by co-precipitation with ferric oxide has been introduced. The matrix materials in large samples were removed more thoroughly than in the previous method while a high recovery of Tc was maintained. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g soil and 500 L water samples. The detection limit of this method was evaluated as 0.054 mBq/kg in 500 g of soil and 0.032 μBq/L in 500 L of water. The determined value of 99Tc in IAEA-375 (a soil sample collected near the Chernobyl nuclear reactor) was 0.25 ± 0.02 Bq/kg. (author)

  16. Systems and methods for self-synchronized digital sampling

    Science.gov (United States)

    Samson, Jr., John R. (Inventor)

    2008-01-01

    Systems and methods for self-synchronized data sampling are provided. In one embodiment, a system for capturing synchronous data samples is provided. The system includes an analog to digital converter adapted to capture signals from one or more sensors and convert the signals into a stream of digital data samples at a sampling frequency determined by a sampling control signal; and a synchronizer coupled to the analog to digital converter and adapted to receive a rotational frequency signal from a rotating machine, wherein the synchronizer is further adapted to generate the sampling control signal, and wherein the sampling control signal is based on the rotational frequency signal.

  17. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
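
    The core of DCDS, digitally averaging oversampled reset and video levels and differencing them, can be sketched with a toy white-noise simulation. This is a simplified illustration: real DCDS filters also weight samples to shape 1/f noise, which the plain boxcar average below does not.

```python
import random
import statistics

def readout(signal_e, n_samp, noise_rms, rng):
    """Simulate one pixel: oversample the reset and video levels, return
    both the DCDS estimate (boxcar average of each level) and a plain
    CDS estimate that keeps only one sample per level."""
    reset = [rng.gauss(0.0, noise_rms) for _ in range(n_samp)]
    video = [rng.gauss(signal_e, noise_rms) for _ in range(n_samp)]
    dcds = sum(video) / n_samp - sum(reset) / n_samp
    cds = video[0] - reset[0]
    return dcds, cds

rng = random.Random(7)
pairs = [readout(100.0, n_samp=16, noise_rms=5.0, rng=rng) for _ in range(2000)]
noise_dcds = statistics.stdev(p[0] for p in pairs)
noise_cds = statistics.stdev(p[1] for p in pairs)
# With white noise, averaging 16 samples per level cuts the read noise
# by about a factor of 4 relative to single-sample CDS.
```

    The trade-offs analysed in the paper (sampling frequency, ADC resolution, filter shape) determine how close a real system comes to this white-noise ideal.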

  18. Gastroesophageal reflux - correlation between diagnostic methods

    International Nuclear Information System (INIS)

    Cruz, Maria das Gracas de Almeida; Penas, Maria Exposito; Fonseca, Lea Mirian Barbosa; Lemme, Eponina Maria O.; Martinho, Maria Jose Ribeiro

    1999-01-01

    A group of 97 individuals with typical symptoms of gastroesophageal reflux disease (GERD) was submitted to gastroesophageal reflux scintigraphy (GERS), and the results were compared with those obtained from endoscopy, histopathology and 24-hour pH-metry. Twenty-four healthy individuals served as a control group and underwent only GERS. The results showed that: a) the difference in the reflux index (RI) between the control group and the sick individuals was statistically significant (p < 0.0001); b) the comparison of GERS with the other methods gave the following results: sensitivity, 84%; specificity, 95%; positive predictive value, 98%; negative predictive value, 67%; accuracy, 87%. We conclude that the scintigraphic method should be used to confirm the diagnosis of GERD and is also recommended as an initial investigative procedure. (author)

  19. Method and apparatus for sampling atmospheric mercury

    Science.gov (United States)

    Trujillo, Patricio E.; Campbell, Evan E.; Eutsler, Bernard C.

    1976-01-20

    A method of simultaneously sampling particulate mercury, organic mercurial vapors, and metallic mercury vapor in the working and occupational environment and determining the amount of mercury derived from each such source in the sampled air. A known volume of air is passed through a sampling tube containing a filter for particulate mercury collection, a first adsorber for the selective adsorption of organic mercurial vapors, and a second adsorber for the adsorption of metallic mercury vapor. Carbon black molecular sieves are particularly useful as the selective adsorber for organic mercurial vapors. The amount of mercury adsorbed or collected in each section of the sampling tube is readily quantitatively determined by flameless atomic absorption spectrophotometry.

  20. Correlation expansion: a powerful alternative multiple scattering calculation method

    International Nuclear Information System (INIS)

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

    We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to the standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low-energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than the MS series expansion when the latter is convergent. Furthermore, it requires less memory than the full MS method, so it can be used in the near-edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it with full MS and the standard MS series expansion.

  1. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. With this method, metamodels are constructed repeatedly through the addition of sampling points, namely, extremum points of the metamodel and minimum points of a density function; each iteration of this procedure yields a more accurate metamodel. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
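
    A minimal sketch of sequential sampling for an RBF metamodel. This is illustrative only: for cheapness the new point is placed at the current worst true-model error, whereas the paper adds metamodel extrema and density-function minima precisely because the true model is expensive to evaluate.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fct = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fct * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=3.0):
    """Weights of a Gaussian RBF interpolant through (xs, ys)."""
    A = [[math.exp(-(eps * (a - b)) ** 2) for b in xs] for a in xs]
    return solve(A, ys)

def rbf_eval(xs, w, x, eps=3.0):
    return sum(wi * math.exp(-(eps * (x - c)) ** 2) for wi, c in zip(w, xs))

def f(x):                       # stand-in for an expensive simulation
    return math.sin(2 * math.pi * x)

xs = [0.0, 0.2, 0.4, 0.6, 1.0]  # deliberately leaves a wide gap [0.6, 1.0]
ys = [f(x) for x in xs]
w = rbf_fit(xs, ys)

grid = [i / 200 for i in range(201)]
err_before = max(abs(rbf_eval(xs, w, x) - f(x)) for x in grid)

# Sequential step: add the point where the metamodel currently disagrees
# most with the true function, then refit.
x_new = max(grid, key=lambda x: abs(rbf_eval(xs, w, x) - f(x)))
xs2, ys2 = xs + [x_new], ys + [f(x_new)]
w2 = rbf_fit(xs2, ys2)
err_after = max(abs(rbf_eval(xs2, w2, x) - f(x)) for x in grid)
```

    The new sample lands in the undersampled gap, and the refitted metamodel's worst-case error drops, which is the mechanism the sequential scheme exploits.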

  2. Estimation of rank correlation for clustered data.

    Science.gov (United States)

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (R_xy) is the maximum likelihood estimator of the Pearson correlation (ρ_xy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data, where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρ_xy for clustered data, which can be implemented using standard mixed-effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting the ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.
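
    Steps (i)-(iii) can be sketched for the i.i.d. (non-clustered) case; the mixed-effects machinery that handles the intraclass correlation is not reproduced here. For bivariate normal data, Spearman's ρ_s and Pearson's ρ are related by ρ_s = (6/π)·arcsin(ρ/2).

```python
import math
import random
from statistics import NormalDist

def probit_rank_correlation(x, y):
    """Rank correlation estimated via probit (normal-score) transforms:
    (i) ranks -> probit scale, (ii) Pearson correlation of the probit
    scores, (iii) map to rank correlation via rho_s = (6/pi)*asin(rho/2)."""
    nd = NormalDist()
    n = len(x)

    def probit_scores(v):
        order = sorted(range(n), key=lambda i: v[i])
        ranks = [0] * n
        for r, i in enumerate(order):
            ranks[i] = r + 1                       # ranks 1..n, no ties assumed
        return [nd.inv_cdf(r / (n + 1)) for r in ranks]

    px, py = probit_scores(x), probit_scores(y)
    mx, my = sum(px) / n, sum(py) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(px, py))
    rho = cov / (sum((a - mx) ** 2 for a in px) *
                 sum((b - my) ** 2 for b in py)) ** 0.5
    return (6.0 / math.pi) * math.asin(rho / 2.0)

# Bivariate normal data with Pearson correlation 0.7; the true Spearman
# correlation is (6/pi)*asin(0.35), roughly 0.68.
rng = random.Random(3)
xs, ys = [], []
for _ in range(4000):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    xs.append(z1)
    ys.append(0.7 * z1 + (1 - 0.49) ** 0.5 * z2)
est = probit_rank_correlation(xs, ys)
```

    In the clustered setting of the paper, the Pearson step would be replaced by the mixed-effects estimator applied to the probit scores.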

  3. Correlative FRET: new method improves rigor and reproducibility in determining distances within synaptic nanoscale architecture

    Science.gov (United States)

    Shinogle-Decker, Heather; Martinez-Rivera, Noraida; O'Brien, John; Powell, Richard D.; Joshi, Vishwas N.; Connell, Samuel; Rosa-Molinar, Eduardo

    2018-02-01

    A new correlative Förster Resonance Energy Transfer (FRET) microscopy method using FluoroNanogold™, a fluorescent immunoprobe with a covalently attached Nanogold® particle (1.4 nm Au), overcomes resolution limitations in determining distances within synaptic nanoscale architecture. FRET by acceptor photobleaching has long been used as a method to increase fluorescence resolution. The transfer of energy from a donor to an acceptor generally occurs over 10-100 Å, the relative distance between the donor molecule and the acceptor molecule. For the correlative FRET microscopy method using FluoroNanogold™, we immuno-labeled GFP-tagged Connexin 35 (Cx35)-expressing HeLa cells with anti-GFP and with anti-Cx35/36 antibodies, and then photobleached the Cx before processing the sample for electron microscopic imaging. Preliminary studies reveal that the use of Alexa Fluor 594® FluoroNanogold™ slightly increases the FRET distance to 70 Å, in contrast to the 62.5 Å obtained using Alexa Fluor 594®. Preliminary studies also show that using a FluoroNanogold™ probe inhibits photobleaching. After one photobleaching session, Alexa Fluor 594® fluorescence dropped to 19% of its original fluorescence; in contrast, after one photobleaching session, Alexa Fluor 594® FluoroNanogold™ fluorescence dropped to 53% of its original intensity. This result confirms that Alexa Fluor 594® FluoroNanogold™ is a much better donor probe than Alexa Fluor 594®. The new method (a) creates a double confirmation method for determining the structure and orientation of synaptic architecture, (b) allows development of a two-dimensional in vitro model to be used for precise testing of multiple parameters, and (c) increases throughput. Future work will include development of FluoroNanogold™ probes with different sizes of gold for additional correlative microscopy studies.

  4. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Science.gov (United States)

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to obtain reliable estimates of indices such as species richness. However, most research on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48-1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling method and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on the optimal seasons in which to use gears when targeting multiple species. The evaluation of species richness and the number of individuals sampled using multiple gear combinations demonstrated no appreciable benefit over relatively few gears (e.g., two to four) used in optimal seasons. Specifically, over 90% of the species encountered with all gear type and season combinations (N = 19) from the six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to the monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

  5. Method evaluation of Fusarium DNA extraction from mycelia and wheat for down-stream real-time PCR quantification and correlation to mycotoxin levels.

    Science.gov (United States)

    Fredlund, Elisabeth; Gidlund, Ann; Olsen, Monica; Börjesson, Thomas; Spliid, Niels Henrik Hytte; Simonsson, Magnus

    2008-04-01

    Identification of Fusarium species by traditional methods requires specific skill and experience, and there is increased interest in new molecular methods for the identification and quantification of Fusarium from food and feed samples. Real-time PCR with probe technology (TaqMan) can be used for the identification and quantification of several species of Fusarium from cereal grain samples. Several critical steps need to be considered when establishing a real-time PCR-based method for DNA quantification, including extraction of DNA from the samples. In this study, several DNA extraction methods were evaluated, including the DNeasy Plant Mini Spin Columns (Qiagen), the BioRobot EZ1 (Qiagen) with the DNeasy Blood and Tissue Kit (Qiagen), and the FastDNA Spin Kit for Soil (Qbiogene). Parameters such as DNA quality and stability, PCR inhibitors, and PCR efficiency were investigated. Our results showed that all methods gave good PCR efficiency (above 90%) and DNA stability, whereas the DNeasy Plant Mini Spin Columns in combination with sonication gave the best results with respect to Fusarium DNA yield. The modified DNeasy Plant Mini Spin protocol was used to analyse 31 wheat samples for the presence of F. graminearum and F. culmorum. The DNA level of F. graminearum could be correlated with the levels of DON (r² = 0.9) and ZEN (r² = 0.6), whereas no correlation was found between F. culmorum and DON/ZEN. This shows that F. graminearum, and not F. culmorum, was the main producer of DON in Swedish wheat during 2006.

  6. Optical methods for microstructure determination of doped samples

    Science.gov (United States)

    Ciosek, Jerzy F.

    2008-12-01

    Optical methods for determining the refractive index profile of layered materials are commonly based on spectroscopic ellipsometry or transmittance/reflectance spectrometry. Measurements of spectral reflection and transmission usually suffice to characterize optical materials and determine their refractive index. However, it is also possible to characterize samples with dopants, impurities, and defects using optical methods. The microstructures of a hydrogenated crystalline Si wafer and of a SiO2-ZrO2 composite layer are investigated. The first sample is a Si(001):H Czochralski-grown single-crystalline wafer with a 50 nm thick surface SiO2 layer. Hydrogen dose implantation continues to be an important issue in microelectronic device and sensor fabrication. Hydrogen-implanted silicon (Si:H) has become a topic of remarkable interest, mostly because of the potential of implantation-induced platelets and micro-cavities for the creation of gettering-active areas and for Si layer splitting. Oxygen precipitation and atmospheric impurities are analysed. The second sample is a layer of SiO2 and ZrO2 co-evaporated using two electron beam guns simultaneously in a reactive evaporation process. The composition and structure were investigated by X-ray photoelectron spectroscopy (XPS) and spectroscopic ellipsometry. The non-uniformity and composition of the layer are analysed using the average density method.

  7. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    Science.gov (United States)

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model for soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in a calibration set of 85 fruit samples. Because the 9 suspicious outlier samples might include some non-outlier samples, they were returned to the model one by one to see whether they degraded the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. This model performed better than the one developed without eliminating any outliers from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and was more representative and stable than the one developed with all 9 suspicious samples eliminated (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
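Of the three outlier screens named above, Chauvenet's test is the simplest to state: reject an observation if, in a data set of size N, fewer than half an observation would be expected to lie as far from the mean. A hedged sketch (illustrative values, not the melon calibration data):

```python
import math

def chauvenet_outliers(values):
    """Flag values failing Chauvenet's criterion: a point is suspect when
    N * P(|Z| >= z) < 0.5, with z its standardized deviation from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    flagged = []
    for v in values:
        z = abs(v - mean) / std
        # two-sided standard-normal tail probability
        p = math.erfc(z / math.sqrt(2))
        if n * p < 0.5:
            flagged.append(v)
    return flagged
```

For six toy readings clustered near 10 with one value at 15, only the 15 is flagged; the remaining readings survive the criterion.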

  8. Investigation of spatial correlation in MR images of human cerebral white matter using geostatistical methods

    International Nuclear Information System (INIS)

    Keil, Fabian

    2014-01-01

    Investigating the structure of human cerebral white matter is gaining interest in the neurological as well as the neuroscientific community. It has been demonstrated in many studies that white matter is a very dynamic structure, rather than a static construct that does not change over a lifetime. That is, structural changes within white matter can be observed even on short timescales, e.g. in the course of normal ageing, neurodegenerative disease, or during learning processes. To investigate these changes, one method of choice is texture analysis of images obtained from white matter. In this regard, MRI plays a distinguished role as it provides a completely non-invasive way of acquiring in vivo images of human white matter. This thesis adapted a statistical texture analysis method, known as variography, to quantify the spatial correlation of human cerebral white matter based on MR images. This method, originally introduced in geoscience, relies on the idea of spatial correlation in geological phenomena: in naturally grown structures, near things are correlated more strongly with each other than distant things. This work reveals that the geological principle of spatial correlation can be applied to MR images of human cerebral white matter and proves that variography is an adequate method to quantify alterations therein. Since the process of MRI data acquisition is completely different from the measuring process used to quantify geological phenomena, the variographic analysis had to be adapted carefully to MR methods in order to provide a correctly working methodology. Therefore, theoretical considerations were evaluated with numerical samples in a first step and validated with real measurements in a second. It was shown that MR variography makes it possible to reduce the information stored in the texture of a white matter image to a few highly significant parameters, thereby quantifying heterogeneity and spatial correlation distance with an accuracy better than 5
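The variogram underlying the variography described above is simple to compute empirically: for each lag h, average half the squared differences of values separated by h. A one-dimensional sketch on a toy pixel row (not MR data):

```python
def empirical_variogram(z, max_lag):
    """Empirical semivariogram of a 1-D sequence:
    gamma(h) = mean over i of 0.5 * (z[i+h] - z[i])**2, for h = 1..max_lag."""
    gamma = []
    for h in range(1, max_lag + 1):
        pairs = [(z[i + h] - z[i]) ** 2 for i in range(len(z) - h)]
        gamma.append(0.5 * sum(pairs) / len(pairs))
    return gamma
```

For a perfectly alternating row [0, 1, 0, 1, 0, 1] the semivariance is 0.5 at lag 1 and 0 at lag 2, reflecting the period-2 structure; in real images the curve rises with lag and its range estimates the spatial correlation distance.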

  9. Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters.

    Science.gov (United States)

    Calfee, M Worth; Rose, Laura J; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal

    2014-01-01

    The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37-mm cassette filter) with regard to their ability to recover Bacillus atrophaeus spores (a surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters of the kind typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p ≤ 0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p > 0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. Published by Elsevier B.V.
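The one-way ANOVA used above to compare recoveries across sampling devices reduces to the F ratio of between-group to within-group mean squares. A from-scratch sketch with invented recovery percentages (not the study's measurements):

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic:
    F = (SS_between / (k-1)) / (SS_within / (n-k))."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# invented spore recoveries (%) for three hypothetical sampling devices
extraction = [92.0, 95.0, 90.0]
vacuum_sock = [30.0, 28.0, 33.0]
cassette = [29.0, 31.0, 27.0]
```

With the extraction recoveries roughly three times the vacuum recoveries and small within-group scatter, the F statistic is very large, mirroring the p ≤ 0.001 result reported above.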

  10. CORRELATION ANALYSIS OF A LARGE SAMPLE OF NARROW-LINE SEYFERT 1 GALAXIES: LINKING CENTRAL ENGINE AND HOST PROPERTIES

    International Nuclear Information System (INIS)

    Xu Dawei; Komossa, S.; Wang Jing; Yuan Weimin; Zhou Hongyan; Lu Honglin; Li Cheng; Grupe, Dirk

    2012-01-01

    We present a statistical study of a large, homogeneously analyzed sample of narrow-line Seyfert 1 (NLS1) galaxies, accompanied by a comparison sample of broad-line Seyfert 1 (BLS1) galaxies. Optical emission-line and continuum properties are subjected to correlation analyses, in order to identify the main drivers of the correlation space of active galactic nuclei (AGNs), and of NLS1 galaxies in particular. For the first time, we have established the density of the narrow-line region as a key parameter in Eigenvector 1 space, as important as the Eddington ratio L/L_Edd. This is important because it links the properties of the central engine with the properties of the host galaxy, i.e., the interstellar medium (ISM). We also confirm previously found correlations involving the line width of Hβ and the strength of the Fe II and [O III] λ5007 emission lines, and we confirm the important role played by L/L_Edd in driving the properties of NLS1 galaxies. A spatial correlation analysis shows that large-scale environments of the BLS1 and NLS1 galaxies of our sample are similar. If mergers are rare in our sample, accretion-driven winds, on the one hand, or bar-driven inflows, on the other hand, may account for the strong dependence of Eigenvector 1 on ISM density.

  11. Field Sample Preparation Method Development for Isotope Ratio Mass Spectrometry

    International Nuclear Information System (INIS)

    Leibman, C.; Weisbrod, K.; Yoshida, T.

    2015-01-01

    Non-proliferation and International Security (NA-241) established a working group of researchers from Los Alamos National Laboratory (LANL), Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) to evaluate the utilization of in-field mass spectrometry for safeguards applications. The survey of commercial off-the-shelf (COTS) mass spectrometers (MS) revealed that no existing instrumentation is capable of meeting all the potential safeguards requirements for performance, portability, and ease of use. Additionally, fieldable instruments are unlikely to meet the International Target Values (ITVs) for accuracy and precision for isotope ratio measurements achieved with laboratory methods. The major gaps identified for in-field actinide isotope ratio analysis were in the areas of: 1. sample preparation and/or sample introduction, 2. size reduction of mass analyzers and ionization sources, 3. system automation, and 4. decreased system cost. Development work in areas 2 through 4, enumerated above, continues in the private and public sectors. LANL is focusing on developing sample preparation/sample introduction methods for use with the different sample types anticipated for safeguards applications. Addressing sample handling and sample preparation methods for MS analysis will enable the use of new MS instrumentation as it becomes commercially available. As one example, we have developed a rapid sample preparation method for dissolution of uranium and plutonium oxides using ammonium bifluoride (ABF). ABF is a significantly safer and faster alternative to digestion with boiling combinations of highly concentrated mineral acids. Actinides digested with ABF yield fluorides, which can then be analyzed directly or chemically converted and separated using established column chromatography techniques as needed prior to isotope analysis. The reagent volumes and the sample processing steps associated with ABF sample digestion lend themselves to automation and field

  12. Radiochemistry methods in DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Fadeff, S.K.; Goheen, S.C.

    1994-08-01

    Current standard sources of radiochemistry methods are often inappropriate for use in evaluating US Department of Energy environmental and waste management (DOE/EM) samples. Examples of current sources include EPA, ASTM, Standard Methods for the Examination of Water and Wastewater, and HASL-300. Applicability of these methods is limited to specific matrices (usually water), radiation levels (usually environmental levels), and analytes (a limited number). The radiochemistry methods in DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) attempt to fill the applicability gap that exists between standard methods and those needed for DOE/EM activities. The Radiochemistry chapter in DOE Methods includes an ''analysis and reporting'' guidance section as well as radiochemistry methods. A basis for identifying the DOE/EM radiochemistry needs is discussed. Within this needs framework, the applicability of standard methods and targeted new methods is identified. Sources of new methods (consolidated methods from DOE laboratories and submissions from individuals) and the methods review process will be discussed. The processes involved in generating consolidated methods and editing individually submitted methods will be compared. DOE Methods is a living document and continues to expand by adding various kinds of methods. Radiochemistry methods are highlighted in this paper. DOE Methods is intended to be a resource for methods applicable to DOE/EM problems. Although it is intended to support DOE, the guidance and methods are not necessarily exclusive to DOE. The document is available at no cost through the Laboratory Management Division of DOE, Office of Technology Development

  13. Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.

    2000-03-01

    We show a method for comparing and reconstructing two similar amplitude-only structures composed of the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm involving the auto-correlation and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)
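A hedged sketch of the correlation bookkeeping such a subtraction algorithm relies on (toy 5×5 binary apertures; this illustrates the principle, not the paper's exact procedure): when one aperture of N is relocated, the zero-lag value of the cross-correlation between the two structures drops by exactly one relative to the autocorrelation, so the subtraction isolates the moved aperture.

```python
def correlate2d(a, b):
    """Full cross-correlation of two equal-size binary grids:
    C[(dy, dx)] = sum over (y, x) of a[y][x] * b[y+dy][x+dx]."""
    n = len(a)
    out = {}
    for dy in range(-n + 1, n):
        for dx in range(-n + 1, n):
            s = 0
            for y in range(n):
                for x in range(n):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < n and 0 <= xx < n:
                        s += a[y][x] * b[yy][xx]
            out[(dy, dx)] = s
    return out

def make_grid(n, apertures):
    g = [[0] * n for _ in range(n)]
    for y, x in apertures:
        g[y][x] = 1
    return g

# structure B differs from A only in the location of one of three apertures
A = make_grid(5, [(1, 1), (3, 3), (1, 3)])
B = make_grid(5, [(1, 1), (3, 3), (3, 1)])
c_aa = correlate2d(A, A)
c_ab = correlate2d(A, B)
```

At zero lag, c_aa counts all N = 3 apertures while c_ab counts only the N − 1 = 2 apertures common to both structures; the off-peak entries of the difference encode the displacement vector of the moved aperture.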

  14. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Science.gov (United States)

    2010-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the numerical... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51...

  15. Correlative Stochastic Optical Reconstruction Microscopy and Electron Microscopy

    Science.gov (United States)

    Kim, Doory; Deerinck, Thomas J.; Sigal, Yaron M.; Babcock, Hazen P.; Ellisman, Mark H.; Zhuang, Xiaowei

    2015-01-01

    Correlative fluorescence light microscopy and electron microscopy allows the imaging of spatial distributions of specific biomolecules in the context of cellular ultrastructure. Recent development of super-resolution fluorescence microscopy allows the location of molecules to be determined with nanometer-scale spatial resolution. However, correlative super-resolution fluorescence microscopy and electron microscopy (EM) still remains challenging because the optimal specimen preparation and imaging conditions for super-resolution fluorescence microscopy and EM are often not compatible. Here, we have developed several experiment protocols for correlative stochastic optical reconstruction microscopy (STORM) and EM methods, both for un-embedded samples by applying EM-specific sample preparations after STORM imaging and for embedded and sectioned samples by optimizing the fluorescence under EM fixation, staining and embedding conditions. We demonstrated these methods using a variety of cellular targets. PMID:25874453

  16. Validation and Clinical Evaluation of a Novel Method To Measure Miltefosine in Leishmaniasis Patients Using Dried Blood Spot Sample Collection

    Science.gov (United States)

    Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.

    2016-01-01

    To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691

  17. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods.

    Science.gov (United States)

    Coes, Alissa L; Paretti, Nicholas V; Foreman, William T; Iverson, Jana L; Alvarez, David A

    2014-03-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19-23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method. Published by Elsevier B.V.

  18. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods

    Science.gov (United States)

    Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.

    2014-01-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.

  19. Multi-frequency direct sampling method in inverse scattering problem

    Science.gov (United States)

    Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang

    2017-10-01

    We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed in terms of a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest multi-frequency DSM to overcome the limitations of traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.
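A hedged numerical sketch of the behavior described above (an assumed simplification, not the authors' formulation): for a single point scatterer at z*, the DSM indicator behaves like |J0(k·|z − z*|)|, peaking at the true location; averaging over several wavenumbers k damps the J0 sidelobes.

```python
import math

def j0(x):
    """Bessel function of order zero via its integral representation
    J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt (midpoint rule)."""
    m = 200
    return sum(math.cos(x * math.sin(math.pi * (i + 0.5) / m))
               for i in range(m)) / m

def dsm_indicator(z, z_true, wavenumbers):
    """Simplified multi-frequency DSM indicator for a point scatterer:
    average of |J0(k * |z - z_true|)| over the given wavenumbers."""
    r = abs(z - z_true)
    return sum(abs(j0(k * r)) for k in wavenumbers) / len(wavenumbers)
```

The indicator equals 1 at the scatterer for any frequency set, while away from it the single-frequency sidelobes (|J0| rebounds to about 0.4 after its first zero) are averaged down by the multi-frequency sum.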

  20. Tracing Method with Intra and Inter Protocols Correlation

    Directory of Open Access Journals (Sweden)

    Marin Mangri

    2009-05-01

    Full Text Available MEGACO or H.248 is a protocol enabling a centralized Softswitch (or MGC) to control MGs between Voice over Packet (VoP) networks and traditional ones. To analyze real implementations in greater depth, it is useful to use a tracing system with intra- and inter-protocol correlation. For this reason, in the case of MEGACO-H.248 it is necessary to find an appropriate method of correlation with all the protocols involved. Starting from Rel4, a separation of CP (Control Plane) and UP (User Plane) management appears within the networks. The MEGACO protocol plays an important role in the migration to the new releases, or from a monolithic platform to a network with distributed components.

  1. Influences of different sample preparation methods on tooth enamel ESR signals

    International Nuclear Information System (INIS)

    Zhang Wenyi; Jiao Ling; Zhang Liang'an; Pan Zhihong; Zeng Hongyu

    2005-01-01

    Objective: To study the influences of different sample preparation methods on tooth enamel ESR signals in order to reduce the effect of dentine on their sensitivities to radiation. Methods: The enamel was separated from dentine of non-irradiated adult teeth by mechanical, chemical, or both methods. The samples of different preparations were scanned by an ESR spectrometer before and after irradiation. Results: The response of ESR signals of samples prepared with different methods to radiation dose was significantly different. Conclusion: The selection of sample preparation method is very important for dose reconstruction by tooth enamel ESR dosimetry, especially in the low dose range. (authors)

  2. Correlation of Element Concentration of Cd, Cr, Co and Sc in Sea Water, Fish and Algae Samples from Beach of Lemahabang Muria

    International Nuclear Information System (INIS)

    Sukirno; Rosidi; Agus Taftazani

    2007-01-01

    The analysis of the Cd, Cr, Co, Sb and Sc content of environmental samples from Lemahabang beach, Muria, collected in 2004, has been carried out using the neutron activation analysis (NAA) method. All heavy metal concentrations in sea water (2.0 μg/l) are clearly lower than the threshold values established by SKRI No 51/2004. Correlation analysis of the observed data (performed in Excel) shows that the correlations between the Cd, Cr, Co, Sb and Sc concentrations in sea water (the dependent variable) and those in the three independent variables, kerapu fish, green algae and brown algae, are highly positive and significant (r > 0.92), except for Sb, which is moderately high (r = 0.66). (author)

  3. The Moulded Site Data (MSD) wind correlation method: description and assessment

    Energy Technology Data Exchange (ETDEWEB)

    King, C.; Hurley, B.

    2004-12-01

    The long-term wind resource at a potential windfarm site may be estimated by correlating short-term on-site wind measurements with data from a regional meteorological station. A correlation method developed at Airtricity is described in sufficient detail to be reproduced. An assessment of its performance is also described; the results may serve as a guide to expected accuracy when using the method as part of an annual electricity production estimate for a proposed windfarm. (Author)
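The correlation step in such a measure-correlate-predict (MCP) approach can be sketched as an ordinary least-squares fit of concurrent site and reference wind speeds, applied to the long-term reference record (the numbers are invented; Airtricity's actual MSD method is more elaborate):

```python
def fit_linear(ref, site):
    """Least-squares line site = slope * ref + intercept from concurrent
    short-term wind-speed records at the site and the reference station."""
    n = len(ref)
    mx = sum(ref) / n
    my = sum(site) / n
    sxx = sum((x - mx) ** 2 for x in ref)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ref, site))
    slope = sxy / sxx
    return slope, my - slope * mx

def predict_long_term_mean(ref_long, slope, intercept):
    """Apply the fitted relation to the long-term reference record to
    estimate the long-term mean wind speed at the site."""
    return sum(slope * x + intercept for x in ref_long) / len(ref_long)

# invented concurrent records: site speeds are exactly 1.2 * reference + 0.5
slope, intercept = fit_linear([4.0, 6.0, 8.0, 10.0], [5.3, 7.7, 10.1, 12.5])
long_term_site_mean = predict_long_term_mean([5.0, 7.0, 9.0], slope, intercept)
```

Because the toy data are exactly linear, the fit recovers slope 1.2 and intercept 0.5, and the predicted long-term site mean is simply the transformed reference mean.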

  4. Deterministic alternatives to the full configuration interaction quantum Monte Carlo method for strongly correlated systems

    Science.gov (United States)

    Tubman, Norm; Whaley, Birgitta

    The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.

  5. A general method dealing with correlations in uncertainty propagation in fault trees

    International Nuclear Information System (INIS)

    Qin Zhang

    1989-01-01

    This paper deals with the correlations among the failure probabilities (frequencies) of not only the identical basic events but also other basic events in a fault tree. It presents a general and simple method to include these correlations in uncertainty propagation. Two examples illustrate this method and show that neglecting these correlations results in large underestimation of the top event failure probability (frequency). One is the failure of the primary pump in a chemical reactor cooling system, the other example is an accident to a road transport truck carrying toxic waste. (author)
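The underestimation warned about above is easy to reproduce with a toy Monte Carlo: for an AND gate whose two basic-event probabilities share the same uncertainty factor, the mean top-event probability exceeds the value obtained by (wrongly) sampling them independently. A sketch under assumed lognormal error factors (all numbers illustrative, not from the paper's examples):

```python
import random

def top_event_mean(n_trials, correlated, seed=1):
    """Mean top-event probability for an AND gate of two basic events whose
    point probability is 1e-3, each multiplied by a lognormal error factor.
    If correlated, both events share one factor; otherwise two are drawn."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        f1 = rng.lognormvariate(0.0, 1.0)
        f2 = f1 if correlated else rng.lognormvariate(0.0, 1.0)
        p1 = 1e-3 * f1
        p2 = 1e-3 * f2
        total += p1 * p2  # AND gate: both basic events must fail
    return total / n_trials
```

With sigma = 1 the exact ratio of the two means is E[f**2] / E[f]**2 = e ≈ 2.7, so neglecting the correlation understates the AND-gate mean by nearly a factor of three, which is the kind of large underestimation the abstract describes.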

  6. Methods for Sampling and Measurement of Compressed Air Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Stroem, L

    1976-10-15

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose water or oil as artificial contaminants were injected in thin streams into a test loop, carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit

  7. Methods for Sampling and Measurement of Compressed Air Contaminants

    International Nuclear Information System (INIS)

    Stroem, L.

    1976-10-01

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose water or oil as artificial contaminants were injected in thin streams into a test loop, carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit

  8. Atmospheric pollution measurement by optical cross correlation methods - A concept

    Science.gov (United States)

    Fisher, M. J.; Krause, F. R.

    1971-01-01

    Method combines standard spectroscopy with statistical cross correlation analysis of two narrow light beams for remote sensing to detect foreign matter of given particulate size and consistency. Method is applicable in studies of generation and motion of clouds, nuclear debris, ozone, and radiation belts.
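The transit-time flavor of this cross-correlation idea can be sketched in a few lines: a fluctuation pattern recorded at an upstream light beam reappears at a downstream beam after a delay, and the lag that maximizes the cross-correlation recovers it (synthetic signals, not an optical model of the instrument):

```python
import random

def best_lag(upstream, downstream, max_lag):
    """Lag (in samples) maximizing the raw cross-correlation sum between
    the upstream and downstream signals."""
    def xcorr(lag):
        return sum(upstream[i] * downstream[i + lag]
                   for i in range(len(upstream) - lag))
    return max(range(max_lag + 1), key=xcorr)

rng = random.Random(0)
signal = [rng.gauss(0.0, 1.0) for _ in range(500)]
delay = 7
upstream = signal
# the downstream beam sees the same fluctuation pattern `delay` samples later
downstream = [0.0] * delay + signal[:-delay]
```

Here best_lag(upstream, downstream, 20) should recover the 7-sample delay, since only at that lag do the two records line up sample for sample; dividing the beam spacing by the recovered delay would give the convection speed of the sensed medium.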

  9. Examination of Hydrate Formation Methods: Trying to Create Representative Samples

    Energy Technology Data Exchange (ETDEWEB)

    Kneafsey, T.J.; Rees, E.V.L.; Nakagawa, S.; Kwon, T.-H.

    2011-04-01

    Forming representative gas hydrate-bearing laboratory samples is important so that the properties of these materials may be measured, while controlling the composition and other variables. Natural samples are rare, and have often experienced pressure and temperature changes that may affect the property to be measured [Waite et al., 2008]. Forming methane hydrate samples in the laboratory has been done a number of ways, each having advantages and disadvantages. The ice-to-hydrate method [Stern et al., 1996], contacts melting ice with methane at the appropriate pressure to form hydrate. The hydrate can then be crushed and mixed with mineral grains under controlled conditions, and then compacted to create laboratory samples of methane hydrate in a mineral medium. The hydrate in these samples will be part of the load-bearing frame of the medium. In the excess gas method [Handa and Stupin, 1992], water is distributed throughout a mineral medium (e.g. packed moist sand, drained sand, moistened silica gel, other porous media) and the mixture is brought to hydrate-stable conditions (chilled and pressurized with gas), allowing hydrate to form. This method typically produces grain-cementing hydrate from pendular water in sand [Waite et al., 2004]. In the dissolved gas method [Tohidi et al., 2002], water with sufficient dissolved guest molecules is brought to hydrate-stable conditions where hydrate forms. In the laboratory, this is can be done by pre-dissolving the gas of interest in water and then introducing it to the sample under the appropriate conditions. With this method, it is easier to form hydrate from more soluble gases such as carbon dioxide. It is thought that this method more closely simulates the way most natural gas hydrate has formed. Laboratory implementation, however, is difficult, and sample formation is prohibitively time consuming [Minagawa et al., 2005; Spangenberg and Kulenkampff, 2005]. 
In another version of this technique, a specified quantity of gas

  10. Solid phase microextraction headspace sampling of chemical warfare agent contaminated samples : method development for GC-MS analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jackson Lepage, C.R.; Hancock, J.R. [Defence Research and Development Canada, Medicine Hat, AB (Canada); Wyatt, H.D.M. [Regina Univ., SK (Canada)

    2004-07-01

Defence R and D Canada-Suffield (DRDC-Suffield) is responsible for analyzing samples that are suspected to contain chemical warfare agents, whether collected by the Canadian Forces or by first responders in the event of a terrorist attack in Canada. The analytical techniques used to identify the composition of the samples include gas chromatography-mass spectrometry (GC-MS), liquid chromatography-mass spectrometry (LC-MS), Fourier-transform infrared spectroscopy (FT-IR) and nuclear magnetic resonance spectroscopy. GC-MS and LC-MS generally require solvent extraction and reconcentration, thereby increasing sample handling. The authors examined analytical techniques which reduce or eliminate sample manipulation. In particular, this paper presented a screening method based on solid phase microextraction (SPME) headspace sampling and GC-MS analysis for chemical warfare agents such as mustard, sarin, soman, and cyclohexyl methylphosphonofluoridate in contaminated soil samples. SPME is a method which uses small adsorbent polymer-coated silica fibers that trap vaporous or liquid analytes for GC or LC analysis. Collection efficiency can be increased by adjusting sampling time and temperature. This method was tested on two real-world samples, one from excavated chemical munitions and the second from a caustic decontamination mixture. 7 refs., 2 tabs., 3 figs.

  11. Autocorrelation and cross-correlation between hCGβ and PAPP-A in repeated sampling during first trimester of pregnancy

    DEFF Research Database (Denmark)

    Nørgaard, Pernille; Wright, Dave; Ball, Susan

    2013-01-01

Theoretically, repeated sampling of free β-human chorionic gonadotropin (hCGβ) and pregnancy-associated plasma protein-A (PAPP-A) in the first trimester of pregnancy might improve performance of risk assessment of trisomy 21 (T21). To assess the performance of a screening test involving repeated measures of biochemical markers, correlations between markers must be estimated. The aims of this study were to calculate the autocorrelation and cross-correlation between hCGβ and PAPP-A in the first trimester of pregnancy and to investigate the possible impact of gestational age at the first sample...

  12. Pion correlations as a function of atomic mass in heavy ion collisions

    International Nuclear Information System (INIS)

    Chacon, A.D.

    1989-01-01

The method of two-pion interferometry was used to obtain source-size and lifetime parameters for the pions produced in heavy ion collisions. The systems used were 1.70·A GeV 56Fe + Fe, 1.82·A GeV 40Ar + KCl and 1.54·A GeV 93Nb + Nb, allowing a search for dependences on the atomic number. Two acceptances (centered, in the lab, at ∼0 degrees and 45 degrees) were used for each system, allowing a search for dependences on the viewing angle. The correlation functions were calculated by comparing the data samples to background (or reference) samples made using the method of event mixing, where pions from different events are combined to produce a sample in which the Bose-Einstein correlation effect is absent. The effect of the correlation function on the background samples is calculated, and a method for weighting the events to remove the residual correlation effect is presented. The effect of the spectrometer design on the measured correlation functions is discussed, as are methods for correcting for these effects during the data analysis. 58 refs., 39 figs., 18 tabs
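The event-mixing construction described in this record can be sketched in a few lines: the same-event pair distribution is divided by a mixed-event pair distribution which, by construction, carries no Bose-Einstein correlation. The one-dimensional "relative momentum" values and bin edges below are purely illustrative toys, not the spectrometer's actual observables.

```python
def histogram(values, edges):
    """Count values into bins defined by edges (len(edges) - 1 bins)."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    return counts

def event_mixing_background(events):
    """Pair each particle with particles from *different* events, so any
    same-event (Bose-Einstein) correlation is absent in the reference sample."""
    pairs = []
    for i, ev_a in enumerate(events):
        for ev_b in events[i + 1:]:
            for pa in ev_a:
                for pb in ev_b:
                    pairs.append(abs(pa - pb))   # toy 1-D relative momentum
    return pairs

def correlation_function(events, edges):
    """C(q) = N_same(q) / N_mixed(q), normalised so that an
    uncorrelated sample gives C(q) ~ 1 in every bin."""
    same = []
    for ev in events:
        for i in range(len(ev)):
            for j in range(i + 1, len(ev)):
                same.append(abs(ev[i] - ev[j]))
    mixed = event_mixing_background(events)
    n_s, n_m = histogram(same, edges), histogram(mixed, edges)
    scale = sum(n_s) / sum(n_m)
    return [s / (m * scale) if m else 0.0 for s, m in zip(n_s, n_m)]
```

With real data the mixed sample is built from many events, and the residual correlation in the background (mentioned in the abstract) would still have to be corrected by reweighting.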

  13. Investigation of spatial correlation in MR images of human cerebral white matter using geostatistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Keil, Fabian

    2014-03-20

Investigating the structure of human cerebral white matter is gaining interest in the neurological as well as the neuroscientific community. Many studies have demonstrated that white matter is a very dynamic structure rather than a static construct that does not change over a lifetime. That is, structural changes within white matter can be observed even on short timescales, e.g. in the course of normal ageing, neurodegenerative diseases or even learning processes. To investigate these changes, one method of choice is texture analysis of images obtained from white matter. In this regard, MRI plays a distinguished role as it provides a completely non-invasive way of acquiring in vivo images of human white matter. This thesis adapted a statistical texture analysis method, known as variography, to quantify the spatial correlation of human cerebral white matter based on MR images. This method, originally introduced in geoscience, relies on the idea of spatial correlation in geological phenomena: in naturally grown structures, near things are more strongly correlated with each other than distant things. This work reveals that the geological principle of spatial correlation can be applied to MR images of human cerebral white matter and proves that variography is an adequate method to quantify alterations therein. Since the process of MRI data acquisition is completely different from the measuring processes used to quantify geological phenomena, the variographic analysis had to be adapted carefully to MR methods in order to provide a correctly working methodology. Therefore, theoretical considerations were evaluated with numerical samples in a first step and validated with real measurements in a second. It was shown that MR variography makes it possible to reduce the information stored in the texture of a white matter image to a few highly significant parameters, thereby quantifying heterogeneity and spatial correlation distance with an accuracy better than 5
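As a rough illustration of the variographic idea (not the thesis's actual MR-adapted procedure), the empirical semivariance of a 1-D transect of image intensities can be computed as:

```python
def empirical_variogram(values, max_lag):
    """Empirical semivariance gamma(h) = 1/(2 N(h)) * sum (z[i+h] - z[i])^2
    along a 1-D transect of pixel intensities. Spatially correlated data
    give small gamma at short lags, growing with separation distance."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = [(values[i + h] - values[i]) ** 2
                 for i in range(len(values) - h)]
        gammas.append(sum(diffs) / (2 * len(diffs)))
    return gammas
```

Fitting a model (e.g. spherical or exponential) to such a curve yields the sill and range parameters that quantify heterogeneity and spatial correlation distance.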

  14. Validation of method in instrumental NAA for food products sample

    International Nuclear Information System (INIS)

    Alfian; Siti Suprapti; Setyo Purwanto

    2010-01-01

NAA is a testing method that has not been standardized. To confirm that this method is valid, it must be validated against various standard reference materials. In this work, validation was carried out for food product samples using NIST SRM 1567a (wheat flour) and NIST SRM 1568a (rice flour). The results show that the method for testing nine elements (Al, K, Mg, Mn, Na, Ca, Fe, Se and Zn) in SRM 1567a and eight elements (Al, K, Mg, Mn, Na, Ca, Se and Zn) in SRM 1568a passes the tests of accuracy and precision. It can be concluded that this method gives valid results in the determination of elements in food product samples. (author)

  15. Validation of EIA sampling methods - bacterial and biochemical analysis

    Digital Repository Service at National Institute of Oceanography (India)

    Sheelu, G.; LokaBharathi, P.A.; Nair, S.; Raghukumar, C.; Mohandass, C.

to temporal factors. A paired t-test between pre- and post-disturbance samples suggested that the above methods of sampling and variables like TC, protein and TOC could be used for monitoring disturbance....

  16. Correlation of Cadmium and Magnesium in the Blood and Serum Samples of Smokers and Non-Smokers Chronic Leukemia Patients.

    Science.gov (United States)

    Khan, Noman; Afridi, Hasan Imran; Kazi, Tasneem Gul; Arain, Muhammad Balal; Bilal, Muhammad; Akhtar, Asma; Khan, Mustafa

    2017-03-01

Cancer-causing processes have been reported to be related to imbalances of essential and toxic elements in body tissues and fluids. The purpose of the current study was to evaluate the levels of magnesium (Mg) and cadmium (Cd) in serum and blood samples of smokers and nonsmokers who have chronic myeloid (CML) and lymphocytic (CLL) leukemia, aged 31-50 years. For the comparative study, age-matched smoker and nonsmoker males were chosen as controls/referents. The levels of elements in patients were analyzed before any treatment by atomic absorption spectrophotometer, after microwave-assisted acid digestion. The method was validated using certified reference materials of serum and blood samples. The resulting data indicated that the adult male smokers and nonsmokers have two- to fourfold higher levels of Cd in the blood and sera samples as compared to the referents (p blood and serum samples of both types of leukemia patients as related to referent values. The resulting data indicate a significant negative correlation between Mg and Cd in leukemia patients and smoker referents. Further studies are needed to clarify the role of these elements in the pathogenesis of chronic leukemia.
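The negative Cd-Mg correlation reported here is presumably a Pearson-type coefficient; a minimal sketch of that computation (with made-up numbers, not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient. A negative value indicates that
    higher values of one variable (e.g. Cd) tend to accompany lower
    values of the other (e.g. Mg) across subjects."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```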

  17. Methods for converging correlation energies within the dielectric matrix formalism

    Science.gov (United States)

    Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario

    2018-03-01

Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. RPA-type methods, however, have a significantly higher computational cost and, similarly to correlated quantum-chemical methods, are characterized by slow basis-set convergence. In this work we analyzed two different schemes to converge the correlation energy: one based on a more traditional complete-basis-set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
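One common instance of the "traditional" complete-basis-set extrapolation mentioned above is the two-point inverse-cube formula; whether this particular model is the one used in the paper is an assumption, but it illustrates the idea:

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point complete-basis-set extrapolation of the correlation energy,
    assuming the inverse-cubic convergence model E(X) = E_CBS + A / X**3
    (a standard choice for correlation energies; the paper may use another).
    x and y are the cardinal numbers of the two basis sets (e.g. 3 and 4)."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)
```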

  18. Effect of sample preparation method on quantification of polymorphs using PXRD.

    Science.gov (United States)

    Alam, Shahnwaz; Patel, Sarsvatkumar; Bansal, Arvind Kumar

    2010-01-01

The purpose of this study was to improve the sensitivity and accuracy of quantitative analysis of polymorphic mixtures. Various techniques such as hand grinding and mixing (in mortar and pestle), air jet milling and ball milling were used for particle micronization and mixing to prepare binary mixtures. Using these techniques, mixtures of form I and form II of clopidogrel bisulphate were prepared in various proportions from 0-5% w/w of form I in form II and subjected to x-ray powder diffraction analysis. In order to obtain good resolution in minimum time, step time and step size were varied to optimize the scan rate. Among the six combinations, a step size of 0.05 degrees with a step time of 5 s allowed identification of the maximum number of characteristic peaks of form I in form II. Data obtained from samples prepared using both grinding and mixing in a ball mill showed good analytical sensitivity and accuracy compared to the other methods. The powder x-ray diffraction method was reproducible and precise, with an LOD of 0.29% and an LOQ of 0.91%. Validation results showed excellent correlation between actual and predicted concentrations with R2 > 0.9999.
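LOD and LOQ figures like those reported here are commonly derived from the residual scatter of a calibration line; a sketch of the standard ICH-style computation (illustrative data only, not the paper's):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def lod_loq(x, y):
    """ICH-style limits from a calibration line: LOD = 3.3*s/slope,
    LOQ = 10*s/slope, where s is the residual standard deviation."""
    slope, intercept = fit_line(x, y)
    resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    s = (sum(r * r for r in resid) / (len(x) - 2)) ** 0.5
    return 3.3 * s / slope, 10 * s / slope
```

A perfectly linear calibration gives zero residual scatter and hence zero limits; real peak-intensity data yield LOD/LOQ values like the 0.29%/0.91% reported.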

  19. An efficient method for sampling the essential subspace of proteins

    NARCIS (Netherlands)

    Amadei, A; Linssen, A.B M; de Groot, B.L.; van Aalten, D.M.F.; Berendsen, H.J.C.

A method is presented for a more efficient sampling of the configurational space of proteins as compared to conventional sampling techniques such as molecular dynamics. The method is based on the large conformational changes in proteins revealed by the "essential dynamics" analysis. A form of

  20. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    Science.gov (United States)

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four randomly located plots of 0.16 m² with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance
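A required sample size "with a predefined precision", as in aim (3), is often computed with a simple normal-approximation formula; this sketch is a generic stand-in (the study's exact variance-component model is not reproduced), using the abstract's method-2 mean and SD as example inputs:

```python
import math

def required_sample_size(mean, sd, rel_precision, z=1.96):
    """Number of random plots needed so that the ~95% confidence half-width
    is within rel_precision * mean, assuming the classical formula
    n = (z * SD / (rel_precision * mean))**2 (normal approximation)."""
    n = (z * sd / (rel_precision * mean)) ** 2
    return math.ceil(n)
```

With mean 305 L3/kg DH and SD 444, achieving even 50% relative precision already requires dozens of plots, which illustrates why sampling effort matters for highly overdispersed larval counts.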

  1. Personality correlates of equity sensitivity for samples of Canadian, Bulgarian, and Mexican business people.

    Science.gov (United States)

    Mintu-Wimsatt, Alma; Madjourova-Davri, Anna; Lozada, Héctor R

    2008-02-01

Equity sensitivity concerns perceptions of what is or is not equitable. Previous studies have shown that equity sensitivity is associated with one's relationship orientation. Relationships are also influenced by personality variables. As both personality and equity sensitivity influence relationships, they may themselves be correlated, so this study examined that possibility. The relations of equity sensitivity with three personality variables were explored across three culturally different samples. This allowed validation across cultures of the proposed equity-personality relationship, which has traditionally been assessed in a U.S. setting. In general, the personality-equity sensitivity relationship was not supported across the samples.

  2. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. From the DVHs, quantities such as the conformity index (COIN) and COIN integrals are derived. This is achieved by using partially uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in these regions. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. Application of this method requires a single preprocessing step, which takes only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points
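The idea of concentrating sampling points where the dose distribution varies most can be sketched with classical Neyman allocation; this is a standard stand-in, not the paper's exact gradient/variance-based density rule:

```python
def neyman_allocation(strata_sizes, strata_sds, total_points):
    """Allocate a fixed budget of sampling points to regions (strata)
    proportionally to N_h * SD_h (Neyman allocation): regions with large
    volume and high dose variance receive more points, so a stratified
    estimate reaches a target accuracy with far fewer total points."""
    weights = [n * s for n, s in zip(strata_sizes, strata_sds)]
    total = sum(weights)
    return [round(total_points * w / total) for w in weights]
```

For example, two equally sized regions where one has three times the dose variability would receive a 1:3 split of the point budget.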

  3. A comparative proteomics method for multiple samples based on a 18O-reference strategy and a quantitation and identification-decoupled strategy.

    Science.gov (United States)

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than other previously used comparison methods based on transferring comparison or label-free strategies. By the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
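A minimal k-means on binary code profiles, of the kind simulated in the first study, can be sketched as follows (the farthest-point initialization and toy profiles are my own choices for determinism; the study's exact clustering setup is not specified beyond the method names):

```python
def kmeans_binary(profiles, k=2, iters=10):
    """Tiny k-means for 0/1 code-profile vectors, as produced when coding
    qualitative interviews. Initial centroids: the first profile, then
    repeatedly the profile farthest from all chosen centroids."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = [list(profiles[0])]
    while len(centroids) < k:
        far = max(profiles, key=lambda p: min(d2(p, c) for c in centroids))
        centroids.append(list(far))
    labels = [0] * len(profiles)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: d2(p, centroids[c]))
                  for p in profiles]
        for c in range(k):
            members = [p for p, l in zip(profiles, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels
```

On binary data, squared Euclidean distance reduces to counting mismatches against fractional centroids, which is why k-means, hierarchical clustering and latent class analysis can behave similarly here.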

  5. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  6. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

One of the problems that occurs when working with regression models concerns sample size: since the statistical methods used in inferential analyses are asymptotic, if the sample is small the analysis may be compromised because the estimates will be biased. An alternative is the bootstrap methodology, which in its non-parametric version does not need to guess or know the probability distribution that generated the original sample. In this work we used a small set of soybean yield data and physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties which were significant in the construction of the soybean yield regression model, construct the confidence intervals of the parameters and identify the points that had great influence on the estimated parameters. (Author)
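The non-parametric bootstrap for regression parameters can be sketched as case resampling (pairs bootstrap); the percentile interval below is one of several constructions the authors could have used, and the data layout is illustrative:

```python
import random

def ols_slope(pairs):
    """Ordinary least-squares slope for (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    return sxy / sxx

def bootstrap_slope_ci(pairs, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the slope: resample (x, y) cases with
    replacement, refit on each resample, and take empirical quantiles.
    No distributional assumption about the errors is needed."""
    rng = random.Random(seed)
    slopes = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        if len({x for x, _ in sample}) < 2:   # degenerate resample
            continue
        slopes.append(ols_slope(sample))
    slopes.sort()
    lo = slopes[int(len(slopes) * alpha / 2)]
    hi = slopes[int(len(slopes) * (1 - alpha / 2)) - 1]
    return lo, hi
```

With very small samples, bias-corrected (BCa) intervals are often preferred over the plain percentile interval sketched here.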

  7. Comparison of methods for the quantification of the different carbon fractions in atmospheric aerosol samples

    Science.gov (United States)

    Nunes, Teresa; Mirante, Fátima; Almeida, Elza; Pio, Casimiro

    2010-05-01

to evaluate the possibility of continuing to use the historical data set for trend analysis, we performed an inter-comparison between our method and an adaptation of the EUSAAR-2 protocol, taking into account that this protocol will possibly be recommended for analysing carbonaceous aerosols at European sites. In this inter-comparison we tested different types of samples (PM2.5, PM2.5-10, PM10) with a large spectrum of carbon loadings, with and without pre-treatment acidification. For a reduced number of samples, five replicates of each were analysed by each method for statistical purposes. The inter-comparison study revealed that when the sample analyses were performed in similar room conditions, the two thermo-optical methods give similar results for TC, OC and EC, without significant differences at a 95% confidence level. The correlation between the methods, DAO and EUSAAR-2, is smaller for EC than for TC and OC, although showing a correlation coefficient over 0.95, with a slope close to one. For samples analysed in different periods, room temperature seems to have a significant effect on OC quantification. Sample pre-treatment with HCl fumigation tends to decrease TC quantification, mainly due to the more volatile organic fraction released during the first heating step. For a set of 20 domestic biomass burning samples analysed by the DAO method we observed an average decrease in TC quantification of 3.7% relative to non-acidified samples, even though this decrease is accompanied by an average increase in the less volatile organic fraction. The indirect measurement of carbonate carbon, usually a minor carbon component in the carbonaceous aerosol, based on the difference between TC measured by TOM of acidified and non-acidified samples, is not a robust measurement, considering the biases affecting its quantification. The present study shows that the two thermo-optical temperature programs used for OC and EC quantification give similar results, and if in the

  8. Radiochemistry methods in DOE Methods for Evaluating Environmental and Waste Management Samples: Addressing new challenges

    International Nuclear Information System (INIS)

    Fadeff, S.K.; Goheen, S.C.; Riley, R.G.

    1994-01-01

    Radiochemistry methods in Department of Energy Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) add to the repertoire of other standard methods in support of U.S. Department of Energy environmental restoration and waste management (DOE/EM) radiochemical characterization activities. Current standard sources of radiochemistry methods are not always applicable for evaluating DOE/EM samples. Examples of current sources include those provided by the US Environmental Protection Agency, the American Society for Testing and Materials, Standard Methods for the Examination of Water and Wastewater, and Environmental Measurements Laboratory Procedures Manual (HASL-300). The applicability of these methods is generally limited to specific matrices (usually water), low-level radioactive samples, and a limited number of analytes. DOE Methods complements these current standard methods by addressing the complexities of EM characterization needs. The process for determining DOE/EM radiochemistry characterization needs is discussed. In this context of DOE/EM needs, the applicability of other sources of standard radiochemistry methods is defined, and gaps in methodology are identified. Current methods in DOE Methods and the EM characterization needs they address are discussed. Sources of new methods and the methods incorporation process are discussed. The means for individuals to participate in (1) identification of DOE/EM needs, (2) the methods incorporation process, and (3) submission of new methods are identified

  9. A comprehensive comparison of perpendicular distance sampling methods for sampling downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams

    2013-01-01

    Many new methods for sampling down coarse woody debris have been proposed in the last dozen or so years. One of the most promising in terms of field application, perpendicular distance sampling (PDS), has several variants that have been progressively introduced in the literature. In this study, we provide an overview of the different PDS variants and comprehensive...

  10. A comparison of fitness-case sampling methods for genetic programming

    Science.gov (United States)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
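Of the four fitness-case sampling methods compared, Lexicase Selection is the easiest to sketch; the function signature and the error-matrix layout below are my own illustrative choices, not the paper's:

```python
import random

def lexicase_select(population, errors, rng):
    """Lexicase selection: stream the fitness cases in random order and,
    at each case, keep only the candidates with the best (lowest) error
    on that case; return a random survivor. errors[i][j] is individual
    i's error on fitness case j."""
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]
```

Because every selection event uses a different random case ordering, specialists that excel on a few hard cases can survive even if their aggregate fitness is mediocre, which is the behaviour credited with reducing overfitting on difficult regression problems.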

  11. A Method for Correlation of Gravestone Weathering and Air Quality (SO2), West Midlands, UK

    Science.gov (United States)

    Carlson, Michael John

From the beginning of the Industrial Revolution through the environmental revolution of the 1970s, Britain suffered the effects of poor air quality, primarily from particulate matter and acid in the form of NOx and SOx compounds. Air quality stations across the region recorded SO2 beginning in the 1960s; however, direct measurement of air quality prior to 1960 is lacking and only anecdotal notations exist. Proxy records including lung tissue samples, particulates in sediment cores, lake acidification studies and gravestone weathering have all been used to reconstruct the history of air quality. A 120-year record of acid deposition reconstructed from lead-lettered marble gravestone weathering, combined with SO2 measurements from the air monitoring network across the West Midlands, UK region beginning in the 1960s, forms the framework for this study. The study seeks to create a spatial and temporal correlation between the gravestone weathering and measured SO2. Successful correlation of the data sets from the 1960s to the 2000s would allow a paleo-air-quality record to be generated from the 120-year record of gravestone weathering. Decadal gravestone weathering rates can be estimated by non-linear regression analysis of stone loss at individual cemeteries. Gravestone weathering rates are interpolated across the region through Empirical Bayesian Kriging (EBK) performed in ArcGIS and through a land-use-based approach built on digitized maps of land use. Both methods of interpolation allow a direct correlation of gravestone weathering and measured SO2 to be made. Decadal-scale correlations of gravestone weathering rates and measured SO2 are very weak to non-existent for both EBK and the land-use-based approach. Decadal results combined together on a larger scale for each respective method display a better visual correlation.
However, the relative clustering of data at lower SO2 concentrations and the lack of data at higher SO2 concentrations make the

  12. Shear Strength of Remoulding Clay Samples Using Different Methods of Moulding

    Science.gov (United States)

    Norhaliza, W.; Ismail, B.; Azhar, A. T. S.; Nurul, N. J.

    2016-07-01

    The shear strength of clay soil is required to determine soil stability. Clay is a soil with complex natural formation, and it is very difficult to obtain undisturbed samples at a site. The aim of this paper was to determine the unconfined shear strength of remoulded clay prepared by three different moulding methods: proctor compaction, a hand-operated soil compacter and a miniature mould. All samples were remoulded at the same optimum moisture content (OMC) of 18% and density of 1880 kg/m3. The unconfined shear strength of the remoulded clay was 289.56 kPa at 4.8% strain for the proctor compaction method, 261.66 kPa at 4.4% strain for the hand-operated method and 247.52 kPa at 3.9% strain for the miniature mould method. Relative to the proctor compaction method, the reduction in unconfined shear strength was 9.66% for the hand-operated method and 14.52% for the miniature mould method. Because there was no significant difference in unconfined shear strength between the three methods, remoulding clay by the hand-operated and miniature mould methods is acceptable, and both are suggested for preparing remoulded clay samples in future research. For comparison, however, the hand-operated method was more suitable for forming remoulded clay samples in terms of ease, time saving and lower energy for unconfined shear strength determination purposes.
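
The reported reduction percentages follow directly from the quoted strengths; a quick sketch (values from the abstract; small rounding differences from the published figures are possible):

```python
# Reductions in unconfined shear strength relative to the proctor
# compaction baseline, using the strengths quoted in the abstract.
def reduction_pct(baseline_kpa, value_kpa):
    return 100.0 * (baseline_kpa - value_kpa) / baseline_kpa

proctor = 289.56                                  # kPa
hand_operated = reduction_pct(proctor, 261.66)    # ~9.6 %
miniature     = reduction_pct(proctor, 247.52)    # ~14.5 %
```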

  13. Overweight and Obesity: Prevalence and Correlates in a Large Clinical Sample of Children with Autism Spectrum Disorder

    Science.gov (United States)

    Zuckerman, Katharine E.; Hill, Alison P.; Guion, Kimberly; Voltolina, Lisa; Fombonne, Eric

    2014-01-01

    Autism Spectrum Disorders (ASDs) and childhood obesity (OBY) are rising public health concerns. This study aimed to evaluate the prevalence of overweight (OWT) and OBY in a sample of 376 Oregon children with ASD, and to assess correlates of OWT and OBY in this sample. We used descriptive statistics, bivariate, and focused multivariate analyses to…

  14. Petascale Many Body Methods for Complex Correlated Systems

    Science.gov (United States)

    Pruschke, Thomas

    2012-02-01

    Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and of many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in the efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them working tools for the growing community.

  15. A multi-dimensional sampling method for locating small scatterers

    International Nuclear Information System (INIS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-01-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)
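
The MUSIC-style indicator described above can be sketched in a few lines: build the multistatic response matrix, split off the noise subspace with an SVD, and scan a grid of test points. The toy setup below (2-D geometry, circular antenna array, unit-reflectivity point scatterers under the Born approximation) is an illustrative assumption, not the authors' MDSM implementation:

```python
import numpy as np

# Toy 2-D MUSIC localization. Geometry, wavenumber and reflectivities
# are illustrative assumptions.
k = 2.0 * np.pi  # wavenumber (wavelength = 1)

def green(p, q):
    """Free-space Green's-function-like kernel between points p and q."""
    r = np.linalg.norm(p - q)
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

n_ant = 20
ang = 2.0 * np.pi * np.arange(n_ant) / n_ant
antennas = 5.0 * np.stack([np.cos(ang), np.sin(ang)], axis=1)
scatterers = np.array([[0.5, -0.3], [-1.0, 0.8]])

# Multistatic response matrix: K[m, n] = sum_j g(a_m, s_j) g(s_j, a_n).
G = np.array([[green(a, s) for s in scatterers] for a in antennas])
K = G @ G.T

# Noise subspace = left singular vectors beyond the signal rank.
U, sv, _ = np.linalg.svd(K)
Un = U[:, len(scatterers):]

def pseudospectrum(p):
    """Large where the test Green's vector lies in the signal subspace,
    i.e. at scatterer locations."""
    g = np.array([green(a, p) for a in antennas])
    g /= np.linalg.norm(g)
    return 1.0 / np.linalg.norm(Un.conj().T @ g)

xs = np.linspace(-2.0, 2.0, 41)
grid = [(x, y) for x in xs for y in xs]
best = max(grid, key=lambda p: pseudospectrum(np.array(p)))  # near a scatterer
```

In this noise-free toy the pseudospectrum diverges at the true scatterer positions; the MDSM of the paper differs by building the indicator on combinatorial sampling nodes and the most stable part of the signal subspace.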

  16. Efficiency of snake sampling methods in the Brazilian semiarid region.

    Science.gov (United States)

    Mesquita, Paula C M D; Passos, Daniel C; Cechin, Sonia Z

    2013-09-01

    The choice of sampling methods is a crucial step in every field survey in herpetology. In countries where time and financial support are limited, this choice is critical. The methods used to sample snakes often lack objective criteria, and tradition has apparently carried more weight in the choice; consequently, studies using non-standardized methods are frequently found in the literature. We compared four commonly used methods for sampling snake assemblages in a semiarid area in Brazil, evaluating the efficacy of each method based on its cost-benefit regarding the number of individuals and species captured, time, and financial investment. We found that pitfall traps were the least effective method in all aspects evaluated, and they were not complementary to the other methods in terms of species abundance and assemblage structure. We conclude that methods can only be considered complementary if they are standardized to the objectives of the study. The use of pitfall traps in short-term surveys of the snake fauna in areas with shrubby vegetation and stony soil is not recommended.

  17. Bioassessment tools in novel habitats: an evaluation of indices and sampling methods in low-gradient streams in California.

    Science.gov (United States)

    Mazor, Raphael D; Schiff, Kenneth; Ritter, Kerry; Rehn, Andy; Ode, Peter

    2010-08-01

    Biomonitoring programs are often required to assess streams for which assessment tools have not been developed. For example, most bioassessment indices in California were developed in high-gradient systems, yet programs must also assess low-gradient streams. This study evaluated the performance of three sampling methods [targeted riffle composite (TRC), reach-wide benthos (RWB), and the margin-center-margin modification of RWB (MCM)] and two indices [the Southern California Index of Biotic Integrity (SCIBI) and the ratio of observed to expected taxa (O/E)] in low-gradient streams in California for application in this habitat type. Performance was evaluated in terms of efficacy (i.e., ability to collect enough individuals for index calculation), comparability (i.e., similarity of assemblages and index scores), sensitivity (i.e., responsiveness to disturbance), and precision (i.e., ability to detect small differences in index scores). The sampling methods varied in the degree to which they targeted macroinvertebrate-rich microhabitats, such as riffles and vegetated margins, which may be naturally scarce in low-gradient streams. The RWB method failed to collect sufficient numbers of individuals (i.e., ≥450) to calculate the SCIBI in 28 of 45 samples and often collected fewer than 100 individuals, suggesting it is inappropriate for low-gradient streams in California; failures for the other methods were less common (TRC, 16 samples; MCM, 11 samples). Within-site precision, measured as the minimum detectable difference (MDD), was poor but similar across methods for the SCIBI (ranging from 19 to 22). However, RWB had the lowest MDD for O/E scores (0.20 versus 0.24 and 0.28 for MCM and TRC, respectively). Mantel correlations showed that assemblages were more similar within sites among methods than within methods among sites, suggesting that the sampling methods were collecting similar assemblages of organisms. Statistically significant disagreements among methods were not detected, although O/E scores were higher…

  18. Some remarks on estimating a covariance structure model from a sample correlation matrix

    OpenAIRE

    Maydeu Olivares, Alberto; Hernández Estrada, Adolfo

    2000-01-01

    A popular model in structural equation modeling involves a multivariate normal density with a structured covariance matrix that has been categorized according to a set of thresholds. In this setup one may estimate the covariance structure parameters from the sample tetrachoric/polychoric correlations, but only if the covariance structure is scale invariant. Doing so when the covariance structure is not scale invariant results in estimating a more restricted covariance structure than the one i…

  19. Methods for Characterisation of unknown Suspect Radioactive Samples

    International Nuclear Information System (INIS)

    Sahagia, M.; Grigorescu, E.L.; Luca, A.; Razdolescu, A.C.; Ivan, C.

    2001-01-01

    Full text: The paper presents various identification and measurement methods used for the expertise of a wide variety of suspect radioactive materials whose circulation was not legally stated. The main types of examined samples were: radioactive sources, illegally trafficked; suspect radioactive materials or radioactively contaminated devices; uranium tablets; fire detectors containing 241Am sources; and osmium samples containing radioactive 185Os or enriched 187Os. The types of analyses and determination methods were as follows: the chemical composition was determined by using identification reagents or by neutron activation analysis; the radionuclide composition was determined by using gamma-ray spectrometry; the activity and particle emission rates were determined by using calibrated radiometric equipment; and the absorbed dose rate at the wall of all types of containers and samples was determined by using calibrated dose ratemeters. The radiation exposure risk for the population due to these radioactive materials was evaluated for every case. (author)

  20. Multielement methods of atomic fluorescence analysis of environmental samples

    International Nuclear Information System (INIS)

    Rigin, V.I.

    1985-01-01

    A multielement method of atomic fluorescence analysis of environmental samples is suggested, based on sample decomposition by autoclave fluorination and gas-phase atomization of volatile compounds in an inductive argon plasma, using a nondispersive polychromator. Detection limits of some elements (Be, Sr, Cd, V, Mo, Te, Ru, etc.) for the different sample forms introduced into the analyzer are given.

  1. A DOE manual: DOE Methods for Evaluating Environmental and Waste Management Samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Riley, R.G.

    1994-01-01

    Waste management inherently requires knowledge of the waste's chemical composition. The waste can often be analyzed by established methods; however, if the samples are radioactive or are plagued by other complications, established methods may not be feasible. The US Department of Energy (DOE) has been faced with managing some waste types that are not amenable to standard or available methods, so new or modified sampling and analysis methods are required. These methods are incorporated into DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods), which is a guidance/methods document for sampling and analysis activities in support of DOE sites. It is a document generated by consensus of DOE laboratory staff and is intended to fill the gap within existing guidance documents (e.g., the Environmental Protection Agency's (EPA's) Test Methods for Evaluating Solid Waste, SW-846), which apply to low-level or non-radioactive samples. DOE Methods fills the gap by including methods that take into account the complexities of DOE site matrices. The most recent update, distributed in October 1993, contained quality assurance (QA), quality control (QC), safety, sampling, organic analysis, inorganic analysis, and radioanalytical guidance as well as 29 methods. The next update, to be distributed in April 1994, will contain 40 methods and will therefore have greater applicability. All new methods are either peer reviewed or labeled ''draft'' methods; draft methods were added to speed the release of methods to field personnel.

  2. Sampling and sample preparation methods for the analysis of trace elements in biological material

    International Nuclear Information System (INIS)

    Sansoni, B.; Iyengar, V.

    1978-05-01

    The authors attempt to give as systematic a treatment as possible of sample taking and sample preparation of biological material (particularly in human medicine) for trace analysis (e.g., neutron activation analysis, atomic absorption spectrometry). Contamination and loss problems are discussed, as are the manifold problems posed by the different consistencies of solid and liquid biological materials and by the stabilization of the sample material. The processes of dry and wet ashing are dealt with in particular detail, and new methods are also described. (RB) [de

  3. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    Science.gov (United States)

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.

  4. Methods of sampling airborne fungi in working environments of waste treatment facilities.

    Science.gov (United States)

    Černá, Kristýna; Wittlingerová, Zdeňka; Zimová, Magdaléna; Janovský, Zdeněk

    2016-01-01

    The objective of the present study was to evaluate and compare the efficiency of a filter-based sampling method and a high-volume sampling method for sampling airborne culturable fungi present in waste sorting facilities: the membrane filters (MF) method was compared with the surface air system (SAS) method. The selected sampling methods were modified and tested in 2 plastic waste sorting facilities. The total number of colony-forming units (CFU)/m³ of airborne fungi depended on the type of sampling device, on the time of sampling, which was carried out every hour from the beginning of the work shift, and on the type of cultivation medium. Concentrations of airborne fungi ranged 2×10²-1.7×10⁶ CFU/m³ when using the MF method, and 3×10²-6.4×10⁴ CFU/m³ when using the SAS method. Both methods showed comparable sensitivity to the fluctuations of the concentrations of airborne fungi during the work shifts. The SAS method is adequate for a fast, indicative determination of the concentration of airborne fungi. The MF method is suitable for a thorough assessment of working-environment contamination by airborne fungi. We therefore recommend the MF method for the implementation of a uniform standard methodology of airborne fungi sampling in working environments of waste treatment facilities. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  5. Effect of prewarming the forearm on the measurement of regional cerebral blood flow with one-point venous sampling by autoradiography method

    International Nuclear Information System (INIS)

    Itoh, Youko H.; Kurabe, Teruhisa; Kazaoka, Yoshiaki; Ishiguchi, Tsuneo; Kawashima, Sadao

    2004-01-01

    Autoradiography (ARG) using 123I-iodoamphetamine (123I-IMP) is widely performed as an efficient method of measuring local cerebral blood flow. Recently, ARG with a single collection of venous blood has been appreciated as a simple method. In this study, we investigated the effect of warming the site for collecting venous blood (the forearm). The correlation coefficient between the local cerebral blood flow values obtained from arterial and venous blood samples was 0.766 (p<0.05) in the group without warming (38 patients) and 0.908 (p<0.05) in the group with warming (53 patients). The difference in the correlation coefficients between the two groups was significant (p<0.05). From these results it was concluded that warming the blood-collecting site decreased the difference between the arterial and venous radioactive concentrations and increased the precision of the test. (author)
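
The significance of the difference between two independent correlation coefficients can be checked with Fisher's z-transform; a sketch using the coefficients and group sizes quoted in the abstract (this reproduces the reported p<0.05, though the authors' exact test may differ):

```python
import math

# Fisher z-test for the difference between two independent Pearson
# correlations (two-sided). r1/n1: no-warming group, r2/n2: warming group.
def fisher_z_test(r1, n1, r2, n2):
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z2 - z1) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

z, p = fisher_z_test(0.766, 38, 0.908, 53)   # z ~ 2.29, p ~ 0.02
```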

  6. Prospects of Frequency-Time Correlation Analysis for Detecting Pipeline Leaks by Acoustic Emission Method

    International Nuclear Information System (INIS)

    Faerman, V A; Cheremnov, A G; Avramchuk, V V; Luneva, E E

    2014-01-01

    In the current work, the relevance of developing nondestructive test methods for pipeline leak detection is considered. Acoustic emission testing is currently one of the most widespread leak detection methods; its main disadvantage is that it cannot be applied to monitoring long pipeline sections, which in turn complicates and slows down the inspection of the line-pipe sections of main pipelines. The prospects of developing alternative techniques and methods based on the spectral analysis of signals are considered, and their possible application to leak detection on the basis of the correlation method is outlined. As an alternative, the calculation of a time-frequency correlation function is proposed, representing the correlation between the spectral components of the analyzed signals. The technique for calculating the time-frequency correlation function is described, and experimental data are presented that demonstrate its obvious advantage over the simple correlation function: it is more effective in suppressing noise components in the frequency range of the useful signal, which makes the maximum of the function more pronounced. The main drawback of applying time-frequency correlation analysis to leak detection problems is the great number of calculations required, which may further increase pipeline inspection time. This drawback can, however, be partially mitigated by developing and implementing efficient (including parallel) algorithms for computing the fast Fourier transform on central and graphics processing units.
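
As background, the plain correlation method that the time-frequency function refines estimates the leak position from the arrival-time difference between two sensors. A self-contained sketch with synthetic signals (pipe length, wave speed, sampling rate and noise level are all illustrative assumptions):

```python
import numpy as np

# Toy leak localization by cross-correlation: a pipe of length L with a
# sensor at each end, acoustic speed v, and a broadband leak source at
# position leak_pos from sensor A.
rng = np.random.default_rng(1)
fs = 10_000.0          # sampling rate, Hz
L, v = 100.0, 1000.0   # pipe length (m), wave speed (m/s)
leak_pos = 30.0        # true leak position from sensor A (m)

n = 4096
source = rng.normal(size=n)                 # broadband leak noise
d_a = int(round(leak_pos / v * fs))         # travel delay to A, in samples
d_b = int(round((L - leak_pos) / v * fs))   # travel delay to B, in samples
sig_a = np.roll(source, d_a) + 0.3 * rng.normal(size=n)
sig_b = np.roll(source, d_b) + 0.3 * rng.normal(size=n)

# Circular cross-correlation via the FFT; the peak lag estimates
# tau = t_a - t_b, the arrival-time difference.
r = np.fft.irfft(np.fft.rfft(sig_a) * np.conj(np.fft.rfft(sig_b)))
lag = int(np.argmax(r))
if lag > n // 2:
    lag -= n                                # wrap to negative lags
tau = lag / fs                              # tau = (d_a - d_b) / fs

# With sensors at the pipe ends, tau = (2x - L) / v, so:
x_est = (L + v * tau) / 2.0                 # estimated leak position (m)
```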

  7. Shale characteristics impact on Nuclear Magnetic Resonance (NMR) fluid typing methods and correlations

    Directory of Open Access Journals (Sweden)

    Mohamed Mehana

    2016-06-01

    Full Text Available The development of shale reservoirs has brought a paradigm shift in the worldwide energy equation. This entails developing robust techniques to properly evaluate and unlock the potential of those reservoirs. The application of Nuclear Magnetic Resonance (NMR) techniques to fluid typing and property estimation is well developed in conventional reservoirs. However, shale reservoir characteristics such as pore size, organic matter, clay content, wettability, adsorption, and mineralogy limit the applicability of the usual interpretation methods and correlations. Some of these limitations include the inapplicability of governing equations derived under the assumption of a fast-relaxation regime, the overlap of the peaks of different fluids, and the lack of robust correlations for estimating fluid properties in shale. This study presents a state-of-the-art review of the main contributions on fluid typing methods and correlations, on both the experimental and theoretical sides. The study covers Dual Tw, Dual Te, and doping-agent applications, as well as the T1-T2, D-T2 and T2sec vs. T1/T2 methods. In addition, the estimation of fluid properties such as density, viscosity and gas-oil ratio is discussed. The study investigates the applicability of these methods, together with the current fluid property correlations and their limitations, and recommends the appropriate methods and correlations for tackling shale heterogeneity.

  8. Correlations between different methods of UO2 pellet density measurement

    International Nuclear Information System (INIS)

    Yanagisawa, Kazuaki

    1977-07-01

    The density of UO2 pellets was measured by three different methods (geometrical, water-immersion and meta-xylene-immersion) and the results were treated statistically to find the correlations between the methods. The UO2 pellets were of six kinds but with the same specifications. The correlations are linear (1:1) for pellets of 95% theoretical density and above; below that level no such correlations exist, and the results vary statistically due to the interaction between open and closed pores. (auth.)

  9. Single- versus multiple-sample method to measure glomerular filtration rate.

    Science.gov (United States)

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

    There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having their own strengths and limitations. However, not only the marker, but also the methodology may vary in many ways, including the use of urinary or plasma clearance, and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight (or more) blood samples. We collected the results obtained from 5106 plasma clearances (iohexol or 51Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation using the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method showed within 10% concordances with the multiple-sample method of 66.4%, 83.6%, 91.4% and 96.0% at the time points 120, 180, 240 and ≥300 min, respectively. Concordance was poorer at lower GFR levels, and this trend is in parallel with increasing age. Results were similar in males and females. Some discordance was found in the obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
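
A minimal sketch of the multiple-sample calculation referred to above: a slope-intercept fit over the late plasma samples followed by the Bröchner-Mortensen correction. The BM coefficients used here (0.990778 and 0.001218) are the commonly cited adult values and should be treated as assumptions to verify against the original publication; the sample data are synthetic.

```python
import math

def slope_intercept_gfr(dose, times_min, conc):
    """Least-squares fit of ln C = ln C0 - k*t over the late samples;
    returns the uncorrected clearance dose*k/C0 (e.g. in mL/min)."""
    n = len(times_min)
    ys = [math.log(c) for c in conc]
    xbar = sum(times_min) / n
    ybar = sum(ys) / n
    k = -sum((x - xbar) * (y - ybar) for x, y in zip(times_min, ys)) \
        / sum((x - xbar) ** 2 for x in times_min)
    c0 = math.exp(ybar + k * xbar)
    return dose * k / c0

def brochner_mortensen(gfr):
    # Commonly cited adult correction for the slope-intercept overestimate.
    return 0.990778 * gfr - 0.001218 * gfr ** 2

# Synthetic mono-exponential late phase: C(t) = 0.01 * exp(-0.01 t)
dose = 40.0
times = [120.0, 180.0, 240.0]
conc = [0.01 * math.exp(-0.01 * t) for t in times]
gfr_raw = slope_intercept_gfr(dose, times, conc)   # 40 mL/min by construction
gfr = brochner_mortensen(gfr_raw)
```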

  10. Air sampling methods to evaluate microbial contamination in operating theatres: results of a comparative study in an orthopaedics department.

    Science.gov (United States)

    Napoli, C; Tafuri, S; Montenegro, L; Cassano, M; Notarnicola, A; Lattarulo, S; Montagna, M T; Moretti, B

    2012-02-01

    To evaluate the level of microbial contamination of air in operating theatres using active [i.e. surface air system (SAS)] and passive [i.e. index of microbial air contamination (IMA) and nitrocellulose membranes positioned near the wound] sampling systems. Sampling was performed between January 2010 and January 2011 in the operating theatre of the orthopaedics department in a university hospital in Southern Italy. During surgery, the mean bacterial loads recorded were 2232.9 colony-forming units (cfu)/m²/h with the IMA method, 123.2 cfu/m³ with the SAS method and 2768.2 cfu/m²/h with the nitrocellulose membranes. Correlation was found between the results of the three methods. Staphylococcus aureus was detected in 12 of 60 operations (20%) with the membranes, five operations (8.3%) with the SAS method, and three operations (5%) with the IMA method. Use of nitrocellulose membranes placed near a wound is a valid method for measuring the microbial contamination of air. This method was more sensitive than the IMA method and was not subject to any calibration bias, unlike active air monitoring systems. Copyright © 2011 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  11. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, obtained by simulation, under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller required sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method outperform Sobel's method, but the distribution of the product method is recommended in practice because of its lower computational load compared with bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study design.
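
For reference, Sobel's test mentioned above approximates the standard error of the indirect (mediated) effect a·b from the component path estimates; a sketch with made-up coefficients (not the paper's data):

```python
import math

# Sobel's first-order test for the indirect effect a*b.
# a, b: path estimates; se_a, se_b: their standard errors.
def sobel(a, se_a, b, se_b):
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Illustrative values only:
z, p = sobel(a=0.40, se_a=0.10, b=0.35, se_b=0.12)
```

The conservatism noted in the abstract comes from this normal approximation: the product a·b is not normally distributed in small samples, which is why the distribution-of-the-product and bootstrap approaches need fewer subjects for the same power.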

  12. Surveying immigrants without sampling frames - evaluating the success of alternative field methods.

    Science.gov (United States)

    Reichel, David; Morales, Laura

    2017-01-01

    This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sample frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples of the survey in these five countries are compared to results of official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. It can be shown that alternative sampling methods can provide meaningful results in terms of core demographic characteristics although some estimates differ to some extent from the census results.

  13. Sampling and analysis methods for geothermal fluids and gases

    Energy Technology Data Exchange (ETDEWEB)

    Shannon, D. W.

    1978-01-01

    The data obtained for the first round robin sample collected at Mesa 6-2 wellhead, East Mesa Test Site, Imperial Valley are summarized. Test results are listed by method used for cross reference to the analytic methods section. Results obtained for radioactive isotopes present in the brine sample are tabulated. The data obtained for the second round robin sample collected from the Woolsey No. 1 first stage flash unit, San Diego Gas and Electric Niland Test Facility are presented in the same manner. Lists of the participants of the two round robins are given. Data from miscellaneous analyses are included. Summaries of values derived from the round robin raw data are presented. (MHR)

  14. Efficient free energy calculations by combining two complementary tempering sampling methods.

    Science.gov (United States)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the identification of the correct RCs or requires high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height will exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD on three systems with hidden barriers has been examined. In comparison to standalone TAMD or ITS, the hybrid method shows three main improvements. (1) Sampling efficiency is improved at least fivefold, even in the presence of hidden energy barriers. (2) The canonical distribution is more accurately recovered, from which thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows further potential applications of ITS-TAMD as an efficient and powerful tool for the investigation of a broad range of interesting cases.

  15. A two-phase pressure drop calculation code based on a new method with a correlation factor obtained from an assessment of existing correlations

    International Nuclear Information System (INIS)

    Chun, Moon Hyun; Oh, Jae Guen

    1989-01-01

    Ten methods for predicting total two-phase pressure drop, based on five existing models and correlations, have been examined for their accuracy and applicability to pressurized water reactor conditions. These methods were tested against 209 experimental data points covering local and bulk boiling conditions. Each correlation was evaluated over different ranges of pressure, mass velocity and quality, and the best-performing models were identified for each data subset. A computer code entitled 'K-TWOPD' has been developed to calculate the total two-phase pressure drop using the best-performing existing correlation for a specific property range, together with a correction factor that compensates for the prediction error of the selected correlation. Assessment of this code shows that the present method fits all the available data within ±11% at a 95% confidence level, compared with ±25% for the existing correlations. (Author)
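
The selection-plus-correction strategy described for K-TWOPD can be sketched as a dispatch table. The ranges, placeholder correlations and correction factors below are entirely hypothetical; only the structure (pick the best correlation for the operating regime, then scale by a fitted factor) reflects the abstract:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    p_max: float          # upper pressure bound, MPa (hypothetical)
    x_max: float          # upper quality bound (hypothetical)
    correlation: Callable[[float, float, float], float]
    correction: float     # multiplicative factor compensating mean error

def homogeneous_dp(p_mpa, g, x):      # placeholder correlation, not physics
    return g * (1.0 + 5.0 * x)

def martinelli_like_dp(p_mpa, g, x):  # placeholder correlation, not physics
    return g * (1.0 + 8.0 * x) ** 0.9

RULES = [
    Rule(p_max=7.0,  x_max=0.1, correlation=homogeneous_dp,     correction=1.05),
    Rule(p_max=16.0, x_max=1.0, correlation=martinelli_like_dp, correction=0.97),
]

def two_phase_dp(p_mpa, g, x):
    """Dispatch to the best-performing correlation for the given
    pressure/quality subset and apply its correction factor."""
    for rule in RULES:
        if p_mpa <= rule.p_max and x <= rule.x_max:
            return rule.correction * rule.correlation(p_mpa, g, x)
    raise ValueError("conditions outside the assessed ranges")
```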

  16. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Science.gov (United States)

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  17. Soil separator and sampler and method of sampling

    Science.gov (United States)

    O'Brien, Barry H [Idaho Falls, ID; Ritter, Paul D [Idaho Falls, ID

    2010-02-16

    A soil sampler includes a fluidized bed for receiving a soil sample. The fluidized bed may be in communication with a vacuum for drawing air through the fluidized bed and suspending particulate matter of the soil sample in the air. In a method of sampling, the air may be drawn across a filter, separating the particulate matter. Optionally, a baffle or a cyclone may be included within the fluidized bed for disentrainment, or dedusting, so only the finest particulate matter, including asbestos, will be trapped on the filter. The filter may be removable, and may be tested to determine the content of asbestos and other hazardous particulate matter in the soil sample.

  18. Quantifying regional cerebral blood flow with N-isopropyl-p-[123I]iodoamphetamine and SPECT by one-point sampling method

    International Nuclear Information System (INIS)

    Odano, Ikuo; Takahashi, Naoya; Noguchi, Eikichi; Ohtaki, Hiro; Hatano, Masayoshi; Yamazaki, Yoshihiro; Higuchi, Takeshi; Ohkubo, Masaki.

    1994-01-01

    We developed a new non-invasive technique, the one-point sampling method, for quantitative measurement of regional cerebral blood flow (rCBF) with N-isopropyl-p-[ 123 I]iodoamphetamine and SPECT. Whereas the conventional microsphere method requires continuous withdrawal of arterial blood and octanol treatment of the blood, the new technique does not require these two procedures. The total activity of 123 I-IMP obtained by continuous withdrawal of arterial blood is inferred from the activity of 123 I-IMP in a single arterial sample using a regression line. To determine the optimal one-point sampling time for inferring the integral input function of the continuous withdrawal, and whether octanol treatment of the sampled blood was required, we examined the correlation between the total activity of arterial blood withdrawn from 0 to 5 min after the injection and the activity of a one-point sample obtained at time t, and calculated a regression line. The minimum % error of the inference using the regression line was obtained at 6 min after the 123 I-IMP injection; moreover, octanol treatment was not required. Examining the effect on the rCBF values when the sampling time deviated from 6 min, we could correct the values to within approximately 3% error when the sample was obtained at 6±1 min after the injection. The one-point sampling method provides accurate and relatively non-invasive measurement of rCBF without octanol extraction of arterial blood. (author)

  19. Culture methods of allograft musculoskeletal tissue samples in Australian bacteriology laboratories.

    Science.gov (United States)

    Varettas, Kerry

    2013-12-01

    Samples of allograft musculoskeletal tissue are cultured by bacteriology laboratories to determine the presence of bacteria and fungi. In Australia, this testing is performed by six TGA-licensed clinical bacteriology laboratories with samples received from 10 tissue banks. Culture methods of swab and tissue samples employ a combination of solid agar and/or broth media to enhance micro-organism growth and maximise recovery. All six Australian laboratories receive Amies transport swabs and, except for one laboratory, a corresponding biopsy sample for testing. Three of the six laboratories culture at least one allograft sample directly onto solid agar. Only one laboratory did not use a broth culture for any sample received. An international literature review found that a similar combination of musculoskeletal tissue samples was cultured onto solid agar and/or broth media. Although variations in allograft musculoskeletal tissue samples, culture media and methods exist across Australian and international bacteriology laboratories, validation studies and method evaluations have challenged and supported their use in recovering fungi and aerobic and anaerobic bacteria.

  20. The Depressive Experiences Questionnaire: validity and psychological correlates in a clinical sample.

    Science.gov (United States)

    Riley, W T; McCranie, E W

    1990-01-01

    This study sought to compare the original and revised scoring systems of the Depressive Experiences Questionnaire (DEQ) and to assess the construct validity of the Dependent and Self-Critical subscales of the DEQ in a clinically depressed sample. Subjects were 103 depressed inpatients who completed the DEQ, the Beck Depression Inventory (BDI), the Hopelessness Scale, the Automatic Thoughts Questionnaire (ATQ), the Rathus Assertiveness Schedule (RAS), and the Minnesota Multiphasic Personality Inventory (MMPI). The original and revised scoring systems of the DEQ evidenced good concurrent validity for each factor scale, but the revised system did not sufficiently discriminate dependent and self-critical dimensions. Using the original scoring system, self-criticism was significantly and positively related to severity of depression, whereas dependency was not, particularly for males. Factor analysis of the DEQ scales and the other scales used in this study supported the dependent and self-critical dimensions. For men, the correlation of the DEQ with the MMPI scales indicated that self-criticism was associated with psychotic symptoms, hostility/conflict, and a distress/exaggerated response set, whereas dependency did not correlate significantly with any MMPI scales. Females, however, did not exhibit a differential pattern of correlations between either the Dependency or the Self-Criticism scales and the MMPI. These findings suggest possible gender differences in the clinical characteristics of male and female dependent and self-critical depressive subtypes.

  1. A method and algorithm for correlating scattered light and suspended particles in polluted water

    International Nuclear Information System (INIS)

    Sami Gumaan Daraigan; Mohd Zubir Matjafri; Khiruddin Abdullah; Azlan Abdul Aziz; Abdul Aziz Tajuddin; Mohd Firdaus Othman

    2005-01-01

    An optical model has been developed for measuring total suspended solids (TSS) concentrations in water. This approach is based on the characteristics of light scattered from the suspended particles in water samples. An optical sensor system (an active spectrometer) has been developed to correlate pollutant (TSS) concentration with the scattered radiation. Scattered light was measured in terms of the output voltage of the phototransistor of the sensor system. The developed algorithm was used to estimate the concentrations of the polluted water samples, and was calibrated using the observed readings. The results display a strong correlation between the radiation values and the TSS concentrations. The proposed system yields a high degree of accuracy, with a correlation coefficient (R) of 0.99 and a root mean square error (RMS) of 63.57 mg/l. (Author)
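The calibration described above is essentially a least-squares line relating sensor voltage to TSS concentration, scored by R and RMSE. A minimal sketch in Python; the voltages and concentrations below are illustrative, not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration data: phototransistor voltage (V) vs. known TSS (mg/l)
voltage = np.array([0.12, 0.25, 0.41, 0.58, 0.74, 0.90])
tss = np.array([50.0, 120.0, 210.0, 300.0, 390.0, 480.0])

# Least-squares line: predicted TSS = a * voltage + b
a, b = np.polyfit(voltage, tss, 1)
pred = a * voltage + b

# Goodness of fit: Pearson correlation coefficient and root-mean-square error
r = np.corrcoef(tss, pred)[0, 1]
rmse = np.sqrt(np.mean((tss - pred) ** 2))
```

In practice the fitted line would be inverted at measurement time, mapping an observed voltage back to a concentration estimate.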

  2. Effect of Malmquist bias on correlation studies with IRAS data base

    Science.gov (United States)

    Verter, Frances

    1993-01-01

    The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B), as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method invented here, in which each galaxy observation is weighted by its sampling volume. The results of the correlations and regressions are significantly changed in the anticipated sense: the corrected correlation confidences are lower and the corrected slopes of the linear regressions are shallower. The elimination of Malmquist bias removes the nonlinear rise in luminosity that had caused some authors to hypothesize additional components of FIR emission.
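The inverse-sampling-volume weighting attributed to Verter (1988) can be sketched as a weighted Pearson correlation, with each galaxy weighted by 1/V_max. The toy flux-limited survey below (the luminosities and the V_max ∝ L^1.5 selection) is invented for illustration:

```python
import numpy as np

def weighted_corr(x, y, w):
    """Weighted Pearson correlation; w would be 1/V_max per galaxy."""
    w = np.asarray(w, float) / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)

# Toy survey: two correlated log-luminosities; brighter galaxies are
# detectable in a larger volume (assumed V_max ~ L**1.5 scaling)
rng = np.random.default_rng(0)
logL = rng.uniform(8, 11, 200)
logL2 = logL + rng.normal(0, 0.3, 200)
vmax = 10 ** (1.5 * (logL - 8))

r_unweighted = np.corrcoef(logL, logL2)[0, 1]
r_weighted = weighted_corr(logL, logL2, 1.0 / vmax)
```

The weighted estimate down-weights the over-represented luminous galaxies, which is why the corrected correlation confidences in the abstract come out lower.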

  3. A sensitive LC-MS/MS method for measurement of organophosphorus pesticides and their oxygen analogs in air sampling matrices.

    Science.gov (United States)

    Armstrong, Jenna L; Dills, Russell L; Yu, Jianbo; Yost, Michael G; Fenske, Richard A

    2014-01-01

    A rapid liquid chromatography tandem mass spectrometry (LC-MS/MS) method has been developed for determination of levels of the organophosphorus (OP) pesticides chlorpyrifos (CPF), azinphos methyl (AZM), and their oxygen analogs chlorpyrifos-oxon (CPF-O) and azinphos methyl-oxon (AZM-O) on common active air sampling matrices. XAD-2 resin and polyurethane foam (PUF) matrices were extracted with acetonitrile containing stable-isotope labeled internal standards (ISTD). Analysis was accomplished in Multiple Reaction Monitoring (MRM) mode, and analytes in unknown samples were identified by retention time (±0.1 min) and qualifier ratio (±30% absolute) as compared to the mean of calibrants. For all compounds, calibration linearity correlation coefficients were ≥0.996. Limits of detection (LOD) ranged from 0.15-1.1 ng/sample for CPF, CPF-O, AZM, and AZM-O on active sampling matrices. Spiked fortification recoveries were 78-113% from XAD-2 active air sampling tubes and 71-108% from PUF active air sampling tubes. Storage stability tests also yielded recoveries ranging from 74-94% after time periods ranging from 2-10 months. The results demonstrate that LC-MS/MS is a sensitive method for determining these compounds from two different matrices at the low concentrations that can result from spray drift and long range transport in non-target areas following agricultural applications. In an inter-laboratory comparison, the limit of quantification (LOQ) for LC-MS/MS was 100 times lower than a typical gas chromatography-mass spectrometry (GC-MS) method.

  4. Correlation of energy balance method to dynamic pipe rupture analysis

    International Nuclear Information System (INIS)

    Kuo, H.H.; Durkee, M.

    1983-01-01

    When using an energy balance approach in the design of pipe rupture restraints for nuclear power plants, the NRC specifies in its Standard Review Plan 3.6.2 that the input energy to the system must be multiplied by a factor of 1.1 unless a lower value can be justified. Since the energy balance method is already quite conservative, an across-the-board use of 1.1 to amplify the energy input appears unnecessary. The purpose of this paper is to show that this 'correlation factor' can be substantially less than unity if certain design parameters are met. In this paper, results of nonlinear dynamic analyses were compared with the results of corresponding analyses based on the energy balance method, which assumes constant blowdown forces and rigid-plastic material properties. The correlation factors required to match the energy balance results with the dynamic analysis results were related to design parameters such as restraint location from the break, yield strength of the energy absorbing component, and the restraint gap. It is shown that the correlation factor is related to a single nondimensional design parameter and can be limited to a value below unity if appropriate design parameters are chosen. It is also shown that the deformation of the restraints can be related to dimensionless system parameters, which allows the maximum restraint deformation to be evaluated directly for design purposes. (orig.)

  5. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matchin...

  6. Automated correlation and classification of secondary ion mass spectrometry images using a k-means cluster method.

    Science.gov (United States)

    Konicek, Andrew R; Lefman, Jonathan; Szakal, Christopher

    2012-08-07

    We present a novel method for correlating and classifying ion-specific time-of-flight secondary ion mass spectrometry (ToF-SIMS) images within a multispectral dataset by grouping images with similar pixel intensity distributions. Binary centroid images are created by employing a k-means-based custom algorithm. Centroid images are compared to grayscale SIMS images using a newly developed correlation method that assigns the SIMS images to classes that have similar spatial (rather than spectral) patterns. Image features of both large and small spatial extent are identified without the need for image pre-processing, such as normalization or fixed-range mass-binning. A subsequent classification step tracks the class assignment of SIMS images over multiple iterations of increasing n classes per iteration, providing information about groups of images that have similar chemistry. Details are discussed while presenting data acquired with ToF-SIMS on a model sample of laser-printed inks. This approach can lead to the identification of distinct ion-specific chemistries for mass spectral imaging by ToF-SIMS, as well as matrix-assisted laser desorption ionization (MALDI), and desorption electrospray ionization (DESI).
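The grouping step, clustering images by their pixel-intensity patterns, can be sketched with a bare-bones k-means over flattened images. Everything below (the synthetic patterns, the deterministic seeding of one center per group) is a simplified stand-in for the authors' custom algorithm, not a reproduction of it:

```python
import numpy as np

def kmeans(X, init, iters=20):
    """Minimal Lloyd's k-means on the rows of X with explicit initial centers."""
    centers = X[init].copy()
    for _ in range(iters):
        # Squared Euclidean distance of every image to every centroid
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(len(init)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Hypothetical stack of flattened ion images (n_images x n_pixels): two groups
# of images sharing two distinct spatial patterns, plus per-pixel noise
rng = np.random.default_rng(1)
pattern_a = 5 * np.linspace(0, 1, 64)
pattern_b = 5 * np.linspace(1, 0, 64)
stack = np.vstack([pattern_a + rng.random((3, 64)),
                   pattern_b + rng.random((3, 64))])

# Seed one center in each group for a reproducible sketch
labels, centers = kmeans(stack, init=[0, 5])
# Images with similar pixel-intensity distributions end up sharing a class label
```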

  7. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali; Pä tzold, Matthias

    2012-01-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF, and exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.

  8. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali

    2012-04-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF, and exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
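The frequency averaging baseline that these two records improve upon can be sketched directly: average H(f)H*(f+Δf) over the band for each frequency separation. The three-path transfer function below is synthetic, and no kernel (the papers' actual contribution) is applied:

```python
import numpy as np

# Synthetic time-invariant frequency-selective transfer function:
# a superposition of L paths with assumed delays tau_l and gains a_l
f = np.linspace(0, 20e6, 2001)               # frequency grid [Hz]
taus = np.array([0.0, 0.4e-6, 1.1e-6])       # path delays [s] (assumed)
gains = np.array([1.0, 0.7, 0.4])            # path gains (assumed)
H = (gains[:, None] * np.exp(-2j * np.pi * taus[:, None] * f[None, :])).sum(0)

def fcf_freq_avg(H, max_lag):
    """FCF estimate by frequency averaging: R[k] = <H(f) H*(f + k*df)>_f."""
    n = len(H)
    return np.array([np.mean(H[:n - k] * np.conj(H[k:])) for k in range(max_lag)])

R = fcf_freq_avg(H, 200)
# R[0] is the mean channel power; |R[k]| decays as the separation grows,
# and the coherence bandwidth can be read off from that decay
```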

  9. Treatment of Nuclear Data Covariance Information in Sample Generation

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wieselquist, William [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division

    2017-10-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section covariance matrices.

  10. Treatment of Nuclear Data Covariance Information in Sample Generation

    International Nuclear Information System (INIS)

    Swiler, Laura Painton; Adams, Brian M.; Wieselquist, William

    2017-01-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section covariance matrices.
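One common way to make an ill-conditioned covariance usable for sampling, not necessarily the report's exact procedure, is to floor the eigenvalue spectrum before factoring it; a plain Cholesky factorization would fail on a numerically singular matrix:

```python
import numpy as np

def sample_from_covariance(cov, n, seed=0, eig_floor=1e-10):
    """Draw correlated Gaussian samples from a possibly ill-conditioned
    covariance matrix by clipping small or negative eigenvalues
    (an assumed regularization, not the report's method)."""
    cov = 0.5 * (cov + cov.T)                   # symmetrize against round-off
    w, V = np.linalg.eigh(cov)
    w = np.clip(w, eig_floor * w.max(), None)   # floor the spectrum
    L = V * np.sqrt(w)                          # factor satisfying L @ L.T ≈ cov
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, cov.shape[0]))
    return z @ L.T

# Nearly singular 3x3 covariance (strongly correlated "energy groups")
cov = np.array([[1.0, 0.999, 0.5],
                [0.999, 1.0, 0.5],
                [0.5, 0.5, 1.0]])
samples = sample_from_covariance(cov, 50000)
emp = np.cov(samples, rowvar=False)   # empirical covariance of the draws
```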

  11. Correlation methods in cutting arcs

    Energy Technology Data Exchange (ETDEWEB)

    Prevosto, L; Kelly, H, E-mail: prevosto@waycom.com.ar [Grupo de Descargas Electricas, Departamento Ing. Electromecanica, Universidad Tecnologica Nacional, Regional Venado Tuerto, Laprida 651, Venado Tuerto (2600), Santa Fe (Argentina)

    2011-05-01

    The present work applies similarity theory to the plasma emanating from transferred arc, gas-vortex stabilized plasma cutting torches, to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operating parameters.

  12. Correlation methods in cutting arcs

    International Nuclear Information System (INIS)

    Prevosto, L; Kelly, H

    2011-01-01

    The present work applies similarity theory to the plasma emanating from transferred arc, gas-vortex stabilized plasma cutting torches, to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operating parameters.

  13. Comparison of sampling methods for radiocarbon dating of carbonyls in air samples via accelerator mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Schindler, Matthias, E-mail: matthias.schindler@physik.uni-erlangen.de; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander

    2016-05-15

    Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline semicarbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO{sub 2} and reduced to graphite to determine {sup 14}C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).

  14. Comparison of sampling methods for radiocarbon dating of carbonyls in air samples via accelerator mass spectrometry

    Science.gov (United States)

    Schindler, Matthias; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander

    2016-05-01

    Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline semicarbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).

  15. Assessment of reagent effectiveness and preservation methods for equine faecal samples

    Directory of Open Access Journals (Sweden)

    Eva Vavrouchova

    2015-03-01

    The aim of our study was to identify the most suitable flotation solution and the most effective preservation method for the examination of equine faecal samples using the FLOTAC technique. Samples from naturally infected horses were transported to the laboratory and analysed. The sample from each horse was homogenized and divided into four parts: one was frozen, two were preserved in different reagents, sodium acetate-acetic-acid-formalin (SAF) or 5% formalin, and the last part was examined as a fresh sample in three different flotation solutions (Sheather's sucrose solution, sodium chloride solution and sodium nitrate solution, all with a specific gravity of 1.200). The preserved samples were examined 14 to 21 days after collection. According to our results, the sucrose solution was the most suitable flotation solution for fresh samples (small strongyle eggs per gram: 706, compared with 360 in sodium chloride and 507 in sodium nitrate), and the sodium nitrate solution was the most efficient for the preserved samples (eggs per gram: 382, compared with 295 in salt solution and 305 in sucrose solution). Freezing appears to be the most effective method of sample preservation, resulting in minimal damage to fragile strongyle eggs; it is therefore the simplest and most effective preservation method for the examination of large numbers of faecal samples without the necessity of examining them all within 48 hours of collection. To our knowledge, deep freezing as a preservation method for equine faecal samples has not yet been reported.

  16. Partial correlation analysis method in ultrarelativistic heavy-ion collisions

    Science.gov (United States)

    Olszewski, Adam; Broniowski, Wojciech

    2017-11-01

    We argue that statistical data analysis of two-particle longitudinal correlations in ultrarelativistic heavy-ion collisions may be efficiently carried out with the technique of partial covariance. In this method, the spurious event-by-event fluctuations due to imprecise centrality determination are eliminated by projecting out the component of the covariance influenced by the centrality fluctuations. We bring up the relationship of the partial covariance to the conditional covariance. Importantly, in the superposition approach, where hadrons are produced independently from a collection of sources, the framework allows us to impose centrality constraints on the number of sources rather than on the number of hadrons, thereby unfolding the trivial fluctuations from statistical hadronization and focusing better on the initial-state physics. We show, using simulated data from hydrodynamics followed with statistical hadronization, that the technique is practical and very simple to use, giving insight into the correlations generated in the initial stage. We also discuss the issues related to separation of the short- and long-range components of the correlation functions and show that in our example the short-range component from the resonance decays is largely reduced by considering pions of the same sign. We demonstrate the method explicitly on the cases where centrality is determined with a single central control bin or with two peripheral control bins.
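The projection described above is the standard partial-covariance formula, cov(A,B|C) = cov(A,B) - cov(A,C) cov(C,C)^{-1} cov(C,B). A sketch with a single control variable playing the role of the central control bin; the superposition toy model below is invented for illustration:

```python
import numpy as np

def partial_cov(a, b, c):
    """Partial covariance of a and b with the control variable c projected out:
    cov(a,b) - cov(a,c) * cov(c,b) / cov(c,c)."""
    a, b, c = (np.asarray(v, float) for v in (a, b, c))
    def cov(x, y):
        return np.mean((x - x.mean()) * (y - y.mean()))
    return cov(a, b) - cov(a, c) * cov(c, b) / cov(c, c)

# Toy "events": multiplicities in two rapidity bins driven by a fluctuating
# number of sources s (a centrality proxy), plus independent noise
rng = np.random.default_rng(3)
s = rng.poisson(100, 200000).astype(float)
na = s + rng.normal(0, 5, s.size)
nb = s + rng.normal(0, 5, s.size)

raw = np.mean((na - na.mean()) * (nb - nb.mean()))  # inflated by s-fluctuations
part = partial_cov(na, nb, s)                       # ~0 once s is projected out
```

With the control on the number of sources, the spurious covariance from centrality fluctuations disappears, leaving only the genuine dynamical correlation (zero in this toy model).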

  17. Determination of velocity vector angles using the directional cross-correlation method

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt

    2005-01-01

    A method for determining both velocity magnitude and angle in any direction is suggested. The method uses focusing along the velocity direction and cross-correlation for finding the correct velocity magnitude. The angle is found by beamforming directional signals in a number of directions and then selecting the angle with the highest normalized correlation between directional signals. The approach is investigated using Field II simulations and data from the experimental ultrasound scanner RASMUS, with a parabolic flow having a peak velocity of 0.3 m/s. A 7 MHz linear array transducer is used ... (...-time) between signals to correlate, and a proper choice varies with flow angle and flow velocity. One performance example is given with a fixed value of k tprf for all flow angles. The angle estimation on measured data for flow at 60° to 90° yields a probability of valid estimates between 68% and 98...
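The core magnitude estimate, finding the lag that maximizes the normalized cross-correlation between directional signals from successive emissions, can be sketched as follows. The synthetic signals and the velocity-conversion names are assumptions for illustration, not the paper's setup:

```python
import numpy as np

def xcorr_lag(s1, s2, max_lag):
    """Lag (in samples) maximizing the normalized cross-correlation of s1, s2."""
    best_lag, best_rho = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        a = s1[max(0, lag):len(s1) + min(0, lag)]
        b = s2[max(0, -lag):len(s2) + min(0, -lag)]
        rho = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if rho > best_rho:
            best_lag, best_rho = lag, rho
    return best_lag

# Synthetic directional signals: the second is the first shifted by 7 samples,
# as if the scatterers moved along the beamformed direction between emissions
rng = np.random.default_rng(7)
sig = rng.normal(0, 1, 500)
s1, s2 = sig[50:450], sig[57:457]
lag = xcorr_lag(s1, s2, 20)

# Hypothetical conversion: velocity = lag * sample_spacing / pulse_rep_time
```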

  18. A New Wavelet Threshold Determination Method Considering Interscale Correlation in Signal Denoising

    Directory of Open Access Journals (Sweden)

    Can He

    2015-01-01

    Due to its simple calculation and good denoising effect, the wavelet threshold denoising method has been widely used in signal denoising. In this method, the threshold is an important parameter that affects the denoising effect. In order to improve on the denoising effect of existing methods, a new threshold considering interscale correlation is presented. Firstly, a new correlation index is proposed based on the propagation characteristics of the wavelet coefficients. Then, a threshold determination strategy is obtained using the new index. At the end of the paper, a simulation experiment is given to verify the effectiveness of the proposed method. In the experiment, four benchmark signals are used as test signals. Simulation results show that the proposed method achieves a good denoising effect under various signal types, noise intensities, and thresholding functions.
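As a rough illustration of the idea, not the paper's index or threshold, the sketch below soft-thresholds a two-level Haar decomposition and relaxes the threshold where the coarser-scale "parent" coefficient is large, since true signal features tend to persist across scales while noise does not:

```python
import numpy as np

def haar_level(x):
    """One Haar DWT level: (approximation, detail)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inv_haar_level(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, sigma):
    """Two-level Haar denoising; the fine-scale threshold is relaxed where the
    coarser-scale parent coefficient is large (a crude stand-in for the
    paper's interscale-correlation index)."""
    a1, d1 = haar_level(x)
    a2, d2 = haar_level(a1)
    t = sigma * np.sqrt(2 * np.log(len(x)))         # universal threshold
    parent = np.repeat(d2, 2)                       # parent of each d1 coeff
    scale = np.where(np.abs(parent) > t, 0.5, 1.0)  # correlated -> keep more
    d1s = np.sign(d1) * np.maximum(np.abs(d1) - scale * t, 0)
    d2s = np.sign(d2) * np.maximum(np.abs(d2) - t, 0)
    return inv_haar_level(inv_haar_level(a2, d2s), d1s)

# Noisy step signal
rng = np.random.default_rng(4)
clean = np.repeat([0.0, 4.0], 64)
noisy = clean + rng.normal(0, 0.5, clean.size)
out = denoise(noisy, 0.5)
```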

  19. Extending the alias Monte Carlo sampling method to general distributions

    International Nuclear Information System (INIS)

    Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.

    1991-01-01

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods: it equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
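The discrete alias method that this paper extends (Walker's construction) can be sketched compactly: build two tables in O(n) time, after which each draw costs O(1) regardless of the number of bins:

```python
import random

def build_alias(probs):
    """Walker alias tables for an arbitrary discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    alias, cut = [0] * n, [1.0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        cut[s], alias[s] = scaled[s], l       # bin s donates its deficit to l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    return cut, alias

def draw(cut, alias, rng):
    """O(1) per sample: one uniform picks the bin, another picks bin or alias."""
    i = rng.randrange(len(cut))
    return i if rng.random() < cut[i] else alias[i]

rng = random.Random(42)
cut, alias = build_alias([0.1, 0.2, 0.3, 0.4])
counts = [0, 0, 0, 0]
for _ in range(100000):
    counts[draw(cut, alias, rng)] += 1
# Empirical frequencies approach 0.1, 0.2, 0.3, 0.4
```

The paper's contribution is extending this table construction from discrete and histogram distributions to piecewise linear continuous ones; the discrete core above is unchanged by that extension.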

  20. Research and application of sampling and analysis method of sodium aerosol

    International Nuclear Information System (INIS)

    Yu Xiaochen; Guo Qingzhou; Wen Ximeng

    1998-01-01

    A sampling and analysis method for sodium aerosol has been developed. Vacuum sampling technology is used in the sampling process, and the analysis methods adopted are volumetric analysis and atomic absorption. When the absolute content of sodium is in the range of 0.1 mg to 1.0 mg, the deviation between the results of volumetric analysis and atomic absorption is less than 2%. The method has been applied successfully in a sodium aerosol removal device. The analysis range, accuracy and precision meet the requirements for sodium aerosol research.

  1. A novel method for fission product noble gas sampling

    International Nuclear Information System (INIS)

    Jain, S.K.; Prakash, Vivek; Singh, G.K.; Vinay, Kr.; Awsthi, A.; Bihari, K.; Joyson, R.; Manu, K.; Gupta, Ashok

    2008-01-01

    Noble gases occur to some extent in the Earth's atmosphere, but the concentrations of all but argon are exceedingly low; argon is plentiful, constituting almost 1% of the air. Fission product noble gases (FPNG) are produced by nuclear fission, and large quantities of FPNG are produced in nuclear reactors. FPNG are beta-gamma emitters and contribute significantly to public dose. During normal operation of a reactor the release of FPNG is negligible, but it increases in case of fuel failure. Xenon, a member of the FPNG family, helps in identifying fuel failure and its extent in PHWRs. For the above reasons it becomes necessary to assess the FPNG release during operation of NPPs. The methodology of FPNG assessment presently used at almost all power stations is computer-based gamma-ray spectrometry, which identifies fission product noble gas nuclides through a peak search of the spectra. The air sample for this is collected by the grab sampling method, which has inherent disadvantages. An alternate method, which uses adsorption for the collection of air samples, was developed at Rajasthan Atomic Power Station (RAPS) 3 and 4 for the assessment of FPNG. This report presents details of the sampling method for FPNG and noble gases in different systems of a nuclear power plant. (author)

  2. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution

    OpenAIRE

    Han, Fang; Liu, Han

    2016-01-01

    Correlation matrices play a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state-of-the-art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions. As a robust alternative, Han and Liu [J. Am. Stat. Assoc. 109 (2015) 275-2...

  3. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    Science.gov (United States)

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological processes are temporally dependent. Hydrological time series that include dependence components do not meet the data-consistency assumption of hydrological computation, and both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for the significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, the method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from the first order to the p-th order, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was used to analyze three observed hydrological time series; the results indicate the coexistence of stochastic and dependence characteristics in hydrological processes.
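
    The classification step described above can be sketched in a short numerical experiment: simulate an AR(1) series, correlate it with its fitted dependence component, and grade the result. The threshold values below are illustrative placeholders, not the thresholds selected in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(phi, n=5000):
    """Simulate an AR(1) series x_t = phi * x_{t-1} + eps_t."""
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def classify(r, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Map a correlation coefficient to a variability grade.
    These threshold values are illustrative, not the paper's."""
    labels = ["no", "weak", "mid", "strong", "drastic"]
    return labels[np.searchsorted(thresholds, abs(r))]

phi = 0.7
x = ar1(phi)
dependence = phi * x[:-1]          # dependence component of x_t under AR(1)
r = np.corrcoef(x[1:], dependence)[0, 1]
print(round(r, 2), classify(r))    # r is close to phi for an AR(1) model
```

    For an AR(1) model the correlation between the series and its dependence component equals the lag-one auto-correlation, which is why the printed coefficient tracks phi.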

  4. Random matrix theory analysis of cross-correlations in the US stock market: Evidence from Pearson’s correlation coefficient and detrended cross-correlation coefficient

    Science.gov (United States)

    Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan

    2013-09-01

    In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely the Pearson’s correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we mainly investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of the S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find some new results on the cross-correlations in the US stock market, which differ from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales in the US stock market, which are useful for risk management and optimal portfolio selection, especially for diversifying the asset portfolio. It will be interesting and meaningful future work to find the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient, because it does not obey the Marčenko-Pastur distribution.
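
    The second of the two coefficients can be illustrated in code. The sketch below computes a DCCA coefficient at one window size using non-overlapping windows and linear detrending of the integrated profiles; it is a minimal illustration, not the exact estimator or data used in the study.

```python
import numpy as np

def rho_dcca(x, y, s):
    """Detrended cross-correlation (DCCA) coefficient at window size s.
    Minimal sketch: non-overlapping windows, local linear detrending."""
    X = np.cumsum(x - np.mean(x))          # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(s)
    f_xy = f_xx = f_yy = 0.0
    for k in range(len(x) // s):
        xs = X[k * s:(k + 1) * s]
        ys = Y[k * s:(k + 1) * s]
        # remove the local linear trend in each window
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)

rng = np.random.default_rng(1)
z = rng.standard_normal(4000)
x = z + 0.3 * rng.standard_normal(4000)   # two noisy copies of a common signal
y = z + 0.3 * rng.standard_normal(4000)
print(round(rho_dcca(x, y, 32), 2))       # strongly positive
```

    Like the Pearson coefficient, the result lies in [-1, 1]; unlike it, the window size s lets the correlation be probed scale by scale.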

  5. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Science.gov (United States)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
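
    MOCC itself combines multiscale oriented patches with rotation-invariant descriptors; its basic building block, a correlation-based similarity between feature-point patches, reduces to zero-mean normalized cross-correlation, sketched below (illustrative code, not the authors' implementation).

```python
import numpy as np

def zncc(p, q):
    """Zero-mean normalized cross-correlation between two equal-size patches.
    Returns a similarity in [-1, 1]; 1 means identical up to gain/offset."""
    p = p.astype(float).ravel()
    q = q.astype(float).ravel()
    p -= p.mean()
    q -= q.mean()
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

rng = np.random.default_rng(2)
patch = rng.random((9, 9))
same = 2.0 * patch + 5.0            # brightness/contrast change only
other = rng.random((9, 9))
print(round(zncc(patch, same), 3))  # 1.0: invariant to affine intensity change
print(round(zncc(patch, other), 3))
```

    The invariance to affine intensity changes is what makes correlation measures attractive for matching; rotation and scale invariance are exactly what MOCC adds on top.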

  6. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, each with an associated cost, to transform the time-series segments, we obtain a new time series, the transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, which is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
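
    The transformation-cost idea can be illustrated with a deliberately simplified sketch: segments as (time, amplitude) points, matched in order, with unit costs for time shifts, amplitude changes, and point creation/deletion. The operation set and cost weights here are toy choices; the actual TACTS costs are calibrated to the data.

```python
def transformation_cost(seg_a, seg_b, lam_t=1.0, lam_a=1.0, lam_0=1.0):
    """Toy transformation cost between two irregularly sampled segments,
    each a list of (time, amplitude) pairs. Points are matched in order;
    cost = time shifts + amplitude changes + add/delete penalties.
    The operations and weights (lam_*) are illustrative choices."""
    cost = 0.0
    for (ta, xa), (tb, xb) in zip(seg_a, seg_b):
        cost += lam_t * abs(ta - tb) + lam_a * abs(xa - xb)
    cost += lam_0 * abs(len(seg_a) - len(seg_b))  # create/delete unmatched points
    return cost

a = [(0.0, 1.0), (0.4, 1.2), (1.1, 0.8)]
b = [(0.1, 1.0), (0.5, 1.1), (1.0, 0.9)]
print(transformation_cost(a, b))        # small: the segments are similar
print(transformation_cost(a, b[:2]))    # deleting a point adds a penalty
```

    Sliding this cost over consecutive segments yields a regularly sampled cost series that standard methods can then analyze, which is the core trick of the approach.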

  8. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods do not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  9. Different methods for volatile sampling in mammals.

    Directory of Open Access Journals (Sweden)

    Marlen Kücklich

    Full Text Available Previous studies showed that olfactory cues are important for mammalian communication. However, many specific compounds that convey information between conspecifics are still unknown. To understand the mechanisms and functions of olfactory cues, olfactory signals such as volatile compounds emitted from individuals need to be assessed. Sampling of animals with and without scent glands was typically conducted using cotton swabs rubbed over the skin or fur and analysed by gas chromatography-mass spectrometry (GC-MS). However, this method has various drawbacks, including a high level of contamination. Thus, we adapted two methods of volatile sampling from other research fields and compared them to sampling with cotton swabs. To do so we assessed the body odor of common marmosets (Callithrix jacchus) using cotton swabs, thermal desorption (TD) tubes and, alternatively, a mobile GC-MS device containing a thermal desorption trap. Overall, TD tubes comprised the most compounds (N = 113), with half of those compounds being volatile (N = 52). The mobile GC-MS captured the fewest compounds (N = 35), all of which were volatile. Cotton swabs contained an intermediate number of compounds (N = 55), but very few volatiles (N = 10). Almost all compounds found with the mobile GC-MS were also captured with TD tubes (94%). Hence, we recommend TD tubes for state-of-the-art sampling of the body odor of mammals or other vertebrates, particularly for field studies, as they can be easily transported, stored and analysed with high-performance instruments in the lab. Nevertheless, cotton swabs capture compounds which may still contribute to the body odor, e.g. after bacterial fermentation, while profiles from the mobile GC-MS include only the most abundant volatiles of the body odor.

  10. Quantification of the neurotoxic beta-carboline harmane in barbecued/grilled meat samples and correlation with level of doneness.

    Science.gov (United States)

    Louis, Elan D; Zheng, Wei; Jiang, Wendy; Bogen, Kenneth T; Keating, Garrett A

    2007-06-01

    Harmane, one of the heterocyclic amines (HCAs), is a potent neurotoxin linked to human diseases. Dietary exposure, especially in cooked meats, is the major source of exogenous exposure for humans. However, knowledge of harmane concentrations in cooked meat samples is limited. Our goals were to (1) quantify the concentration of harmane in different types of cooked meat samples, (2) compare its concentration to that of other more well-understood HCAs, and (3) examine the relationship between harmane concentration and level of doneness. Thirty barbecued/grilled meat samples (8 beef steak, 12 hamburger, 10 chicken) were analyzed for harmane and four other HCAs (2-amino-1-methyl-6-phenylimidazo [4,5-b]pyridine [PhIP], amino-3,8-dimethylimidazo[4,5-f]quinoxaline [MeIQx], 2-amino-3,4,8-trimethylimidazo[4,5-f]quinoxaline [DiMeIQx], and 2-amino-1,6-dimethylfuro[3,2-e]imidazo[4,5-b]pyridine [IFP]). Mean (+/- SD) harmane concentration was 5.63 (+/- 6.63) ng/g; harmane concentration was highest in chicken (8.48 +/- 9.86 ng/g) and lowest in beef steak (3.80 +/- 3.6 ng/g). Harmane concentration was higher than that of the other HCAs and significantly correlated with PhIP concentration. Harmane concentration was associated with meat doneness in samples of cooked beef steak and hamburger, although the correlation between meat doneness and concentration was greater for PhIP than for harmane. Evidence indicates that harmane was detectable in nanograms per gram quantities in cooked meat (especially chicken) and, moreover, was more abundant than other HCAs. There was some correlation between meat doneness and harmane concentration, although this correlation was less robust than that observed for PhIP. Data such as these may be used to improve estimation of human dietary exposure to this neurotoxin.

  11. Quantification of the Neurotoxic β-Carboline Harmane in Barbecued/Grilled Meat Samples and Correlation with Level of Doneness

    Science.gov (United States)

    Louis, Elan D.; Zheng, Wei; Jiang, Wendy; Bogen, Kenneth T.; Keating, Garrett A.

    2016-01-01

    Harmane, one of the heterocyclic amines (HCAs), is a potent neurotoxin linked to human diseases. Dietary exposure, especially in cooked meats, is the major source of exogenous exposure for humans. However, knowledge of harmane concentrations in cooked meat samples is limited. Our goals were to (1) quantify the concentration of harmane in different types of cooked meat samples, (2) compare its concentration to that of other more well-understood HCAs, and (3) examine the relationship between harmane concentration and level of doneness. Thirty barbecued/grilled meat samples (8 beef steak, 12 hamburger, 10 chicken) were analyzed for harmane and four other HCAs (2-amino-1-methyl-6-phenylimidazo [4,5-b]pyridine [PhIP], amino-3,8-dimethylimidazo[4,5-f]quinoxaline [MeIQx], 2-amino-3,4,8-trimethylimidazo[4,5-f]quinoxaline [DiMeIQx], and 2-amino-1,6-dimethylfuro[3,2-e]imidazo[4,5-b]pyridine [IFP]). Mean (± SD) harmane concentration was 5.63 (± 6.63) ng/g; harmane concentration was highest in chicken (8.48 ± 9.86 ng/g) and lowest in beef steak (3.80 ± 3.6 ng/g). Harmane concentration was higher than that of the other HCAs and significantly correlated with PhIP concentration. Harmane concentration was associated with meat doneness in samples of cooked beef steak and hamburger, although the correlation between meat doneness and concentration was greater for PhIP than for harmane. Evidence indicates that harmane was detectable in nanograms per gram quantities in cooked meat (especially chicken) and, moreover, was more abundant than other HCAs. There was some correlation between meat doneness and harmane concentration, although this correlation was less robust than that observed for PhIP. Data such as these may be used to improve estimation of human dietary exposure to this neurotoxin. PMID:17497412

  12. Non-uniform sampling and wide range angular spectrum method

    International Nuclear Information System (INIS)

    Kim, Yong-Hae; Byun, Chun-Won; Oh, Himchan; Lee, JaeWon; Pi, Jae-Eun; Heon Kim, Gi; Lee, Myung-Lae; Ryu, Hojun; Chu, Hye-Yong; Hwang, Chi-Sun

    2014-01-01

    A novel method is proposed for simulating free-space field propagation from a source plane to a destination plane that is applicable to both small and large propagation distances. The angular spectrum method (ASM) is widely used for simulating near-field propagation, but it causes a numerical error when the propagation distance is large because of aliasing due to under-sampling. The band-limited ASM satisfies the Nyquist condition on sampling by limiting the bandwidth of the propagated field to avoid the aliasing error, which extends the applicable propagation distance of the ASM. However, the band-limited ASM also introduces an error due to the decrease of the effective sampling number in Fourier space when the propagation distance is large. In the proposed wide-range ASM, we use non-uniform sampling in Fourier space to keep the effective sampling number constant even when the propagation distance is large. As a result, the wide-range ASM produces simulation results with high accuracy for both far- and near-field propagation. For non-paraxial wave propagation, we applied the wide-range ASM to a shifted destination plane as well. (paper)
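
    The band-limited ASM that this record improves upon can be sketched in a few lines of NumPy. This is an illustrative implementation with one common choice of band limit, not the authors' wide-range, non-uniformly sampled version.

```python
import numpy as np

def asm_propagate(u0, wavelength, pitch, z):
    """Free-space propagation by the band-limited angular spectrum method.
    u0: complex field sampled on an N x N grid with sample pitch (metres).
    Minimal sketch: uniform sampling, scalar diffraction assumed."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fx2, fy2 = np.meshgrid(fx**2, fx**2, indexing="ij")
    arg = np.maximum(1.0 / wavelength**2 - fx2 - fy2, 0.0)
    h = np.exp(2j * np.pi * z * np.sqrt(arg))   # ASM transfer function
    # one common band-limit choice to suppress the aliasing error at large z
    f_lim = 1.0 / (wavelength * np.sqrt((2.0 * z / (n * pitch)) ** 2 + 1.0))
    h *= (fx2 <= f_lim**2) & (fy2 <= f_lim**2)
    return np.fft.ifft2(np.fft.fft2(u0) * h)

u0 = np.zeros((256, 256), dtype=complex)
u0[96:160, 96:160] = 1.0                      # square aperture
u1 = asm_propagate(u0, 633e-9, 10e-6, 5e-3)   # HeNe wavelength, 5 mm distance
print(round(float(np.abs(u1).max()), 3))
```

    For propagating (non-evanescent) frequencies inside the band limit the transfer function has unit modulus, so the propagated field conserves energy; the wide-range method of the record replaces the uniform Fourier grid with a non-uniform one to keep the effective sample count high at large z.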

  13. CT in the staging of bronchogenic carcinoma: Analysis by correlative lymph node mapping and sampling

    International Nuclear Information System (INIS)

    McLoud, T.C.; Woldenberg, R.; Mathisen, D.J.; Grillo, H.C.; Bourgoulin, P.M.; Shepard, J.O.; Moore, E.H.

    1987-01-01

    Although previous studies have evaluated the accuracy of CT in staging the mediastinum in bronchogenic carcinoma, none has determined the sensitivity and specificity of CT in the assessment of individual lymph node groups by correlative nodal sampling at surgery. CT scans were performed on 84 patients with bronchogenic carcinoma. Abnormal nodes (≥ 1 cm) were localized according to the ATS classification of regional lymph node mapping. Seventy-nine patients had mediastinoscopy and 64 patients underwent thoracotomy. In each case, biopsies of lymph node groups 2R, 4R, 2L, 4L (paratracheal), 7 (subcarinal), and 5 (aorticopulmonary) were performed on the appropriate side. Hilar nodes (10R and 11R, 10L and 11L) were resected with the surgical specimen. A total of 292 nodes were sampled. Overall sensitivity for all lymph node groups was 40%, and specificity, 81%. Sensitivity was highest for the 4R (paratracheal) group (82%) and lowest for the subcarinal area (20%). Specificity ranged from 71% for 11R nodes (right hilar) to 94% for 10L (left peribronchial). The positive predictive value was 34%, and the negative predictive value, 84%. This study suggests that the more optimistic results previously reported may have resulted from lack of correlation of individual lymph node groups identified on CT with those sampled at surgery

  14. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, since the failure probability can then be obtained easily by integration over the failure domain. However, efficiently estimating the PDF is still an open problem. The existing fractional-moment-based maximum entropy approach provides a very advanced method for PDF estimation; its main shortcoming is that it limits the reliability analysis to structures with independent inputs, whereas structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations, which is a very efficient moment-estimation method for models with arbitrary inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Besides, the number of function evaluations required by the proposed method in reliability analysis, which is determined by the UT, is quite small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
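
    The moment-estimation step can be sketched as follows: the Unscented Transformation propagates 2n+1 sigma points, built from a Cholesky factor of the (correlated) input covariance, through the performance function and combines them into moments. This is a minimal integer-moment sketch with a linear test function; the paper uses fractional moments plus maximum entropy on top of this step.

```python
import numpy as np

def ut_moments(g, mean, cov):
    """Moments of g(X) for Gaussian X ~ N(mean, cov) via the Unscented
    Transformation: 2n+1 sigma points through a Cholesky factor of the
    (possibly correlated) covariance. Sketch with kappa = 0, so the
    centre point carries zero weight and the others weight 1/(2n)."""
    n = len(mean)
    L = np.linalg.cholesky(n * cov)   # scaled matrix square root
    pts, w = [np.asarray(mean, float)], [0.0]
    for i in range(n):
        pts.append(mean + L[:, i]); w.append(1.0 / (2 * n))
        pts.append(mean - L[:, i]); w.append(1.0 / (2 * n))
    vals = np.array([g(p) for p in pts])
    w = np.array(w)
    return float(w @ vals), float(w @ vals**2)   # first and second raw moment

# Linear performance function of two correlated normals: exact moments known.
cov = np.array([[1.0, 0.6], [0.6, 2.0]])
g = lambda x: 2 * x[0] - x[1] + 1
m1, m2 = ut_moments(g, np.zeros(2), cov)
print(m1, m2)   # mean 1; variance 4 + 2 - 2*2*0.6 = 3.6, so m2 = 4.6
```

    Only 2n+1 function evaluations are needed, which is exactly why the record stresses the small evaluation count; for a linear g the sigma points reproduce the first two moments exactly.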

  15. Closed-Form Representations of the Density Function and Integer Moments of the Sample Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Serge B. Provost

    2015-07-01

    Full Text Available This paper provides a simplified representation of the exact density function of R, the sample correlation coefficient. The odd and even moments of R are also obtained in closed forms. Being expressed in terms of generalized hypergeometric functions, the resulting representations are readily computable. Some numerical examples corroborate the validity of the results derived herein.

  16. Advanced Markov chain Monte Carlo methods learning from past samples

    CERN Document Server

    Liang, Faming; Carrol, Raymond J

    2010-01-01

    This book provides comprehensive coverage of the simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local-trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms that address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables, and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. The book includes the multicanonical algorithm, dynamic weighting, dynamically weight
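
    As a minimal illustration of the "learning from past samples" idea, the sketch below runs a random-walk Metropolis sampler whose proposal scale is periodically tuned from the past acceptance history. This is a generic adaptive-MCMC toy, not one of the book's algorithms; the 0.44 target acceptance rate is a common rule-of-thumb choice.

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_metropolis(logpi, x0, n_iter=20000, adapt_every=100):
    """1-D random-walk Metropolis whose proposal scale is adapted
    from the past samples' acceptance rate (illustrative sketch)."""
    x, lp = x0, logpi(x0)
    scale, accepts, chain = 1.0, 0, []
    for i in range(1, n_iter + 1):
        y = x + scale * rng.standard_normal()
        lpy = logpi(y)
        if np.log(rng.random()) < lpy - lp:    # Metropolis accept/reject
            x, lp = y, lpy
            accepts += 1
        chain.append(x)
        if i % adapt_every == 0:               # learn from the past window
            rate = accepts / i
            scale *= np.exp(rate - 0.44)       # steer towards ~44% acceptance
    return np.array(chain)

target = lambda x: -0.5 * (x - 3.0) ** 2 / 4.0   # N(3, 4) up to a constant
chain = adaptive_metropolis(target, x0=-10.0)
post = chain[5000:]                               # discard burn-in
print(round(post.mean(), 1), round(post.var(), 1))
```

    Even this crude feedback loop illustrates the theme of the book: the sampler's future moves are informed by the samples it has already drawn.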

  17. The predictive value of childhood animal cruelty methods on later adult violence: examining demographic and situational correlates.

    Science.gov (United States)

    Hensley, Christopher; Tallichet, Suzanne E; Dutkiewicz, Erik L

    2012-04-01

    The present study seeks to replicate Tallichet, Hensley, and Singer's research on childhood animal cruelty methods by using a sample of 180 male inmates surveyed at both medium- and maximum-security prisons in a southern state. The purpose of the current study was to first reexamine the relationship between demographic and situational factors and specific methods of childhood animal cruelty. Second, the correlation between an abuser's chosen method(s) of childhood animal cruelty on later recurrent acts of adult violent crimes was reinvestigated. Regression analyses revealed that respondents who engaged in frequent animal cruelty were more likely to have drowned, shot, kicked, or had sex with animals. Those who had grown up in urban areas and those who did not become upset after abusing animals were more likely to have kicked animals. Respondents who covered up their abuse were more likely to have had sex with animals. Sex with animals was the only method of childhood animal cruelty that predicted the later commission of adult violent crimes.

  18. A Novel Analysis Method for Paired-Sample Microbial Ecology Experiments.

    Science.gov (United States)

    Olesen, Scott W; Vora, Suhani; Techtmann, Stephen M; Fortney, Julian L; Bastidas-Oyanedel, Juan R; Rodríguez, Jorge; Hazen, Terry C; Alm, Eric J

    2016-01-01

    Many microbial ecology experiments use sequencing data to measure a community's response to an experimental treatment. In a common experimental design, two units, one control and one experimental, are sampled before and after the treatment is applied to the experimental unit. The four resulting samples contain information about the dynamics of organisms that respond to the treatment, but there are no analytical methods designed to extract exactly this type of information from this configuration of samples. Here we present an analytical method specifically designed to visualize and generate hypotheses about microbial community dynamics in experiments that have paired samples and few or no replicates. The method is based on the Poisson lognormal distribution, long studied in macroecology, which we found accurately models the abundance distribution of taxa counts from 16S rRNA surveys. To demonstrate the method's validity and potential, we analyzed an experiment that measured the effect of crude oil on ocean microbial communities in microcosm. Our method identified known oil degraders as well as two clades, Maricurvus and Rhodobacteraceae, that responded to amendment with oil but do not include known oil degraders. Our approach is sensitive to organisms that increased in abundance only in the experimental unit but less sensitive to organisms that increased in both control and experimental units, thus mitigating the role of "bottle effects".
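
    The distributional backbone of the method, Poisson sampling around lognormal latent abundances, is easy to simulate to see why it fits overdispersed 16S count data better than a plain Poisson. The parameter values below are arbitrary illustration choices, not fitted to any data set.

```python
import numpy as np

rng = np.random.default_rng(4)

# Poisson-lognormal sketch: each taxon's latent abundance is lognormal,
# and the observed sequencing count is a Poisson draw around it.
mu, sigma, n = 0.5, 1.0, 200_000
latent = rng.lognormal(mean=mu, sigma=sigma, size=n)
counts = rng.poisson(latent)

mean, var = counts.mean(), counts.var()
print(round(mean, 2), round(var, 2))   # var >> mean: overdispersed vs Poisson
```

    A pure Poisson model forces variance equal to the mean; the lognormal mixing inflates the variance to mean + mean^2 * (e^{sigma^2} - 1), which matches the long-tailed abundance distributions seen in 16S surveys.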

  19. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  20. Investigation of Legionella Contamination in Bath Water Samples by Culture, Amoebic Co-Culture, and Real-Time Quantitative PCR Methods.

    Science.gov (United States)

    Edagawa, Akiko; Kimura, Akio; Kawabuchi-Kurata, Takako; Adachi, Shinichi; Furuhata, Katsunori; Miyamoto, Hiroshi

    2015-10-19

    We investigated Legionella contamination in bath water samples, collected from 68 bathing facilities in Japan, by culture, culture with amoebic co-culture, real-time quantitative PCR (qPCR), and real-time qPCR with amoebic co-culture. Using the conventional culture method, Legionella pneumophila was detected in 11 samples (11/68, 16.2%). Contrary to our expectation, the culture method with the amoebic co-culture technique did not increase the detection rate of Legionella (4/68, 5.9%). In contrast, a combination of the amoebic co-culture technique followed by qPCR successfully increased the detection rate (57/68, 83.8%) compared with real-time qPCR alone (46/68, 67.6%). Using real-time qPCR after culture with amoebic co-culture, more than 10-fold higher bacterial numbers were observed in 30 samples (30/68, 44.1%) compared with the same samples without co-culture. On the other hand, higher bacterial numbers were not observed after propagation by amoebae in 32 samples (32/68, 47.1%). Legionella was not detected in the remaining six samples (6/68, 8.8%), irrespective of the method. These results suggest that application of the amoebic co-culture technique prior to real-time qPCR may be useful for the sensitive detection of Legionella from bath water samples. Furthermore, a combination of amoebic co-culture and real-time qPCR might be useful to detect viable and virulent Legionella because their ability to invade and multiply within free-living amoebae is considered to correlate with their pathogenicity for humans. This is the first report evaluating the efficacy of the amoebic co-culture technique for detecting Legionella in bath water samples.

  1. Variable Selection via Partial Correlation.

    Science.gov (United States)

    Li, Runze; Liu, Jingyuan; Lou, Lejia

    2017-07-01

    A partial-correlation-based variable selection method was proposed for normal linear regression models by Bühlmann, Kalisch and Maathuis (2010) as an alternative to regularization methods for variable selection. This paper addresses two important issues related to this method: (a) whether it is sensitive to the normality assumption, and (b) whether it is valid when the dimension of the predictor increases at an exponential rate of the sample size. To address issue (a), we systematically study the method for elliptical linear regression models. Our finding indicates that the original proposal may lead to inferior performance when the marginal kurtosis of the predictor is not close to that of the normal distribution. Our simulation results further confirm this finding. To ensure the superior performance of partial-correlation-based variable selection, we propose a thresholded partial correlation (TPC) approach to select significant variables in linear regression models. We establish the selection consistency of the TPC in the presence of ultrahigh-dimensional predictors. Since the TPC procedure includes the original proposal as a special case, our theoretical results address issue (b) directly. As a by-product, the sure screening property of the first step of the TPC is obtained. Numerical examples also illustrate that the TPC is competitive with the commonly used regularization methods for variable selection.
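
    The core operation, thresholding sample partial correlations between the response and each predictor, can be sketched as follows. The partial correlations are read off the inverse of the joint correlation matrix; the 0.1 threshold is an illustrative choice, not the data-driven threshold studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def partial_correlations(X, y):
    """Sample partial correlation of y with each column of X, given all
    other columns, read off the inverse of the joint correlation matrix."""
    Z = np.column_stack([y, X])
    omega = np.linalg.inv(np.corrcoef(Z, rowvar=False))
    d = np.sqrt(np.diag(omega))
    return -omega[0, 1:] / (d[0] * d[1:])

# y depends on X0 and X2 only; X3 is correlated with X0 but irrelevant.
n, p = 2000, 5
X = rng.standard_normal((n, p))
X[:, 3] = 0.7 * X[:, 0] + 0.3 * rng.standard_normal(n)
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.standard_normal(n)

pc = partial_correlations(X, y)
selected = np.flatnonzero(np.abs(pc) > 0.1)   # illustrative threshold
print(selected)                               # the active predictors, X0 and X2
```

    A plain marginal correlation screen would also flag X3, since it inherits correlation with y through X0; partialling out the other predictors before thresholding is what removes such spurious candidates.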

  2. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  3. Two-baryon systems from HAL QCD method and the mirage in the temporal correlation of the direct method

    Science.gov (United States)

    Iritani, Takumi

    2018-03-01

    Both the direct and the HAL QCD methods are currently used to study hadron interactions in lattice QCD. In the direct method, the eigen-energy of a two-particle state is measured from the temporal correlation. Due to the contamination of excited states, however, the direct method suffers from a fake eigen-energy problem, which we call the "mirage problem," while the HAL QCD method can extract information from all elastic states by using the spatial correlation. In this work, we further investigate systematic uncertainties of the HAL QCD method, such as the quark source operator dependence, the convergence of the derivative expansion of the non-local interaction kernel, and the single-baryon saturation, which are found to be well controlled. We also confirm the consistency between the HAL QCD method and Lüscher's finite-volume formula. Based on the HAL QCD potential, we quantitatively confirm that the mirage plateau in the direct method is indeed caused by the contamination of excited states.

  4. Visualization of synchronization of the uterine contraction signals: running cross-correlation and wavelet running cross-correlation methods.

    Science.gov (United States)

    Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr

    2006-01-01

    In physiological research, we often study multivariate data sets containing two or more simultaneously recorded time series. The aim of this paper is to present the running cross-correlation and wavelet running cross-correlation methods for assessing synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions, yielding a running cross-correlation function. The propagation% parameter assessed from this function allows a quantitative description of synchronization in bivariate time series. In general, uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of a time series at various frequencies (scales). To show the changes of the propagation% parameter across scales, a wavelet running cross-correlation was used: first, the continuous wavelet transforms of the uterine contraction signals were computed, and afterwards a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
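
    The windowed delay estimation at the heart of the method can be sketched as follows: for each window position, scan a range of lags and keep the one that maximises the normalised cross-correlation. This is a minimal illustration on synthetic data, not the authors' propagation% pipeline.

```python
import numpy as np

def running_xcorr(x, y, win, max_lag):
    """Running cross-correlation: for successive windows, return the lag
    that maximises the normalised cross-correlation, i.e. the local time
    delay between the two signals. Minimal sketch with 50% overlap."""
    delays = []
    for start in range(max_lag, len(x) - win - max_lag, win // 2):
        xs = x[start:start + win] - np.mean(x[start:start + win])
        best_lag, best_r = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            ys = y[start + lag:start + lag + win]
            ys = ys - np.mean(ys)
            r = float(xs @ ys / (np.linalg.norm(xs) * np.linalg.norm(ys)))
            if r > best_r:
                best_lag, best_r = lag, r
        delays.append(best_lag)
    return np.array(delays)

rng = np.random.default_rng(6)
base = np.convolve(rng.standard_normal(2400), np.ones(25) / 25, mode="same")
x = base[:2000]
y = base[8:2008]     # y is x shifted by a constant 8 samples
delays = running_xcorr(x, y, win=200, max_lag=20)
print(delays)        # every window recovers the shift (lag -8 here)
```

    Tracking the best lag window by window is exactly what turns a single delay estimate into a running description of synchronization; the wavelet variant applies the same scan to each pair of wavelet-transformed series.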

  5. Sampling methods for terrestrial amphibians and reptiles.

    Science.gov (United States)

    Paul Stephen Corn; R. Bruce. Bury

    1990-01-01

    Methods described for sampling amphibians and reptiles in Douglas-fir forests in the Pacific Northwest include pitfall trapping, time-constrained collecting, and surveys of coarse woody debris. The herpetofauna of this region differ in breeding and nonbreeding habitats and vagility, so that no single technique is sufficient for a community study. A combination of...

  6. The correlation between selenium adsorption and the mineral and chemical composition of Taiwan local granite samples

    Energy Technology Data Exchange (ETDEWEB)

    Wang, TsingHai; Chiang, Chu-Ling; Wang, Chu-Fang [National Tsing Hua Univ., Hsinchu, Taiwan (China). Biomedical Engineering and Environmental Sciences

    2015-07-01

    Selenium-79 (Se-79) is a radioactive isotope of selenium that is considered one of the most mobile nuclides, since it is present as an anionic species when dissolved in intruding groundwater. Being anionic, the transport of Se-79 is regulated by metal oxide minerals such as goethite and hematite (Jan et al., 2008). Consequently, the transport of selenium in the shallow surface environment is relatively easy to estimate by considering the amount of these metal oxides present in soils and sediments. However, for a deep geological repository, the transport of Se-79 becomes less predictable because of the much lower metal oxide content of the host rock, such as granite. In order to conduct a reliable performance assessment of a repository, it is very important to establish the correlation between selenium adsorption and the properties of the potential host rock, in this study the mineral and chemical compositions of Taiwan local granite. To this end, selenium adsorption experiments were conducted with 54 different Taiwan local granite samples collected from depths ranging from 100 to 400 meters below the surface. These granite samples represent a variety of deep geological environments, including intact rock, groundwater-intruded zones, and some weathered samples. Based upon our preliminary results, several conclusions could be drawn. First, the correlation coefficients between the Kd values and the mineral and chemical compositions are rather low (R-square values are often < 0.2). This points out the complexity of these geological samples and strongly suggests that more effort should be put into acquiring more relevant information. Second, the correlation between the selenium Kd values and the iron oxide content (R-square 0.110) is much higher than that with the CEC of these granite samples (R-square 0.001). This clearly indicates that the minerals that

  7. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster than state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
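
    The classical building block that correlation-based matching extends is the zero-mean normalized cross-correlation (NCC) between image patches, which is invariant to affine intensity changes but not to rotation or scale — the weakness MOCC addresses. A minimal sketch (MOCC's multiscale oriented corner machinery is not shown):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equal-size patches,
    in [-1, 1]; invariant to affine intensity changes a*I + b."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
patch = rng.random((11, 11))
print(ncc(patch, patch))                # ~1.0: perfect match
print(ncc(patch, 2.5 * patch + 10.0))   # ~1.0: intensity changes ignored
print(ncc(patch, np.rot90(patch)))      # much lower: NCC is not rotation invariant
```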

  8. Universal nucleic acids sample preparation method for cells, spores and their mixture

    Science.gov (United States)

    Bavykin, Sergei [Darien, IL

    2011-01-18

    The present invention relates to a method for extracting nucleic acids from biological samples, more specifically a universal method for extracting nucleic acids from unidentified biological samples. An advantage of the invented method is its ability to effectively and efficiently extract nucleic acids from a variety of different cell types, including but not limited to prokaryotic or eukaryotic cells and/or recalcitrant organisms (i.e. spores). Unlike prior art methods, which focus on extracting nucleic acids from either vegetative cells or spores, the present invention effectively extracts nucleic acids from spores, multiple cell types, or mixtures thereof using a single method. Importantly, the invented method has demonstrated an ability to extract nucleic acids from spores and vegetative bacterial cells with similar levels of effectiveness. The method employs a multi-step protocol which erodes the cell structure of the biological sample; isolates, labels, and fragments the nucleic acids; and purifies the labeled samples from excess dye.

  9. Fluidics platform and method for sample preparation and analysis

    Science.gov (United States)

    Benner, W. Henry; Dzenitis, John M.; Bennet, William J.; Baker, Brian R.

    2014-08-19

    Herein provided are a fluidics platform and method for sample preparation and analysis. The fluidics platform is capable of analyzing DNA from blood samples using amplification assays such as polymerase-chain-reaction assays and loop-mediated-isothermal-amplification assays. The fluidics platform can also be used for other types of assays and analyses. In some embodiments, a sample in a sealed tube can be inserted directly, and the subsequent isolation, detection, and analysis can be performed without user intervention. The disclosed platform also comprises a sample preparation system with a magnetic actuator, a heater, and an air-drying mechanism, and fluid manipulation processes for extraction, washing, elution, assay assembly, assay detection, and cleaning after reactions and between samples.

  10. Two media method for linear attenuation coefficient determination of irregular soil samples

    International Nuclear Information System (INIS)

    Vici, Carlos Henrique Georges

    2004-01-01

    In several nuclear applications, such as soil physics and geology, the gamma-ray linear attenuation coefficient of irregular samples must be known. This work presents the validation of a methodology for the determination of the linear attenuation coefficient (μ) of irregularly shaped samples that does not require knowledge of the sample thickness. With this methodology, irregular soil samples (undeformed field samples) from the Londrina region, north of Paraná, were studied. The two media method was employed for the μ determination. It consists of determining μ from the measured attenuation of a gamma-ray beam by the sample immersed sequentially in two different media with known, appropriately chosen attenuation coefficients. For comparison, the theoretical value of μ was calculated as the product of the mass attenuation coefficient, obtained with the WinXcom code, and the measured sample density. This software uses the chemical composition of the samples and supplies a table of mass attenuation coefficients versus photon energy. To verify the validity of the two media method against the simple gamma-ray transmission method, regular pumice stone samples were used. With these results for the attenuation coefficients and their respective deviations, it was possible to compare the two methods. We conclude that the two media method is a good tool for the determination of the linear attenuation coefficient of irregular materials, particularly in the study of soil samples. (author)
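
    The algebra behind the two media method can be sketched as follows: with the sample (unknown thickness x) immersed in medium i, the measured attenuation A_i = (μ − μ_i)·x is proportional to x, so combining the two measurements eliminates the thickness. The numbers below are purely illustrative, not data from this work:

```python
# Two media method (sketch): immerse the sample successively in media 1
# and 2 with known coefficients mu1, mu2.  Each transmission measurement
# gives  A_i = ln(I_medium_i / I_sample_in_medium_i) = (mu - mu_i) * x,
# and eliminating the unknown thickness x yields
#   mu = (A1*mu2 - A2*mu1) / (A1 - A2).

def linear_attenuation(A1, A2, mu1, mu2):
    return (A1 * mu2 - A2 * mu1) / (A1 - A2)

# Illustrative numbers only: simulate what the two measurements would
# read for a sample with mu = 0.20 cm^-1 and thickness x = 3.0 cm.
mu_true, x = 0.20, 3.0
mu1, mu2 = 0.05, 0.12            # cm^-1, two media of different density
A1 = (mu_true - mu1) * x
A2 = (mu_true - mu2) * x
print(linear_attenuation(A1, A2, mu1, mu2))   # recovers ~0.20, independent of x
```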

  11. Sampling and examination methods used for TMI-2 samples

    International Nuclear Information System (INIS)

    Marley, A.W.; Akers, D.W.; McIsaac, C.V.

    1988-01-01

    The purpose of this paper is to summarize the sampling and examination techniques that were used in the collection and analysis of TMI-2 samples. Samples ranging from auxiliary building air to core debris were collected and analyzed. Handling of the larger samples and many of the smaller samples had to be done remotely and many standard laboratory analytical techniques were modified to accommodate the extremely high radiation fields associated with these samples. The TMI-2 samples presented unique problems with sampling and the laboratory analysis of prior molten fuel debris. 14 refs., 8 figs

  12. [Sample preparation methods for chromatographic analysis of organic components in atmospheric particulate matter].

    Science.gov (United States)

    Hao, Liang; Wu, Dapeng; Guan, Yafeng

    2014-09-01

    The determination of the organic composition of atmospheric particulate matter (PM) is of great importance in understanding how PM affects human health, the environment, climate, and ecosystems. Organic components are also the scientific basis for emission source tracking, PM regulation, and risk management. Therefore, the molecular characterization of the organic fraction of PM has become one of the priority research issues in the field of environmental analysis. Due to the extreme complexity of PM samples, chromatographic methods have been the chief choice. The common procedure for the analysis of organic components in PM includes several steps: sample collection on fiber filters, sample preparation (transforming the sample into a form suitable for chromatographic analysis), and chromatographic analysis. Among these steps, the sample preparation method largely determines the throughput and the data quality. Solvent extraction followed by sample pretreatment (e.g. pre-separation, derivatization, pre-concentration) has long been used for PM sample analysis, while thermal desorption methods have mainly focused on the analysis of non-polar organic components in PM. In this paper, the sample preparation methods prior to chromatographic analysis of organic components in PM are reviewed comprehensively, and the corresponding merits and limitations of each method are briefly discussed.

  13. Application of the spectral-correlation method for diagnostics of cellulose paper

    Science.gov (United States)

    Kiesewetter, D.; Malyugin, V.; Reznik, A.; Yudin, A.; Zhuravleva, N.

    2017-11-01

    The spectral-correlation method was described for diagnostics of optically inhomogeneous biological objects and materials of natural origin. The interrelation between parameters of the studied objects and parameters of the cross correlation function of speckle patterns produced by scattering of coherent light at different wavelengths is shown for thickness, optical density and internal structure of the material. A detailed study was performed for cellulose electric insulating paper with different parameters.

  14. Column-Parallel Single Slope ADC with Digital Correlated Multiple Sampling for Low Noise CMOS Image Sensors

    NARCIS (Netherlands)

    Chen, Y.; Theuwissen, A.J.P.; Chae, Y.

    2011-01-01

    This paper presents a low noise CMOS image sensor (CIS) using 10/12 bit configurable column-parallel single slope ADCs (SS-ADCs) and digital correlated multiple sampling (CMS). The sensor used is a conventional 4T active pixel with a pinned-photodiode as photon detector. The test sensor was

  15. Sampling methods for amphibians in streams in the Pacific Northwest.

    Science.gov (United States)

    R. Bruce Bury; Paul Stephen. Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  16. Efficient Multilevel and Multi-index Sampling Methods in Stochastic Differential Equations

    KAUST Repository

    Haji-Ali, Abdul Lateef

    2016-05-22

    of this thesis is the novel Multi-index Monte Carlo (MIMC) method, which is an extension of MLMC to high-dimensional problems with significant computational savings. Under reasonable assumptions on the weak and variance convergence, which are related to the mixed regularity of the underlying problem and the discretization method, the order of the computational complexity of MIMC is, at worst up to a logarithmic factor, independent of the dimensionality of the underlying parametric equation. We also apply the same multi-index methodology to another sampling method, namely the Stochastic Collocation method. Hence, the novel Multi-index Stochastic Collocation method is proposed and is shown to be more efficient, in problems with sufficient mixed regularity, than our novel MIMC method and other standard methods. Finally, MIMC is applied to approximate quantities of interest of stochastic particle systems in the mean-field limit as the number of particles tends to infinity. To approximate these quantities of interest up to an error tolerance, TOL, MIMC has a computational complexity of O(TOL^-2 log(TOL)^2). This complexity is achieved by building a hierarchy based on two discretization parameters: the number of time steps in a Milstein scheme and the number of particles in the particle system. Moreover, we use a partitioning estimator to increase the correlation between two stochastic particle systems with different sizes. In comparison, the optimal computational complexity of MLMC in this case is O(TOL^-3) and the computational complexity of Monte Carlo is O(TOL^-4).
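
    The multilevel idea that MIMC generalizes — the MLMC telescoping sum with coupled coarse/fine paths — can be sketched for a geometric Brownian motion with Euler discretization (toy parameters and per-level sample counts chosen for illustration, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def level_samples(level, n, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Samples of the MLMC correction P_l - P_{l-1} for the payoff
    P = X_T of dX = mu*X dt + sigma*X dW, using coupled Euler paths
    driven by the same Brownian increments (2**level fine steps)."""
    nf = 2 ** level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, nf))
    xf = np.full(n, x0)
    for i in range(nf):                      # fine Euler path
        xf = xf * (1 + mu * dt + sigma * dW[:, i])
    if level == 0:
        return xf                            # base level: P_0 itself
    xc = np.full(n, x0)
    dWc = dW[:, 0::2] + dW[:, 1::2]          # coarse increments, same noise
    for i in range(nf // 2):                 # coarse Euler path
        xc = xc * (1 + mu * 2 * dt + sigma * dWc[:, i])
    return xf - xc

# Telescoping estimator E[P_L] ~ sum_l mean(P_l - P_{l-1}); the coupled
# corrections have small variance, so the expensive fine levels need
# far fewer samples than the cheap coarse levels.
N = [40000, 20000, 10000, 5000, 2500, 1250]
estimate = sum(level_samples(l, N[l]).mean() for l in range(6))
print(estimate)      # close to the exact mean exp(mu*T) ~ 1.0513
```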

  17. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of the bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), affecting our inferences about population structure and abundance. I conclude with a discussion of ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  18. Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters

    OpenAIRE

    Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal

    2013-01-01

    The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50...

  19. Health Correlates of Insomnia Symptoms and Comorbid Mental Disorders in a Nationally Representative Sample of US Adolescents

    NARCIS (Netherlands)

    Blank, M.; Zhang, J.H.; Lamers, F.; Taylor, A.D.; Hickie, I.B.; Merikangas, K.R.

    2015-01-01

    Study Objectives: To estimate the prevalence and health correlates of insomnia symptoms and their association with comorbid mental disorders in a nationally representative sample of adolescents in the United States. Design: National representative cross-sectional study. Setting: Population-based

  20. Sampling Methods and the Accredited Population in Athletic Training Education Research

    Science.gov (United States)

    Carr, W. David; Volberding, Jennifer

    2009-01-01

    Context: We describe methods of sampling the widely studied, yet poorly defined, population of accredited athletic training education programs (ATEPs). Objective: This study has two purposes: first, to describe the incidence and types of sampling methods used in athletic training education research, and second, to clearly define the…

  1. Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation

    Science.gov (United States)

    Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.

    2018-01-01

    Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
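
    The two fixes compared above can be illustrated in a few lines of NumPy: eigenvalue reconditioning applied to the correlation matrix, and localization via a Schur product with a distance-based taper (a simple Gaussian taper in index space stands in here for an operational localization function such as Gaspari-Cohn):

```python
import numpy as np

def recondition(R, kappa_max=100.0):
    """Floor the eigenvalues of a correlation matrix R so that its
    condition number does not exceed kappa_max (structure-preserving)."""
    vals, vecs = np.linalg.eigh(R)
    vals = np.maximum(vals, vals.max() / kappa_max)
    return (vecs * vals) @ vecs.T

def localize(C, length_scale=2.0):
    """Schur (elementwise) product of C with a smooth distance-based
    taper -- a simple Gaussian in index distance, standing in for an
    operational localization function such as Gaspari-Cohn."""
    idx = np.arange(C.shape[0])
    d = np.abs(np.subtract.outer(idx, idx))
    return C * np.exp(-((d / length_scale) ** 2))

# A 5-member ensemble over 20 state variables gives a sample covariance
# of rank at most 4: singular, hence badly conditioned and noisy.
rng = np.random.default_rng(1)
S = np.cov(rng.normal(size=(5, 20)), rowvar=False)
print(np.linalg.matrix_rank(S))            # 4

# Recondition the correlation matrix (not the covariance) after
# Schur-product localization damps the spurious long-range noise.
d = np.sqrt(np.diag(S))
R = S / np.outer(d, d)
R_fixed = recondition(localize(R))
print(np.linalg.cond(R_fixed))             # bounded by kappa_max = 100
```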

  2. Gaussian graphical modeling reveals specific lipid correlations in glioblastoma cells

    Science.gov (United States)

    Mueller, Nikola S.; Krumsiek, Jan; Theis, Fabian J.; Böhm, Christian; Meyer-Bäse, Anke

    2011-06-01

    Advances in high-throughput measurements of biological specimens necessitate the development of biologically driven computational techniques. To understand the molecular level of many human diseases, such as cancer, lipid quantifications have been shown to offer an excellent opportunity to reveal disease-specific regulations. The data analysis of the cell lipidome, however, remains a challenging task and cannot be accomplished solely based on intuitive reasoning. We have developed a method to identify a lipid correlation network which is entirely disease-specific. A powerful method to correlate experimentally measured lipid levels across the various samples is a Gaussian Graphical Model (GGM), which is based on partial correlation coefficients. In contrast to regular Pearson correlations, partial correlations aim to identify only direct correlations while eliminating indirect associations. Conventional GGM calculations on the entire dataset cannot, however, provide information on whether a correlation is truly disease-specific with respect to the disease samples rather than a correlation present in the control samples. Thus, we implemented a novel differential GGM approach unraveling only the disease-specific correlations, and applied it to the lipidome of immortal Glioblastoma tumor cells. A large set of lipid species was measured by mass spectrometry in order to evaluate lipid remodeling in response to a combination of perturbations inducing programmed cell death, while other perturbations served solely as biological controls. With the differential GGM, we were able to reveal Glioblastoma-specific lipid correlations to advance biomedical research on novel gene therapies.
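
    The key quantity in a GGM, the partial correlation, follows directly from the precision (inverse covariance) matrix via rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj). The sketch below uses synthetic chain-structured data (not the lipidomics data) to show how a strong marginal correlation with no direct edge is suppressed:

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from the precision matrix
    Theta = Cov^-1:  rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)."""
    theta = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(theta))
    rho = -theta / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

# Chain X -> Y -> Z: X and Z are marginally correlated only through Y,
# so their partial correlation (the GGM edge weight) should vanish.
rng = np.random.default_rng(0)
n = 50000
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)
z = y + 0.5 * rng.normal(size=n)
rho = partial_correlations(np.column_stack([x, y, z]))
print(rho[0, 2])                 # near 0: no direct X-Z edge
print(np.corrcoef(x, z)[0, 1])   # large: indirect association via Y
```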

  3. Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images

    International Nuclear Information System (INIS)

    Nobili, Flavio; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido

    2001-01-01

    Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with

  5. Multivariate Methods for Prediction of Geologic Sample Composition with Laser-Induced Breakdown Spectroscopy

    Science.gov (United States)

    Morris, Richard; Anderson, R.; Clegg, S. M.; Bell, J. F., III

    2010-01-01

    Laser-induced breakdown spectroscopy (LIBS) uses pulses of laser light to ablate material from the surface of a sample and produce an expanding plasma. The optical emission from the plasma yields a spectrum which can be used to classify target materials and estimate their composition. The ChemCam instrument on the Mars Science Laboratory (MSL) mission will use LIBS to rapidly analyze targets remotely, allowing more resource- and time-intensive in-situ analyses to be reserved for targets of particular interest. ChemCam will also be used to analyze samples that are not reachable by the rover's in-situ instruments. Due to these tactical and scientific roles, it is important that ChemCam-derived sample compositions be as accurate as possible. We have compared the results of partial least squares (PLS), multilayer perceptron (MLP) artificial neural networks (ANNs), and cascade correlation (CC) ANNs to determine which technique yields better estimates of quantitative element abundances in rock and mineral samples. The number of hidden nodes in the MLP ANNs was optimized using a genetic algorithm. The influence of two data preprocessing techniques was also investigated: genetic algorithm feature selection, and averaging the spectra for each training sample prior to training the PLS and ANN algorithms. We used a ChemCam-like laboratory stand-off LIBS system to collect spectra of 30 pressed powder geostandards and a diverse suite of 196 geologic slab samples of known bulk composition. We tested the performance of PLS and ANNs on a subset of these samples, choosing to focus on silicate rocks and minerals with a loss on ignition of less than 2 percent. This resulted in a set of 22 pressed powder geostandards and 80 geologic samples. Four of the geostandards were used as a validation set and 18 were used as the training set for the algorithms.
We found that PLS typically resulted in the lowest average absolute error in its predictions, but that the optimized MLP ANN and

  6. Method and apparatus for continuous sampling

    International Nuclear Information System (INIS)

    Marcussen, C.

    1982-01-01

    An apparatus and method for continuously sampling a pulverous material flow include means for extracting a representative subflow from the material flow. A screw conveyor pushes the extracted subflow upward through a duct to an overflow. Means for transmitting a radiation beam transversely to the subflow in the duct, and means for sensing the transmitted beam through opposite pairs of windows in the duct, are provided to measure the concentration of one or more constituents in the subflow. (author)

  7. Combining land use information and small stream sampling with PCR-based methods for better characterization of diffuse sources of human fecal pollution.

    Science.gov (United States)

    Peed, Lindsay A; Nietch, Christopher T; Kelty, Catherine A; Meckes, Mark; Mooney, Thomas; Sivaganesan, Mano; Shanks, Orin C

    2011-07-01

    Diffuse sources of human fecal pollution allow for the direct discharge of waste into receiving waters with minimal or no treatment. Traditional culture-based methods are commonly used to characterize fecal pollution in ambient waters; however, these methods do not discern between human and other animal sources of fecal pollution, making it difficult to identify diffuse pollution sources. Human-associated quantitative real-time PCR (qPCR) methods, in combination with low-order headwatershed sampling, precipitation information, and high-resolution geographic information system land use data, can be useful for identifying diffuse sources of human fecal pollution in receiving waters. To test this assertion, this study monitored nine headwatersheds, potentially impacted by faulty septic systems and leaky sanitary sewer lines, over a two-year period. Human fecal pollution was measured using three different human-associated qPCR methods, and a significant positive correlation was seen between the abundance of human-associated genetic markers and septic systems following wet weather events. In contrast, a negative correlation was observed with sanitary sewer line densities, suggesting septic systems are the predominant diffuse source of human fecal pollution in the study area. These results demonstrate the advantages of combining water sampling, climate information, land-use computer-based modeling, and molecular biology disciplines to better characterize diffuse sources of human fecal pollution in environmental waters.

  8. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications and, in general, cannot be assumed to be independent. The generalized estimating equation (GEE) approach is a widely used method to estimate marginal regression parameters for correlated responses. Its advantage is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained for settings where the number of subjects is small. One is based on a bias correction, the other on a bias reduction. Simulations show that the performances of the two bias-adjusted methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of the correlation structure, and impact of cluster size on bias correction. Both methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used; however, GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Fast methods for spatially correlated multilevel functional data

    KAUST Repository

    Staicu, A.-M.; Crainiceanu, C. M.; Carroll, R. J.

    2010-01-01

    -one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where

  10. Electronic correlation studies. III. Self-correlated field method. Application to 2S ground state and 2P excited state of three-electron atomic systems

    International Nuclear Information System (INIS)

    Lissillour, R.; Guerillot, C.R.

    1975-01-01

    The self-correlated field method is based on the insertion, in the group product wave function, of pair functions built upon a set of correlated "local" functions and of "nonlocal" functions. This work is an application to three-electron systems. The effects of the outer electron on the inner pair are studied. The total electronic energy and some intermediary results, such as pair energies and Coulomb and exchange "correlated" integrals, are given. The results are always better than those given by conventional SCF computations and reach the same level of accuracy as those given by more laborious methods used in correlation studies. (auth)

  11. Correlating tephras and cryptotephras using glass compositional analyses and numerical and statistical methods: Review and evaluation

    Science.gov (United States)

    Lowe, David J.; Pearce, Nicholas J. G.; Jorgensen, Murray A.; Kuehn, Stephen C.; Tryon, Christian A.; Hayward, Chris L.

    2017-11-01

    We define tephras and cryptotephras and their components (mainly ash-sized particles of glass ± crystals in distal deposits) and summarize the basis of tephrochronology as a chronostratigraphic correlational and dating tool for palaeoenvironmental, geological, and archaeological research. We then document and appraise recent advances in analytical methods used to determine the major, minor, and trace elements of individual glass shards from tephra or cryptotephra deposits to aid their correlation and application. Protocols developed recently for the electron probe microanalysis of major elements in individual glass shards help to improve data quality and standardize reporting procedures. A narrow electron beam (diameter ∼3-5 μm) can now be used to analyze smaller glass shards than previously attainable. Reliable analyses of 'microshards' (defined here as glass shards […] T2 test). Randomization tests can be used where distributional assumptions such as multivariate normality underlying parametric tests are doubtful. Compositional data may be transformed and scaled before being subjected to multivariate statistical procedures including calculation of distance matrices, hierarchical cluster analysis, and PCA. Such transformations may make the assumption of multivariate normality more appropriate. A sequential procedure using Mahalanobis distance and the Hotelling two-sample T2 test is illustrated using glass major element data from trachytic to phonolitic Kenyan tephras. All these methods require a broad range of high-quality compositional data which can be used to compare 'unknowns' with reference (training) sets that are sufficiently complete to account for all possible correlatives, including tephras with heterogeneous glasses that contain multiple compositional groups. Currently, incomplete databases are tending to limit correlation efficacy. The development of an open, online global database to facilitate progress towards integrated, high
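The Mahalanobis-distance step of the sequential procedure can be sketched in a few lines of numpy. The glass compositions below are invented stand-ins (three hypothetical major-element columns, wt%); the paper's full procedure additionally applies the Hotelling two-sample T2 test, which accounts for sample sizes and is not shown here.

```python
import numpy as np

def mahalanobis2(u, X):
    """Squared Mahalanobis distance of observation u from reference sample X
    (rows of X = individual shard analyses)."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    d = u - mu
    return float(d @ np.linalg.solve(cov, d))

rng = np.random.default_rng(5)
# hypothetical reference tephra: 40 shard analyses of (SiO2, Al2O3, FeO)
ref = rng.multivariate_normal([74.0, 13.0, 1.5],
                              np.diag([0.25, 0.09, 0.04]), size=40)
unknown_in = np.array([74.1, 13.1, 1.45])    # plausibly correlative shard
unknown_out = np.array([71.0, 14.5, 2.5])    # compositionally distinct shard
print(mahalanobis2(unknown_in, ref), mahalanobis2(unknown_out, ref))
```

A small distance flags a candidate correlative; a large one rules the match out before any formal test is run.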

  12. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    Science.gov (United States)

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques used for dictionary learning and of the pooling methods used for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both sampling methods. To summarize the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47,000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
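The three pooling operators compared in the abstract are one-liners once frame-level activations exist. A minimal numpy sketch, with random stand-in activations in place of real sparse-coding output (the 200-frame-by-64-atom shape is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in feature activations: 200 analysis frames x 64 dictionary atoms
activations = rng.random((200, 64))

# three ways to aggregate frame-level activations into one clip-level vector
max_pool = activations.max(axis=0)
avg_pool = activations.mean(axis=0)
std_pool = activations.std(axis=0)   # the standard-deviation pooling studied here

clip_feature = np.concatenate([avg_pool, std_pool])
print(clip_feature.shape)
```

Standard-deviation pooling captures how much each atom's activation fluctuates across frames, information that max- and average-pooling discard.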

  13. Viewing child pornography: prevalence and correlates in a representative community sample of young Swedish men.

    Science.gov (United States)

    Seto, Michael C; Hermann, Chantal A; Kjellgren, Cecilia; Priebe, Gisela; Svedin, Carl Göran; Långström, Niklas

    2015-01-01

    Most research on child pornography use has been based on selected clinical or criminal justice samples; risk factors for child pornography use in the general population remain largely unexplored. In this study, we examined prevalence, risk factors, and correlates of viewing depictions of adult-child sex in a population-representative sample of 1,978 young Swedish men (17-20 years, Mdn = 18 years; overall response rate 77%). In an anonymous, school-based survey, participants self-reported sexual coercion experiences, attitudes and beliefs about sex, perceived peer attitudes, and sexual interests and behaviors, including pornography use, sexual interest in children, and sexually coercive behavior. A total of 84 (4.2%) young men reported they had ever viewed child pornography. Most theory-based variables were moderately and significantly associated with child pornography viewing, consistent with models of sexual offending implicating both antisociality and sexual deviance. In a multivariate logistic regression analysis, 7 of 15 tested factors independently predicted child pornography viewing and explained 42% of the variance: ever had sex with a male, likely to have sex with a child aged 12-14, likely to have sex with a child 12 or less, perception of children as seductive, having friends who have watched child pornography, frequent pornography use, and ever viewed violent pornography. From these, a 6-item Child Pornography Correlates Scale was constructed and then cross-validated in a similar but independent Norwegian sample.

  14. Active Search on Carcasses versus Pitfall Traps: a Comparison of Sampling Methods.

    Science.gov (United States)

    Zanetti, N I; Camina, R; Visciarelli, E C; Centeno, N D

    2016-04-01

    The study of insect succession in cadavers and the classification of arthropods have mostly been done by placing a carcass in a cage, protected from vertebrate scavengers, which is then visited periodically. An alternative is to use specific traps. Few studies on carrion ecology and forensic entomology involving the carcasses of large vertebrates have employed pitfall traps. The aims of this study were to compare both sampling methods (active search on a carcass and pitfall trapping) for each coleopteran family, and to establish whether there is a discrepancy (underestimation and/or overestimation) in the presence of each family by either method. A great discrepancy was found for almost all families, with some being more abundant in samples obtained through active search on carcasses and others in samples from traps, whereas two families did not show any bias towards a given sampling method. The fact that families may be underestimated or overestimated by the type of sampling technique highlights the importance of combining both methods, active search on carcasses and pitfall traps, in order to obtain more complete information on decomposition, carrion habitat, and cadaveric families or species. Furthermore, a hypothesis is advanced on the reasons why either sampling method shows biases towards certain families. Information is provided about which sampling technique would be more appropriate to detect a particular family.

  15. Perilymph sampling from the cochlear apex: a reliable method to obtain higher purity perilymph samples from scala tympani.

    Science.gov (United States)

    Salt, Alec N; Hale, Shane A; Plontke, Stefan K R

    2006-05-15

    Measurements of drug levels in the fluids of the inner ear are required to establish kinetic parameters and to determine the influence of specific local delivery protocols. For most substances, this requires cochlear fluid samples to be obtained for analysis. When auditory function is of primary interest, the drug level in the perilymph of scala tympani (ST) is most relevant, since drug in this scala has ready access to the auditory sensory cells. In many prior studies, ST perilymph samples have been obtained from the basal turn, either by aspiration through the round window membrane (RWM) or through an opening in the bony wall. A number of studies have demonstrated that such samples are likely to be contaminated with cerebrospinal fluid (CSF). CSF enters the basal turn of ST through the cochlear aqueduct when the bony capsule is perforated or when fluid is aspirated. The degree of sample contamination has, however, not been widely appreciated. Recent studies have shown that perilymph samples taken through the RWM are highly contaminated with CSF, with samples greater than 2 μL in volume containing more CSF than perilymph. In spite of this knowledge, many groups continue to sample from the base of the cochlea, as it is a well-established method. We have developed an alternative, technically simple method to increase the proportion of ST perilymph in a fluid sample. The sample is taken from the apex of the cochlea, a site that is distant from the cochlear aqueduct. A previous problem with sampling through a perforation in the bone was that the native perilymph, driven by CSF pressure, rapidly leaked out and was lost to the middle ear space. We therefore developed a procedure to collect all the fluid that emerged from the apex after perforation. We evaluated the method using the marker ion trimethylphenylammonium (TMPA). TMPA was applied to the perilymph of guinea pigs either by RW irrigation or by microinjection into the apical turn.

  16. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality

    Science.gov (United States)

    Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

    Introduction Generating a nationally representative sample in low- and middle-income countries typically requires resource-intensive household-level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit-dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association for Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39%, respectively. Twenty-three calls were dialed to produce one eligible contact: nonresponse was substantial due to the automated calling system and the dialing of many unassigned or non-working numbers. Younger, urban, better-educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in
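The outcome-rate bookkeeping behind figures like "31%, 81%, 7%, and 39%" can be sketched as a small function. The formulas below are simplified AAPOR-style definitions and the call counts are hypothetical, not the study's data; the official AAPOR Standard Definitions distinguish many more outcome codes.

```python
def aapor_rates(complete, partial, refusal, noncontact, unknown, e=1.0):
    """Simplified AAPOR-style outcome rates.

    e is the estimated eligibility proportion among cases of unknown status.
    All counts passed in below are hypothetical illustration values.
    """
    eligible = complete + partial + refusal + noncontact
    denom = eligible + e * unknown
    response = (complete + partial) / denom                      # ~RR2
    cooperation = (complete + partial) / (complete + partial + refusal)
    refusal_rate = refusal / denom
    contact = (complete + partial + refusal) / denom
    return response, cooperation, refusal_rate, contact

rr, coop, ref, con = aapor_rates(complete=300, partial=50, refusal=30,
                                 noncontact=620, unknown=0)
print(rr, coop, ref, con)
```

The pattern in the abstract (high cooperation but low response and contact rates) falls out naturally when noncontacts dominate the denominator, as automated dialing of unassigned numbers tends to produce.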

  17. A Mixed Methods Sampling Methodology for a Multisite Case Study

    Science.gov (United States)

    Sharp, Julia L.; Mobley, Catherine; Hammond, Cathy; Withington, Cairen; Drew, Sam; Stringfield, Sam; Stipanovic, Natalie

    2012-01-01

    The flexibility of mixed methods research strategies makes such approaches especially suitable for multisite case studies. Yet the utilization of mixed methods to select sites for these studies is rarely reported. The authors describe their pragmatic mixed methods approach to select a sample for their multisite mixed methods case study of a…

  18. Irreducible Green's Functions method in the theory of highly correlated systems

    International Nuclear Information System (INIS)

    Kuzemsky, A.L.

    1994-09-01

    The self-consistent theory of correlation effects in Highly Correlated Systems (HCS) is presented. The novel Irreducible Green's Function (IGF) method is discussed in detail for the Hubbard model and the random Hubbard model. An interpolation solution for the quasiparticle spectrum, valid in both the atomic and band limits, is obtained. The IGF method permits the calculation of the quasiparticle spectra of many-particle systems with complicated spectra and strong interaction in a very natural and compact way. The essence of the method is deeply related to the notion of Generalized Mean Fields (GMF), which determine the elastic scattering corrections. The inelastic scattering corrections lead to the damping of the quasiparticles and are the main topic of the present consideration. The calculation of the damping has been done in a self-consistent way for both limits. For the random Hubbard model, the weak coupling case has been considered and the self-energy operator has been calculated using a combination of the IGF method and the Coherent Potential Approximation (CPA). Other applications of the method, to the s-f model, the Anderson model, the Heisenberg antiferromagnet, electron-phonon interaction models, and quasiparticle tunneling, are discussed briefly. (author). 79 refs

  19. [Standard sample preparation method for quick determination of trace elements in plastic].

    Science.gov (United States)

    Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa

    2011-08-01

    A reference sample containing heavy metals at known concentrations in electronic-information-product plastic was prepared by the masterbatch method; its repeatability and precision were determined, and reference sample preparation procedures were established. X-ray fluorescence (XRF) spectrometry was used to determine the repeatability and uncertainty in the analysis of the heavy metals and bromine in the sample. Working curves and measurement methods for the reference sample were established. The results showed that the method exhibited a very good linear relationship in the 200-2000 mg x kg(-1) concentration range for Hg, Pb, Cr, and Br, and in the 20-200 mg x kg(-1) range for Cd, and the repeatability over six replicate analyses was good. In testing the circuit boards ICB288G and ICB288 from Mitsubishi Heavy Industries, results agreed with the recommended values.
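The working-curve step (fitting measured signal against known reference concentrations, then inverting the fit to quantify an unknown) can be sketched with numpy. The sensitivities, intensities, and noise level below are invented for illustration; real XRF calibration also handles matrix effects that this sketch ignores.

```python
import numpy as np

# hypothetical calibration set: reference concentrations (mg/kg) vs XRF intensity
conc = np.array([200.0, 500.0, 1000.0, 1500.0, 2000.0])
rng = np.random.default_rng(6)
intensity = 0.004 * conc + 0.05 + rng.normal(scale=0.02, size=conc.size)

# working curve: least-squares line and its coefficient of determination
slope, intercept = np.polyfit(conc, intensity, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((intensity - pred) ** 2) / np.sum((intensity - intensity.mean()) ** 2)

# quantify an unknown sample from its measured intensity (hypothetical value 0.9)
est = (0.9 - intercept) / slope
print(est, r2)
```

An R² very close to 1 over the calibrated range is what the abstract's "very good linear relationship" corresponds to; quantification is only trusted for unknowns falling inside that range.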

  20. Evaluation of sampling methods for the detection of Salmonella in broiler flocks

    DEFF Research Database (Denmark)

    Skov, Marianne N.; Carstensen, B.; Tornoe, N.

    1999-01-01

    The present study compares four different sampling methods potentially applicable to the detection of Salmonella in broiler flocks, based on collection of faecal samples (i) by hand (300 fresh faecal samples), (ii) absorbed on five sheets of paper, (iii) absorbed on five pairs of socks (elastic cotton tubes pulled over the boots and termed 'socks'), and (iv) by using only one pair of socks. Twenty-three broiler flocks were included in the investigation and 18 of these were found to be positive by at least one method. Seven serotypes of Salmonella with different patterns of transmission (mainly horizontal or vertical) were found in the investigation. The results showed that the sock method (five pairs of socks) had a sensitivity comparable with the hand collection method (60 pools of five faecal samples); the paper collection method was inferior, as was the use of only one pair of socks. Estimation...

  1. Investigation of Legionella Contamination in Bath Water Samples by Culture, Amoebic Co-Culture, and Real-Time Quantitative PCR Methods

    Directory of Open Access Journals (Sweden)

    Akiko Edagawa

    2015-10-01

    Full Text Available We investigated Legionella contamination in bath water samples, collected from 68 bathing facilities in Japan, by culture, culture with amoebic co-culture, real-time quantitative PCR (qPCR), and real-time qPCR with amoebic co-culture. Using the conventional culture method, Legionella pneumophila was detected in 11 samples (11/68, 16.2%). Contrary to our expectation, the culture method with the amoebic co-culture technique did not increase the detection rate of Legionella (4/68, 5.9%). In contrast, a combination of the amoebic co-culture technique followed by qPCR successfully increased the detection rate (57/68, 83.8%) compared with real-time qPCR alone (46/68, 67.6%). Using real-time qPCR after culture with amoebic co-culture, more than 10-fold higher bacterial numbers were observed in 30 samples (30/68, 44.1%) compared with the same samples without co-culture. On the other hand, higher bacterial numbers were not observed after propagation by amoebae in 32 samples (32/68, 47.1%). Legionella was not detected in the remaining six samples (6/68, 8.8%), irrespective of the method. These results suggest that application of the amoebic co-culture technique prior to real-time qPCR may be useful for the sensitive detection of Legionella from bath water samples. Furthermore, a combination of amoebic co-culture and real-time qPCR might be useful to detect viable and virulent Legionella, because their ability to invade and multiply within free-living amoebae is considered to correlate with their pathogenicity for humans. This is the first report evaluating the efficacy of the amoebic co-culture technique for detecting Legionella in bath water samples.

  2. The moderating effects of sample type as evidence of the effects of faking on personality scale correlations and factor structure

    Directory of Open Access Journals (Sweden)

    KEVIN M. BRADLEY

    2006-09-01

    Full Text Available Motivational differences as a function of sample type (applicants versus incumbents) have frequently been suspected of causing meaningful differences in the psychometric properties of personality inventories due to the effects of faking. In this quantitative review, correlations among the Big Five personality constructs were estimated and sample type was examined as a potential moderator of the personality construct inter-correlations. The resulting subgroup meta-analytic correlation matrices were factor-analyzed, and the second-order factor solutions for job incumbents and job applicants were compared. Results of the meta-analyses indicate frequent but small moderating effects. The second-order factor analyses indicated that the observed moderation had little effect on the congruence of factor loadings. Together, the results are consistent with the position that faking is of little practical consequence in selection settings.

  3. Phylogenetic representativeness: a new method for evaluating taxon sampling in evolutionary studies

    Directory of Open Access Journals (Sweden)

    Passamonti Marco

    2010-04-01

    Full Text Available Abstract Background Taxon sampling is a major concern in phylogenetic studies. Incomplete, biased, or improper taxon sampling can lead to misleading results in reconstructing evolutionary relationships. Several theoretical methods are available to optimize taxon choice in phylogenetic analyses. However, most involve some knowledge about the genetic relationships of the group of interest (i.e., the ingroup), or even a well-established phylogeny itself; these data are not always available in general phylogenetic applications. Results We propose a new method to assess taxon sampling by developing Clarke and Warwick statistics. This method aims to measure the "phylogenetic representativeness" of a given sample or set of samples and is based entirely on the pre-existing available taxonomy of the ingroup, which is commonly known to investigators. Moreover, our method also accounts for instability and discordance in taxonomies. A Python-based script suite, called PhyRe, has been developed to implement all analyses we describe in this paper. Conclusions We show that this method is sensitive and allows direct discrimination between representative and unrepresentative samples. It is also informative about the addition of taxa to improve taxonomic coverage of the ingroup. While investigators' expertise remains essential, phylogenetic representativeness provides an objective touchstone for planning phylogenetic studies.
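The core Clarke and Warwick statistic behind this approach, average taxonomic distinctness (Δ+), is easy to sketch from a plain taxonomy table. This is a toy version: the (genus, family, order) levels, step weights 1-4, and species lists are all invented for illustration, and PhyRe's actual implementation handles more levels plus taxonomic instability.

```python
from itertools import combinations

def avg_taxonomic_distinctness(taxa):
    """Toy Delta+: mean taxonomic path weight over all pairs of sampled species.

    taxa: one (genus, family, order) tuple per species; the 1-4 step weights
    are an arbitrary illustrative choice.
    """
    def weight(a, b):
        if a[0] == b[0]:
            return 1.0      # congeneric pair
        if a[1] == b[1]:
            return 2.0      # confamilial pair
        if a[2] == b[2]:
            return 3.0      # same order
        return 4.0          # different orders
    pairs = list(combinations(taxa, 2))
    return sum(weight(a, b) for a, b in pairs) / len(pairs)

# a taxonomically broad sample scores higher than one clustered in a few genera
broad = [("g1", "f1", "o1"), ("g2", "f2", "o2"), ("g3", "f3", "o3"), ("g4", "f4", "o4")]
narrow = [("g1", "f1", "o1"), ("g1", "f1", "o1"), ("g2", "f1", "o1"), ("g3", "f2", "o1")]
print(avg_taxonomic_distinctness(broad), avg_taxonomic_distinctness(narrow))
```

Comparing a sample's Δ+ against its expectation under random subsampling of the full ingroup taxonomy is what lets the method flag unrepresentative taxon sets.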

  4. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    Science.gov (United States)

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
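The plain BAR estimator at the center of this work can be sketched in numpy on synthetic work distributions. Everything below is hypothetical test data: Gaussian forward and reverse work values constructed to obey the Crooks relation with a known free-energy difference of 2 kT. The SGLD reweighting and EDS machinery of the paper are not shown.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def bar(w_f, w_r, lo=-20.0, hi=20.0):
    """Bennett acceptance ratio estimate of dF (units of kT, equal sample sizes).

    w_f: forward work values A->B; w_r: reverse work values B->A.
    Solves sum_F 1/(1+exp(W_F - dF)) = sum_R 1/(1+exp(W_R + dF)) by bisection.
    """
    def imbalance(df):
        return logistic(df - w_f).sum() - logistic(-df - w_r).sum()
    for _ in range(100):          # imbalance() is monotone increasing in df
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# synthetic Gaussian work distributions satisfying the Crooks relation
rng = np.random.default_rng(7)
df_true, sigma, n = 2.0, 1.0, 5000
w_f = rng.normal(df_true + 0.5 * sigma**2, sigma, n)
w_r = rng.normal(-df_true + 0.5 * sigma**2, sigma, n)
print(bar(w_f, w_r))
```

With good overlap between the two work distributions the estimate lands on the true 2 kT; poor sampling, the problem SGLD is brought in to fix, shows up as shrinking overlap and exploding variance of this estimator.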

  5. Some refinements on the comparison of areal sampling methods via simulation

    Science.gov (United States)

    Jeffrey Gove

    2017-01-01

    The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...

  6. Preparation Of Deposited Sediment Sample By Casting Method For Environmental Study

    International Nuclear Information System (INIS)

    Hutabarat, Tommy; Ristin PI, Evarista

    2000-01-01

    The preparation of deposited sediment samples by the casting method for environmental study has been carried out. This method comprises separation of size fractions and a casting process. The deposited sediment samples were wet-sieved to separate the size fractions of >500 μm, (250-500) μm, (125-250) μm, and (63-125) μm, and settling procedures were followed for the separation of the (40-63) μm, (20-40) μm, and (10-20) μm and finer fractions; samples were dried and then ashed at 450 °C. In the casting process, polyester rapid-cure resin and methyl ethyl ketone peroxide (MEKP) hardener were used. The moulded sediment sample was poured onto the caster and allowed to cure for 60 hours. The aim of this method is to obtain casted samples that can be used effectively and efficiently, while avoiding cross-contamination between samples. Before casting, the samples were ground fine. The results show that the casting product is ready to be used for natural radionuclide analysis

  7. RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes

    Directory of Open Access Journals (Sweden)

    Danny J. Kelly

    2005-01-01

    Full Text Available We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms) or low (100 nanograms) amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real-time polymerase chain reaction assays. The correlation of low-RNA-derived gene signals to gene signals obtained from standard RNA was poor for low to moderately abundant genes. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference-sample-based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.

  8. Measuring decision weights in recognition experiments with multiple response alternatives: comparing the correlation and multinomial-logistic-regression methods.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2012-11-01

    Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
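The two-category correlation method that this article generalizes can be demonstrated with a toy simulated observer. The weights, internal-noise level, and trial counts below are hypothetical; the point of the sketch is that correlating each stimulus component's trial-by-trial perturbation with the binary response recovers the observer's relative decision weights (the m-alternative extension and the MLR alternative are not shown).

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_comp = 20000, 6
true_w = np.array([0.4, 0.3, 0.15, 0.1, 0.03, 0.02])   # sums to 1

# external noise: each stimulus component is perturbed on every trial
x = rng.normal(size=(n_trials, n_comp))

# simulated observer: weighted sum plus internal noise, then a binary decision
decision_var = x @ true_w + 0.3 * rng.normal(size=n_trials)
resp = (decision_var > 0).astype(float)

# correlation method: correlate each component's perturbation with the response
est_w = np.array([np.corrcoef(x[:, k], resp)[0, 1] for k in range(n_comp)])
est_w /= est_w.sum()          # normalize to compare relative weights
print(np.round(est_w, 2))
```

For a linear observer with Gaussian noise each point-biserial correlation is proportional to the corresponding true weight, so the normalized estimates converge on `true_w` as trials accumulate.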

  9. Restoring method for missing data of spatial structural stress monitoring based on correlation

    Science.gov (United States)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for the full understanding of their performance and safety. Missing portions of the monitoring data record will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes at the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the three months of the season in which the data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once the number of correlated points exceeds 6. The stress baseline value of each construction step should be calculated before interpolating missing data from the construction stage, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The missing-data rate for this method should not exceed 30%. Finally, one measuring point's missing monitoring data are restored to verify the validity of the method.
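The single-point regression variant of this idea fits in a short numpy sketch. The stress histories below are synthetic (a shared sinusoidal "thermal cycle" plus noise, with invented amplitudes); the study's real pipeline additionally splits day/night data and handles construction-step baselines, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical stress histories (MPa) at two correlated measuring points
t = np.arange(500)
base = 5.0 * np.sin(2 * np.pi * t / 100)              # shared daily cycle
s_ref = base + rng.normal(scale=0.3, size=t.size)     # reference point (complete)
s_tgt = 1.8 * base + 2.0 + rng.normal(scale=0.3, size=t.size)  # point with a gap

missing = (t >= 200) & (t < 260)                      # simulated data outage

# fit a simple linear regression on the observed part of the record only
A = np.c_[s_ref[~missing], np.ones((~missing).sum())]
coef, *_ = np.linalg.lstsq(A, s_tgt[~missing], rcond=None)
filled = coef[0] * s_ref[missing] + coef[1]

err = np.mean(np.abs(filled - s_tgt[missing])) / np.ptp(s_tgt) * 100
print(f"mean interpolation error: {err:.1f}% of range")
```

Because the two points respond to the same load and temperature cycles, the fitted slope and intercept transfer the reference record into the gap, which is exactly why the method degrades when the correlation coefficient drops or the missing-data rate grows.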

  10. Global metabolite analysis of yeast: evaluation of sample preparation methods

    DEFF Research Database (Denmark)

    Villas-Bôas, Silas Granato; Højer-Pedersen, Jesper; Åkesson, Mats Fredrik

    2005-01-01

    Sample preparation is considered one of the limiting steps in microbial metabolome analysis. Eukaryotes and prokaryotes behave very differently during the several steps of classical sample preparation methods for analysis of metabolites. Even within the eukaryote kingdom there is a vast diversity...

  11. Validation of Shape Context Based Image Registration Method Using Digital Image Correlation Measurement on a Rat Stomach

    DEFF Research Database (Denmark)

    Liao, Donghua; Wang, P; Zhao, Jingbo

    2014-01-01

    Recently we developed analysis for 3D visceral organ deformation by combining the shape context (SC) method with a full-field strain (strain distribution on a whole 3D surface) analysis for calculating distension-induced rat stomach deformation. The surface deformation detected by the SC method...... needs to be further verified by using a feature tracking measurement. Hence, the aim of this study was to verify the SC method-based calculation by using digital image correlation (DIC) measurement on a rat stomach. The rat stomach exposed to distension pressures 0.0, 0.2, 0.4, and 0.6 kPa were studied...... and the SC calculated correspondence surface was compared. Compared with DIC measurement, the SC calculated surface had errors from 5% to 23% at pressures from 0.2 to 0.6 kPa with different surface sample counts between the reference surface and the target surface. This indicates good qualitative...

  12. Partial distance correlation with methods for dissimilarities

    OpenAIRE

    Székely, Gábor J.; Rizzo, Maria L.

    2014-01-01

    Distance covariance and distance correlation are scalar coefficients that characterize independence of random vectors in arbitrary dimension. Properties, extensions, and applications of distance correlation have been discussed in the recent literature, but the problem of defining the partial distance correlation has remained an open question of considerable interest. The problem of partial distance correlation is more complex than partial correlation partly because the squared distance covari...

  13. Comparison of four sampling methods for the detection of Salmonella in broiler litter.

    Science.gov (United States)

    Buhr, R J; Richardson, L J; Cason, J A; Cox, N A; Fairchild, B D

    2007-01-01

    Experiments were conducted to compare litter sampling methods for the detection of Salmonella. In experiment 1, chicks were challenged orally with a suspension of nalidixic acid-resistant Salmonella and wing banded, and additional nonchallenged chicks were placed into each of 2 challenge pens. Nonchallenged chicks were placed into each nonchallenge pen located adjacent to the challenge pens. At 7, 8, 10, and 11 wk of age the litter was sampled using 4 methods: fecal droppings, litter grab, drag swab, and sock. For the challenge pens, Salmonella-positive samples were detected in 3 of 16 fecal samples, 6 of 16 litter grab samples, 7 of 16 drag swab samples, and 7 of 16 sock samples. Samples from the nonchallenge pens were Salmonella positive in 2 of 16 litter grab samples, 9 of 16 drag swab samples, and 9 of 16 sock samples. In experiment 2, chicks were challenged with Salmonella, and the litter in the challenge and adjacent nonchallenge pens was sampled at 4, 6, and 8 wk of age with broilers remaining in all pens. For the challenge pens, Salmonella was detected in 10 of 36 fecal samples, 20 of 36 litter grab samples, 14 of 36 drag swab samples, and 26 of 36 sock samples. Samples from the adjacent nonchallenge pens were positive for Salmonella in 6 of 36 fecal droppings samples, 4 of 36 litter grab samples, 7 of 36 drag swab samples, and 19 of 36 sock samples. Sock samples had the highest rates of Salmonella detection. In experiment 3, the litter from a Salmonella-challenged flock was sampled at 7, 8, and 9 wk by socks and drag swabs. In addition, comparisons with drag swabs that were stepped on during sampling were made. Both socks (24 of 36, 67%) and drag swabs that were stepped on (25 of 36, 69%) showed significantly more Salmonella-positive samples than the traditional drag swab method (16 of 36, 44%). Drag swabs that were stepped on had a Salmonella detection level comparable to that of socks. Litter sampling methods that incorporate stepping on the sample

  14. A DATA FIELD METHOD FOR URBAN REMOTELY SENSED IMAGERY CLASSIFICATION CONSIDERING SPATIAL CORRELATION

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2016-06-01

    Spatial correlation between pixels is important information for remotely sensed imagery classification. Data field methods and spatial autocorrelation statistics have been utilized to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary’s C has also been proven to characterise and qualify well the spatial correlation between each pixel and its neighbourhood pixels, but the extracted object is badly delineated, with a distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel to compute statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multiple features (e.g., the spectral feature and the spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracies.

  15. On the measurement of specific energy of coals by means of 12C determination using a correlation method

    International Nuclear Information System (INIS)

    Cywicka-Jakiel, T.; Bogacz, J.; Czubek, J.A.; Dabrowski, J.M.; Loskiewicz, J.; Zazula, J.M.

    1982-01-01

    The most important industrial property of coal is its gross specific energy (combustion heat). It depends mainly on the carbon concentration in coal. We propose to measure the carbon or, more precisely, its 12C content using the (n,n'γ) reaction, in which 4.43 MeV gamma rays are emitted. We use a correlation technique that can be applied in high-background measurements. A correlation-type measurement requires a reaction chain with primary and secondary radiations emitted and registered. By measuring the correlation or covariance function PHI we obtain a measure of the number of excited 12C nuclei, i.e. a value connected to the carbon concentration. The dependence of the PHI/t ratio (t being the sampling interval time) on carbon concentration shows a clear increase of the PHI/t value with carbon content. The relative standard deviations for different points vary from 1.3 to 4.4%. The preliminary results presented show that with improved experimental techniques this method can find application in industrial coal combustion heat measurements. (author)
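
    The covariance idea can be illustrated with synthetic Poisson counts: primary and secondary registrations that share a common source of (n,n'γ) events are correlated, and the covariance per sampling interval (PHI/t) tracks the shared rate. The rates below are invented for illustration, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
n_intervals, t = 10_000, 1e-3     # number of sampling intervals, interval length (s)

phis = []
for carbon_rate in (100.0, 300.0, 500.0):                # excited 12C nuclei per second (invented)
    shared = rng.poisson(carbon_rate * t, n_intervals)   # counts driven by 12C excitations
    primary = shared + rng.poisson(0.05, n_intervals)    # plus uncorrelated background
    secondary = shared + rng.poisson(0.08, n_intervals)
    phis.append(np.cov(primary, secondary)[0, 1] / t)    # PHI/t estimate
print([round(p) for p in phis])
```

    The uncorrelated background drops out of the covariance, so PHI/t grows roughly linearly with the shared (carbon-driven) rate, which is the basis of the measurement.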

  16. Matrix elements and few-body calculations within the unitary correlation operator method

    International Nuclear Information System (INIS)

    Roth, R.; Hergert, H.; Papakonstantinou, P.

    2005-01-01

    We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by an unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges. (orig.)

  17. Matrix elements and few-body calculations within the unitary correlation operator method

    International Nuclear Information System (INIS)

    Roth, R.; Hergert, H.; Papakonstantinou, P.; Neff, T.; Feldmeier, H.

    2005-01-01

    We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by an unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges

  18. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    Science.gov (United States)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
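
    One common way to exploit cheap sensitivity derivatives, consistent with what this abstract describes, is to use a first-order Taylor surrogate as a control variate: the surrogate's mean is known exactly, so subtracting it removes most of the sampling variance. This is a sketch under that assumption, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 0.0, 0.5, 10_000
x = rng.normal(mu, sigma, n)                 # uncertain input samples

fx = np.exp(x)                               # "expensive" model output f(X)
fprime = np.exp(mu)                          # cheap sensitivity derivative f'(mu)
surrogate = np.exp(mu) + fprime * (x - mu)   # linear surrogate; its mean is exactly f(mu)

plain = fx.mean()                            # ordinary Monte Carlo estimate of E[f(X)]
cv = (fx - surrogate).mean() + np.exp(mu)    # control-variate estimate
print(round(plain, 4), round(cv, 4), round(np.exp(mu + sigma**2 / 2), 4))  # exact mean last
```

    The residual f(X) minus its tangent line has much smaller variance than f(X) itself, which is the order-of-magnitude accuracy gain the abstract reports for both Monte Carlo and stratified sampling.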

  19. The method of Sample Management in Neutron Activation Analysis Laboratory-Serpong

    International Nuclear Information System (INIS)

    Elisabeth-Ratnawati

    2005-01-01

    In the testing laboratory used by neutron activation analysis method, sample preparation is the main factor and it can't be neglect. The error in the sample preparation can give result with lower accuracy. In this article is explained the scheme of sample preparation i.e sample receive administration, the separate of sample, fluid and solid sample preparation, sample grouping, irradiation, sample counting and holding the sample post irradiation. If the management of samples were good application based on Standard Operation Procedure, therefore each samples has good traceability. To optimize the management of samples is needed the trained and skilled personal and good facility. (author)

  20. Evaluating the effect of sampling and spatial correlation on ground-water travel time uncertainty coupling geostatistical, stochastic, and first order, second moment methods

    International Nuclear Information System (INIS)

    Andrews, R.W.; LaVenue, A.M.; McNeish, J.A.

    1989-01-01

    Ground-water travel time predictions at potential high-level waste repositories are subject to a degree of uncertainty due to the scale of averaging incorporated in conceptual models of the ground-water flow regime as well as the lack of data on the spatial variability of the hydrogeologic parameters. The present study describes the effect of limited observations of a spatially correlated permeability field on the predicted ground-water travel time uncertainty. Varying permeability correlation lengths have been used to investigate the importance of this geostatistical property on the tails of the travel time distribution. This study uses both geostatistical and differential analysis techniques. Following the generation of a spatially correlated permeability field which is considered reality, semivariogram analyses are performed upon small random subsets of the generated field to determine the geostatistical properties of the field represented by the observations. Kriging is then employed to generate a kriged permeability field and the corresponding standard deviation of the estimated field conditioned by the limited observations. Using both the real and kriged fields, the ground-water flow regime is simulated and ground-water travel paths and travel times are determined for various starting points. These results are used to define the ground-water travel time uncertainty due to path variability. The variance of the ground-water travel time along particular paths due to the variance of the permeability field estimated using kriging is then calculated using the first order, second moment method. The uncertainties in predicted travel time due to path and parameter uncertainties are then combined into a single distribution
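
    The first order, second moment step can be illustrated with a toy flow path: the travel time is a sum of segment times L·n/(k·i), and the kriging variance of each permeability is propagated through the sensitivities ∂T/∂k. All values below are invented, including the assumed 30% relative kriging standard deviation.

```python
import numpy as np

L = np.array([100.0, 150.0, 200.0])    # segment lengths (m)
grad = np.array([0.010, 0.020, 0.015]) # hydraulic gradients
n = 0.2                                # porosity
k = np.array([1.0e-5, 2.0e-5, 1.5e-5]) # kriged permeability estimates (m/s)
var_k = (0.3 * k) ** 2                 # assumed kriging variance (30% relative std)

seg_T = L * n / (k * grad)             # per-segment travel times (s)
T = seg_T.sum()                        # mean travel time along the path
dTdk = -seg_T / k                      # sensitivities dT/dk_i
var_T = np.sum(dTdk**2 * var_k)        # FOSM variance, assuming independent k_i
print(f"T = {T:.3g} s, std(T) = {np.sqrt(var_T):.3g} s")
```

    In the study itself the path variability (from simulating flow in the real versus kriged fields) is combined with this parameter-driven variance into a single travel-time distribution.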

  1. 222Rn in water: A comparison of two sample collection methods and two sample transport methods, and the determination of temporal variation in North Carolina ground water

    International Nuclear Information System (INIS)

    Hightower, J.H. III

    1994-01-01

    Objectives of this field experiment were: (1) determine whether there was a statistically significant difference between the radon concentrations of samples collected by EPA's standard method, using a syringe, and an alternative, slow-flow method; (2) determine whether there was a statistically significant difference between the measured radon concentrations of samples mailed vs samples not mailed; and (3) determine whether there was a temporal variation of water radon concentration over a 7-month period. The field experiment was conducted at 9 sites, 5 private wells, and 4 public wells, at various locations in North Carolina. Results showed that a syringe is not necessary for sample collection, there was generally no significant radon loss due to mailing samples, and there was statistically significant evidence of temporal variations in water radon concentrations

  2. [DOE method for evaluating environmental and waste management samples: Revision 1, Addendum 1

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S.C.

    1995-04-01

    The US Department of Energy's (DOE's) environmental and waste management (EM) sampling and analysis activities require that large numbers of samples be analyzed for materials characterization, environmental surveillance, and site-remediation programs. The present document, DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods), is a supplemental resource for analyzing many of these samples.

  3. [DOE method for evaluating environmental and waste management samples: Revision 1, Addendum 1

    International Nuclear Information System (INIS)

    Goheen, S.C.

    1995-04-01

    The US Department of Energy's (DOE's) environmental and waste management (EM) sampling and analysis activities require that large numbers of samples be analyzed for materials characterization, environmental surveillance, and site-remediation programs. The present document, DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods), is a supplemental resource for analyzing many of these samples

  4. Pharmacokinetic-pharmacodynamic correlation of imipenem in pediatric burn patients using a bioanalytical liquid chromatographic method

    Directory of Open Access Journals (Sweden)

    Silvia Regina Cavani Jorge Santos

    2015-06-01

    A bioanalytical method was developed and applied to quantify the free imipenem concentrations for pharmacokinetics and PK/PD correlation studies of the dose adjustments required to maintain antimicrobial effectiveness in pediatric burn patients. A reverse-phase Supelcosil LC18 column (250 x 4.6 mm, 5 µm) was applied, with a binary mobile phase consisting of 0.01 M, pH 7.0 phosphate buffer and acetonitrile (99:1, v/v) at a flow rate of 0.8 mL/min. The method showed good absolute recovery (above 90%), good linearity (0.25-100.0 µg/mL, r2=0.999), good sensitivity (LLOQ: 0.25 µg/mL; LLOD: 0.12 µg/mL), and acceptable stability. Inter/intraday precision values were 7.3/5.9%, and mean accuracy was 92.9%. The bioanalytical method was applied to quantify free drug concentrations in children with burns. Six pediatric burn patients (median: 7.0 years old, 27.5 kg, normal renal function, 33% total burn surface area) were prospectively investigated; inhalation injuries were present in 4/6 (67%) of the patients. Plasma monitoring and PK assessments were performed using a serial blood sample collection for each set, totaling 10 sets. The PK/PD target (40%T>MIC for each minimum inhibitory concentration, MIC: 0.5, 1.0, 2.0, 4.0 mg/L) was attained in more than 80% of the sets investigated, and in 100% after dose adjustment. In conclusion, the purification of plasma samples using an ultrafiltration technique followed by quantification of imipenem plasma measurements using the LC method is quite simple, useful, and requires small volumes for blood sampling. In addition, only a small amount of plasma (0.25 mL) is needed to guarantee drug effectiveness in pediatric burn patients. There is also a low risk of neurotoxicity, which is important because pharmacokinetics are unpredictable in these critical patients with severe hospital infection. Finally, the PK/PD target was attained for imipenem in the control of sepsis in pediatric patients with burns.
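
    The 40%T>MIC target check can be sketched with an assumed one-compartment exponential decay over the dosing interval; the peak concentration, half-life, and dosing interval below are illustrative values, not the patients' data.

```python
import numpy as np

c0, half_life_h, tau_h = 20.0, 1.0, 6.0   # peak free conc. (mg/L), t1/2 (h), dosing interval (h)
k = np.log(2) / half_life_h               # elimination rate constant (1/h)
t = np.linspace(0.0, tau_h, 1000)
conc = c0 * np.exp(-k * t)                # one-compartment decay over the interval

fracs = {}
for mic in (0.5, 1.0, 2.0, 4.0):          # the MICs considered in the abstract (mg/L)
    fracs[mic] = float((conc > mic).mean())   # fraction of the interval above MIC
    print(f"MIC {mic} mg/L: {fracs[mic]:.0%} T>MIC, 40% target met: {fracs[mic] >= 0.40}")
```

    With these illustrative parameters the target is met for the lower MICs but not for 4 mg/L, which is the kind of shortfall that motivates the dose adjustments described in the study.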

  5. Development of sample preparation method for honey analysis using PIXE

    International Nuclear Information System (INIS)

    Saitoh, Katsumi; Chiba, Keiko; Sera, Koichiro

    2008-01-01

    We developed an original preparation method for honey samples (samples in a paste-like state) specifically designed for PIXE analysis. The results of PIXE analysis of thin targets prepared by adding a standard containing nine elements to honey samples demonstrated that the preparation method yields quantitative values of sufficient accuracy. PIXE analysis of 13 kinds of honey was performed, and eight mineral components (Si, P, S, K, Ca, Mn, Cu and Zn) were detected in all honey samples. The principal mineral components were K and Ca, and the quantitative value for K accounted for the majority of the total value for mineral components. K content in honey varies greatly depending on the plant source. Chestnut honey had the highest K content; in fact, it was 2-3 times that of Manuka, which is known as a high-quality honey. The K content of false-acacia honey, which is produced in the greatest abundance, was 1/20 that of chestnut honey. (author)

  6. Method of separate determination of high-ohmic sample resistance and contact resistance

    Directory of Open Access Journals (Sweden)

    Vadim A. Golubiatnikov

    2015-09-01

    A method of separate determination of two-pole sample volume resistance and contact resistance is suggested. The method is applicable to high-ohmic semiconductor samples: semi-insulating gallium arsenide, detector cadmium-zinc telluride (CZT), etc. The method is based on illuminating the near-contact region with monochromatic radiation of variable intensity from light-emitting diodes with quantum energies exceeding the band gap of the material. It is necessary to obtain the dependence of the sample photocurrent upon the light-emitting diode current and to find the linear portion of this dependence. Extrapolation of this linear portion to the Y-axis gives the cut-off current. As the bias voltage is known, it is easy to calculate the sample volume resistance. Then, using the dark current value, one can determine the total contact resistance. The method was tested for n-type semi-insulating GaAs. The contact resistance value was shown to be approximately equal to the sample volume resistance. Thus, the influence of contacts must be taken into account when electrophysical data are analyzed.
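
    The extrapolation step can be sketched numerically: fit the linear portion of photocurrent versus LED current, take the y-intercept as the cut-off current, and split the total (dark) resistance into volume and contact parts. The numbers below are invented, chosen only so that the contact resistance comes out comparable to the volume resistance, as the abstract reports for GaAs.

```python
import numpy as np

bias_v = 100.0                   # known bias voltage (V)
i_dark = 0.5e-6                  # measured dark current (A)

led = np.array([2.0, 4.0, 6.0, 8.0, 10.0])                # LED current (mA), linear portion only
photo = np.array([1.02, 1.21, 1.40, 1.61, 1.80]) * 1e-6   # sample photocurrent (A)

slope, cutoff = np.polyfit(led, photo, 1)  # y-intercept = cut-off current (A)
r_volume = bias_v / cutoff                 # volume resistance from bias and cut-off current
r_contact = bias_v / i_dark - r_volume     # remainder of the total (dark) resistance
print(f"R_volume = {r_volume:.3g} ohm, R_contact = {r_contact:.3g} ohm")
```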

  7. Method validation to determine total alpha beta emitters in water samples using LSC

    International Nuclear Information System (INIS)

    Al-Masri, M. S.; Nashawati, A.; Al-akel, B.; Saaid, S.

    2006-06-01

    In this work a method was validated to determine gross alpha and beta emitters in water samples using a liquid scintillation counter. From each sample, 200 ml of water were evaporated to 20 ml, and 8 ml of the concentrate were mixed with 12 ml of a suitable cocktail and measured with a Wallac Winspectral 1414 liquid scintillation counter. The lower detection limit (LDL) of this method was 0.33 DPM for total alpha emitters and 1.3 DPM for total beta emitters. The reproducibility limit was ±2.32 DPM and ±1.41 DPM for total alpha and beta emitters, respectively, and the repeatability limit was ±2.19 DPM and ±1.11 DPM, respectively. The method is easy and fast because of the simple preparation steps and the large number of samples that can be measured at the same time. In addition, many real samples and standard samples were analyzed by the method and showed accurate results, so it was concluded that the method can be used with various water samples. (author)

  8. Coupling methods for multistage sampling

    OpenAIRE

    Chauvet, Guillaume

    2015-01-01

    Multistage sampling is commonly used for household surveys when there exists no sampling frame, or when the population is scattered over a wide area. Multistage sampling usually introduces a complex dependence in the selection of the final units, which makes asymptotic results quite difficult to prove. In this work, we consider multistage sampling with simple random without replacement sampling at the first stage, and with an arbitrary sampling design for further stages. We consider coupling ...

  9. Wind direction correlated measurements of radon and radon progeny in atmosphere: a method for radon source identification

    International Nuclear Information System (INIS)

    Akber, R.A.; Pfitzner, J.; Johnston, A.

    1994-01-01

    This paper describes the basic principles and methodology of a wind-direction-correlated measurement technique used to distinguish the mine-related and background components of radon and radon progeny concentrations in the vicinity of the ERA Ranger Uranium Mine. Simultaneous measurements of atmospheric radon and radon progeny concentrations and of wind speed and direction were conducted using automatic sampling stations. The data were recorded as a time series of half-hourly averages and grouped into sixteen 22.5-degree wind sectors. The sampling interval and the wind sector width were chosen considering the wind direction variability (σθ) over the sampling time interval. The data were then analysed for radon and radon progeny concentrations in each wind sector. Information about wind frequency, wind speed, and seasonal and diurnal variations in wind direction and radon concentrations was required for proper data analysis and interpretation of the results. A comparison with model-based estimates for an identical time period shows agreement within about a factor of two between the two methods. 15 refs., 1 tab., 5 figs
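
    The sector-grouping step can be sketched directly: assign each half-hourly wind direction to one of sixteen 22.5° sectors and average the radon concentration per sector. The data below are synthetic stand-ins for the station records.

```python
import numpy as np

rng = np.random.default_rng(2)
wind_dir = rng.uniform(0.0, 360.0, 48 * 30)   # half-hourly directions (deg), ~30 days
radon = rng.gamma(2.0, 5.0, wind_dir.size)    # synthetic radon concentrations (Bq/m3)

# Sector 0 is centred on north (348.75 deg to 11.25 deg); sectors are 22.5 deg wide
sector = (np.floor((wind_dir + 11.25) / 22.5) % 16).astype(int)
means = np.array([radon[sector == s].mean() for s in range(16)])
print(means.round(1))                         # mean radon concentration per wind sector
```

    With real data, sectors pointing toward the mine would show an elevated mean over the background sectors, which is how the mine-related component is separated.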

  10. Hidden cross-correlation patterns in stock markets based on permutation cross-sample entropy and PCA

    Science.gov (United States)

    Lin, Aijing; Shang, Pengjian; Zhong, Bo

    2014-12-01

    In this article, we investigate the hidden cross-correlation structures in the Chinese and US stock markets by performing permutation cross-sample entropy (PCSE) combined with a PCA approach. It is suggested that PCSE can provide a more faithful and more interpretable description of the dynamic mechanism between time series than a cross-correlation matrix. We show that this new technique can be adapted to observe stock markets, especially during financial crises. In order to identify and compare the interactions and structures of stock markets during financial crises, as well as in normal periods, all the samples are divided into four sub-periods. The results imply that the cross-correlations within the Chinese group are stronger than those within the US group in most sub-periods. In particular, it is likely that the US stock markets were more integrated with each other during the global financial crisis than during the Asian financial crisis. However, our results illustrate that the Chinese stock markets are not immune from the global financial crisis, although they are less integrated with other markets than the US stock markets are.

  11. Standard methods for sampling freshwater fishes: Opportunities for international collaboration

    Science.gov (United States)

    Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D. S.; Lester, Nigel P.; Porath, Mark T.; Winfield, Ian J.

    2017-01-01

    With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist, and identify means to collaborate with scientists where standardization is limited but interest and need occur.

  12. Computing Wigner distributions and time correlation functions using the quantum thermal bath method: application to proton transfer spectroscopy.

    Science.gov (United States)

    Basire, Marie; Borgis, Daniel; Vuilleumier, Rodolphe

    2013-08-14

    Langevin dynamics coupled to a quantum thermal bath (QTB) allows for the inclusion of vibrational quantum effects in molecular dynamics simulations at virtually no additional computer cost. We investigate here the ability of the QTB method to reproduce the quantum Wigner distribution of a variety of model potentials, designed to assess the performances and limits of the method. We further compute the infrared spectrum of a multidimensional model of proton transfer in the gas phase and in solution, using classical trajectories sampled initially from the Wigner distribution. It is shown that for this type of system involving large anharmonicities and strong nonlinear coupling to the environment, the quantum thermal bath is able to sample the Wigner distribution satisfactorily and to account for both zero point energy and tunneling effects. It leads to quantum time correlation functions having the correct short-time behavior, and the correct associated spectral frequencies, but that are slightly too overdamped. This is attributed to the classical propagation approximation rather than the generation of the quantized initial conditions themselves.

  13. Sampling methods for the study of pneumococcal carriage: a systematic review.

    Science.gov (United States)

    Gladstone, R A; Jefferies, J M; Faust, S N; Clarke, S C

    2012-11-06

    Streptococcus pneumoniae is an important pathogen worldwide. Accurate sampling of S. pneumoniae carriage is central to surveillance studies before and following conjugate vaccination programmes to combat pneumococcal disease. Any bias introduced during sampling will affect downstream recovery and typing. Many variables exist for the method of collection and initial processing, which can make inter-laboratory or international comparisons of data complex. In February 2003, a World Health Organisation working group published a standard method for the detection of pneumococcal carriage for vaccine trials to reduce or eliminate variability. We sought to describe the variables associated with the sampling of S. pneumoniae from collection to storage in the context of the methods recommended by the WHO and those used in pneumococcal carriage studies since its publication. A search of published literature in the online PubMed database was performed on the 1st June 2012, to identify published studies that collected pneumococcal carriage isolates, conducted after the publication of the WHO standard method. After undertaking a systematic analysis of the literature, we show that a number of differences in pneumococcal sampling protocol continue to exist between studies since the WHO publication. The majority of studies sample from the nasopharynx, but the choice of swab and swab transport media is more variable between studies. At present there is insufficient experimental data that supports the optimal sensitivity of any standard method. This may have contributed to incomplete adoption of the primary stages of the WHO detection protocol, alongside pragmatic or logistical issues associated with study design. Consequently studies may not provide a true estimate of pneumococcal carriage. Optimal sampling of carriage could lead to improvements in downstream analysis and the evaluation of pneumococcal vaccine impact and extrapolation to pneumococcal disease control therefore

  14. A faster sample preparation method for determination of polonium-210 in fish

    International Nuclear Information System (INIS)

    Sadi, B.B.; Jing Chen; Kochermin, Vera; Godwin Tung; Sorina Chiorean

    2016-01-01

    In order to facilitate Health Canada’s study on background radiation levels in country foods, an in-house radio-analytical method has been developed for determination of polonium-210 (210Po) in fish samples. The method was validated by measurement of 210Po in a certified reference material. It was also evaluated by comparing 210Po concentrations in a number of fish samples with another method. The in-house method offers faster sample dissolution using an automated digestion system compared to currently used wet-ashing on a hot plate. It also utilizes pre-packed Sr-resin® cartridges for rapid and reproducible separation of 210Po versus time-consuming manually packed Sr-resin® columns. (author)

  15. A hybrid measure-correlate-predict method for long-term wind condition assessment

    International Nuclear Information System (INIS)

    Zhang, Jie; Chowdhury, Souma; Messac, Achille; Hodge, Bri-Mathias

    2014-01-01

    Highlights: • A hybrid measure-correlate-predict (MCP) methodology with greater accuracy is developed. • Three sets of performance metrics are proposed to evaluate the hybrid MCP method. • Both wind speed and direction are considered in the hybrid MCP method. • The best combination of MCP algorithms is determined. • The developed hybrid MCP method is uniquely helpful for long-term wind resource assessment. - Abstract: This paper develops a hybrid measure-correlate-predict (MCP) strategy to assess long-term wind resource variations at a farm site. The hybrid MCP method uses recorded data from multiple reference stations to estimate long-term wind conditions at a target wind plant site with greater accuracy than is possible with data from a single reference station. The weight of each reference station in the hybrid strategy is determined by the (i) distance and (ii) elevation differences between the target farm site and each reference station. In this case, the wind data is divided into sectors according to the wind direction, and the MCP strategy is implemented for each wind direction sector separately. The applicability of the proposed hybrid strategy is investigated using five MCP methods: (i) the linear regression; (ii) the variance ratio; (iii) the Weibull scale; (iv) the artificial neural networks; and (v) the support vector regression. To implement the hybrid MCP methodology, we use hourly averaged wind data recorded at five stations in the state of Minnesota between 07-01-1996 and 06-30-2004. Three sets of performance metrics are used to evaluate the hybrid MCP method. The first set of metrics analyze the statistical performance, including the mean wind speed, wind speed variance, root mean square error, and mean absolute error. The second set of metrics evaluate the distribution of long-term wind speed; to this end, the Weibull distribution and the Multivariate and Multimodal Wind Distribution models are adopted. The third set of metrics analyze
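
    The station-weighting idea can be sketched with an assumed inverse distance-and-elevation formula; the paper's exact weighting may differ, and the distances, elevation differences, and per-station predictions below are invented.

```python
import numpy as np

dist_km = np.array([30.0, 55.0, 80.0, 120.0, 150.0])  # target-to-station distances (km)
delev_m = np.array([20.0, 5.0, 60.0, 15.0, 90.0])     # elevation differences (m)

raw = 1.0 / (dist_km * (1.0 + delev_m / 100.0))       # nearer, flatter stations weigh more
w = raw / raw.sum()                                   # normalise weights to sum to 1

preds = np.array([7.1, 6.8, 7.4, 6.5, 7.0])           # per-station MCP estimates of long-term
hybrid = float(w @ preds)                             # mean wind speed (m/s) at the target site
print(w.round(3), round(hybrid, 2))
```

    The hybrid estimate is a convex combination of the single-station MCP predictions, which is why it cannot be worse than the most distant station alone and tends to beat any single reference station.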

  16. Comparison of DNA preservation methods for environmental bacterial community samples.

    Science.gov (United States)

    Gray, Michael A; Pratte, Zoe A; Kellogg, Christina A

    2013-02-01

    Field collections of environmental samples, for example corals, for molecular microbial analyses present distinct challenges. The lack of laboratory facilities in remote locations is common, and preservation of microbial community DNA for later study is critical. A particular challenge is keeping samples frozen in transit. Five nucleic acid preservation methods that do not require cold storage were compared for effectiveness over time and ease of use. Mixed microbial communities of known composition were created and preserved by DNAgard(™), RNAlater(®), DMSO-EDTA-salt (DESS), FTA(®) cards, and FTA Elute(®) cards. Automated ribosomal intergenic spacer analysis and clone libraries were used to detect specific changes in the faux communities over weeks and months of storage. A previously known bias in FTA(®) cards that results in lower recovery of pure cultures of Gram-positive bacteria was also detected in mixed community samples. There appears to be a uniform bias across all five preservation methods against microorganisms with high G + C DNA. Overall, the liquid-based preservatives (DNAgard(™), RNAlater(®), and DESS) outperformed the card-based methods. No single liquid method clearly outperformed the others, leaving method choice to be based on experimental design, field facilities, shipping constraints, and allowable cost. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  17. Novel sample preparation method for surfactant containing suppositories: effect of micelle formation on drug recovery.

    Science.gov (United States)

    Kalmár, Éva; Ueno, Konomi; Forgó, Péter; Szakonyi, Gerda; Dombi, György

    2013-09-01

    Rectal drug delivery is currently a focus of attention. Surfactants promote drug release from suppository bases and enhance the formulation properties. The aim of our work was to develop a sample preparation method for HPLC analysis of a suppository base containing 95% hard fat, 2.5% Tween 20 and 2.5% Tween 60. A conventional sample preparation method did not provide successful results, as the recovery of the drug failed to fulfil the validation criterion of 95-105%. This was caused by the non-ionic surfactants in the suppository base incorporating some of the drug and preventing its release. As analytical guidance for the formulation, we suggest a well defined surfactant content based on the turbidimetric determination of the CMC (critical micelle formation concentration) in the applied methanol-water solvent. Our CMC data correlate well with the results of previous studies. For the sample preparation procedure, the effects of ionic strength and pH on drug recovery were studied, while avoiding degradation of the drug during the procedure. Aminophenazone and paracetamol were used as model drugs. The optimum conditions for drug release from the molten suppository base were found to be 100 mM NaCl, 20-40 mM NaOH and a 30 min ultrasonic treatment of the final sample solution. Because these conditions could cause degradation of the drugs in solution, the samples were monitored by NMR spectroscopy, and the results indicated that degradation did not take place. The determined CMCs were 0.08 mM for Tween 20, 0.06 mM for Tween 60 and 0.04 mM for a combined Tween 20, Tween 60 system. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. A generalized Levene's scale test for variance heterogeneity in the presence of sample correlation and group uncertainty.

    Science.gov (United States)

    Soave, David; Sun, Lei

    2017-09-01

    We generalize Levene's test for variance (scale) heterogeneity between k groups for more complex data, when there are sample correlation and group membership uncertainty. Following a two-stage regression framework, we show that least absolute deviation regression must be used in the stage 1 analysis to ensure a correct asymptotic χ²_{k-1}/(k-1) distribution of the generalized scale (gS) test statistic. We then show that the proposed gS test is independent of the generalized location test, under the joint null hypothesis of no mean and no variance heterogeneity. Consequently, we generalize the recently proposed joint location-scale (gJLS) test, valuable in settings where there is an interaction effect but one interacting variable is not available. We evaluate the proposed method via an extensive simulation study and two genetic association application studies. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
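
    The two-stage construction lends itself to a compact numerical sketch. The code below is an illustration, not the authors' implementation: for a groups-only design the stage 1 least absolute deviation fit reduces to subtracting per-group medians, and stage 2 tests the absolute residuals for location differences across the k groups, rescaling the one-way F statistic to its asymptotic chi-square form.

```python
import numpy as np
from scipy import stats

def generalized_scale_test(y, groups):
    """Two-stage Levene-type scale test (illustrative sketch).

    Stage 1: remove per-group medians (the LAD fit for a groups-only
    design). Stage 2: one-way ANOVA on the absolute residuals, with
    (k-1)*F referred to a chi-square(k-1) law.
    Returns (chi2_statistic, p_value).
    """
    y, groups = np.asarray(y, float), np.asarray(groups)
    labels = np.unique(groups)
    k = len(labels)
    resid = np.empty_like(y)
    for g in labels:
        m = groups == g
        resid[m] = y[m] - np.median(y[m])   # stage 1: LAD residuals
    d = np.abs(resid)
    F, _ = stats.f_oneway(*[d[groups == g] for g in labels])
    chi2 = (k - 1) * F                      # asymptotic chi-square form
    return chi2, stats.chi2.sf(chi2, k - 1)
```

    Simulating two groups with equal means but standard deviations 1 and 3 yields a vanishingly small p-value, while equal-variance groups give a roughly uniform p.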

  19. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    Science.gov (United States)

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or when the group fears that making its membership public would bring social stigma, we say the population is hidden. Such populations are difficult to approach with standard survey methods because the response rate is low and members are not entirely honest in their responses when probability sampling is used. The only alternative known to address the problems of earlier approaches such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondents. This characteristic allows for probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results, the bias of the chain-referral sampling in RDS tends to shrink as the sample gets bigger, and the process stabilizes as the waves progress. The final sample can therefore be effectively independent of the initial seeds once a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered as an alternative which can improve upon both key informant sampling and ethnographic surveys, and it needs to be utilized for various cases domestically as well.
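
    The seed-independence claim can be illustrated with a toy simulation. This is a sketch under assumed parameters (a hypothetical 200-node contact network, single-referral chains modeled as a random walk, and the Volz-Heckathorn 1/degree weighting), not the simulation used in the study:

```python
import random

random.seed(7)

# hypothetical hidden population: 200 members, half carrying a trait,
# each tied to roughly a dozen random contacts (symmetrized)
n = 200
trait = [i < 100 for i in range(n)]
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in random.sample([k for k in range(n) if k != i], 6):
        adj[i].add(j)
        adj[j].add(i)
adj = {i: sorted(nbrs) for i, nbrs in adj.items()}

def rds_chain(start, steps):
    """Chain referral modeled as a random walk on the contact network."""
    node, recruits = start, []
    for _ in range(steps):
        node = random.choice(adj[node])
        recruits.append(node)
    return recruits

def vh_estimate(recruits):
    """Volz-Heckathorn (RDS-II) estimator: weight each recruit by 1/degree."""
    w = [1.0 / len(adj[i]) for i in recruits]
    return sum(wi for wi, i in zip(w, recruits) if trait[i]) / sum(w)

p_a = vh_estimate(rds_chain(start=0, steps=4000))     # seed carries the trait
p_b = vh_estimate(rds_chain(start=150, steps=4000))   # seed does not
```

    After enough waves, the two chains started from opposite seeds land on nearly the same trait-proportion estimate, mirroring the convergence behavior described in the abstract.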

  20. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for analytical purposes. An analytical FAAS method for determining the cobalt, chromium, copper, nickel, lead and zinc content of a gabbro sample and the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids was tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods were compared, and dissolution in the mixture of HNO3 + HF is recommended as optimal.

  1. A generalized transmission method for gamma-efficiency determinations in soil samples

    International Nuclear Information System (INIS)

    Bolivar, J.P.; Garcia-Tenorio, R.; Garcia-Leon, M.

    1994-01-01

    In this paper, a generalization of the γ-ray transmission method which is useful for measurements on soil samples, for example, is presented. The correction factor, f, is given, which is a function of the apparent density of the soil and the γ-ray energy. With this method, the need for individual determinations of f, for each energy and apparent soil density is avoided. Although the method has been developed for soils, the general philosophy can be applied to other sample matrices, such as water or vegetables for example. (author)
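
    As an illustration of how such a correction factor depends on apparent density and photon energy, the sketch below evaluates the standard slab self-attenuation correction f = (1 − e^(−μt))/(μt). The mass attenuation table and the parametrization are assumptions for illustration, not the generalized factor tabulated in the paper:

```python
import numpy as np

# hypothetical mass attenuation coefficients (cm^2/g) for a soil-like matrix
energies_keV = np.array([100.0, 300.0, 600.0, 1000.0, 1500.0])
mu_mass = np.array([0.167, 0.107, 0.080, 0.064, 0.052])

def self_absorption_factor(energy_keV, apparent_density, thickness_cm):
    """Slab self-attenuation correction f = (1 - exp(-mu*t)) / (mu*t),
    with the linear attenuation coefficient mu = mu_mass(E) * density
    interpolated from the (assumed) table above."""
    mu = np.interp(energy_keV, energies_keV, mu_mass) * apparent_density
    x = mu * thickness_cm
    return (1.0 - np.exp(-x)) / x
```

    The factor tends to 1 for a vanishingly thin sample and decreases as the apparent density (and hence attenuation) grows, which is the qualitative behavior the generalized method captures.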

  2. Chemometrics methods for the investigation of methylmercury and total mercury contamination in mollusks samples collected from coastal sites along the Chinese Bohai Sea.

    Science.gov (United States)

    Yawei, Wang; Lina, Liang; Jianbo, Shi; Guibin, Jiang

    2005-06-01

    The development and application of chemometrics methods, namely principal component analysis (PCA), cluster analysis and correlation analysis, for the determination of methylmercury (MeHg) and total mercury (HgT) in gastropod and bivalve species collected from eight coastal sites along the Chinese Bohai Sea are described. HgT is directly determined by atomic fluorescence spectrometry (AFS), while MeHg is measured by a laboratory-established high performance liquid chromatography-atomic fluorescence spectrometry system (HPLC-AFS). One-way ANOVA and cluster analysis indicated that the accumulation of Hg by Rap was significantly (P<0.05) different from that of the other mollusks. Correlation analysis shows a linear relationship between MeHg and HgT in mollusk samples collected from coastal sites along the Chinese Bohai Sea, whereas no such linear relationship was found in mollusk samples collected from the Hongqiao market in Beijing City.
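
    The chemometric pipeline (correlation analysis plus PCA) can be sketched on mock data. The concentrations below are invented for illustration, and PCA is done with a plain SVD rather than a statistics package:

```python
import numpy as np

rng = np.random.default_rng(0)
# mock survey: HgT and MeHg (invented values) in 40 mollusk samples
hg_t = rng.uniform(0.05, 0.5, 40)
me_hg = 0.3 * hg_t + rng.normal(0.0, 0.01, 40)   # MeHg roughly proportional to HgT

X = np.column_stack([hg_t, me_hg])

# correlation analysis: Pearson r between MeHg and HgT
r = np.corrcoef(hg_t, me_hg)[0, 1]

# PCA via SVD of the standardized data matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)              # variance explained per component
```

    A strong MeHg-HgT linear relationship shows up both as a Pearson r close to 1 and as a first principal component that captures nearly all of the variance.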

  3. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  4. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of interval estimation for the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the mean's interval are carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small sample situations. (authors)
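
    The two resampling schemes singled out by the comparison can be sketched in a few lines (a minimal illustration with an invented five-point sample, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(42)
sample = np.array([4.2, 5.1, 3.8, 4.9, 4.5])     # invented small sample (n = 5)

def bootstrap_ci(x, b=10000, alpha=0.05):
    """Classical percentile bootstrap interval for the mean."""
    idx = rng.integers(0, len(x), size=(b, len(x)))
    means = x[idx].mean(axis=1)
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

def bayesian_bootstrap_ci(x, b=10000, alpha=0.05):
    """Bayesian bootstrap: Dirichlet(1, ..., 1) weights replace resampling."""
    w = rng.dirichlet(np.ones(len(x)), size=b)
    means = w @ x
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])
```

    The classical bootstrap resamples the data with replacement; the Bayesian bootstrap instead draws continuous Dirichlet weights, which avoids the discreteness of resampling from only a handful of observations.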

  5. Mathematical correlation of modal-parameter-identification methods via system-realization theory

    Science.gov (United States)

    Juang, Jer-Nan

    1987-01-01

    A unified approach is introduced using system-realization theory to derive and correlate modal-parameter-identification methods for flexible structures. Several different time-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal-parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research toward the unification of the many possible approaches for modal-parameter identification.

  6. Correlative Super-Resolution Microscopy: New Dimensions and New Opportunities.

    Science.gov (United States)

    Hauser, Meghan; Wojcik, Michal; Kim, Doory; Mahmoudi, Morteza; Li, Wan; Xu, Ke

    2017-06-14

    Correlative microscopy, the integration of two or more microscopy techniques performed on the same sample, produces results that emphasize the strengths of each technique while offsetting their individual weaknesses. Light microscopy has historically been a central method in correlative microscopy due to its widespread availability, compatibility with hydrated and live biological samples, and excellent molecular specificity through fluorescence labeling. However, conventional light microscopy can only achieve a resolution of ∼300 nm, undercutting its advantages in correlations with higher-resolution methods. The rise of super-resolution microscopy (SRM) over the past decade has drastically improved the resolution of light microscopy to ∼10 nm, thus creating exciting new opportunities and challenges for correlative microscopy. Here we review how these challenges are addressed to effectively correlate SRM with other microscopy techniques, including light microscopy, electron microscopy, cryomicroscopy, atomic force microscopy, and various forms of spectroscopy. Though we emphasize biological studies, we also discuss the application of correlative SRM to materials characterization and single-molecule reactions. Finally, we point out current limitations and discuss possible future improvements and advances. We thus demonstrate how a correlative approach adds new dimensions of information and provides new opportunities in the fast-growing field of SRM.

  7. Validation of curve-fitting method for blood retention of 99mTc-GSA. Comparison with blood sampling method

    International Nuclear Information System (INIS)

    Ha-Kawa, Sang Kil; Suga, Yutaka; Kouda, Katsuyasu; Ikeda, Koshi; Tanaka, Yoshimasa

    1997-01-01

    We investigated a curve-fitting method for the rate of blood retention of 99m Tc-galactosyl serum albumin (GSA) as a substitute for the blood sampling method. Seven healthy volunteers and 27 patients with liver disease underwent 99m Tc-GSA scanning. After normalization of the y-intercept to 100 percent, a biexponential regression curve fitted to the precordial time-activity curve provided the percent injected dose (%ID) of 99m Tc-GSA in the blood without blood sampling. The discrepancy between the %ID obtained by the curve-fitting method and that obtained from multiple blood samples was minimal in normal volunteers (3.1±2.1%, mean±standard deviation, n=77 samplings). A slightly greater discrepancy was observed in patients with liver disease (7.5±6.1%, n=135 samplings). The %ID at 15 min after injection obtained from the fitted curve was significantly greater in patients with liver cirrhosis than in the controls (53.2±11.6%, n=13, vs. 31.9±2.8%, n=7) and correlated with the plasma retention rate for indocyanine green (r=-0.869). The fitted curve thus reflects the blood retention of 99m Tc-GSA and could be a substitute for the blood sampling method. (author)
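
    The biexponential fit with intercept normalization can be sketched as follows. The kinetic parameters and noise level are invented for illustration, and scipy's curve_fit stands in for the clinical processing chain:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, alpha, b, beta):
    """Biexponential model of the precordial time-activity curve."""
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 61)                   # minutes after injection
y = biexp(t, 70.0, 0.15, 30.0, 0.01)             # hypothetical kinetics
y = y + rng.normal(0.0, 0.5, t.size)             # measurement noise

p, _ = curve_fit(biexp, t, y, p0=(50, 0.1, 50, 0.02), maxfev=10000)
y0 = biexp(0.0, *p)                              # fitted y-intercept
pid = 100.0 * biexp(t, *p) / y0                  # normalize intercept to 100 %
pid_15 = 100.0 * biexp(15.0, *p) / y0            # %ID at 15 min from the fit
```

    Normalizing the fitted intercept to 100% lets the curve itself play the role of the blood samples: the %ID at any time point is read directly off the fitted function.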

  8. Evaluation of the point-centred-quarter method of sampling ...

    African Journals Online (AJOL)

    -quarter method. The parameter which was most efficiently sampled was species composition (relative density), with 90% replicate similarity being achieved with 100 point-centred-quarters. However, this technique cannot be recommended, even ...

  9. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed a strong linearity over a range of concentrations from 10⁶ to 10⁰ leptospires/mL, with low limits of detection. The optimized method for the quantification of pathogenic Leptospira in environmental waters (river, pond and sewage) consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.
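
    Downstream of the extraction, qPCR quantification rests on an ordinary log-linear standard curve. The sketch below shows the generic arithmetic; the Cq values are invented and this is not the calibration reported in the study:

```python
def fit_standard_curve(log10_copies, cq):
    """Least-squares line Cq = slope * log10(copies) + intercept;
    amplification efficiency = 10**(-1/slope) - 1."""
    n = len(cq)
    mx, my = sum(log10_copies) / n, sum(cq) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept, 10.0 ** (-1.0 / slope) - 1.0

def quantify(cq, slope, intercept):
    """Invert the standard curve to copies per reaction."""
    return 10.0 ** ((cq - intercept) / slope)

logs = [5.0, 4.0, 3.0, 2.0, 1.0]                 # serial dilution (invented)
cqs = [38.0 - 3.32 * x for x in logs]            # ideal ~100 % efficiency
slope, intercept, efficiency = fit_standard_curve(logs, cqs)
copies = quantify(28.04, slope, intercept)       # an unknown sample's Cq
```

    A slope near −3.32 corresponds to an amplification efficiency near 100%, the usual sanity check before converting Cq values of environmental samples into leptospire counts.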

  10. The multiphonon method as a dynamical approach to octupole correlations in deformed nuclei

    International Nuclear Information System (INIS)

    Piepenbring, R.

    1986-09-01

    The octupole correlations in nuclei are studied within the framework of the multiphonon method which is mainly the exact diagonalization of the total Hamiltonian in the space spanned by collective phonons. This treatment takes properly into account the Pauli principle. It is a microscopic approach based on a reflection symmetry of the potential. The spectroscopic properties of double even and odd-mass nuclei are nicely reproduced. The multiphonon method appears as a dynamical approach to octupole correlations in nuclei which can be compared to other models based on stable octupole deformation. 66 refs

  11. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    Science.gov (United States)

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  12. Study of variance and covariance terms in linear attenuation coefficient measurements of irregular samples through the two media method by gamma-ray transmission

    International Nuclear Information System (INIS)

    Kuramoto, R.Y.R.Renato Yoichi Ribeiro.; Appoloni, Carlos Roberto

    2002-01-01

    The two media method permits the application of Beer's law (Thesis (Master Degree), Universidade Estadual de Londrina, PR, Brazil, pp. 23) for the linear attenuation coefficient determination of irregular thickness samples by gamma-ray transmission. However, the use of this methodology introduces experimental complexity due to the great number of variables to be measured. As a consequence of this complexity, the uncertainties associated with each of these variables may be correlated. In this paper, we examine the covariance terms in the uncertainty propagation and quantify the correlation among the uncertainties of the variables in question.
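
    The first-order propagation with covariance terms, u² = gᵀΣg, can be sketched generically. The count rates, thickness and covariance matrix below are invented for illustration; the function itself is a standard numerical propagation, not the paper's specific derivation:

```python
import numpy as np

def propagate_variance(f, x, cov):
    """First-order uncertainty propagation u^2 = g^T * Sigma * g,
    keeping the covariance (off-diagonal) terms; the gradient g is
    taken by central differences with a relative step."""
    x = np.asarray(x, float)
    n = len(x)
    g = np.empty(n)
    for i in range(n):
        h = 1e-6 * max(1.0, abs(x[i]))
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g @ np.asarray(cov) @ g

# example: mu = ln(I0/I)/t with correlated intensities (all numbers invented)
mu_fn = lambda v: np.log(v[0] / v[1]) / v[2]
x = [10000.0, 4000.0, 2.0]                  # I0, I (counts), thickness t (cm)
cov = [[100.0, 30.0, 0.0],                  # var(I0), cov(I0, I), ...
       [30.0, 64.0, 0.0],
       [0.0, 0.0, 1e-4]]
var_mu = propagate_variance(mu_fn, x, cov)
```

    Dropping the off-diagonal entries of the covariance matrix changes the propagated variance, which is exactly the effect the paper quantifies for the two media method.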

  13. Joint image reconstruction method with correlative multi-channel prior for x-ray spectral computed tomography

    Science.gov (United States)

    Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.

    2018-06-01

    Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angular undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and since energy channels are mutually correlated it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.

  14. Method for evaluation of radiative properties of glass samples

    Energy Technology Data Exchange (ETDEWEB)

    Mohelnikova, Jitka [Faculty of Civil Engineering, Brno University of Technology, Veveri 95, 602 00 Brno (Czech Republic)], E-mail: mohelnikova.j@fce.vutbr.cz

    2008-04-15

    The paper presents a simple calculation method for evaluating the radiative properties of window glasses. The method is based on a computer simulation model of the energy balance of a thermally insulated box containing selected glass samples. A temperature profile of the air inside the box, with a glass sample exposed to incident radiation, was determined for defined boundary conditions. The spectral range of the radiation was considered in the interval between 280 and 2500 nm, which corresponds to the spectral range of solar radiation striking windows in building facades. The air temperature rise within the box was determined in response to the incident radiation, from the beginning of the exposure until steady-state thermal conditions were reached. The steady-state temperature inside the insulated box serves for the evaluation of the box energy balance and the determination of the glass sample's radiative properties. These properties are represented by glass characteristics as mean values of transmittance, reflectance and absorptance calculated for a defined spectral range. The computer simulations were compared with experimental measurements on a real model of the insulated box, and the results of the calculations and measurements are in good agreement. The method is recommended for preliminary evaluation of window glass radiative properties, which serve as input data for the energy evaluation of buildings.

  15. PhyloChip™ microarray comparison of sampling methods used for coral microbial ecology

    Science.gov (United States)

    Kellogg, Christina A.; Piceno, Yvette M.; Tom, Lauren M.; DeSantis, Todd Z.; Zawada, David G.; Andersen, Gary L.

    2012-01-01

    Interest in coral microbial ecology has been increasing steadily over the last decade, yet standardized methods of sample collection still have not been defined. Two methods were compared for their ability to sample coral-associated microbial communities: tissue punches and foam swabs, the latter being less invasive and preferred by reef managers. Four colonies of star coral, Montastraea annularis, were sampled in the Dry Tortugas National Park (two healthy and two with white plague disease). The PhyloChip™ G3 microarray was used to assess microbial community structure of amplified 16S rRNA gene sequences. Samples clustered based on methodology rather than coral colony. Punch samples from healthy and diseased corals were distinct. All swab samples clustered closely together with the seawater control and did not group according to the health state of the corals. Although more microbial taxa were detected by the swab method, there is a much larger overlap between the water control and swab samples than punch samples, suggesting some of the additional diversity is due to contamination from water absorbed by the swab. While swabs are useful for noninvasive studies of the coral surface mucus layer, these results show that they are not optimal for studies of coral disease.

  16. A Proteomics Sample Preparation Method for Mature, Recalcitrant Leaves of Perennial Plants

    Science.gov (United States)

    Na, Zhang; Chengying, Lao; Bo, Wang; Dingxiang, Peng; Lijun, Liu

    2014-01-01

    Sample preparation is key to the success of proteomics studies. In the present study, two sample preparation methods were tested for their suitability on the mature, recalcitrant leaves of six representative perennial plants (grape, plum, pear, peach, orange, and ramie). An improved sample preparation method was obtained: Tris and Triton X-100 were added together instead of CHAPS to the lysis buffer, and a 20% TCA-water solution and 100% precooled acetone were added after the protein extraction for the further purification of protein. This method effectively eliminates nonprotein impurities and obtains a clear two-dimensional gel electrophoresis array. The method facilitates the separation of high-molecular-weight proteins and increases the resolution of low-abundance proteins. This method provides a widely applicable and economically feasible technology for the proteomic study of the mature, recalcitrant leaves of perennial plants. PMID:25028960

  17. A proteomics sample preparation method for mature, recalcitrant leaves of perennial plants.

    Directory of Open Access Journals (Sweden)

    Deng Gang

    Full Text Available Sample preparation is key to the success of proteomics studies. In the present study, two sample preparation methods were tested for their suitability on the mature, recalcitrant leaves of six representative perennial plants (grape, plum, pear, peach, orange, and ramie). An improved sample preparation method was obtained: Tris and Triton X-100 were added together instead of CHAPS to the lysis buffer, and a 20% TCA-water solution and 100% precooled acetone were added after the protein extraction for the further purification of protein. This method effectively eliminates nonprotein impurities and obtains a clear two-dimensional gel electrophoresis array. The method facilitates the separation of high-molecular-weight proteins and increases the resolution of low-abundance proteins. This method provides a widely applicable and economically feasible technology for the proteomic study of the mature, recalcitrant leaves of perennial plants.

  18. Correlation analysis of first phase monitoring results for uranium mill workers

    International Nuclear Information System (INIS)

    Davis, M.W.

    1983-05-01

    This report describes the determination of the existence and extent of correlations in data obtained during the first phase study of urinalysis, personal air sampling and lung burden measurements of uranium mill workers. It was shown that uranium excretions in urine as determined from spot urine samples at the end of the shift were correlated with intakes calculated from personal air sampling data at the 90 percent confidence level. When there are large variations in the rate of urine production, the time rate of uranium elimination was shown to be a more reliable indicator of uranium excretion than the uranium concentration in urine. Based on correlations between phantom and subject lung burden measurements in the presence of changing background radiation levels, a comparative lung burden measurement technique was developed. The sensitivity and accuracy of the method represent a significant improvement and the method is as applicable to females as to males

  19. A Sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability

    International Nuclear Information System (INIS)

    Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng

    2016-01-01

    The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions which are expensive to evaluate. This type of method includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active learning reliability method combining a Kriging model with Monte Carlo simulation. The purpose of this paper is to improve SKRA through adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that the accuracy of these regions has a negligible effect on the results. The size of the sampling regions is adapted according to the failure probability calculated in the last iteration. Two parallel strategies, aimed at selecting multiple sample points at a time, are introduced and compared. The improvement is verified through several challenging examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • Adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.
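
    The AK-MCS ingredients mentioned above (a Kriging surrogate, a Monte Carlo population, and the U learning function) can be sketched in one dimension with a hand-rolled Gaussian process. The limit state g(x) = 2 − x and all settings are illustrative assumptions; nothing here implements the paper's adaptive-region or parallel strategies:

```python
import numpy as np

rng = np.random.default_rng(0)

def gp_fit(Xt, yt, l=1.0, jitter=1e-6):
    """Noise-free GP (RBF kernel) fitted via Cholesky; jitter for stability."""
    K = np.exp(-0.5 * (Xt[:, None] - Xt[None, :]) ** 2 / l ** 2)
    L = np.linalg.cholesky(K + jitter * np.eye(len(Xt)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yt))
    return L, alpha

def gp_predict(Xt, L, alpha, Xs, l=1.0):
    Ks = np.exp(-0.5 * (Xs[:, None] - Xt[None, :]) ** 2 / l ** 2)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    sd = np.sqrt(np.maximum(1.0 - np.sum(v ** 2, axis=0), 1e-12))
    return mu, sd

g = lambda x: 2.0 - x                     # toy limit state: failure when g(x) < 0
pop = rng.standard_normal(20000)          # Monte Carlo population, x ~ N(0, 1)

Xt = np.linspace(-4.0, 4.0, 5)            # initial design of experiments
yt = g(Xt)
for _ in range(10):                       # active learning iterations
    L, alpha = gp_fit(Xt, yt)
    mu, sd = gp_predict(Xt, L, alpha, pop)
    U = np.abs(mu) / sd                   # U learning function
    xnew = pop[np.argmin(U)]              # most misclassification-prone point
    Xt, yt = np.append(Xt, xnew), np.append(yt, g(xnew))

L, alpha = gp_fit(Xt, yt)
mu, _ = gp_predict(Xt, L, alpha, pop)
pf = np.mean(mu < 0)                      # surrogate-based failure probability
```

    The surrogate classifies the whole Monte Carlo population by the sign of its predicted mean, so only 15 true evaluations of g are needed; the exact failure probability here is 1 − Φ(2) ≈ 0.0228.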

  20. Sample processing method for the determination of perchlorate in milk

    International Nuclear Information System (INIS)

    Dyke, Jason V.; Kirk, Andrea B.; Kalyani Martinelango, P.; Dasgupta, Purnendu K.

    2006-01-01

    In recent years, many different water sources and foods have been reported to contain perchlorate. Studies indicate that significant levels of perchlorate are present in both human and dairy milk. The determination of perchlorate in milk is particularly important due to its potential health impact on infants and children. As for many other biological samples, sample preparation is more time consuming than the analysis itself. The concurrent presence of large amounts of fats, proteins, carbohydrates, etc., demands some initial cleanup; otherwise the separation column lifetime and the limit of detection are both greatly compromised. Reported milk processing methods require the addition of chemicals such as ethanol, acetic acid or acetonitrile. Reagent addition is undesirable in trace analysis. We report here an essentially reagent-free sample preparation method for the determination of perchlorate in milk. Milk samples are spiked with isotopically labeled perchlorate and centrifuged to remove lipids. The resulting liquid is placed in a disposable centrifugal ultrafilter device with a molecular weight cutoff of 10 kDa, and centrifuged. Approximately 5-10 ml of clear liquid, ready for analysis, is obtained from a 20 ml milk sample. Both bovine and human milk samples have been successfully processed and analyzed by ion chromatography-mass spectrometry (IC-MS). Standard addition experiments show good recoveries. The repeatability of the analytical result for the same sample in multiple sample cleanup runs ranged from 3 to 6% R.S.D. This processing technique has also been successfully applied for the determination of iodide and thiocyanate in milk

  1. A Method for Microalgae Proteomics Analysis Based on Modified Filter-Aided Sample Preparation.

    Science.gov (United States)

    Li, Song; Cao, Xupeng; Wang, Yan; Zhu, Zhen; Zhang, Haowei; Xue, Song; Tian, Jing

    2017-11-01

    With the fast development of microalgal biofuel research, proteomics studies of microalgae have increased quickly. Filter-aided sample preparation (FASP) has been a widely used proteomics sample preparation method since 2009. Here, a method for microalgae proteomics analysis based on modified filter-aided sample preparation (mFASP) is described that meets the characteristics of microalgae cells and eliminates the error caused by over-alkylation. Using Chlamydomonas reinhardtii as the model, the prepared sample was tested by standard LC-MS/MS and compared with previous reports. The results showed that mFASP is suitable for most occasions in microalgae proteomics studies.

  2. A Geostatistical Approach to Indoor Surface Sampling Strategies

    DEFF Research Database (Denmark)

    Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg

    1990-01-01

    Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized, and particulate surface contamination, sampled from small areas on a table, has been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method…
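    The kriging step mentioned above can be illustrated with a minimal ordinary-kriging sketch; the exponential covariance model, its parameters, and the contamination values below are assumptions for illustration, not data from the study:

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x_new, sill=1.0, rng=10.0):
    """Ordinary kriging in 1-D with an exponential covariance model
    C(h) = sill * exp(-h / rng). Returns the BLUE estimate at x_new."""
    def cov(a, b):
        h = np.abs(a[:, None] - b[None, :])
        return sill * np.exp(-h / rng)

    n = len(x_obs)
    # Kriging system: [C 1; 1' 0] [w; mu] = [c0; 1]  (unbiasedness constraint)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(x_obs, x_obs)
    K[n, :n] = 1.0
    K[:n, n] = 1.0
    K[n, n] = 0.0

    c0 = np.vstack([cov(x_obs, x_new), np.ones((1, len(x_new)))])
    w = np.linalg.solve(K, c0)      # weights plus Lagrange multiplier
    return w[:n].T @ z_obs          # minimum mean square error estimate

# Surface contamination measured at a few positions along a table (made up)
x_obs = np.array([0.0, 2.0, 5.0, 9.0])
z_obs = np.array([1.2, 0.8, 1.5, 0.9])
z_hat = ordinary_kriging(x_obs, z_obs, np.array([3.5]))
```

Kriging is an exact interpolator: the estimate at a measured position reproduces the measured value, which is a quick sanity check on the weights.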

  3. Should the mass of a nanoferrite sample prepared by autocombustion method be considered as a realistic preparation parameter?

    Energy Technology Data Exchange (ETDEWEB)

    Wahba, Adel Maher, E-mail: adel.mousa@f-eng.tanta.edu.eg [Department of Engineering Physics and Mathematics, Faculty of Engineering, Tanta University (Egypt); Mohamed, Mohamed Bakr [Ain shams University, Faculty of Science, Physics Department, Cairo (Egypt)

    2017-02-15

    Detectable variations in structural, elastic, and magnetic properties have been reported depending on the mass of the cobalt nanoferrite sample prepared by the citrate autocombustion method. The heat released during the autocombustion process and its duration are directly proportional to the mass being prepared, and are thus expected to affect both the crystallite size and the cation distribution, giving rise to the reported variations in microstrain, magnetization, and coercivity. Formation of a pure spinel phase has been validated using X-ray diffraction (XRD) patterns and Fourier-transform infrared (FTIR) spectra. Crystallite sizes obtained from the Williamson-Hall (W-H) method range from 28 to 87 nm, further supported by high-resolution transmission electron microscope (HRTEM) images. Saturation magnetization and coercivity deduced from M-H hysteresis loops show a clear correlation with the cation distribution, which was proposed on the basis of experimentally obtained XRD, VSM, and IR data. Elastic parameters have been estimated using the cation distribution and FTIR data, with a resulting trend quite opposite to that of the lattice parameter. - Highlights: • Samples with different masses of CoFe{sub 2}O{sub 4} were prepared by the autocombustion method. • XRD and IR data confirmed a pure spinel cubic structure for all samples. • Structural and magnetic properties show detectable changes with the mass prepared. • Cation distribution was suggested from experimental data of XRD, IR, and M-H loops.

  4. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster

    DEFF Research Database (Denmark)

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented which randomizes the eggs in a water column and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness … and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.

  5. Mathematical correlation of modal parameter identification methods via system realization theory

    Science.gov (United States)

    Juang, J. N.

    1986-01-01

    A unified approach is introduced using system realization theory to derive and correlate modal parameter identification methods for flexible structures. Several different time-domain and frequency-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research towards the unification of the many possible approaches for modal parameter identification.
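    As an illustration of how system realization recovers modal parameters, the sketch below applies the Eigensystem Realization Algorithm (ERA), one representative time-domain method in this family, to a synthetic single-mode impulse response; the decay rate and frequency are made-up values, and the Hankel sizes are arbitrary choices:

```python
import numpy as np

def era_eigenvalues(markov, order=2, rows=10, cols=10):
    """Eigensystem Realization Algorithm (ERA) sketch.
    markov[k] is the impulse-response sample y_{k+1}; returns the discrete
    eigenvalues of the realized state matrix A."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]   # truncate to model order
    Sinv = np.diag(1.0 / np.sqrt(s))
    A = Sinv @ U.T @ H1 @ Vt.T @ Sinv                # realized state matrix
    return np.linalg.eigvals(A)

# Synthetic single-mode impulse response: y_k = rho^k * cos(omega * k)
rho, omega = 0.95, 0.6
y = [rho**k * np.cos(omega * k) for k in range(1, 40)]

eigs = era_eigenvalues(y)
damping_radius = np.abs(eigs[0])       # ≈ rho (pole magnitude)
frequency = abs(np.angle(eigs[0]))     # ≈ omega (rad/sample)
```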

  6. Determination of Total Phosphorus in Maize Plant Samples by Continuous Flow Analyzer in Comparison with the Vanadium Molybdate Yellow Colorimetric Method

    Directory of Open Access Journals (Sweden)

    LIU Yun-xia

    2015-12-01

    Full Text Available The vanadium molybdate yellow colorimetric method (VMYC method) is regarded as one of the conventional methods for determining total phosphorus (P) in plants, but it is a time-consuming procedure. The continuous flow analyzer (CFA) is a fluid-stream segmentation technique with air segments. It is used to measure P concentration based on the molybdate-antimony-ascorbic acid method of Murphy and Riley. Sixty-nine maize plant samples were selected and digested with H2SO4-H2O2. P concentrations in the digests were determined by the CFA and the VMYC method, respectively. A t test found no significant difference between the plant P contents measured by the CFA and the VMYC method. A linear equation best described their relationship: Y(CFA-P)=0.927X(VMYC-P)-0.002. The Pearson's correlation coefficient was 0.985 at a high significance level (n=69, P<0.01). The CFA method for plant P measurement had high precision, with a relative standard deviation (RSD) of less than 1.5%. It is suggested that CFA based on Murphy and Riley colorimetric detection can be used to determine total plant P in digest solutions with H2SO4-H2O2. The CFA method is labor saving and can handle large numbers of samples, and human error in mixing and other operations is greatly reduced.
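    The reported comparison statistics (Pearson's correlation and the fitted line between the two methods) can be reproduced on paired data as sketched below; the paired readings are invented for illustration, constructed to lie near the reported line Y = 0.927X - 0.002, and are not the study's measurements:

```python
import numpy as np

# Hypothetical paired total-P readings (g/kg) for the two methods (made up)
vmyc = np.array([1.8, 2.4, 2.9, 3.3, 4.0, 4.6, 5.2])
cfa = np.array([1.66, 2.25, 2.70, 3.05, 3.70, 4.28, 4.80])

r = np.corrcoef(vmyc, cfa)[0, 1]             # Pearson's correlation coefficient
slope, intercept = np.polyfit(vmyc, cfa, 1)  # least-squares line Y = slope*X + intercept

# Paired t statistic on the differences (the abstract's method comparison)
d = cfa - vmyc
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```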

  7. Adaptive cluster sampling: An efficient method for assessing inconspicuous species

    Science.gov (United States)

    Andrea M. Silletti; Joan Walker

    2003-01-01

    Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...

  8. A distance limited method for sampling downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams

    2012-01-01

    A new sampling method for down coarse woody debris is proposed based on limiting the perpendicular distance from individual pieces to a randomly chosen sample point. Two approaches are presented that allow different protocols to be used to determine field measurements; estimators for each protocol are also developed. Both protocols are compared via simulation against...

  9. CORRELATION BETWEEN CAFFEINE CONTENTS OF GREEN ...

    African Journals Online (AJOL)

    KEY WORDS: Green coffee beans, Caffeine, Correlation between caffeine content and altitude of coffee plant, UV-Vis … The extraction of caffeine from green coffee bean samples into water was carried out by the reported method … caffeine in proposed green tea standard reference materials by liquid chromatography.

  10. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground-target detection performance of space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean Hausdorff distance so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and mean Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the largest values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
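    The mean Hausdorff distance used here as a similarity screen can be sketched in a few lines; the point-set representation of the training samples and the sample values below are hypothetical, not the paper's STAP data model:

```python
import numpy as np

def mean_hausdorff(A, B):
    """Mean (average) Hausdorff distance between two point sets.
    A, B are arrays whose rows are points; smaller means more similar."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise dists
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

# Hypothetical training-sample "snapshots": rows are sub-vectors of each sample
s1 = np.array([[0.0, 0.0], [1.0, 0.0]])
s2 = np.array([[0.0, 0.1], [1.0, 0.1]])   # similar to s1
s3 = np.array([[5.0, 5.0], [6.0, 5.0]])   # dissimilar (contaminated) sample

# A contaminated sample sits far from its peers and would be rejected
assert mean_hausdorff(s1, s2) < mean_hausdorff(s1, s3)
```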

  11. System reliability with correlated components: Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  13. Comparing two sampling methods to engage hard-to-reach communities in research priority setting.

    Science.gov (United States)

    Valerio, Melissa A; Rodriguez, Natalia; Winkler, Paula; Lopez, Jaime; Dennison, Meagen; Liang, Yuanyuan; Turner, Barbara J

    2016-10-28

    Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities' stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85%) consented, 52 (95%) attended the first meeting, and 36 (65%) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90%) consented, 36 (58%) attended the first meeting, and 26 (42%) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P … research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was higher for the purposive/convenience sampling group, and for city improvements/transportation services (P = 0.004), which was higher for the snowball sampling group. In each of the two similar hard-to-reach communities, a community advisory board partnered with researchers

  14. WGCNA: an R package for weighted correlation network analysis.

    Science.gov (United States)

    Langfelder, Peter; Horvath, Steve

    2008-12-29

    Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. 
The R package along with its source code and additional material are freely available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/Rpackages/WGCNA.
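    WGCNA itself is an R package; the sketch below only illustrates its core idea in Python — a soft-thresholded ("weighted") correlation adjacency and the resulting connectivity measure used to find hub genes. The expression data are synthetic and beta = 6 is merely a typical soft-threshold choice:

```python
import numpy as np

def wgcna_adjacency(expr, beta=6):
    """Unsigned weighted adjacency a_ij = |cor(x_i, x_j)|**beta.
    expr: genes x samples matrix; beta is the soft-thresholding power."""
    adj = np.abs(np.corrcoef(expr)) ** beta
    np.fill_diagonal(adj, 0.0)          # no self-connections
    return adj

rng = np.random.default_rng(0)
base = rng.normal(size=20)                        # shared expression pattern
module = base + 0.3 * rng.normal(size=(5, 20))    # 5 co-expressed genes
noise = rng.normal(size=(5, 20))                  # 5 unrelated genes
expr = np.vstack([module, noise])

adj = wgcna_adjacency(expr)
connectivity = adj.sum(axis=1)   # intramodular hub genes have high connectivity
```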

  15. Ionizing radiation as optimization method for aluminum detection from drinking water samples

    International Nuclear Information System (INIS)

    Bazante-Yamguish, Renata; Geraldo, Aurea Beatriz C.; Moura, Eduardo; Manzoli, Jose Eduardo

    2013-01-01

    The presence of organic compounds in water samples is often responsible for metal complexation; depending on the analytical method, the organic fraction may distort the evaluation of the real metal concentration. Pre-treatment of the samples is advised when organic compounds are interfering agents, and sample mineralization may be accomplished by several chemical and/or physical methods. Here, ionizing radiation was used as an advanced oxidation process (AOP) for sample pre-treatment before the analytic determination of total and dissolved aluminum by ICP-OES in drinking water samples from wells and a spring source located in the Billings dam region. Before irradiation, the spring-source and well samples showed aluminum levels of 0.020 mg/l and 0.2 mg/l, respectively; after irradiation, both samples showed an 8-fold increase in aluminum concentration. These results are discussed considering other physical and chemical parameters and peculiarities of the sample sources. (author)

  16. Solvent extraction method for rapid separation of strontium-90 in milk and food samples

    International Nuclear Information System (INIS)

    Hingorani, S.B.; Sathe, A.P.

    1991-01-01

    A solvent extraction method using tributyl phosphate for rapid separation of strontium-90 in milk and other food samples is presented in this report, in view of the large number of samples received after the Chernobyl accident for checking radioactive contamination. The nitration method previously in use for the determination of 90Sr through its daughter 90Y takes over two weeks for analysis of a sample, whereas this extraction method takes only 4 to 5 hours. Complete estimation, including initial counting, can be done in a single day. The chemical recovery varies between 80 and 90%, compared to 65-80% for the nitration method. The purity of the method has been established by following the decay of the separated yttrium-90. Some of the results obtained by adopting this chemical method for food analysis are included. The method is thus found to be rapid and convenient for accurate estimation of strontium-90 in milk and food samples. (author). 2 tabs., 1 fig

  17. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling of records seems to be the most efficient method for correcting sampling bias and should be advised in most cases.
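    A common way to implement systematic sampling of occurrence records is spatial thinning on a regular grid, keeping one record per cell; the grid size, coordinates, and clustering pattern below are illustrative assumptions, not the paper's datasets:

```python
import numpy as np

def systematic_sample(coords, cell_size):
    """Spatial systematic sampling: keep one occurrence record per grid cell,
    which evens out geographically clustered sampling effort."""
    cells = np.floor(coords / cell_size).astype(int)
    _, keep = np.unique(cells, axis=0, return_index=True)  # first record per cell
    return coords[np.sort(keep)]

# Heavily clustered "occurrences" near the origin plus a few spread-out records
rng = np.random.default_rng(1)
clustered = rng.uniform(0, 1, size=(200, 2))   # oversampled corner of the area
sparse = rng.uniform(0, 10, size=(20, 2))      # rest of the study area
records = np.vstack([clustered, sparse])

thinned = systematic_sample(records, cell_size=1.0)
```

After thinning, the 200-record cluster collapses to a single record in its grid cell, so the retained sample is far more even across the study area.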

  18. Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

    Science.gov (United States)

    Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till

    2018-02-01

    (1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
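    The length bias of sampling the next exiting patient can be demonstrated with a small Monte Carlo simulation in the spirit of the paper; the single-clinician setup and the lognormal consultation-time distribution are assumptions of this sketch, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical clinic: one clinician, consultation lengths in minutes
durations = rng.lognormal(mean=2.0, sigma=0.5, size=100_000)
ends = np.cumsum(durations)

# "Next patient exiting": after each interview the interviewer becomes free
# at an effectively random time t and takes the consultation then in
# progress -- longer consultations are more likely to span t, so they are
# overrepresented (length-biased sampling / inspection paradox).
t = rng.uniform(0, ends[-1], size=5_000)
sampled = durations[np.searchsorted(ends, t)]

bias = sampled.mean() / durations.mean()   # > 1: long visits overrepresented
```

For a lognormal with sigma = 0.5 the length-biased mean exceeds the true mean by a factor of exp(sigma**2) ≈ 1.28, which the simulation reproduces; sampling the next patient who *enters* the consultation removes this bias.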

  19. Sampling Methods for Detection and Monitoring of the Asian Citrus Psyllid (Hemiptera: Psyllidae).

    Science.gov (United States)

    Monzo, C; Arevalo, H A; Jones, M M; Vanaclocha, P; Croxton, S D; Qureshi, J A; Stansly, P A

    2015-06-01

    The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama is a key pest of citrus due to its role as vector of citrus greening disease or "huanglongbing." ACP monitoring is considered an indispensable tool for management of vector and disease. In the present study, datasets collected between 2009 and 2013 from 245 citrus blocks were used to evaluate precision, sensitivity for detection, and efficiency of five sampling methods. The number of samples needed to reach a 0.25 standard error-mean ratio was estimated using Taylor's power law and used to compare precision among sampling methods. Comparison of detection sensitivity and time expenditure (cost) between stem-tap and other sampling methodologies conducted consecutively at the same location were also assessed. Stem-tap sampling was the most efficient sampling method when ACP densities were moderate to high and served as the basis for comparison with all other methods. Protocols that grouped trees near randomly selected locations across the block were more efficient than sampling trees at random across the block. Sweep net sampling was similar to stem-taps in number of captures per sampled unit, but less precise at any ACP density. Yellow sticky traps were 14 times more sensitive than stem-taps but much more time consuming and thus less efficient except at very low population densities. Visual sampling was efficient for detecting and monitoring ACP at low densities. Suction sampling was time consuming and taxing but the most sensitive of all methods for detection of sparse populations. This information can be used to optimize ACP monitoring efforts. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
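    Taylor's power law and the resulting sample-size requirement for a 0.25 standard error-mean ratio can be sketched as follows; the mean-variance pairs are invented for illustration, not the study's 245-block dataset:

```python
import numpy as np

# Hypothetical per-block ACP counts: mean density m and variance s2 per block
m = np.array([0.2, 0.5, 1.1, 2.3, 4.0, 7.5])
s2 = np.array([0.31, 0.86, 2.1, 4.9, 9.1, 18.6])

# Taylor's power law s2 = a * m**b, fitted on the log-log scale
b, log_a = np.polyfit(np.log(m), np.log(s2), 1)
a = np.exp(log_a)

def n_required(mean_density, D=0.25):
    """Samples needed so that SE/mean = D:
    n = s2 / (D*m)**2 = a * m**(b-2) / D**2."""
    return a * mean_density ** (b - 2) / D ** 2
```

With 1 < b < 2 (aggregated but sub-quadratic dispersion), the required sample size falls as density rises, matching the observation that more sensitive methods are only worthwhile at very low densities.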

  20. Computational methods and modeling. 1. Sampling a Position Uniformly in a Trilinear Hexahedral Volume

    International Nuclear Information System (INIS)

    Urbatsch, Todd J.; Evans, Thomas M.; Hughes, H. Grady

    2001-01-01

    Monte Carlo particle transport plays an important role in some multi-physics simulations. These simulations, which may additionally involve deterministic calculations, typically use a hexahedral or tetrahedral mesh. Trilinear hexahedrons are attractive for physics calculations because faces between cells are uniquely defined, distance-to-boundary calculations are deterministic, and hexahedral meshes tend to require fewer cells than tetrahedral meshes. We discuss one aspect of Monte Carlo transport: sampling a position in a tri-linear hexahedron, which is made up of eight control points, or nodes, and six bilinear faces, where each face is defined by four non-coplanar nodes in three-dimensional Cartesian space. We derive, code, and verify the exact sampling method and propose an approximation to it. Our proposed approximate method uses about one-third the memory and can be twice as fast as the exact sampling method, but we find that its inaccuracy limits its use to well-behaved hexahedrons. Daunted by the expense of the exact method, we propose an alternate approximate sampling method. First, calculate beforehand an approximate volume for each corner of the hexahedron by taking one-eighth of the volume of an imaginary parallelepiped defined by the corner node and the three nodes to which it is directly connected. For the sampling, assume separability in the parameters, and sample each parameter, in turn, from a linear pdf defined by the sum of the four corner volumes at each limit (-1 and 1) of the parameter. This method ignores the quadratic portion of the pdf, but it requires less storage, has simpler sampling, and needs no extra, on-the-fly calculations. We simplify verification by designing tests that consist of one or more cells that entirely fill a unit cube. Uniformly sampling complicated cells that fill a unit cube will result in uniformly sampling the unit cube. Unit cubes are easily analyzed. 
The first problem has four wedges (or tents, or A frames) whose
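    The alternate approximate sampling method described above — corner volumes taken as one-eighth of the parallelepiped at each node, then each parameter sampled independently from a linear pdf weighted by the four corner volumes at its -1 and +1 limits — can be sketched as below. The node ordering and the unit-cube verification mirror the text, while the implementation details are one possible reading:

```python
import numpy as np

rng = np.random.default_rng(7)

def corner_volumes(nodes):
    """Approximate volume attached to each corner: 1/8 of the parallelepiped
    spanned by the three edges leaving that corner.
    Node ordering: index n = i + 2*j + 4*k for corner (i, j, k) in {0,1}^3."""
    vols = np.empty(8)
    for n in range(8):
        i, j, k = n & 1, (n >> 1) & 1, (n >> 2) & 1
        p = nodes[n]
        e = [nodes[(i ^ 1) + 2*j + 4*k] - p,   # neighbor along each axis
             nodes[i + 2*(j ^ 1) + 4*k] - p,
             nodes[i + 2*j + 4*(k ^ 1)] - p]
        vols[n] = abs(np.linalg.det(np.array(e))) / 8.0
    return vols

def sample_linear(w0, w1):
    """Sample t in [-1,1] from the pdf proportional to w0*(1-t)/2 + w1*(1+t)/2,
    as a mixture: min of two uniforms has density 2(1-u), max has density 2u."""
    u = rng.uniform(size=3)
    t = min(u[1], u[2]) if u[0] < w0 / (w0 + w1) else max(u[1], u[2])
    return 2.0 * t - 1.0

def sample_position(nodes):
    vols = corner_volumes(nodes)
    uvw = []
    for axis in range(3):
        bit = 1 << axis   # endpoint weight = sum of four corner volumes per side
        w0 = sum(vols[n] for n in range(8) if not n & bit)
        w1 = sum(vols[n] for n in range(8) if n & bit)
        uvw.append(sample_linear(w0, w1))
    u, v, w = uvw
    # Trilinear shape functions map (u,v,w) in [-1,1]^3 into the hexahedron
    pos = np.zeros(3)
    for n in range(8):
        i, j, k = n & 1, (n >> 1) & 1, (n >> 2) & 1
        shape = (1 + (2*i - 1)*u) * (1 + (2*j - 1)*v) * (1 + (2*k - 1)*w) / 8.0
        pos += shape * nodes[n]
    return pos

# Verification in the spirit of the text: on a unit cube all corner volumes are
# equal, the linear pdfs become uniform, and sampling must be uniform in the cube.
cube = np.array([[i, j, k] for k in (0, 1) for j in (0, 1) for i in (0, 1)], float)
pts = np.array([sample_position(cube) for _ in range(2000)])
```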