WorldWideScience

Sample records for sampling method based

  1. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of)]; Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)]

    2017-05-15

    The kernel density was determined from sampling points obtained in a Markov chain simulation and was used as the importance sampling function. A Kriging metamodel was constructed in greater detail in the vicinity of the limit state. The failure probability was then calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state, and a stable numerical method was proposed to find a parameter of the kernel density. To assess the adequacy of the Kriging metamodel, the possible variation in the calculated failure probability due to the metamodel's uncertainty was quantified.
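
    A minimal sketch of the importance-sampling step just described: a cheap analytic limit-state function stands in for the Kriging metamodel, and a simple rejection filter stands in for the Markov-chain sampler; the toy limit state, dimensions, and sample counts are all illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde, multivariate_normal

    rng = np.random.default_rng(0)

    # Toy 2-D problem: standard normal inputs; g(x) <= 0 means failure.
    # This analytic g is only a stand-in for the Kriging metamodel prediction.
    def g_hat(x):
        return 3.0 - x[:, 0] - x[:, 1]          # failure when x0 + x1 >= 3

    f = multivariate_normal(mean=[0, 0])         # true input density

    # Step 1: points near the limit state (stand-ins for Markov-chain samples).
    seeds = rng.normal(size=(20000, 2))
    near = seeds[np.abs(g_hat(seeds)) < 0.3]

    # Step 2: kernel density fitted to those points = importance density q.
    q = gaussian_kde(near.T)

    # Step 3: importance sampling against the (meta)model, weights = f/q.
    n = 5000
    x = q.resample(n, seed=1).T
    w = f.pdf(x) / q(x.T)
    pf = np.mean((g_hat(x) <= 0) * w)
    print(f"importance-sampling estimate of P_f: {pf:.3e}")
    ```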

  2. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    International Nuclear Information System (INIS)

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and map the uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range, and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An uncertainty of 75 pcm on the reactor k_eff was estimated using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, it was found for the example under investigation that, in order to reduce the variance of the propagated uncertainty, it is preferable to double the sample size rather than to double the number of particles followed in the MCNPX Monte Carlo process. (author)
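
    The replicate-based convergence check described above can be sketched generically; the toy linear model below merely stands in for the MCNPX transport code, and all numbers (input uncertainty, noise level, sample size) are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy stand-in for the transport code: maps an uncertain input (e.g. a
    # radius) to a response (e.g. k_eff) with statistical noise (28 pcm here).
    def model(radius):
        return 1.0 + 0.02 * (radius - 0.5) + rng.normal(0.0, 28e-5)

    def propagate(n_samples):
        radii = rng.normal(0.5, 0.05, size=n_samples)  # 1-sigma input uncertainty
        keff = np.array([model(r) for r in radii])
        return keff.std(ddof=1)                        # propagated uncertainty

    # Convergence criterion in the spirit of the abstract: the standard
    # deviation of the propagated uncertainty over 10 replicates of n samples.
    n = 93
    replicates = [propagate(n) for _ in range(10)]
    spread_pcm = np.std(replicates, ddof=1) * 1e5
    print(f"propagated uncertainty ~ {np.mean(replicates)*1e5:.0f} pcm, "
          f"replicate spread {spread_pcm:.1f} pcm (criterion: 5 pcm)")
    ```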

  3. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition

  4. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.
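
    As a concrete instance of one of the listed techniques, the sketch below computes partial rank correlation coefficients (rank transformation followed by partial correlation via regression residuals) from a sampled input-output set; the test function and sample size are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def prcc(X, y):
        """Partial rank correlation of each column of X with y."""
        R = np.column_stack([rankdata(c) for c in X.T])
        ry = rankdata(y)
        out = []
        for j in range(R.shape[1]):
            others = np.delete(R, j, axis=1)
            A = np.column_stack([np.ones(len(ry)), others])
            # Residuals after regressing out the other (rank-transformed) inputs.
            res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
            res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
            out.append(np.corrcoef(res_x, res_y)[0, 1])
        return np.array(out)

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(500, 3))                       # sampled inputs
    y = 4 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.normal(size=500)
    print(prcc(X, y))   # the inert third input should score near zero
    ```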

  5. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.

    2012-01-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400-core Linux cluster and on a 153,216-core Cray XE6 petascale machine. © 2012 IEEE.

  6. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400-core Linux cluster and on a 153,216-core Cray XE6 petascale machine. © 2012 IEEE.
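
    The regional-subdivision idea can be sketched on a toy obstacle-free 2-D C-space; this is not the authors' implementation, and the region layout, sample counts, and connection radius are illustrative assumptions.

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # Minimal PRM-per-region sketch in an obstacle-free unit square.
    def build_regional_roadmap(bounds, n_samples=50, radius=0.15, seed=0):
        (x0, x1), (y0, y1) = bounds
        rng = np.random.default_rng(seed)
        nodes = np.column_stack([rng.uniform(x0, x1, n_samples),
                                 rng.uniform(y0, y1, n_samples)])
        edges = [(i, j) for i in range(n_samples) for j in range(i + 1, n_samples)
                 if np.linalg.norm(nodes[i] - nodes[j]) < radius]  # local PRM edges
        return nodes, edges

    # Subdivide C-space into a 2x2 grid of regions and plan each independently.
    cells = [((x, x + 0.5), (y, y + 0.5)) for x in (0.0, 0.5) for y in (0.0, 0.5)]
    with ThreadPoolExecutor() as pool:
        roadmaps = list(pool.map(build_regional_roadmap, cells))

    # Connection step: attempt edges only between nodes of adjacent regions,
    # which is what keeps nearest-neighbor queries local in the paper's scheme.
    def connect(rm_a, rm_b, radius=0.15):
        na, nb = rm_a[0], rm_b[0]
        return [(i, j) for i in range(len(na)) for j in range(len(nb))
                if np.linalg.norm(na[i] - nb[j]) < radius]

    inter01 = connect(roadmaps[0], roadmaps[1])
    print(sum(len(e) for _, e in roadmaps), "regional edges;",
          len(inter01), "connection edges between regions 0 and 1")
    ```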

  7. Towards Cost-efficient Sampling Methods

    OpenAIRE

    Peng, Luo; Yongli, Li; Chong, Wu

    2014-01-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small fraction of vertices with high node degree can possess most of the structural information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...

  8. A Method for Microalgae Proteomics Analysis Based on Modified Filter-Aided Sample Preparation.

    Science.gov (United States)

    Li, Song; Cao, Xupeng; Wang, Yan; Zhu, Zhen; Zhang, Haowei; Xue, Song; Tian, Jing

    2017-11-01

    With the fast development of microalgal biofuel research, proteomics studies of microalgae have increased quickly. Filter-aided sample preparation (FASP) has been a widely used proteomics sample preparation method since 2009. Here, a method for microalgae proteomics analysis based on modified filter-aided sample preparation (mFASP) is described that accommodates the characteristics of microalgal cells and eliminates the error caused by over-alkylation. Using Chlamydomonas reinhardtii as the model, the prepared sample was tested by standard LC-MS/MS and compared with previous reports. The results showed that mFASP is suitable for most occasions in microalgae proteomics studies.

  9. Toward cost-efficient sampling methods

    Science.gov (United States)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small fraction of vertices with high node degree could possess most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks, namely scale-free, random, and small-world networks, and also in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality, and average path length, especially when the sampling rate is low.
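
    A sketch of the degree-stratified idea using networkx; the abstracts do not give the allocation rule, so the degree-weighted allocation below is an assumption.

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    G = nx.barabasi_albert_graph(2000, 3, seed=0)      # scale-free test network

    def degree_stratified_sample(G, rate=0.05, n_strata=4, rng=rng):
        """SRS-style sketch: stratify nodes by degree quantile, then sample
        each stratum, taking proportionally more from high-degree strata."""
        nodes = np.array(G.nodes())
        deg = np.array([G.degree(v) for v in nodes])
        edges_q = np.quantile(deg, np.linspace(0, 1, n_strata + 1))
        strata = np.clip(np.searchsorted(edges_q, deg, side="right") - 1,
                         0, n_strata - 1)
        k_total = int(rate * len(nodes))
        weights = np.arange(1, n_strata + 1, dtype=float)  # favor high degree
        alloc = np.maximum(1, (k_total * weights / weights.sum()).astype(int))
        picked = []
        for s in range(n_strata):
            pool = nodes[strata == s]
            k = min(alloc[s], len(pool))
            picked.extend(rng.choice(pool, size=k, replace=False))
        return picked

    sample = degree_stratified_sample(G)
    print(len(sample), "nodes sampled; mean sampled degree",
          np.mean([G.degree(v) for v in sample]), "vs network mean",
          np.mean([d for _, d in G.degree()]))
    ```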

  10. A Proposal on the Advanced Sampling Based Sensitivity and Uncertainty Analysis Method for the Eigenvalue Uncertainty Analysis

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Song, Myung Sub; Shin, Chang Ho; Noh, Jae Man

    2014-01-01

    With perturbation theory, the uncertainty of a response can be estimated from a single transport simulation, which requires a small computational load. Its disadvantage is that the computational methodology must be modified whenever a different response type, such as the multiplication factor, flux, or power distribution, is estimated. Hence, it is suitable for analyzing a few responses with many perturbed parameters. The statistical approach is a sampling-based method that uses cross sections randomly sampled from covariance data to analyze the uncertainty of the response; XSUSA is a code based on this approach. Because only the cross sections are modified in the sampling-based method, general transport codes can be directly utilized for the S/U analysis without any code modifications. However, to calculate the uncertainty distribution from the results, the code simulation must be repeated many times with randomly sampled cross sections, and this inefficiency is a known disadvantage of the stochastic method. In this study, an advanced sampling and estimation method for the cross sections is proposed and verified to increase the estimation efficiency of the sampling-based S/U method. Its main feature is that the cross section averaged over the individual sampled cross sections is used. The proposed method was validated against perturbation theory

  11. A copula-based sampling method for data-driven prognostics

    International Nuclear Information System (INIS)

    Xi, Zhimin; Jing, Rong; Wang, Pingfeng; Hu, Chao

    2014-01-01

    This paper develops a Copula-based sampling method for data-driven prognostics. The method essentially consists of an offline training process and an online prediction process: (i) the offline training process builds a statistical relationship between the failure time and the time realizations at specified degradation levels on the basis of off-line training data sets; and (ii) the online prediction process identifies probable failure times for online testing units based on the statistical model constructed in the offline process and the online testing data. Our contributions in this paper are three-fold, namely the definition of a generic health index system to quantify the health degradation of an engineering system, the construction of a Copula-based statistical model to learn the statistical relationship between the failure time and the time realizations at specified degradation levels, and the development of a simulation-based approach for the prediction of remaining useful life (RUL). Two engineering case studies, namely the electric cooling fan health prognostics and the 2008 IEEE PHM challenge problem, are employed to demonstrate the effectiveness of the proposed methodology. - Highlights: • We develop a novel mechanism for data-driven prognostics. • A generic health index system quantifies health degradation of engineering systems. • Off-line training model is constructed based on the Bayesian Copula model. • Remaining useful life is predicted from a simulation-based approach
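
    The offline/online split can be sketched with a Gaussian copula; the paper does not commit to a specific family in this abstract, so the Gaussian choice, the synthetic degradation data, and the 80% health level are all assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Synthetic training data: time t80 at which a unit hits 80% health, and
    # its eventual failure time; positively dependent by construction.
    n = 400
    z = rng.normal(size=n)
    t80 = 50 + 10 * z + rng.normal(0, 2, n)
    fail = 80 + 15 * z + rng.normal(0, 4, n)

    # Offline: Gaussian copula fit via normal scores of the ranks.
    u1 = stats.rankdata(t80) / (n + 1)
    u2 = stats.rankdata(fail) / (n + 1)
    rho = np.corrcoef(stats.norm.ppf(u1), stats.norm.ppf(u2))[0, 1]

    # Online: observe a new unit's t80, simulate failure times from the
    # conditional copula, then map back through the empirical margin.
    t80_obs = 62.0
    z1 = stats.norm.ppf(np.searchsorted(np.sort(t80), t80_obs) / (n + 1))
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=2000)
    fail_pred = np.quantile(fail, stats.norm.cdf(z2))   # empirical inverse CDF
    print(f"rho={rho:.2f}, median predicted failure time {np.median(fail_pred):.1f}")
    ```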

  12. Comparison between conservative perturbation and sampling based methods for propagation of Non-Neutronic uncertainties

    International Nuclear Information System (INIS)

    Campolina, Daniel de A.M.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2013-01-01

    For every physical component of a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for the best estimate calculations that have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a sampling-based method is recent because of the huge computational effort required. In this work, a sample space of MCNP calculations was used as a black-box model to propagate the uncertainty of system parameters, and the efficiency of the method was compared to that of a conservative method. The uncertainties considered in the reactor input parameters were non-neutronic, including geometry dimensions and density. The effect of the uncertainties on the effective multiplication factor of the system was analyzed with respect to the possibility of treating many uncertainties in the same input. If a case includes more than 46 uncertain parameters in the same input, the sampling-based method proved to be more efficient than the conservative method. (author)

  13. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    The uncertainty with the sampling-based method is evaluated by repeating transport calculations with a number of cross section data sets sampled from the covariance uncertainty data. In these transport calculations the transport equation is not modified; therefore, all uncertainties of the responses, such as k_eff, reaction rates, flux, and power distribution, can be obtained directly at one time without code modification. However, a major drawback of the sampling-based method is the expensive computational load required for statistically reliable results (within a 0.95 confidence level). The purpose of this study is to improve the computational efficiency and obtain highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation: based on the central limit theorem, each set of sampled group cross sections is assigned to an active cycle group of the simulation, which reduces the number of repetitive Monte Carlo transport calculations required. The method was verified on the GODIVA benchmark problem, and the criticality uncertainty was compared with that of the conventional sampling-based method. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k_eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method

  14. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as a concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to show similar irregularity than others, and that differences among stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions, respectively: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
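
    The final embedding step is ordinary MDS on a precomputed dissimilarity matrix; the sketch below uses scikit-learn with a synthetic stand-in matrix where a cross-sample entropy measure would normally go.

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(3)

    # Stand-in dissimilarity matrix: any modified cross-sample entropy measure
    # can be dropped in here; below, 18 "indices" in 3 pretend regions.
    n = 18
    group = np.repeat(np.arange(3), 6)
    D = 0.2 + 0.8 * (group[:, None] != group[None, :]) + 0.05 * rng.random((n, n))
    D = (D + D.T) / 2.0           # symmetrize
    np.fill_diagonal(D, 0.0)

    # MDS on the precomputed dissimilarities, as MDS-KCSE and MDS-PCSE do
    # after replacing the distance measure in classical MDS.
    coords = MDS(n_components=3, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)
    print(coords.shape)   # (18, 3) -> a three-dimensional perceptual map
    ```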

  15. Individual and pen-based oral fluid sampling: A welfare-friendly sampling method for group-housed gestating sows.

    Science.gov (United States)

    Pol, Françoise; Dorenlor, Virginie; Eono, Florent; Eudier, Solveig; Eveno, Eric; Liégard-Vanhecke, Dorine; Rose, Nicolas; Fablet, Christelle

    2017-11-01

    The aims of this study were to assess the feasibility of individual and pen-based oral fluid sampling (OFS) in 35 pig herds with group-housed sows, compare these methods to blood sampling, and assess the factors influencing the success of sampling. Individual samples were collected from at least 30 sows per herd. Pen-based OFS was performed using devices placed in at least three pens for 45 min. Information related to the farm, the sows, and their living conditions was collected. Factors significantly associated with the duration of sampling and the chewing behaviour of sows were identified by logistic regression. Individual OFS took 2 min 42 s on average; the type of floor, swab size, and operator were associated with a sampling time >2 min. Pen-based OFS was obtained from 112 devices (62.2%). The type of floor, parity, pen-level activity, and type of feeding were associated with chewing behaviour. Pen activity was associated with the latency to interact with the device. The type of floor, gestation stage, parity, group size, and latency to interact with the device were associated with a chewing time >10 min. After 15, 30 and 45 min of pen-based OFS, 48%, 60% and 65% of the sows were lying down, respectively. The time elapsed since the beginning of sampling, genetic type, and time elapsed since the last meal were associated with 50% of the sows lying down at one time point. The mean time to blood sample the sows was 1 min 16 s, or 2 min 52 s if the number of operators required is considered in the sampling time estimation. The genetic type, parity, and type of floor were significantly associated with a sampling time longer than 1 min 30 s. This study shows that individual OFS is easy to perform in group-housed sows by a single operator, even though straw-bedded animals take longer to sample than animals housed on slatted floors, and suggests some guidelines to optimise pen-based OFS success. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Molecular-based rapid inventories of sympatric diversity: a comparison of DNA barcode clustering methods applied to geography-based vs clade-based sampling of amphibians.

    Science.gov (United States)

    Paz, Andrea; Crawford, Andrew J

    2012-11-01

    Molecular markers offer a universal source of data for quantifying biodiversity. DNA barcoding uses a standardized genetic marker and a curated reference database to identify known species and to reveal cryptic diversity within well-sampled clades. Rapid biological inventories, e.g. rapid assessment programs (RAPs), unlike most barcoding campaigns, are focused on particular geographic localities rather than on clades. Because of the potentially sparse phylogenetic sampling, the addition of DNA barcoding to RAPs may present a greater challenge for the identification of named species or for revealing cryptic diversity. In this article we evaluate the use of DNA barcoding for quantifying lineage diversity within a single sampling site as compared to clade-based sampling, and present examples from amphibians. We compared algorithms for identifying DNA barcode clusters (e.g. species, cryptic species or Evolutionary Significant Units) using previously published DNA barcode data obtained from geography-based sampling at a site in Central Panama, and from clade-based sampling in Madagascar. We found that clustering algorithms based on genetic distance performed similarly on sympatric as well as clade-based barcode data, while a promising coalescent-based method performed poorly on sympatric data. The various clustering algorithms were also compared in terms of speed and software implementation. Although each method has its shortcomings in certain contexts, we recommend the use of the ABGD method, which not only performs fairly well under either sampling method, but does so in a few seconds and with a user-friendly Web interface.
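
    A distance-based clustering step of the kind compared above can be sketched in a few lines: single-linkage on pairwise p-distances with a fixed divergence cutoff. The simulated sequences and the 3% threshold are illustrative assumptions (ABGD itself infers its threshold from the data).

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    L, bases = 200, np.array(list("ACGT"))

    def mutate(seq, p):
        out = seq.copy()
        hit = rng.random(L) < p
        out[hit] = rng.choice(bases, hit.sum())
        return out

    # Four simulated "species", five individuals each, ~1% within-species noise.
    ancestors = [rng.choice(bases, L) for _ in range(4)]
    seqs = [mutate(a, 0.01) for a in ancestors for _ in range(5)]

    # Pairwise p-distances, single-link clustering, 3% divergence cutoff.
    D = np.array([[np.mean(s != t) for t in seqs] for s in seqs])
    Z = linkage(squareform(D, checks=False), method="single")
    clusters = fcluster(Z, t=0.03, criterion="distance")
    print(f"{clusters.max()} clusters recovered from {len(seqs)} sequences")
    ```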

  17. Comparison of sampling methods for animal manure

    NARCIS (Netherlands)

    Derikx, P.J.L.; Ogink, N.W.M.; Hoeksma, P.

    1997-01-01

    Currently available and recently developed sampling methods for slurry and solid manure were tested for bias and reproducibility in the determination of total phosphorus and nitrogen content of samples. Sampling methods were based on techniques in which samples were taken either during loading from

  18. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to estimate the Sobol' index directly from available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable of computing the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method contributes to filling this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimates the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handles both uncorrelated and correlated model inputs.
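
    A common given-data estimator of the first-order index, shown below for intuition, bins the samples on one input and compares the variance of the within-bin means to the total variance; this is not necessarily the authors' exact construction.

    ```python
    import numpy as np

    def first_order_sobol(x, y, n_bins=30):
        """Given-data estimate of S1 for one input: Var(E[y|x]) / Var(y),
        with the conditional means taken over equal-count bins on x."""
        order = np.argsort(x)
        bins = np.array_split(y[order], n_bins)
        cond_means = np.array([b.mean() for b in bins])
        cond_sizes = np.array([len(b) for b in bins])
        var_cond_mean = np.average((cond_means - y.mean())**2, weights=cond_sizes)
        return var_cond_mean / y.var()

    # Ishigami-style test: one input-output sample serves every input,
    # which is the modularity the paper emphasizes.
    rng = np.random.default_rng(0)
    X = rng.uniform(-np.pi, np.pi, size=(20000, 3))
    y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1])**2 + 0.1 * X[:, 2]**4 * np.sin(X[:, 0])
    for j in range(3):
        print(f"S{j+1} ~ {first_order_sobol(X[:, j], y):.3f}")
    ```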

  19. Some connections between importance sampling and enhanced sampling methods in molecular dynamics.

    Science.gov (United States)

    Lie, H C; Quer, J

    2017-11-21

    In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
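
    The underlying importance-sampling identity can be shown in a toy rare-event setting: draw from a biased proposal and reweight by the likelihood ratio, which is the same mechanism a biasing potential provides in enhanced sampling.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Rare event under the target: P(X > 4) for X ~ N(0, 1).
    n = 10000
    plain = np.mean(rng.normal(size=n) > 4)          # almost always 0

    # Importance sampling: sample a tilted proposal N(4, 1) and reweight
    # each point by the likelihood ratio target/proposal.
    x = rng.normal(4.0, 1.0, size=n)
    w = norm.pdf(x) / norm.pdf(x, loc=4.0)
    is_est = np.mean((x > 4) * w)
    print(f"plain MC: {plain:.2e}, importance sampling: {is_est:.2e}, "
          f"exact: {norm.sf(4):.2e}")
    ```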

  20. Subrandom methods for multidimensional nonuniform sampling.

    Science.gov (United States)

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
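
    A seed-free subrandom schedule in the spirit described can be sketched with a Halton sequence; the grid size, sampling fraction, and exponential weighting are illustrative assumptions, not the paper's exact scheme.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Select 2-D nonuniform-sampling grid points with an unscrambled Halton
    # sequence (deterministic, so no random seed is required).
    n1, n2, n_pts = 64, 64, 409                # ~10% of a 64x64 Nyquist grid
    u = qmc.Halton(d=2, scramble=False).random(n_pts)   # points in [0, 1)^2

    # Exponential weighting toward low indices: push the uniform coordinates
    # through the inverse CDF of exp(-t/tau) restricted to [0, 1].
    tau = 0.4
    t = -tau * np.log(1 - u * (1 - np.exp(-1 / tau)))
    idx = np.unique(np.floor(t * [n1, n2]).astype(int), axis=0)
    print(f"{len(idx)} unique grid points selected")
    ```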

  1. Comparison of PCR-Based Diagnosis with Centrifuged-Based Enrichment Method for Detection of Borrelia persica in Animal Blood Samples.

    Science.gov (United States)

    Naddaf, S R; Kishdehi, M; Siavashi, M R

    2011-01-01

    The mainstay of diagnosis of relapsing fever (RF) is demonstration of the spirochetes in Giemsa-stained thick blood smears, but during non-fever periods the bacteria are very scanty and rarely detected in blood smears by microscopy. This study aimed to evaluate the sensitivity of different methods developed for the detection of low-grade spirochetemia. Animal blood samples with low degrees of spirochetemia were tested with two PCRs and a nested PCR targeting the flaB, GlpQ, and rrs genes. Also, a centrifugation-based enrichment method and Giemsa staining were performed on blood samples with various degrees of spirochetemia. The flaB-PCR and nested rrs-PCR turned positive with various degrees of spirochetemia, including the blood samples that turned negative with dark-field microscopy. The GlpQ-PCR was positive as long as at least one spirochete was seen in 5-10 microscopic fields. The sensitivity of GlpQ-PCR increased when DNA from the Buffy Coat Layer (BCL) was used as template. The centrifugation-based enrichment method turned positive at concentrations as low as 50 bacteria/ml of blood, while Giemsa thick staining detected bacteria at concentrations ≥ 25000 bacteria/ml. The centrifugation-based enrichment method appeared as much as 500-fold more sensitive than thick smears, which makes it even superior to some PCR assays. Due to its simplicity and minimal laboratory requirements, this method can be considered a valuable tool for the diagnosis of RF in rural health centers.

  2. Comparison of PCR-Based Diagnosis with Centrifuged-Based Enrichment Method for Detection of Borrelia persica in Animal Blood Samples

    Directory of Open Access Journals (Sweden)

    SR Naddaf

    2011-06-01

    Background: The mainstay of diagnosis of relapsing fever (RF) is demonstration of the spirochetes in Giemsa-stained thick blood smears, but during non-fever periods the bacteria are very scanty and rarely detected in blood smears by microscopy. This study aimed to evaluate the sensitivity of different methods developed for the detection of low-grade spirochetemia. Methods: Animal blood samples with low degrees of spirochetemia were tested with two PCRs and a nested PCR targeting the flaB, GlpQ, and rrs genes. Also, a centrifugation-based enrichment method and Giemsa staining were performed on blood samples with various degrees of spirochetemia. Results: The flaB-PCR and nested rrs-PCR turned positive with various degrees of spirochetemia, including the blood samples that turned negative with dark-field microscopy. The GlpQ-PCR was positive as long as at least one spirochete was seen in 5-10 microscopic fields. The sensitivity of GlpQ-PCR increased when DNA from the Buffy Coat Layer (BCL) was used as template. The centrifugation-based enrichment method turned positive at concentrations as low as 50 bacteria/ml of blood, while Giemsa thick staining detected bacteria at concentrations ≥ 25000 bacteria/ml. Conclusion: The centrifugation-based enrichment method appeared as much as 500-fold more sensitive than thick smears, which makes it even superior to some PCR assays. Due to its simplicity and minimal laboratory requirements, this method can be considered a valuable tool for the diagnosis of RF in rural health centers.

  3. Comparison of PCR-Based Diagnosis with Centrifuged-Based Enrichment Method for Detection of Borrelia persica in Animal Blood Samples

    Directory of Open Access Journals (Sweden)

    SR Naddaf

    2011-06-01

    Background: The mainstay of diagnosis of relapsing fever (RF) is demonstration of the spirochetes in Giemsa-stained thick blood smears, but during non-fever periods the bacteria are very scanty and rarely detected in blood smears by microscopy. This study aimed to evaluate the sensitivity of different methods developed for the detection of low-grade spirochetemia. Methods: Animal blood samples with low degrees of spirochetemia were tested with two PCRs and a nested PCR targeting the flaB, GlpQ, and rrs genes. Also, a centrifugation-based enrichment method and Giemsa staining were performed on blood samples with various degrees of spirochetemia. Results: The flaB-PCR and nested rrs-PCR turned positive with various degrees of spirochetemia, including the blood samples that turned negative with dark-field microscopy. The GlpQ-PCR was positive as long as at least one spirochete was seen in 5-10 microscopic fields. The sensitivity of GlpQ-PCR increased when DNA from the Buffy Coat Layer (BCL) was used as template. The centrifugation-based enrichment method turned positive at concentrations as low as 50 bacteria/ml of blood, while Giemsa thick staining detected bacteria at concentrations ≥ 25000 bacteria/ml. Conclusion: The centrifugation-based enrichment method appeared as much as 500-fold more sensitive than thick smears, which makes it even superior to some PCR assays. Due to its simplicity and minimal laboratory requirements, this method can be considered a valuable tool for the diagnosis of RF in rural health centers.

  4. Sampling method of environmental radioactivity monitoring

    International Nuclear Information System (INIS)

    1984-01-01

    This manual provides sampling methods for environmental samples of airborne dust, precipitated dust, precipitated water (rain or snow), fresh water, soil, river or lake sediment, water discharged from nuclear facilities, grains, tea, milk, pasture grass, limnetic organisms, daily diet, index organisms, sea water, marine sediment, and marine organisms, as well as for tritium and radioiodine determination, for radiation monitoring of radioactive fallout or of radioactivity released by nuclear facilities. The manual aims at presenting standard sampling procedures for environmental radioactivity monitoring regardless of the monitoring objective, and gives the preservation method for environmental samples acquired at the sampling point for radiation counting (for all samples except the human body). The sampling techniques adopted in this manual were chosen on the criteria that they are suitable for routine monitoring and require no special skill. Based on the above-mentioned principles, this manual presents the outline and aims of sampling, the sampling position or object, sampling quantity, apparatus, equipment or vessels for sampling, sampling location, sampling procedures, pretreatment and preparation procedures of a sample for radiation counting, necessary recording items for sampling, and sample transportation procedures. Special attention is given in the chapter on tritium and radioiodine, because these radionuclides might be lost through the above-mentioned sample preservation methods, which are intended for radiation counting of radionuclides less volatile than tritium or radioiodine. (Takagi, S.)

  5. Optimization of the solvent-based dissolution method to sample volatile organic compound vapors for compound-specific isotope analysis.

    Science.gov (United States)

    Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel

    2017-10-20

    The methodology of the solvent-based dissolution method used to sample gas-phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists of pulling air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can be treated similarly to groundwater samples to perform routine CSIA by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among the solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity for TCE and benzene, hence efficiently dissolving the compounds during their transit through the solvent. The method detection limit for TCE (5 ± 1 μg/m³) and benzene (1.7 ± 0.5 μg/m³) is lower when using TGDE compared to methanol, which was previously used (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas-phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ13C analysis. Due to a different analytical procedure, the method detection limit associated with δ37Cl analysis was found to be 156 ± 6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas-phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined. Moreover, the possibility to analyse for TCE concentration in the solvent after sampling (or other targeted VOCs) allows the field deployment of the sampling
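
    The back-calculation described in the last sentences can be sketched under the crudest possible assumption (complete trapping of the VOC pulled through the solvent, ignoring the partitioning equilibrium the paper actually uses), so treat all numbers as placeholders.

    ```python
    # Order-of-magnitude sketch only: assumes every microgram of TCE carried
    # through the solvent is retained, whereas the paper's relationship also
    # involves the air-solvent partitioning coefficient.
    c_solvent = 50.0    # µg/L of TCE measured in TGDE after sampling (assumed)
    v_solvent = 0.005   # L of TGDE used (assumed)
    q_air = 0.2         # L/min sampling rate (assumed)
    t = 120.0           # min sampling duration (assumed)

    mass_ug = c_solvent * v_solvent            # µg of TCE trapped
    c_gas = mass_ug / (q_air * t) * 1000.0     # µg/m³ (1 m³ = 1000 L)
    print(f"gas-phase concentration ~ {c_gas:.1f} µg/m³")
    ```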

  6. A nanosilver-based spectrophotometric method for determination of malachite green in surface water samples.

    Science.gov (United States)

    Sahraei, R; Farmany, A; Mortazavi, S S; Noorizadeh, H

    2013-07-01

    A new spectrophotometric method is reported for the determination of nanomolar levels of malachite green in surface water samples. The method is based on the catalytic effect of silver nanoparticles on the oxidation of malachite green by hexacyanoferrate(III) in an acetate-acetic acid medium. The absorbance is measured at 610 nm with the fixed-time method. Under the optimum conditions, the linear range was 8.0 × 10⁻⁹ to 2.0 × 10⁻⁷ mol L⁻¹ malachite green with a correlation coefficient of 0.996. The limit of detection (S/N = 3) was 2.0 × 10⁻⁹ mol L⁻¹. The relative standard deviation for ten replicate determinations of 1.0 × 10⁻⁸ mol L⁻¹ malachite green was 1.86%. The method features good accuracy and reproducibility for malachite green determination in surface water samples without any preconcentration or separation step.

  7. Standard methods for sampling and sample preparation for gamma spectroscopy

    International Nuclear Information System (INIS)

    Taskaeva, M.; Taskaev, E.; Nikolov, P.

    1993-01-01

    The strategy for sampling and sample preparation is outlined: the necessary number of samples; analysis and treatment of the results received; the quantity of analysed material according to the radionuclide concentrations and analytical methods; and the minimal quantity and kind of data needed for making final conclusions and decisions on the basis of the results received. This strategy was tested in gamma spectroscopic analysis of radionuclide contamination in the region of the Eleshnitsa Uranium Mines. The water samples were taken and stored according to ASTM D 3370-82. The general sampling procedures were in conformity with the recommendations of ISO 5667. The radionuclides were concentrated by coprecipitation with iron hydroxide and ion exchange. The sampling of soil samples complied with the rules of ASTM C 998, and their sample preparation with ASTM C 999. After preparation, the samples were sealed hermetically and measured. (author)

  8. Development of a Univariate Membrane-Based Mid-Infrared Method for Protein Quantitation and Total Lipid Content Analysis of Biological Samples

    Directory of Open Access Journals (Sweden)

    Ivona Strug

    2014-01-01

    Biological samples present a range of complexities, from homogeneous purified protein to multicomponent mixtures. Accurate qualification of such samples is paramount to downstream applications. We describe the development of an MIR-spectroscopy-based analytical method offering simultaneous protein quantitation (0.25–5 mg/mL) and analysis of total lipid or detergent species, as well as the identification of other biomolecules present in biological samples. The method utilizes a hydrophilic PTFE membrane engineered for presentation of aqueous samples in a dried format compatible with fast infrared analysis. Unlike classical quantification techniques, the reported method is independent of amino acid sequence and thus applicable to complex samples of unknown composition. By comparison to existing platforms, this MIR-based method enables direct quantification using minimal sample volume (2 µL); it is well suited where repeat access and limited sample size are critical parameters. Further, accurate results can be derived without specialized training or knowledge of IR spectroscopy. Overall, the simplified application and analysis system provides a more cost-effective alternative to high-throughput IR systems for research laboratories with minimal throughput demands. In summary, the MIR-based system provides a viable alternative to current protein quantitation methods; it also uniquely offers simultaneous qualification of other components, notably lipids and detergents.

  9. Validation of single-sample doubly labeled water method

    International Nuclear Information System (INIS)

    Webster, M.D.; Weathers, W.W.

    1989-01-01

    We have experimentally validated a single-sample variant of the doubly labeled water method for measuring metabolic rate and water turnover in a very small passerine bird, the verdin (Auriparus flaviceps). We measured CO2 production using the Haldane gravimetric technique and compared these values with estimates derived from isotopic data. Doubly labeled water results based on the one-sample calculations differed from Haldane values by less than 0.5% on average (range -8.3 to 11.2%, n = 9). Water flux computed by the single-sample method differed by -1.5% on average from results for the same birds based on the standard two-sample technique (range -13.7 to 2.0%, n = 9)

  10. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    KAUST Repository

    Maadooliat, Mehdi; Gao, Xin; Huang, Jianhua Z.

    2012-01-01

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.

  11. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    KAUST Repository

    Maadooliat, Mehdi

    2012-08-27

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.
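
    The LagSVD idea can be sketched on a synthetic angle chain: build the lag-k bivariate distribution as a 2-D histogram and monitor its singular values (for independent pairs the histogram is nearly rank-one, so the second singular value tracks the dependency). The AR(1) chain and bin count are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dihedral-angle chain with short-range sequential dependency.
    n = 5000
    phi = np.zeros(n)
    for i in range(1, n):
        phi[i] = 0.7 * phi[i - 1] + rng.normal(0, 0.5)   # AR(1) stand-in

    def lag_singular_values(angles, lag, bins=36):
        """SVD of the lag-k bivariate histogram of (angle_i, angle_{i+lag})."""
        H, _, _ = np.histogram2d(angles[:-lag], angles[lag:], bins=bins,
                                 density=True)
        return np.linalg.svd(H, compute_uv=False)

    for lag in (1, 2, 5, 10):
        s = lag_singular_values(phi, lag)
        print(f"lag {lag:2d}: second/first singular value = {s[1]/s[0]:.3f}")
    ```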

  12. On-capillary sample cleanup method for the electrophoretic determination of carbohydrates in juice samples.

    Science.gov (United States)

    Morales-Cid, Gabriel; Simonet, Bartolomé M; Cárdenas, Soledad; Valcárcel, Miguel

    2007-05-01

    On many occasions, sample treatment is a critical step in electrophoretic analysis. As an alternative to batch procedures, in this work a new strategy is presented with a view to developing an on-capillary sample cleanup method. This strategy is based on partial filling of the capillary with carboxylated single-walled carbon nanotubes (c-SWNT). The nanoparticles retain interferences from the matrix, allowing the determination and quantification of carbohydrates (viz. glucose, maltose and fructose). The precision of the method for the analysis of real samples ranged from 5.3 to 6.4%. The proposed method was compared with a method based on batch filtration of the juice sample through diatomaceous earth and further electrophoretic determination; this method was also validated in this work. The RSD for this other method ranged from 5.1 to 6%. The results obtained by both methods were statistically comparable, demonstrating the accuracy of the proposed methods and their effectiveness. Electrophoretic separation of carbohydrates was achieved using a 200 mM borate solution as buffer at pH 9.5 and applying 15 kV. During separation, the capillary temperature was kept constant at 40°C. For the on-capillary cleanup method, a solution containing 50 mg/L of c-SWNTs prepared in 300 mM borate solution at pH 9.5 was introduced into the capillary for 60 s just before sample introduction. For the electrophoretic analysis of samples cleaned in batch with diatomaceous earth, it is also recommended to introduce into the capillary, just before the sample, a 300 mM borate solution, as it enhances the sensitivity and electrophoretic resolution.

  13. Adaptive list sequential sampling method for population-based observational studies

    NARCIS (Netherlands)

    Hof, Michel H.; Ravelli, Anita C. J.; Zwinderman, Aeilko H.

    2014-01-01

    In population-based observational studies, non-participation and delayed response to the invitation to participate are complications that often arise during the recruitment of a sample. When both are not properly dealt with, the composition of the sample can be different from the desired

  14. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid only for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by the Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the Orthogonal Array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology, such as the leaky aquifer integral.
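
    The LHS variant can be sketched using the identity E1(x) = ∫₀¹ exp(−x/u)/u du (obtained by substituting t = x/u into the usual definition), with SciPy's exp1 as the benchmark in place of Mathematica; the sample size is an illustrative assumption.

    ```python
    import numpy as np
    from scipy.stats import qmc
    from scipy.special import exp1

    def e1_lhs(x, n=4096, seed=0):
        """Approximate E1(x) by averaging exp(-x/u)/u over LHS points u in (0,1)."""
        u = qmc.LatinHypercube(d=1, seed=seed).random(n).ravel()
        return float(np.mean(np.exp(-x / u) / u))

    for x in (0.01, 0.1, 1.0, 5.0):
        print(f"x={x:5.2f}  LHS={e1_lhs(x):.6f}  scipy exp1={exp1(x):.6f}")
    ```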

  15. Improvement of a sample preparation method assisted by sodium deoxycholate for mass-spectrometry-based shotgun membrane proteomics.

    Science.gov (United States)

    Lin, Yong; Lin, Haiyan; Liu, Zhonghua; Wang, Kunbo; Yan, Yujun

    2014-11-01

    In current shotgun-proteomics-based biological discovery, the identification of membrane proteins is a challenge. This is especially true for integral membrane proteins due to their highly hydrophobic nature and low abundance. Thus, much effort has been directed at sample preparation strategies such as use of detergents, chaotropes, and organic solvents. We previously described a sample preparation method for shotgun membrane proteomics, the sodium deoxycholate-assisted method, which cleverly circumvents many of the challenges associated with traditional sample preparation methods. However, the method is associated with significant sample loss due to the slightly weaker extraction/solubilization ability of sodium deoxycholate when it is used at relatively low concentrations such as 1%. Hence, we present an enhanced sodium deoxycholate sample preparation strategy that first uses a high concentration of sodium deoxycholate (5%) to lyse membranes and extract/solubilize hydrophobic membrane proteins, and then dilutes the detergent to 1% for a more efficient digestion. We then applied the improved method to shotgun analysis of proteins from a rat liver membrane-enriched fraction. Compared with other representative sample preparation strategies, including our previous sodium deoxycholate-assisted method, the enhanced sodium deoxycholate method exhibited superior sensitivity, coverage, and reliability for the identification of membrane proteins, particularly those with high hydrophobicity and/or multiple transmembrane domains. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Study on a pattern classification method of soil quality based on simplified learning sample dataset

    Science.gov (United States)

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.

    2011-01-01

    Given the massive soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade based on classical sampling techniques and an unordered multiclass logistic regression model. As a case study, the learning sample size was determined under a given confidence level and estimation accuracy, and the c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database of the study area, Longchuan County in Guangdong Province; an unordered logistic classifier model was then built, and the calculation and analysis steps of intelligent soil quality grade classification were given. The results indicated that soil quality grade can be effectively learned and predicted from the extracted simplified dataset using this method, which changes the traditional approach to soil quality grade evaluation. © 2011 IEEE.
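
    A sketch of the simplify-then-classify pipeline on synthetic data: scikit-learn's KMeans stands in for the c-means step, and multinomial logistic regression plays the unordered classifier; the data, cluster counts, and class construction are all illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the soil-quality database: a few grade classes
    # derived from a handful of soil indicators.
    n, d = 20000, 6
    X = rng.normal(size=(n, d))
    grade = (X[:, :3].sum(axis=1) > 0).astype(int) + (X[:, 0] > 1).astype(int)

    # Cluster each class and keep the point nearest each cluster center as
    # the simplified learning sample (KMeans stands in for c-means).
    keep = []
    for g in np.unique(grade):
        idx = np.where(grade == g)[0]
        km = KMeans(n_clusters=50, n_init=4, random_state=0).fit(X[idx])
        d2 = np.linalg.norm(X[idx] - km.cluster_centers_[km.labels_], axis=1)
        for c in range(50):
            members = idx[km.labels_ == c]
            keep.append(members[np.argmin(d2[km.labels_ == c])])
    keep = np.array(keep)

    # Unordered multiclass logistic regression on the simplified set.
    clf = LogisticRegression(max_iter=1000).fit(X[keep], grade[keep])
    print(f"reduced set: {len(keep)} points, "
          f"accuracy on full data: {clf.score(X, grade):.3f}")
    ```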

  17. Surface plasmon resonance based sensing of different chemical and biological samples using admittance loci method

    Science.gov (United States)

    Brahmachari, Kaushik; Ghosh, Sharmila; Ray, Mina

    2013-06-01

    The admittance loci method plays an important role in the design of multilayer thin-film structures. In this paper, the admittance loci method is explored theoretically for sensing of various chemical and biological samples based on the surface plasmon resonance (SPR) phenomenon. A dielectric multilayer structure consisting of a borosilicate glass (BSG) substrate, calcium fluoride (CaF2) and zirconium dioxide (ZrO2), along with different dielectric layers, has been investigated. Moreover, the admittance loci as well as the SPR curves of a metal-dielectric multilayer structure consisting of the BSG substrate, a gold metal film and various dielectric samples have been simulated in the MATLAB environment. To validate the proposed simulation results, calibration curves have also been provided.
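
    A minimal characteristic-matrix sketch of the quantity the admittance loci method tracks, for a Kretschmann-type BSG/gold/sample stack; the wavelength, gold optical constants, film thickness, and sample indices are illustrative assumptions, not the paper's design.

    ```python
    import numpy as np

    lam = 633e-9                               # wavelength (m), assumed
    n_prism, n_gold, d_gold = 1.515, 0.18 + 3.0j, 50e-9

    def reflectance_p(theta, n_sample):
        """p-polarized reflectance of prism/gold/sample via the admittance
        Y = C/B of the characteristic (transfer) matrix."""
        k0 = 2 * np.pi / lam
        sin_t = n_prism * np.sin(theta)        # conserved tangential component

        def eta(nj):                           # tilted p-pol admittance n/cos
            cos_j = np.sqrt(1 - (sin_t / nj) ** 2 + 0j)
            return nj / cos_j, cos_j

        eta0, _ = eta(n_prism)
        etag, cosg = eta(n_gold)
        delta = k0 * n_gold * d_gold * cosg    # phase thickness of the gold film
        M = np.array([[np.cos(delta), 1j * np.sin(delta) / etag],
                      [1j * etag * np.sin(delta), np.cos(delta)]])
        eta_s, _ = eta(n_sample)
        B, C = M @ np.array([1.0, eta_s])
        Y = C / B                              # the admittance-locus point
        r = (eta0 - Y) / (eta0 + Y)
        return abs(r) ** 2

    thetas = np.radians(np.linspace(40, 80, 400))
    for ns in (1.33, 1.36):                    # two aqueous-like samples
        R = [reflectance_p(t, ns) for t in thetas]
        print(f"n_sample={ns}: SPR dip near "
              f"{np.degrees(thetas[np.argmin(R)]):.1f} deg")
    ```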

  18. A nanosilver-based spectrophotometry method for sensitive determination of tartrazine in food samples.

    Science.gov (United States)

    Sahraei, R; Farmany, A; Mortazavi, S S

    2013-06-01

    A new method is reported for the sensitive determination of tartrazine in food samples. The method is based on the catalytic effect of silver nanoparticles (AgNPs) on the oxidation of tartrazine by potassium iodate in an acetate buffer medium. The reaction is followed spectrophotometrically by measuring the change in absorbance (ΔA) at 420 nm using a fixed-time method (70 s). The reaction variables were optimised in order to achieve the highest sensitivity. The 3σ-criterion detection limit was 0.3 ng/mL, and the relative standard deviation for ten replicate measurements of 30 ng/mL of tartrazine was 0.98% (n=10). The method was successfully applied to the determination of tartrazine in lemon- and papaya-flavoured gelatin, candy, and fruit syrup. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Damage evolution analysis of coal samples under cyclic loading based on single-link cluster method

    Science.gov (United States)

    Zhang, Zhibo; Wang, Enyuan; Li, Nan; Li, Xuelong; Wang, Xiaoran; Li, Zhonghui

    2018-05-01

    In this paper, the acoustic emission (AE) response of coal samples under cyclic loading is measured. The results show a good positive correlation between the AE parameters and stress. The AE signal of coal samples under cyclic loading exhibits an obvious Kaiser effect. The single-link cluster (SLC) method is applied to analyze the spatial evolution characteristics of the AE events and the damage evolution process of the coal samples. It is found that the subset scale of the SLC structure becomes smaller and smaller as the number of loading cycles increases, and there is a negative linear relationship between the subset scale and the degree of damage. The spatial correlation length ξ of the SLC structure is calculated. The results show that ξ fluctuates around a certain value from the second to the fifth cyclic loading process, but clearly increases in the sixth loading process. Based on the criterion of microcrack density, the coal sample failure process is a transformation from small-scale to large-scale damage, which is the reason for the changes in the spatial correlation length. Through this systematic analysis, the SLC method is shown to be an effective way to research the damage evolution process of coal samples under cyclic loading, and it will provide important reference values for the study of coal bursts.
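
    A sketch of the SLC step on synthetic AE hypocenters, using SciPy's single-linkage clustering with a distance cutoff; the event coordinates and threshold are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)

    # Synthetic AE event hypocenters (mm) in a coal sample: two damage zones
    # plus scattered background events.
    zone_a = rng.normal([20, 25, 40], 2.0, size=(60, 3))
    zone_b = rng.normal([35, 30, 70], 2.0, size=(40, 3))
    noise = rng.uniform(0, 100, size=(20, 3))
    events = np.vstack([zone_a, zone_b, noise])

    # Single-link clustering; cutting the dendrogram at a distance threshold
    # yields the SLC subsets whose scale shrinks as damage localizes.
    Z = linkage(events, method="single")
    labels = fcluster(Z, t=8.0, criterion="distance")
    sizes = np.sort(np.bincount(labels)[1:])[::-1]
    print(f"{labels.max()} subsets; largest subset scales: {sizes[:5]}")
    ```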

  20. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137Cs and other fallout radionuclides, such as excess 210Pb and 7Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137Cs in erosion studies has been widely developed, while the application of fallout 210Pb and 7Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137Cs. However, fallout 210Pb and 7Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites, as well as their total inventories.

  1. Efficient sample preparation method based on solvent-assisted dispersive solid-phase extraction for the trace detection of butachlor in urine and waste water samples.

    Science.gov (United States)

    Aladaghlo, Zolfaghar; Fakhari, Alireza; Behbahani, Mohammad

    2016-10-01

    In this work, an efficient sample preparation method termed solvent-assisted dispersive solid-phase extraction was applied. The sample preparation method is based on dispersing the sorbent (benzophenone) into the aqueous sample to maximize the interaction surface. In this approach, dispersion of the sorbent at a very low milligram level is achieved by injecting a solution of the sorbent in a disperser solvent into the aqueous sample; a cloudy solution is created by the dispersion of the sorbent in the bulk aqueous sample. After pre-concentration of the butachlor, the cloudy solution was centrifuged, and the butachlor in the sediment phase was dissolved in ethanol and determined by gas chromatography with flame ionization detection. Under the optimized conditions (solution pH = 7.0, sorbent: benzophenone, 2%, disperser solvent: ethanol, 500 μL, centrifugation at 4000 rpm for 3 min), the method detection limit for butachlor was 2, 3 and 3 μg/L in distilled water, waste water, and urine samples, respectively. Furthermore, the preconcentration factor was 198.8, 175.0, and 174.2 in distilled water, waste water, and urine samples, respectively. Solvent-assisted dispersive solid-phase extraction was successfully used for the trace monitoring of butachlor in urine and waste water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm used for model standardization (also called model transfer) in near infrared (NIR) spectroscopy. NIR data from corn samples, collected for analysis of protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS is compared with KS-PDS-KPLS in terms of the prediction accuracy for protein content and the calculation speed of each algorithm. The conclusions are that SIMPLISMA-KPLS can be used as an alternative sample selection method for model transfer. Although its accuracy is similar to that of Kennard-Stone (KS), it differs from KS in that it employs concentration information during selection. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are correlated with the concentrations (y). It can also be used for outlier elimination simultaneously, by validation of the calibration. The run-time statistics show that the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.
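
    For reference, a minimal sketch of the Kennard-Stone (KS) baseline the paper compares against: select the two most distant samples first, then repeatedly add the sample with the greatest distance to its nearest already-selected neighbour (synthetic spectra stand in for the corn data):

      import numpy as np
      from scipy.spatial.distance import cdist

      def kennard_stone(X, n_select):
          D = cdist(X, X)
          i, j = np.unravel_index(np.argmax(D), D.shape)
          selected = [int(i), int(j)]
          while len(selected) < n_select:
              rest = [k for k in range(len(X)) if k not in selected]
              # distance of each remaining sample to its closest selected sample
              min_d = D[np.ix_(rest, selected)].min(axis=1)
              selected.append(rest[int(np.argmax(min_d))])
          return selected

      X = np.random.default_rng(1).normal(size=(80, 700))  # 80 spectra x 700 channels
      print(kennard_stone(X, 10))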

  3. Analysis of {sup 129}I in lichens by accelerator mass spectrometry through a microwave-based sample preparation method

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Guzman, J.M. [Centro Nacional de Aceleradores (CNA), Avda. Thomas Alva Edison 7, Isla de la Cartuja, 41092 Seville (Spain); Lopez-Gutierrez, J.M., E-mail: lguti@us.e [Centro Nacional de Aceleradores (CNA), Avda. Thomas Alva Edison 7, Isla de la Cartuja, 41092 Seville (Spain); Dpto. de Fisica Aplicada I, Escuela Universitaria Politecnica, c/. Virgen de Africa 7, 41011 Seville (Spain); Pinto, A.R. [Centro Nacional de Aceleradores (CNA), Avda. Thomas Alva Edison 7, Isla de la Cartuja, 41092 Seville (Spain); Holm, E. [Department of Radiation Physics, Lund University Hospital, S-22185 Lund (Sweden); Garcia-Leon, M. [Centro Nacional de Aceleradores (CNA), Avda. Thomas Alva Edison 7, Isla de la Cartuja, 41092 Seville (Spain); Dpto. Fisica Atomica, Molecular y Nuclear, Avda. Reina Mercedes, s/n, 41012 Seville (Spain)

    2010-04-15

    The presence of {sup 129}I in the environment has been strongly influenced by artificial nuclear emissions since the beginning of the nuclear era in the mid 20th century. In order to learn more about the different sources and their relative impact in different zones, it is necessary to extend the set of measurements of this radionuclide in environmental samples. In this work, {sup 129}I has been determined in lichen samples (Cladonia alpestris) from Lake Rogen in Central Sweden. A method based on microwave digestion was developed for these measurements in order to improve speed and reduce contamination. Based on this method, {sup 129}I concentrations in lichen samples from Lake Rogen have been measured, showing the impact of the Chernobyl accident and of nuclear fuel reprocessing plants.

  4. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis, including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can differ significantly from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on the quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real-world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing it. We show that several methods are now available and that sample normalization should be performed in quantitative metabolomics whenever the analyzed samples have significant variations in total sample amount. Copyright © 2015 Elsevier B.V. All rights reserved.
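
    As a minimal sketch of one simple scheme in this family (the review itself covers a broader range of methods), total-intensity (sum) normalization scales each sample so that all samples share the same total signal; the data below are synthetic:

      import numpy as np

      X = np.random.default_rng(2).lognormal(size=(6, 200))  # 6 samples x 200 metabolites
      totals = X.sum(axis=1, keepdims=True)
      X_norm = X / totals * totals.mean()    # equalize the total sample amount

      print(X.sum(axis=1).round(1))          # unequal totals before
      print(X_norm.sum(axis=1).round(1))     # equal totals after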

  5. A Fast and Robust Feature-Based Scan-Matching Method in 3D SLAM and the Effect of Sampling Strategies

    Directory of Open Access Journals (Sweden)

    Cihan Ulas

    2013-11-01

    Simultaneous localization and mapping (SLAM) plays an important role in fully autonomous systems when a GNSS (global navigation satellite system) is not available. Studies in both 2D indoor and 3D outdoor SLAM are based on the appearance of environments and utilize scan-matching methods to find rigid body transformation parameters between two consecutive scans. In this study, a fast and robust scan-matching method based on feature extraction is introduced. Since the method is based on the matching of certain geometric structures, like plane segments, the outliers and noise in the point cloud are considerably eliminated; the proposed scan-matching algorithm is therefore more robust than conventional methods. Besides, the registration time and the number of iterations are significantly reduced, since the number of matching points is efficiently decreased. As a scan-matching framework, an improved version of the normal distribution transform (NDT) is used. The probability density functions (PDFs) of the reference scan are generated as in the traditional NDT, and the feature extraction, based on stochastic plane detection, is applied only to the input scan. Using an experimental dataset from an outdoor environment (a university campus), we obtained satisfactory performance results. Moreover, the feature extraction part of the algorithm can be considered a special sampling strategy for scan-matching and is compared to other sampling strategies, such as random sampling and grid-based sampling, the latter of which was first used in the NDT. Thus, this study also shows the effect of subsampling on the performance of the NDT.

  6. Rapid filtration separation-based sample preparation method for Bacillus spores in powdery and environmental matrices.

    Science.gov (United States)

    Isabel, Sandra; Boissinot, Maurice; Charlebois, Isabelle; Fauvel, Chantal M; Shi, Lu-E; Lévesque, Julie-Christine; Paquin, Amélie T; Bastien, Martine; Stewart, Gale; Leblanc, Eric; Sato, Sachiko; Bergeron, Michel G

    2012-03-01

    Authorities frequently need to analyze suspicious powders and other samples for biothreat agents in order to assess environmental safety. Numerous nucleic acid detection technologies have been developed to detect and identify biowarfare agents in a timely fashion. The extraction of microbial nucleic acids from a wide variety of powdery and environmental samples, at a quality level adequate for these technologies, still remains a technical challenge. We aimed to develop a rapid and versatile method of separating bacteria from these samples and then extracting their microbial DNA. Bacillus atrophaeus subsp. globigii was used as a simulant of Bacillus anthracis. We studied the effects of a broad variety of powdery and environmental samples on PCR detection and the steps required to alleviate their interference. With a benchmark DNA extraction procedure, 17 of the 23 samples investigated interfered with bacterial lysis and/or PCR-based detection. Therefore, we developed the dual-filter method for applied recovery of microbial particles from environmental and powdery samples (DARE). The DARE procedure allows the separation of bacteria from contaminating matrices that interfere with PCR detection. This procedure required only 2 min, while the DNA extraction process lasted 7 min, for a total of 9 min. The sample preparation procedure allowed the recovery of cleaned bacterial spores and relieved detection interference caused by a wide variety of samples. Our procedure was easily completed in a laboratory facility and is amenable to field application and automation.

  7. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses generating three dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing, are proposed and investigated in detail. The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster has a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail and the other two methods should be considered.

  8. Comparisons of methods for generating conditional Poisson samples and Sampford samples

    OpenAIRE

    Grafström, Anton

    2005-01-01

    Methods for conditional Poisson sampling (CP-sampling) and Sampford sampling are compared and the focus is on the efficiency of the methods. The efficiency is investigated by simulation in different sampling situations. It was of interest to compare methods since new methods for both CP-sampling and Sampford sampling were introduced by Bondesson, Traat & Lundqvist in 2004. The new methods are acceptance rejection methods that use the efficient Pareto sampling method. They are found to be ...

  9. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Due to the complexity of systems and a lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty, giving it good compatibility, and it avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. Original expert judgments are retained rather objectively throughout the processing procedure. The construction of the cumulative probability function and the random sampling process do not require any human intervention or judgment, and they can be implemented easily by computer programs, giving the method an apparent advantage in evaluation practices for fairly large index systems.

  10. Alpha Matting with KL-Divergence Based Sparse Sampling.

    Science.gov (United States)

    Karacan, Levent; Erdem, Aykut; Erdem, Erkut

    2017-06-22

    In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of the foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on the KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could easily be extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.
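
    As a rough sketch of the kind of dissimilarity involved, a symmetrized KL-divergence between feature distributions gathered around two candidate samples can be computed as below; the features are synthetic, and the authors' actual feature set and density estimator may differ:

      import numpy as np
      from scipy.stats import entropy

      rng = np.random.default_rng(9)
      feats_a = rng.normal(0.4, 0.05, 500)   # features near sample A (hypothetical)
      feats_b = rng.normal(0.6, 0.08, 500)   # features near sample B (hypothetical)

      bins = np.linspace(0.0, 1.0, 33)
      p, _ = np.histogram(feats_a, bins=bins, density=True)
      q, _ = np.histogram(feats_b, bins=bins, density=True)
      p, q = p + 1e-9, q + 1e-9              # avoid empty bins

      d = 0.5 * (entropy(p, q) + entropy(q, p))   # symmetrized KL-divergence
      print(f"KL-based dissimilarity: {d:.3f}")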

  11. Note: A simple image processing based fiducial auto-alignment method for sample registration.

    Science.gov (United States)

    Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne

    2015-08-01

    A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.

  12. Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters.

    Science.gov (United States)

    Calfee, M Worth; Rose, Laura J; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal

    2014-01-01

    The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regard to their ability to recover Bacillus atrophaeus spores (a surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p≤0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p>0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. Published by Elsevier B.V.

  13. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-perform pan traps.

    Science.gov (United States)

    Popic, Tony J; Davila, Yvonne C; Wardle, Glenda M

    2013-01-01

    Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km² area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

  14. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
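
    A minimal sketch of such a simulation (simplified to a per-second Bernoulli event stream rather than events with explicit durations) reproduces the methods' characteristic biases: momentary time sampling (MTS) is roughly unbiased for prevalence, partial-interval recording (PIR) overestimates it, and whole-interval recording (WIR) underestimates it:

      import numpy as np

      rng = np.random.default_rng(3)
      stream = rng.random(6000) < 0.25       # 1-s resolution, true prevalence 25%
      interval = 10                          # 10-s observation intervals
      bins = stream.reshape(-1, interval)

      mts = bins[:, -1].mean()               # status at each interval's final second
      pir = bins.any(axis=1).mean()          # any occurrence within the interval
      wir = bins.all(axis=1).mean()          # occurrence throughout the interval

      print(f"true={stream.mean():.3f}  MTS={mts:.3f}  PIR={pir:.3f}  WIR={wir:.3f}")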

  15. An efficient method for sampling the essential subspace of proteins

    NARCIS (Netherlands)

    Amadei, A; Linssen, A.B M; de Groot, B.L.; van Aalten, D.M.F.; Berendsen, H.J.C.

    A method is presented for a more efficient sampling of the configurational space of proteins as compared to conventional sampling techniques such as molecular dynamics. The method is based on the large conformational changes in proteins revealed by the ''essential dynamics'' analysis. A form of

  16. Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters

    OpenAIRE

    Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal

    2013-01-01

    The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50...

  17. A comparative proteomics method for multiple samples based on a 18O-reference strategy and a quantitation and identification-decoupled strategy.

    Science.gov (United States)

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

    Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy has greater accuracy and reliability than previously used comparison methods based on transferring comparison or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-perform pan traps.

    Directory of Open Access Journals (Sweden)

    Tony J Popic

    Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km² area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

  19. Distance sampling methods and applications

    CERN Document Server

    Buckland, S T; Marques, T A; Oedekoven, C S

    2015-01-01

    In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book was published, there have been a number of new developments, and the intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book's accompanying website. Some of the case studies use the software Distance, while others use R code. The book is in three parts. The first part addresses basic methods, the ...

  20. Sampling of temporal networks: Methods and biases

    Science.gov (United States)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure, and thus caution is necessary when generalizing results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
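
    As a minimal sketch of the strategy the study finds most robust, uniform node sampling keeps a random subset of nodes and retains only the contacts whose endpoints both survive (the contact list below is synthetic):

      import random

      random.seed(4)
      contacts = [(random.randrange(100), random.randrange(100), t)
                  for t in range(2000)]                    # (u, v, timestamp)

      nodes = sorted({n for u, v, _ in contacts for n in (u, v)})
      kept = set(random.sample(nodes, k=len(nodes) // 2))  # uniform node sampling
      sub = [(u, v, t) for u, v, t in contacts if u in kept and v in kept]

      print(f"{len(sub)} of {len(contacts)} contacts survive 50% node sampling")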

  1. Neutron activation analysis of certified samples by the absolute method

    Science.gov (United States)

    Kadem, F.; Belouadah, N.; Idiri, Z.

    2015-07-01

    The nuclear reaction analysis technique is mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for calculated cross sections evaluated from systematic studies, we used the neutron activation analysis (NAA) technique to determine the concentrations of the various constituents of certified samples of animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, which is essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. Called the absolute method, it allows a measurement as accurate as the relative method. The results obtained by the absolute method showed that the values are as precise as those of the relative method, which requires a standard sample for each element to be quantified.
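
    For context, the fundamental activation equation referred to here has the standard textbook form (stated from general NAA theory rather than quoted from this paper):

      A = N \, \varphi \, \sigma \, \left(1 - e^{-\lambda t_{i}}\right) e^{-\lambda t_{d}}

    where A is the measured activity, N the number of target nuclei, \varphi the neutron flux, \sigma the activation cross section, \lambda the decay constant of the product nuclide, t_i the irradiation time and t_d the decay time before counting. Solving this equation for N is what allows quantification without a standard sample.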

  2. THE USE OF RANKING SAMPLING METHOD WITHIN MARKETING RESEARCH

    Directory of Open Access Journals (Sweden)

    CODRUŢA DURA

    2011-01-01

    Marketing and statistical literature available to practitioners provides a wide range of sampling methods that can be implemented in the context of marketing research. The ranking sampling method is based on dividing the general population into several strata, namely into several subdivisions which are relatively homogeneous with regard to a certain characteristic. The sample is then composed by selecting from each stratum a certain number of components (which can be proportional or non-proportional to the size of the stratum) until the pre-established volume of the sample is reached. Using ranking sampling within marketing research requires the determination of some relevant statistical indicators - average, dispersion, sampling error etc. To that end, the paper contains a case study which illustrates the actual approach used in order to apply the ranking sampling method within a marketing research study made by a company which provides Internet connection services, on a particular category of customers - small and medium enterprises.
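
    For illustration, the proportional variant of this allocation is a one-liner; the stratum sizes below are invented and are not from the case study:

      # Proportional allocation across strata (hypothetical SME customer counts).
      strata = {"micro": 1200, "small": 600, "medium": 200}
      n_sample = 100

      total = sum(strata.values())
      allocation = {name: round(n_sample * size / total)
                    for name, size in strata.items()}
      print(allocation)   # {'micro': 60, 'small': 30, 'medium': 10}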

  3. Systems and methods for self-synchronized digital sampling

    Science.gov (United States)

    Samson, Jr., John R. (Inventor)

    2008-01-01

    Systems and methods for self-synchronized data sampling are provided. In one embodiment, a system for capturing synchronous data samples is provided. The system includes an analog to digital converter adapted to capture signals from one or more sensors and convert the signals into a stream of digital data samples at a sampling frequency determined by a sampling control signal; and a synchronizer coupled to the analog to digital converter and adapted to receive a rotational frequency signal from a rotating machine, wherein the synchronizer is further adapted to generate the sampling control signal, and wherein the sampling control signal is based on the rotational frequency signal.

  4. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. More accurate metamodels are then constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206

  5. Statistical sampling method for releasing decontaminated vehicles

    International Nuclear Information System (INIS)

    Lively, J.W.; Ware, J.A.

    1996-01-01

    Earth moving vehicles (e.g., dump trucks, belly dumps) commonly haul radiologically contaminated materials from a site being remediated to a disposal site. Traditionally, each vehicle must be surveyed before being released. The logistical difficulties of implementing the traditional approach on a large scale demand that an alternative be devised. A statistical method (MIL-STD-105E, "Sampling Procedures and Tables for Inspection by Attributes") for assessing product quality from a continuous process was adapted to the vehicle decontamination process. This method produced a sampling scheme that automatically compensates for and accommodates fluctuating batch sizes and changing conditions without the need to modify or rectify the sampling scheme in the field. Vehicles are randomly selected (sampled) upon completion of the decontamination process to be surveyed for residual radioactive surface contamination. The frequency of sampling is based on the expected number of vehicles passing through the decontamination process in a given period and the confidence level desired. This process has been successfully used for 1 year at the former uranium mill site in Monticello, Utah (a CERCLA regulated clean-up site). The method forces improvement in the quality of the decontamination process and results in a lower likelihood that vehicles exceeding the surface contamination standards are offered for survey. Implementation of this statistical sampling method on Monticello Projects has resulted in more efficient processing of vehicles through decontamination and radiological release, saved hundreds of hours of processing time, provided a high level of confidence that release limits are met, and improved the radiological cleanliness of vehicles leaving the controlled site.
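
    As context only (the project used the MIL-STD-105E tables themselves, which this sketch does not reproduce), the generic acceptance-sampling arithmetic behind such schemes is: find the smallest number of vehicles to survey so that, if a fraction p of a batch were contaminated, at least one contaminated vehicle would be caught with confidence C:

      import math

      def sample_size(p, confidence=0.95):
          """Smallest n with 1 - (1 - p)**n >= confidence."""
          return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

      for p in (0.02, 0.05, 0.10):
          print(f"defect rate {p:.0%}: survey {sample_size(p)} vehicles")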

  6. Performance of local information-based link prediction: a sampling perspective

    Science.gov (United States)

    Zhao, Jichang; Feng, Xu; Dong, Li; Liang, Xiao; Xu, Ke

    2012-08-01

    Link prediction is pervasively employed to uncover missing links in snapshots of real-world networks, which are usually obtained through different kinds of sampling methods. In the previous literature, in order to evaluate prediction performance, the known edges in the sampled snapshot are divided into a training set and a probe set randomly, without considering the underlying sampling approach. However, different sampling methods might lead to different missing links, especially for biased ones. For this reason, random partition-based evaluation of performance is no longer convincing if we take the sampling method into account. In this paper, we re-evaluate the performance of local information-based link predictions through a division of the training set and the probe set governed by the sampling method. Interestingly, we find that each prediction approach performs unevenly under different sampling methods. Moreover, most of these predictions perform weakly when the sampling method is biased, which indicates that the performance of these methods might have been overestimated in prior works.

  7. ON SAMPLING BASED METHODS FOR THE DUBINS TRAVELING SALESMAN PROBLEM WITH NEIGHBORHOODS

    Directory of Open Access Journals (Sweden)

    Petr Váňa

    2015-12-01

    In this paper, we address the problem of planning a path for a Dubins vehicle to visit a set of regions, also known as the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). We propose a modification of the existing sampling-based approach to determine an increasing number of samples per goal region, and thus improve the solution quality when more computational time is available. The proposed modification of the sampling-based algorithm has been compared with the performance of existing approaches for the DTSPN, and results on the quality of the solutions found and the required computational time are presented in the paper.

  8. Multielement methods of atomic fluorescence analysis of enviromental samples

    International Nuclear Information System (INIS)

    Rigin, V.I.

    1985-01-01

    A multielement method of atomic fluorescence analysis of environmental samples is suggested, based on sample decomposition by autoclave fluorination and gas-phase atomization of volatile compounds in an inductive argon plasma using a nondispersive polychromator. Detection limits of some elements (Be, Sr, Cd, V, Mo, Te, Ru etc.) for different sample forms introduced into the analyzer are given.

  9. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information on the variables is extracted by pre-sampling points in the failure region, and an importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
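
    The importance-sampling estimator at the core of such methods is compact enough to sketch; the limit state and densities below are toy stand-ins, not the AP1000 model:

      import numpy as np
      from scipy import stats

      def g(x):                    # toy limit state: failure when g(x) < 0
          return 6.0 - x[:, 0] - x[:, 1]

      f = stats.multivariate_normal(mean=[0.0, 0.0])   # nominal input density
      q = stats.multivariate_normal(mean=[3.0, 3.0])   # density shifted toward failure

      x = q.rvs(size=200_000, random_state=0)
      w = f.pdf(x) / q.pdf(x)                          # importance weights
      p_f = np.mean((g(x) < 0) * w)
      print(f"estimated failure probability: {p_f:.2e}")  # ~1.1e-5 for this toy case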

  10. A novel one-class SVM based negative data sampling method for reconstructing proteome-wide HTLV-human protein interaction networks.

    Science.gov (United States)

    Mei, Suyu; Zhu, Hao

    2015-01-26

    Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification, wherein negative data sampling is still an open problem. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives, while rational constraints are seldom exerted on model selection to reduce the risk of false positive predictions in most existing computational methods. In this work, we propose a novel negative data sampling method based on a one-class SVM (support vector machine) to predict proteome-wide protein interactions between the HTLV retrovirus and Homo sapiens: the one-class SVM is used to choose reliable and representative negative data, and a two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that a one-class SVM is more suitable as a negative data sampling method than a two-class PPI predictor, and that predictive feedback-constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by the recent literature. Lastly, gene ontology-based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of the HTLV retrovirus.
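
    A minimal sketch of the core idea (with random stand-in feature vectors, not real protein-pair features): fit a one-class SVM on the known positives and keep as negatives only the candidates it places far outside the positive region:

      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(6)
      positives = rng.normal(0.0, 1.0, size=(300, 20))    # known interacting pairs
      candidates = rng.normal(1.5, 2.0, size=(5000, 20))  # unlabeled candidate pairs

      oc = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(positives)
      scores = oc.decision_function(candidates)           # < 0: outside positive class

      negatives = candidates[scores < np.quantile(scores, 0.2)]  # most dissimilar 20%
      print(f"kept {len(negatives)} of {len(candidates)} candidates as negatives")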

  11. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  12. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  13. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
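
    Of the techniques listed, delta scattering (Woodcock tracking) is the easiest to sketch: flight distances are sampled against a majorant cross-section, and collisions are accepted with probability sigma(x)/sigma_max, so no boundary crossings need to be computed. The 1-D two-material geometry below is illustrative only:

      import numpy as np

      rng = np.random.default_rng(7)

      def sigma(x):                        # total cross-section along the path (1/cm)
          return 0.3 if x < 5.0 else 0.9   # toy slab made of two materials

      sigma_max, depths = 0.9, []
      for _ in range(20_000):              # photon histories
          x = 0.0
          while True:
              x += -np.log(rng.random()) / sigma_max   # flight to tentative collision
              if rng.random() < sigma(x) / sigma_max:  # real (non-delta) collision?
                  depths.append(x)
                  break

      print(f"mean first-collision depth: {np.mean(depths):.2f} cm")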

  14. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
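
    For reference, the non-robust method-of-moments (Matheron) estimator examined here is short enough to sketch; the sampling locations and skewed throughfall-like values below are synthetic:

      import numpy as np
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(8)
      xy = rng.uniform(0, 50, size=(150, 2))           # 150 collectors, 50 m plot
      z = rng.gamma(shape=2.0, scale=10.0, size=150)   # skewed throughfall volumes

      h = pdist(xy)                                    # pairwise separations
      sq = pdist(z[:, None], metric="sqeuclidean")     # (z_i - z_j)^2 per pair

      bins = np.linspace(0.0, 25.0, 11)                # lag classes
      for lo, hi in zip(bins[:-1], bins[1:]):
          m = (h >= lo) & (h < hi)
          if m.any():                                  # gamma(h) = mean sq. diff / 2
              print(f"lag {(lo + hi) / 2:5.1f} m  "
                    f"gamma = {0.5 * sq[m].mean():7.2f}  n = {m.sum()}")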

  15. Independent random sampling methods

    CERN Document Server

    Martino, Luca; Míguez, Joaquín

    2018-01-01

    This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...

  16. Comparability of river suspended-sediment sampling and laboratory analysis methods

    Science.gov (United States)

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low, with the difference in laboratory analysis methods being slightly greater than that in field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream, and that there is less of a difference between samples collected by grab field sampling and analyzed for TSS and the concentration of fines in SSC. Even though differences are present, the strong correlations between SSC and TSS concentrations provide the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.

  17. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples created with this method will optimally reflect the diversity of the languages of the world. On the basis of the internal structure of each genetic language tree, a measure is computed that reflects the linguistic diversity in the language families represented by these trees. This measure is used to determine how many languages from each phylum should be selected, given any required sample size.

  18. DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Laboratory Management Division of the DOE. Methods are prepared for entry into DOE Methods as chapter editors, together with DOE and other participants in this program, identify analytical and sampling method needs. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are of one of two types, "Draft" or "Verified". "Draft" methods have been reviewed internally and show potential for eventual verification, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations

  19. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  20. Sample preparation method for ICP-MS measurement of 99Tc in a large amount of environmental samples

    International Nuclear Information System (INIS)

    Kondo, M.; Seki, R.

    2002-01-01

    Sample preparation for the measurement of 99Tc in a large amount of soil and water samples by ICP-MS has been developed using 95mTc as a yield tracer. This method is based on the conventional method for a small amount of soil samples, using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. Preliminary concentration of Tc by co-precipitation with ferric oxide has been introduced. The matrix materials in a large amount of samples were removed more thoroughly than in the previous method, while keeping a high recovery of Tc. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g of soil and 500 L of water samples. The detection limit of this method was evaluated as 0.054 mBq/kg in 500 g soil and 0.032 μBq/L in 500 L water. The determined value of 99Tc in the IAEA-375 (soil sample collected near the Chernobyl Nuclear Reactor) was 0.25 ± 0.02 Bq/kg. (author)

  1. Particle System Based Adaptive Sampling on Spherical Parameter Space to Improve the MDL Method for Construction of Statistical Shape Models

    Directory of Open Access Journals (Sweden)

    Rui Xu

    2013-01-01

    Minimum description length (MDL) based group-wise registration is a state-of-the-art method for determining the corresponding points of 3D shapes for the construction of statistical shape models (SSMs). However, it suffers from the problem that the determined corresponding points do not spread uniformly over the original shapes, since they are obtained by uniformly sampling the aligned shape on the parameterized space of the unit sphere. We propose a particle-system based method to obtain adaptive sampling positions on the unit sphere to resolve this problem. A set of particles is placed on the unit sphere to construct a particle system whose energy is related to the distortions of the parameterized meshes. By minimizing this energy, each particle is moved on the unit sphere. When the system becomes steady, the particles are treated as vertices to build a spherical mesh, which is then relaxed to slightly adjust the vertices and obtain optimal sampling positions. We used 47 cases of left and right lungs and 50 cases of livers, left and right kidneys, and spleens for evaluation. Experiments showed that the proposed method resolves the problem of the original MDL method and performs better in the generalization and specificity tests.
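
    The core of the particle-system idea can be sketched in a few lines: spread points over the unit sphere by gradient steps on a pairwise repulsion energy, re-projecting onto the sphere after every step. The sketch below substitutes a simple Coulomb-like energy for the paper's distortion-based energy, so the energy, step size and iteration count are all illustrative.

```python
import numpy as np

def spread_particles_on_sphere(n=100, steps=500, lr=0.05, seed=0):
    """Spread n particles over the unit sphere by gradient steps on a
    Coulomb-like repulsion energy, re-projecting onto the sphere after
    each step (a stand-in for the paper's distortion-based energy)."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]           # pairwise differences
        d2 = (diff ** 2).sum(-1) + np.eye(n)           # avoid self-division
        force = (diff / d2[..., None] ** 1.5).sum(axis=1)
        p += lr * force / n                            # repulsion step
        p /= np.linalg.norm(p, axis=1, keepdims=True)  # back onto the sphere
    return p

sampling_positions = spread_particles_on_sphere()
```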

  2. A Method for Choosing the Best Samples for Mars Sample Return.

    Science.gov (United States)

    Gordon, Peter R; Sephton, Mark A

    2018-05-01

    Success of a future Mars Sample Return mission will depend on the correct choice of samples. Pyrolysis-FTIR can be employed as a triage instrument for Mars Sample Return. The technique can thermally dissociate minerals and organic matter for detection. Identification of certain mineral types can determine the habitability of the depositional environment, past or present, while detection of organic matter may suggest past or present habitation. In Mars' history, the Theiikian era represents an attractive target for life search missions and the acquisition of samples. The acidic and increasingly dry Theiikian may have been habitable and followed a lengthy neutral and wet period in Mars' history during which life could have originated and proliferated to achieve relatively abundant levels of biomass with a wide distribution. Moreover, the sulfate minerals produced in the Theiikian are also known to be good preservers of organic matter. We have used pyrolysis-FTIR and samples from a Mars analog ferrous acid stream with a thriving ecosystem to test the triage concept. Pyrolysis-FTIR identified those samples with the greatest probability of habitability and habitation. A three-tier scoring system was developed based on the detection of (i) organic signals, (ii) carbon dioxide and water, and (iii) sulfur dioxide. The presence of each component was given a score of A, B, or C depending on whether the substance had been detected, tentatively detected, or not detected, respectively. Single-step (for greatest possible sensitivity) or multistep (for more diagnostic data) pyrolysis-FTIR methods informed the assignments. The system allowed the highest-priority samples to be categorized as AAA (or A*AA if the organic signal was complex), while the lowest-priority samples could be categorized as CCC. Our methods provide a mechanism with which to rank samples and identify those that should take the highest priority for return to Earth during a Mars Sample Return mission.
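
    The three-tier scoring itself is simple enough to sketch. The function below maps the three detection levels named in the abstract to A/B/C grades and concatenates them, with an asterisk for a complex organic signal; the function name and argument labels are hypothetical, and the spectral thresholds that decide what counts as "detected" are not part of this sketch.

```python
def triage_score(organics, co2_h2o, so2, complex_organics=False):
    """Concatenate A/B/C grades for (i) organic signals, (ii) CO2 and
    water, (iii) SO2; an asterisk marks a complex organic signal."""
    grade = {"detected": "A", "tentatively detected": "B", "not detected": "C"}
    first = grade[organics]
    if complex_organics and first == "A":
        first += "*"                      # A* for a complex organic signal
    return first + grade[co2_h2o] + grade[so2]

print(triage_score("detected", "detected", "detected"))        # AAA
print(triage_score("detected", "detected", "detected", True))  # A*AA
print(triage_score("not detected", "tentatively detected", "not detected"))  # CBC
```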

  3. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

    Audit sampling involves the application of audit procedures to less than 100% of items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As an audit technique largely used, in both its statistical and nonstatistical forms, the method is very important for auditors. It should be applied correctly for a fair view of financial statements, to satisfy the needs of all financial users. In order to be applied correctly the method must be understood by all its users and mainly by auditors. Otherwise the risk of applying it incorrectly would cause loss of reputation and discredit, litigation and even prison. Since there is no unitary practice and methodology for applying the technique, the risk of applying it incorrectly is pretty high. The SWOT analysis is a technique that shows the advantages, disadvantages, threats and opportunities. We applied SWOT analysis to the study of the sampling method, from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying and understanding it. Being largely used as an audit method and being a factor in a correct audit opinion, the sampling method's advantages, disadvantages, threats and opportunities must be understood by auditors.

  4. Characterization of full set material constants of piezoelectric materials based on ultrasonic method and inverse impedance spectroscopy using only one sample.

    Science.gov (United States)

    Li, Shiyang; Zheng, Limei; Jiang, Wenhua; Sahul, Raffi; Gopalan, Venkatraman; Cao, Wenwu

    2013-09-14

    The most difficult task in the characterization of complete-set material properties of piezoelectric materials is self-consistency. Because there are many independent elastic, dielectric, and piezoelectric constants, several samples are needed to obtain the full set of constants, and property variation from sample to sample often makes the obtained data set lack self-consistency. Here, we present a method, based on pulse-echo ultrasound and inverse impedance spectroscopy, to precisely determine the full set of physical properties of piezoelectric materials using only one small sample, which eliminates the sample-to-sample variation problem and guarantees self-consistency. The method has been applied to characterize the [001] C poled Mn-modified 0.27Pb(In 1/2 Nb 1/2 )O 3 -0.46Pb(Mg 1/3 Nb 2/3 )O 3 -0.27PbTiO 3 single crystal, and the validity of the measured data is confirmed by a previously established method. For the inverse calculations using the impedance spectrum, the stability of the reconstructed results is analyzed by fluctuation analysis of the input data. In contrast to conventional regression methods, our method takes full advantage of both the ultrasonic and the inverse impedance spectroscopy methods to extract all constants from only one small sample. The method provides a powerful tool for characterizing novel piezoelectric materials available only in small sizes and for generating the input data sets needed for device design using finite element simulations.

  5. A multi-sample based method for identifying common CNVs in normal human genomic structure using high-resolution aCGH data.

    Directory of Open Access Journals (Sweden)

    Chihyun Park

    BACKGROUND: It is difficult to identify copy number variations (CNVs) in normal human genomic data due to noise and non-linear relationships between different genomic regions and signal intensity. A high-resolution array comparative genomic hybridization (aCGH) containing 42 million probes, which is very large compared to previous arrays, was recently published. Most existing CNV detection algorithms do not work well because of the noise associated with the large amount of input data and because most of the current methods were not designed to analyze normal human samples. Normal human genome analysis often requires a joint approach across multiple samples, yet the majority of existing methods can only identify CNVs from a single sample. METHODOLOGY AND PRINCIPAL FINDINGS: We developed a multi-sample-based genomic variations detector (MGVD) that uses segmentation to identify common breakpoints across multiple samples and a k-means-based clustering strategy. Unlike previous methods, MGVD simultaneously considers multiple samples with different genomic intensities and identifies CNVs and CNV zones (CNVZs); a CNVZ is a more precise measure of the location of a genomic variant than the CNV region (CNVR). CONCLUSIONS AND SIGNIFICANCE: We designed a specialized algorithm to detect common CNVs from extremely high-resolution multi-sample aCGH data. MGVD showed high sensitivity and a low false discovery rate on a simulated data set, and outperformed most current methods when real, high-resolution HapMap datasets were analyzed. MGVD also had the fastest runtime of the algorithms evaluated on actual high-resolution aCGH data. The CNVZs identified by MGVD can be used in association studies to reveal relationships between phenotypes and genomic aberrations. Our algorithm was developed in standard C++ with the STL and is available for Linux and MS Windows. It is freely available at: http://embio.yonsei.ac.kr/~Park/mgvd.php.

  6. DOE methods for evaluating environmental and waste management samples

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K. [eds.

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  7. DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations

  8. Sampling system and method

    Science.gov (United States)

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2013-04-16

    The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

  9. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples... to determine how many languages from each phylum should be selected, given any required sample size.

  10. Comparison of DNA preservation methods for environmental bacterial community samples.

    Science.gov (United States)

    Gray, Michael A; Pratte, Zoe A; Kellogg, Christina A

    2013-02-01

    Field collections of environmental samples, for example corals, for molecular microbial analyses present distinct challenges. The lack of laboratory facilities in remote locations is common, and preservation of microbial community DNA for later study is critical. A particular challenge is keeping samples frozen in transit. Five nucleic acid preservation methods that do not require cold storage were compared for effectiveness over time and ease of use. Mixed microbial communities of known composition were created and preserved by DNAgard(™), RNAlater(®), DMSO-EDTA-salt (DESS), FTA(®) cards, and FTA Elute(®) cards. Automated ribosomal intergenic spacer analysis and clone libraries were used to detect specific changes in the faux communities over weeks and months of storage. A previously known bias in FTA(®) cards that results in lower recovery of pure cultures of Gram-positive bacteria was also detected in mixed community samples. There appears to be a uniform bias across all five preservation methods against microorganisms with high G + C DNA. Overall, the liquid-based preservatives (DNAgard(™), RNAlater(®), and DESS) outperformed the card-based methods. No single liquid method clearly outperformed the others, leaving method choice to be based on experimental design, field facilities, shipping constraints, and allowable cost.

  11. Study on the Method of Association Rules Mining Based on Genetic Algorithm and Application in Analysis of Seawater Samples

    Directory of Open Access Journals (Sweden)

    Qiuhong Sun

    2014-04-01

    Based on data mining research, this paper briefly introduces data mining with genetic algorithms and discusses two important theoretical foundations of the genetic algorithm, the schema (template) theorem and the principle of implicit parallelism. Focusing on the application of genetic algorithms to association rule mining, the paper proposes improvements to the fitness function structure and the data encoding and, in particular, building on earlier studies, applies improved adaptive crossover and mutation probabilities (Pc, Pm) to the genetic algorithm, thereby improving its efficiency. Finally, a genetic-algorithm-based association rule mining algorithm is presented and applied to data mining in a seawater sample database, and its effectiveness is demonstrated.
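
    The adaptive Pc/Pm idea the abstract alludes to can be sketched in the spirit of the classic Srinivas and Patnaik (1994) adaptive GA: fitter individuals receive lower crossover and mutation probabilities so good solutions are disrupted less. The constants k1-k4 and the exact formulas below are illustrative, not taken from the paper.

```python
import numpy as np

def adaptive_rates(f_parent, f_mate, fitness, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Adaptive crossover (Pc) and mutation (Pm) probabilities: fitter
    individuals are disrupted less, weaker ones are explored more."""
    f_max, f_avg = fitness.max(), fitness.mean()
    denom = f_max - f_avg + 1e-12              # guard against division by zero
    f_cross = max(f_parent, f_mate)            # fitter parent drives Pc
    pc = k1 * (f_max - f_cross) / denom if f_cross >= f_avg else k3
    pm = k2 * (f_max - f_parent) / denom if f_parent >= f_avg else k4
    return min(pc, 1.0), min(pm, 1.0)

fitness = np.array([0.2, 0.5, 0.7, 0.9])
print(adaptive_rates(0.9, 0.5, fitness))       # low Pc and Pm for the best
print(adaptive_rates(0.2, 0.5, fitness))       # high Pc and Pm for the worst
```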

  12. Advanced Markov chain Monte Carlo methods learning from past samples

    CERN Document Server

    Liang, Faming; Carrol, Raymond J

    2010-01-01

    This book provides comprehensive coverage of the simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms that address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. This book includes the multicanonical algorithm, dynamic weighting, dynamically weight

  13. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on sequential sampling is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are used to compare the approximation capability of the proposed approach against three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
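
    A minimal sketch of the sequential-infill part of such a method: fit a kriging (Gaussian process) surrogate, then repeatedly evaluate the expensive model where the predictive standard deviation is largest. This uses plain kriging via scikit-learn rather than the paper's improved hierarchical kriging, and the test function, kernel and iteration count are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_hf(x):                        # stand-in high-fidelity model
    return np.sin(8 * x) + 0.2 * x

X = np.linspace(0, 1, 4)[:, None]           # coarse initial design
y = expensive_hf(X).ravel()
grid = np.linspace(0, 1, 200)[:, None]      # candidate infill locations
for _ in range(6):                          # sequential infill iterations
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6).fit(X, y)
    _, std = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(std)]            # most uncertain location
    X = np.vstack([X, [x_new]])
    y = np.append(y, expensive_hf(x_new))
```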

  14. Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris

    Science.gov (United States)

    Michael S. Williams; Jeffrey H. Gove

    2003-01-01

    Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...

  15. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

    Background: Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. Results: We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion: Sampling is a challenge in retrospective population-based mortality studies and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
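
    Stage 1 of such a design can be sketched as probability-proportional-to-size selection over gridded population counts, with households drawn at random within each selected cell in stage 2. The cell populations, the people-per-household assumption and the sample sizes below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stage 1: pick grid cells with probability proportional to their
# gridded population estimate (hypothetical cell populations).
cell_pop = np.array([1200, 300, 950, 4800, 2200, 760])
cells = rng.choice(len(cell_pop), size=3, replace=False,
                   p=cell_pop / cell_pop.sum())

# Stage 2: overlay a sampling grid on each selected cell and draw
# households at random (households stand in as plain indices here).
for c in cells:
    n_households = cell_pop[c] // 5          # assume ~5 people per household
    chosen = rng.choice(n_households, size=30, replace=False)
    print(f"cell {c}: sampled households {sorted(chosen)[:5]}...")
```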

  16. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

    In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  17. A direct sampling method for inverse electromagnetic medium scattering

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2013-01-01

    In this paper, we study the inverse electromagnetic medium scattering problem of estimating the support and shape of medium scatterers from scattered electric/magnetic near-field data. We shall develop a novel direct sampling method based

  18. Multicenter validation of PCR-based method for detection of Salmonella in chicken and pig samples

    DEFF Research Database (Denmark)

    Malorny, B.; Cook, N.; D'Agostino, M.

    2004-01-01

    As part of a standardization project, an interlaboratory trial including 15 laboratories from 13 European countries was conducted to evaluate the performance of a nonproprietary polymerase chain reaction (PCR)-based method for the detection of Salmonella on artificially contaminated chicken rinse... or positive. Outlier results caused, for example, by gross departures from the experimental protocol were omitted from the analysis. For both the chicken rinse and the pig swab samples, the diagnostic sensitivity was 100%, with 100% accordance (repeatability) and concordance (reproducibility). The diagnostic specificity was 80.1% (with 85.7% accordance and 67.5% concordance) for chicken rinse, and 91.7% (with 100% accordance and 83.3% concordance) for pig swab. Thus, the interlaboratory variation due to personnel, reagents, thermal cyclers, etc., did not affect the performance of the method, which...

  19. Phylogenetic representativeness: a new method for evaluating taxon sampling in evolutionary studies

    Directory of Open Access Journals (Sweden)

    Passamonti Marco

    2010-04-01

    Background: Taxon sampling is a major concern in phylogenetic studies. Incomplete, biased, or improper taxon sampling can lead to misleading results in reconstructing evolutionary relationships. Several theoretical methods are available to optimize taxon choice in phylogenetic analyses. However, most involve some knowledge about the genetic relationships of the group of interest (i.e., the ingroup), or even a well-established phylogeny itself; these data are not always available in general phylogenetic applications. Results: We propose a new method to assess taxon sampling by developing Clarke and Warwick statistics. This method aims to measure the "phylogenetic representativeness" of a given sample or set of samples and is based entirely on the pre-existing available taxonomy of the ingroup, which is commonly known to investigators. Moreover, our method also accounts for instability and discordance in taxonomies. A Python-based script suite, called PhyRe, has been developed to implement all the analyses we describe in this paper. Conclusions: We show that this method is sensitive and allows direct discrimination between representative and unrepresentative samples. It is also informative about the addition of taxa to improve taxonomic coverage of the ingroup. While investigators' expertise remains essential in this field, phylogenetic representativeness provides an objective touchstone in planning phylogenetic studies.
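
    The Clarke and Warwick idea underlying phylogenetic representativeness can be illustrated with average taxonomic distinctness: the mean number of rank steps separating pairs of sampled taxa in a reference taxonomy. The step weighting and the example taxa below are illustrative, not PhyRe's actual implementation.

```python
from itertools import combinations

# Each taxon is a path through the reference taxonomy, e.g.
# (class, order, family, genus). Pair distinctness = number of ranks
# one must climb before the two paths agree.
def pair_distance(a, b):
    for level, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return len(a) - level
    return 0

def avg_taxonomic_distinctness(sample):
    pairs = list(combinations(sample, 2))
    return sum(pair_distance(a, b) for a, b in pairs) / len(pairs)

sample = [
    ("Bivalvia", "Veneroida", "Veneridae", "Venus"),
    ("Bivalvia", "Veneroida", "Veneridae", "Chamelea"),
    ("Bivalvia", "Mytiloida", "Mytilidae", "Mytilus"),
    ("Gastropoda", "Littorinimorpha", "Littorinidae", "Littorina"),
]
print(avg_taxonomic_distinctness(sample))   # higher = broader coverage
```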

  20. A Method against Interrupted-Sampling Repeater Jamming Based on Energy Function Detection and Band-Pass Filtering

    Directory of Open Access Journals (Sweden)

    Hui Yuan

    2017-01-01

    Interrupted-sampling repeater jamming (ISRJ) is a new kind of coherent jamming against large time-bandwidth linear frequency modulation (LFM) signals. Many jamming modes, such as lifelike multiple false targets and dense false targets, can be produced by setting different parameters. Based on the "storage-repeater-storage-repeater" characteristic of ISRJ and the differences in the time-frequency-energy domain between the ISRJ signal and the target echo signal, a new method based on energy function detection and band-pass filtering is proposed to suppress ISRJ. The method mainly consists of two parts: extracting the signal segments without ISRJ and constructing a band-pass filtering function with low sidelobes. The simulation results show that the method is effective against ISRJ with different parameters.
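
    The first part, extracting jamming-free segments, can be sketched with a short-time energy function: ISRJ retransmission bursts are much stronger than the true echo, so low-energy windows are taken as clean. The window length, threshold and toy LFM signal below are illustrative, and the low-sidelobe band-pass filter construction is omitted.

```python
import numpy as np

def jamming_free_mask(x, win=64, thresh_ratio=3.0):
    """Short-time energy of the received signal; samples whose local
    energy stays below a multiple of the median are taken as free of
    ISRJ retransmission bursts."""
    energy = np.convolve(np.abs(x) ** 2, np.ones(win) / win, mode="same")
    return energy < thresh_ratio * np.median(energy)

t = np.linspace(0, 1, 4000)
echo = np.exp(1j * np.pi * 200 * t ** 2)     # toy LFM target echo
jam = np.zeros_like(echo)
jam[1000:1400] = 20 * echo[1000:1400]        # high-power repeated slice
mask = jamming_free_mask(echo + jam)         # True where no jamming detected
```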

  1. Solid phase microextraction headspace sampling of chemical warfare agent contaminated samples : method development for GC-MS analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jackson Lepage, C.R.; Hancock, J.R. [Defence Research and Development Canada, Medicine Hat, AB (Canada); Wyatt, H.D.M. [Regina Univ., SK (Canada)

    2004-07-01

    Defence R and D Canada-Suffield (DRDC-Suffield) is responsible for analyzing samples that are suspected to contain chemical warfare agents, either collected by the Canadian Forces or by first-responders in the event of a terrorist attack in Canada. The analytical techniques used to identify the composition of the samples include gas chromatography-mass spectrometry (GC-MS), liquid chromatography-mass spectrometry (LC-MS), Fourier-transform infrared spectroscopy (FT-IR) and nuclear magnetic resonance spectroscopy. GC-MS and LC-MS generally require solvent extraction and reconcentration, thereby increasing sample handling. The authors examined analytical techniques which reduce or eliminate sample manipulation. In particular, this paper presented a screening method based on solid phase microextraction (SPME) headspace sampling and GC-MS analysis for chemical warfare agents such as mustard, sarin, soman, and cyclohexyl methylphosphonofluoridate in contaminated soil samples. SPME is a method which uses small adsorbent polymer coated silica fibers that trap vaporous or liquid analytes for GC or LC analysis. Collection efficiency can be increased by adjusting sampling time and temperature. This method was tested on two real-world samples, one from excavated chemical munitions and the second from a caustic decontamination mixture. 7 refs., 2 tabs., 3 figs.

  2. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    Science.gov (United States)

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
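
    A bare-bones simulated tempering loop illustrates the random walk in temperature the abstract describes: a Metropolis move in configuration space at the current temperature, followed by a Metropolis move along the temperature ladder. The potential, ladder and step sizes are toy choices, and the weights are set to zero for simplicity; estimating them is exactly the burden that STDR is designed to avoid.

```python
import numpy as np

rng = np.random.default_rng(1)
U = lambda x: 8.0 * (x ** 2 - 1.0) ** 2     # double-well potential
temps = np.array([0.2, 0.5, 1.0, 2.0])      # temperature ladder
weights = np.zeros_like(temps)              # real ST needs estimated weights

x, k, samples = -1.0, 0, []
for _ in range(20000):
    x_new = x + rng.normal(0.0, 0.3)        # Metropolis move at temps[k]
    if rng.random() < np.exp(min((U(x) - U(x_new)) / temps[k], 0.0)):
        x = x_new
    k_new = min(max(k + rng.choice([-1, 1]), 0), len(temps) - 1)
    logr = U(x) / temps[k] - U(x) / temps[k_new] + weights[k_new] - weights[k]
    if rng.random() < np.exp(min(logr, 0.0)):
        k = k_new                           # random walk in temperature
    if k == 0:
        samples.append(x)                   # keep lowest-temperature samples
```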

  3. An improved sampling method of complex network

    Science.gov (United States)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling a subnet is an important topic in complex network research, and the sampling method influences the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method keeps the similarity between the sampled subnet and the original network in terms of degree distribution, connectivity rate and average shortest path. This method is applicable when prior knowledge about the degree distribution of the original network is insufficient.
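
    Plain snowball sampling, the building block of RMSC, is easy to sketch: starting from seed nodes, each round adds a bounded number of unvisited neighbours of the current frontier. The interleaving with fresh random seeds that gives RMSC its global reach is omitted, and the graph and parameters are toy values.

```python
import random

def snowball_sample(adj, seeds, rounds=2, k=3, seed=0):
    """Basic snowball sampling: starting from the seed nodes, each round
    adds up to k not-yet-sampled neighbours of every frontier node."""
    rng = random.Random(seed)
    sampled, frontier = set(seeds), set(seeds)
    for _ in range(rounds):
        nxt = set()
        for node in frontier:
            fresh = [n for n in adj[node] if n not in sampled]
            nxt.update(rng.sample(fresh, min(k, len(fresh))))
        sampled |= nxt
        frontier = nxt
    return sampled

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(snowball_sample(adj, seeds=[0]))
```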

  4. A distance limited method for sampling downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams

    2012-01-01

    A new sampling method for downed coarse woody debris is proposed based on limiting the perpendicular distance from individual pieces to a randomly chosen sample point. Two approaches are presented that allow different protocols to be used to determine field measurements; estimators for each protocol are also developed. Both protocols are compared via simulation against...

  5. A comparison of fitness-case sampling methods for genetic programming

    Science.gov (United States)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
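
    Of the four methods compared, lexicase selection is the most self-contained to sketch: candidates are filtered through the fitness cases in random order, keeping at each step only those with the best error on the current case. The population names and error matrix below are toy data.

```python
import random

def lexicase_select(population, errors, rng=random.Random(0)):
    """Lexicase selection: shuffle the fitness cases, then repeatedly
    keep only the candidates with the lowest error on the next case."""
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    candidates = list(range(len(population)))
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]

pop = ["prog_a", "prog_b", "prog_c"]
errs = [[0.0, 2.0, 1.0],     # per-individual, per-case errors
        [1.0, 0.0, 0.0],
        [0.0, 0.0, 3.0]]
print(lexicase_select(pop, errs))
```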

  6. Sample-based engine noise synthesis using an enhanced pitch-synchronous overlap-and-add method.

    Science.gov (United States)

    Jagla, Jan; Maillard, Julien; Martin, Nadine

    2012-11-01

    An algorithm for the real-time synthesis of internal combustion engine noise is presented. Through the analysis of a recorded engine noise signal of continuously varying engine speed, a dataset of sound samples is extracted, allowing the real-time synthesis of the noise induced by arbitrary evolutions of engine speed. The sound samples are extracted from a recording spanning the entire engine speed range. Each sample is delimited so as to contain the sound emitted during one cycle of the engine plus the overlap necessary to ensure smooth transitions during synthesis. The proposed approach, an extension of the PSOLA method introduced for speech processing, takes advantage of the specific periodicity of engine noise signals to locate the extraction instants of the sound samples. During the synthesis stage, the sound samples corresponding to the target engine speed evolution are concatenated with an overlap-and-add algorithm. It is shown that this method produces high-quality audio restitution with a low computational load. It is therefore well suited for real-time applications.
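
    The concatenation stage can be sketched as a crossfaded overlap-and-add of per-cycle sound samples. The linear crossfade and overlap length below are illustrative; the published method synchronizes the overlap to the engine-cycle pitch marks, which this sketch does not.

```python
import numpy as np

def concatenate_cycles(cycles, overlap=128):
    """Join engine-cycle sound samples with a linear crossfade over
    `overlap` points (a simplified overlap-and-add)."""
    fade_in = np.linspace(0.0, 1.0, overlap)
    out = cycles[0].astype(float)
    for cyc in cycles[1:]:
        cyc = cyc.astype(float)
        out[-overlap:] = out[-overlap:] * (1 - fade_in) + cyc[:overlap] * fade_in
        out = np.concatenate([out, cyc[overlap:]])
    return out

rng = np.random.default_rng(0)
cycles = [rng.normal(size=1000) for _ in range(5)]   # stand-in cycle samples
signal = concatenate_cycles(cycles)
```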

  7. Efficiency of snake sampling methods in the Brazilian semiarid region.

    Science.gov (United States)

    Mesquita, Paula C M D; Passos, Daniel C; Cechin, Sonia Z

    2013-09-01

    The choice of sampling methods is a crucial step in every field survey in herpetology. In countries where time and financial support are limited, the choice of methods is critical. The methods used to sample snakes often lack objective criteria, and tradition has apparently been more important than suitability when making the choice. Consequently, studies using non-standardized methods are frequently found in the literature. We compared four commonly used methods for sampling snake assemblages in a semiarid area in Brazil. We compared the efficacy of each method based on the cost-benefit regarding the number of individuals and species captured, time, and financial investment. We found that pitfall traps were the least effective method in all aspects evaluated and were not complementary to the other methods in terms of abundance of species and assemblage structure. We conclude that methods can only be considered complementary if they are standardized to the objectives of the study. The use of pitfall traps in short-term surveys of the snake fauna in areas with shrubby vegetation and stony soil is not recommended.

  8. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. From the DVHs, quantities such as the conformity index COIN and COIN integrals are derived. This is achieved by using partially uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. The application of this method requires a single preprocessing step, which takes only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points
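
    The allocation idea can be sketched as follows: take a small pilot sample per region, allocate the sampling budget in proportion to the pilot dose variance, and volume-weight the samples when accumulating the cumulative DVH. The regions (plain cubes here), dose function and budget are placeholders, and the real method shapes its regions to the patient anatomy and implant rather than to boxes.

```python
import numpy as np

def dvh_stratified(dose_fn, regions, n_total=20000, seed=0):
    """Cumulative DVH via stratified sampling: each cubic region (lo, hi)
    receives points in proportion to a pilot estimate of its dose
    variance; samples are volume-weighted in the histogram."""
    rng = np.random.default_rng(seed)
    var = np.array([dose_fn(rng.uniform(lo, hi, (200, 3))).var()
                    for lo, hi in regions])
    alloc = np.maximum((var / var.sum() * n_total).astype(int), 1)
    doses, weights = [], []
    for (lo, hi), n in zip(regions, alloc):
        doses.append(dose_fn(rng.uniform(lo, hi, (n, 3))))
        weights.append(np.full(n, (hi - lo) ** 3 / n))   # volume per point
    doses, weights = np.concatenate(doses), np.concatenate(weights)
    bins = np.linspace(0.0, doses.max(), 100)
    dvh = np.array([weights[doses >= b].sum() for b in bins]) / weights.sum()
    return bins, dvh                          # fraction receiving >= dose

dose_fn = lambda p: 1.0 / (1e-2 + (p ** 2).sum(axis=1))  # toy point source
bins, dvh = dvh_stratified(dose_fn, regions=[(-1.0, 1.0), (1.0, 3.0)])
```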

  9. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class support vector machine (SVM) classification based on a binary tree is presented, to address the low learning efficiency of SVMs when processing large-scale multi-class samples. A bottom-up method is adopted to set up the binary tree hierarchy, and according to the resulting hierarchy each sub-classifier learns from the samples of the corresponding node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from those clusters that contain only one type of sample. For those containing two types of samples, the numbers of clusters for their positive and negative samples are set according to their degree of mixture, a secondary clustering is undertaken, and central points are then extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced samples formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
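
    The sample-reduction step can be sketched by replacing each class with a handful of cluster centres before training the SVM; the binary-tree hierarchy over classes and the mixture-degree-driven secondary clustering are omitted. This uses scikit-learn, and the dataset and cluster counts are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           random_state=0)
Xr, yr = [], []
for cls in np.unique(y):
    km = KMeans(n_clusters=20, n_init=3, random_state=0).fit(X[y == cls])
    Xr.append(km.cluster_centers_)            # class represented by centres
    yr.extend([cls] * 20)
clf = SVC().fit(np.vstack(Xr), yr)            # train on the reduced set
print(clf.score(X, y))                        # accuracy on the full data
```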

  10. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
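
    The combination rule at the heart of multiple importance sampling can be sketched on a one-dimensional integral: two sampling strategies are blended with the balance heuristic so their weights sum to one at every point. The integrand and densities are toy choices standing in for the stratified and importance strategies, not an SSAO implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x) ** 2          # toy integrand on [0, 1]
n = 5000

x1 = rng.random(n); p1 = np.ones(n)           # strategy 1: uniform density
x2 = np.sqrt(1.0 - rng.random(n)); p2 = 2.0 * x2   # strategy 2: density 2x

def balance(p_own, p_other):
    return p_own / (p_own + p_other)          # balance heuristic weight

est = (balance(p1, 2.0 * x1) * f(x1) / p1).mean() \
    + (balance(p2, np.ones(n)) * f(x2) / p2).mean()
print(est)                                    # close to the true value 0.5
```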

  11. The method of Sample Management in Neutron Activation Analysis Laboratory-Serpong

    International Nuclear Information System (INIS)

    Elisabeth-Ratnawati

    2005-01-01

    In a testing laboratory using the neutron activation analysis method, sample preparation is a key factor that cannot be neglected. Errors in sample preparation yield results with lower accuracy. This article explains the scheme of sample management, i.e. sample receipt administration, sample splitting, liquid and solid sample preparation, sample grouping, irradiation, sample counting, and holding of samples after irradiation. If sample management is properly applied based on a Standard Operating Procedure, each sample has good traceability. Optimizing sample management requires trained, skilled personnel and good facilities. (author)

  12. Shear Strength of Remoulding Clay Samples Using Different Methods of Moulding

    Science.gov (United States)

    Norhaliza, W.; Ismail, B.; Azhar, A. T. S.; Nurul, N. J.

    2016-07-01

    The shear strength of clay soil is required to determine soil stability. Clay is known as a soil with complex natural formations from which it is very difficult to obtain undisturbed samples on site. The aim of this paper was to determine the unconfined shear strength of remoulded clay prepared by different moulding methods: Proctor compaction, a hand-operated soil compactor, and a miniature mould. All the samples were remoulded with the same optimum moisture content (OMC) and density, 18% and 1880 kg/m3 respectively. The unconfined shear strength of the remoulded clay was 289.56 kPa at 4.8% strain for the Proctor compaction method, 261.66 kPa at 4.4% strain for the hand-operated method, and 247.52 kPa at 3.9% strain for the miniature mould method. Relative to the Proctor compaction method, the reduction in unconfined shear strength was 9.66% for the hand-operated method and 14.52% for the miniature mould method. Since there was no significant difference in unconfined shear strength between the three methods, it can be concluded that remoulding clay by the hand-operated and miniature mould methods is acceptable, and these methods are suggested to future researchers. For comparison, the hand-operated method was the most suitable for forming remoulded clay samples in terms of ease, time saving and low effort for unconfined shear strength determination.

  13. 7 CFR 29.110 - Method of sampling.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Method of sampling. 29.110 Section 29.110 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Inspectors, Samplers, and Weighers § 29.110 Method of sampling. In sampling tobacco...

  14. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    Science.gov (United States)

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in rows and columns, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs of different shape and size using four sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in rows and columns was infinity, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.

  15. Methods of sampling airborne fungi in working environments of waste treatment facilities.

    Science.gov (United States)

    Černá, Kristýna; Wittlingerová, Zdeňka; Zimová, Magdaléna; Janovský, Zdeněk

    2016-01-01

    The objective of the present study was to evaluate and compare the efficiency of a filter-based sampling method and a high-volume sampling method for sampling airborne culturable fungi present in waste sorting facilities. The membrane filters (MF) method was compared with the surface air system (SAS) method. The selected sampling methods were modified and tested in 2 plastic waste sorting facilities. The total number of colony-forming units (CFU)/m3 of airborne fungi was dependent on the type of sampling device, on the time of sampling, which was carried out every hour from the beginning of the work shift, and on the type of cultivation medium (p < 0.05). Concentrations of airborne fungi ranged from 2×10² to 1.7×10⁶ CFU/m3 when using the MF method, and from 3×10² to 6.4×10⁴ CFU/m3 when using the SAS method. Both methods showed comparable sensitivity to the fluctuations of the concentrations of airborne fungi during the work shifts. The SAS method is adequate for a fast indicative determination of the concentration of airborne fungi. The MF method is suitable for thorough assessment of working environment contamination by airborne fungi. Therefore we recommend the MF method for the implementation of a uniform standard methodology of airborne fungi sampling in working environments of waste treatment facilities.

  16. Probabilistic finite element stiffness of a laterally loaded monopile based on an improved asymptotic sampling method

    DEFF Research Database (Denmark)

    Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard

    2015-01-01

    The mechanical responses of an offshore monopile foundation mounted in over-consolidated clay are calculated by employing a stochastic approach where a nonlinear p–y curve is incorporated with a finite element scheme. The random field theory is applied to represent a spatial variation for the undrained shear strength of clay. Normal and Sobol sampling are employed to provide the asymptotic sampling method to generate the probability distribution of the foundation stiffnesses. Monte Carlo simulation is used as a benchmark. Asymptotic sampling accompanied by Sobol quasi-random sampling demonstrates an efficient method for estimating the probability distribution of stiffnesses for the offshore monopile foundation.

  17. Matrix removal in state of the art sample preparation methods for serum by charged aerosol detection and metabolomics-based LC-MS.

    Science.gov (United States)

    Schimek, Denise; Francesconi, Kevin A; Mautner, Anton; Libiseller, Gunnar; Raml, Reingard; Magnes, Christoph

    2016-04-07

    Investigations into sample preparation procedures usually focus on analyte recovery, with no information provided about the fate of other components of the sample (matrix). For many analyses, however, and particularly those using liquid chromatography-mass spectrometry (LC-MS), quantitative measurements are greatly influenced by the sample matrix. Using the example of the drug amitriptyline and three of its metabolites in serum, we performed a comprehensive investigation of nine commonly used sample clean-up procedures in terms of their suitability for preparing serum samples. We monitored the undesired matrix compounds using a combination of charged aerosol detection (CAD), LC-CAD, and a metabolomics-based LC-MS/MS approach. In this way, we compared the analyte recovery of protein precipitation, liquid-liquid, solid-phase and hybrid solid-phase extraction methods. Although all methods provided acceptable recoveries, the highest recovery was obtained by protein precipitation with acetonitrile/formic acid (amitriptyline 113%, nortriptyline 92%, 10-hydroxyamitriptyline 89%, and amitriptyline N-oxide 96%). The quantification of matrix removal by LC-CAD showed that the solid-phase extraction (SPE) method left the lowest remaining matrix load (48-123 μg mL(-1)), a 10-40 fold better matrix clean-up than the precipitation or hybrid solid-phase extraction methods. The metabolomics profiles of eleven compound classes, comprising 70 matrix compounds, showed the trends of compound-class removal for each sample preparation strategy. The collective data set of analyte recovery, matrix removal and matrix compound profile was used to assess the effectiveness of each sample preparation method. The best performance in matrix clean-up and practical handling of small sample volumes was shown by the SPE techniques, particularly HLB SPE. CAD proved to be an effective tool for revealing the considerable differences between the sample preparation methods. This detector can

  18. 19 CFR 151.83 - Method of sampling.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Method of sampling. 151.83 Section 151.83 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Cotton § 151.83 Method of sampling. For...

  19. ALARA ASSESSMENT OF SETTLER SLUDGE SAMPLING METHODS

    International Nuclear Information System (INIS)

    Nelsen, L.A.

    2009-01-01

    The purpose of this assessment is to compare underwater and above water settler sludge sampling methods to determine if the added cost for underwater sampling for the sole purpose of worker dose reductions is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP underwater to achieve worker dose reductions. Additionally, initial plans were to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision for underwater sampling for settler sludge needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation will compare and contrast the present method of above water sampling to the underwater method that is planned by the Sludge Treatment Project (STP) and determine if settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining doses to workers As Low As Reasonably Achievable (ALARA) and eliminate the need for costly redesigns, testing and personnel retraining

  20. ALARA ASSESSMENT OF SETTLER SLUDGE SAMPLING METHODS

    Energy Technology Data Exchange (ETDEWEB)

    NELSEN LA

    2009-01-30

    The purpose of this assessment is to compare underwater and above water settler sludge sampling methods to determine if the added cost for underwater sampling for the sole purpose of worker dose reductions is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP underwater to achieve worker dose reductions. Additionally, initial plans were to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision for underwater sampling for settler sludge needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation will compare and contrast the present method of above water sampling to the underwater method that is planned by the Sludge Treatment Project (STP) and determine if settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining doses to workers As Low As Reasonably Achievable (ALARA) and eliminate the need for costly redesigns, testing and personnel retraining.

  1. Evaluation of sampling methods for the detection of Salmonella in broiler flocks

    DEFF Research Database (Denmark)

    Skov, Marianne N.; Carstensen, B.; Tornoe, N.

    1999-01-01

    The present study compares four different sampling methods potentially applicable to detection of Salmonella in broiler flocks, based on collection of faecal samples (i) by hand (300 fresh faecal samples), (ii) absorbed on five sheets of paper, (iii) absorbed on five pairs of socks (elastic cotton tubes pulled over the boots and termed 'socks') and (iv) by using only one pair of socks. Twenty-three broiler flocks were included in the investigation and 18 of these were found to be positive by at least one method. Seven serotypes of Salmonella with different patterns of transmission (mainly horizontal or vertical) were found in the investigation. The results showed that the sock method (five pairs of socks) had a sensitivity comparable with the hand collection method (60 pools of five faecal samples); the paper collection method was inferior, as was the use of only one pair of socks. Estimation...

  2. A novel method for fission product noble gas sampling

    International Nuclear Information System (INIS)

    Jain, S.K.; Prakash, Vivek; Singh, G.K.; Vinay, Kr.; Awsthi, A.; Bihari, K.; Joyson, R.; Manu, K.; Gupta, Ashok

    2008-01-01

    Noble gases occur to some extent in the Earth's atmosphere, but the concentrations of all but argon are exceedingly low. Argon is plentiful, constituting almost 1% of the air. Fission product noble gases (FPNG) are produced by nuclear fission, and a large part of the FPNG is produced in nuclear reactors. FPNG are beta-gamma emitters and contribute significantly to public dose. During normal operation of a reactor the release of FPNG is negligible, but it increases in case of fuel failure. Xenon, a member of the FPNG family, helps in identifying fuel failure and its extent in PHWRs. For the above reasons it becomes necessary to assess the FPNG release during operation of NPPs. The methodology presently used for assessment of FPNG at almost all power stations is computer-based gamma-ray spectrometry, which identifies fission product noble gas nuclides through a peak search of the spectra. The air sample for this is collected by the grab sampling method, which has inherent disadvantages. An alternative method that uses adsorption phenomena for the collection of air samples was developed at Rajasthan Atomic Power Station (RAPS) 3 and 4 for assessment of FPNG. This report presents details of the sampling method for FPNG and noble gases in different systems of a Nuclear Power Plant. (author)

  3. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    Science.gov (United States)

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human errors and assuring high reproducibility. The step towards automation however is not easy: The automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows for the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO(®)150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch(®)Forensic DNA Purification Kit (Invitrogen), the PrepFiler™Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination.

  4. A DNA based method to detect the grapevine root-rotting fungus Roesleria subterranea in soil and root samples

    Directory of Open Access Journals (Sweden)

    S. Neuhauser

    2009-05-01

    Full Text Available Roesleria subterranea causes root rot in grapevine and fruit trees. The fungus has long been underestimated as a weak parasite, but during the last years it has been reported to cause severe damages in German vineyards. Direct, observation-based detection of the parasite is time consuming and destructive, as large parts of the rootstocks have to be uprooted and screened for the tiny, stipitate, hypogeous ascomata of R. subterranea. To facilitate rapid detection in vineyards, protocols to extract DNA from soil samples and grapevine roots, and R.-subterranea-specific PCR primers were designed. Twelve DNA-extraction protocols for soil samples were tested in small-scale experiments, and selected parameters were optimised. A protocol based on ball-mill homogenization, DNA extraction with SDS, skim milk, chloroform, and isopropanol, and subsequent purification of the raw extracts with PVPP-spin-columns was most effective. This DNA extraction protocol was found to be suitable for a wide range of soil-types including clay, loam and humic-rich soils. For DNA extraction from grapevine roots a CTAB-based protocol was more reliable for various grapevine rootstock varieties. Roesleria-subterranea-specific primers for the ITS1-5.8S-ITS2 rDNA region were developed and tested for their specificity to DNA extracts from eleven R. subterranea strains isolated from grapevine and fruit trees. No cross reactions were detected with DNA extracts from 44 different species of fungi isolated from vineyard soils. The sensitivity of the species-specific primers in combination with the DNA extraction method for soil was high: as little as 100 fg μl⁻¹ R.-subterranea-DNA was sufficient for a detection in soil samples and plant material. Given that specific primers are available, the presented method will also allow quick and large-scale testing for other root pathogens.

  5. Possible overestimation of surface disinfection efficiency by assessment methods based on liquid sampling procedures as demonstrated by in situ quantification of spore viability.

    Science.gov (United States)

    Grand, I; Bellon-Fontaine, M-N; Herry, J-M; Hilaire, D; Moriconi, F-X; Naïtali, M

    2011-09-01

    The standard test methods used to assess the efficiency of a disinfectant applied to surfaces are often based on counting the microbial survivors sampled in a liquid, but total cell removal from surfaces is seldom achieved. One might therefore wonder whether evaluations of microbial survivors in liquid-sampled cells are representative of the levels of survivors in whole populations. The present study was thus designed to determine the "damaged/undamaged" status induced by a peracetic acid disinfection for Bacillus atrophaeus spores deposited on glass coupons directly on this substrate and to compare it to the status of spores collected in liquid by a sampling procedure. The method utilized to assess the viability of both surface-associated and liquid-sampled spores included fluorescence labeling with a combination of Syto 61 and Chemchrome V6 dyes and quantifications by analyzing the images acquired by confocal laser scanning microscopy. The principal result of the study was that the viability of spores sampled in the liquid was found to be poorer than that of surface-associated spores. For example, after 2 min of peracetic acid disinfection, less than 17% ± 5% of viable cells were detected among liquid-sampled cells compared to 79% ± 5% or 47% ± 4%, respectively, when the viability was evaluated on the surface after or without the sampling procedure. Moreover, assessments of the survivors collected in the liquid phase, evaluated using the microscopic method and standard plate counts, were well correlated. Evaluations based on the determination of survivors among the liquid-sampled cells can thus overestimate the efficiency of surface disinfection procedures.

  6. Harmonisation of microbial sampling and testing methods for distillate fuels

    Energy Technology Data Exchange (ETDEWEB)

    Hill, G.C.; Hill, E.C. [ECHA Microbiology Ltd., Cardiff (United Kingdom)

    1995-05-01

    Increased incidence of microbial infection in distillate fuels has led to a demand for organisations such as the Institute of Petroleum to propose standards for microbiological quality, based on numbers of viable microbial colony forming units. Variations in quality requirements and in the spoilage significance of contaminating microbes, plus a tendency for temporal and spatial changes in the distribution of microbes, make such standards difficult to implement. The problem is compounded by a diversity in the procedures employed for sampling and testing for microbial contamination and in the interpretation of the data obtained. The following paper reviews these problems and describes the efforts of the Institute of Petroleum Microbiology Fuels Group to address these issues, and in particular to bring about harmonisation of sampling and testing methods. The benefits and drawbacks of available test methods, both laboratory based and on-site, are discussed.

  7. New sample preparation method based on task-specific ionic liquids for extraction and determination of copper in urine and wastewater.

    Science.gov (United States)

    Trtić-Petrović, Tatjana; Dimitrijević, Aleksandra; Zdolšek, Nikola; Đorđević, Jelena; Tot, Aleksandar; Vraneš, Milan; Gadžurić, Slobodan

    2018-01-01

    In this study, four hydrophilic ionic liquids (ILs) containing a 1-alkyl-3-methylimidazolium cation and either salicylate or chloride anions were synthesized and studied as new task-specific ionic liquids (TSILs) suitable for aqueous biphasic system (ABS) formation and selective one-step extraction of copper(II). TSILs are designed so that the anion is responsible for forming the complex with the metal(II) and preventing hydrolysis of metal cations at strongly alkaline pH, whereas the cation is responsible for selective extraction of metal(II)-salicylate complexes. It was found that 1-butyl-3-methylimidazolium salicylate could be used for selective extraction of Cu(II) in the presence of Zn(II), Cd(II), and Pb(II) in strongly alkaline solution without metal hydroxide formation. It was assumed that formation of metal(II)-salicylate complexes prevents the hydrolysis of the metal ions in alkaline solutions. The determined stability constants for Cu(II)-salicylate complexes, where the salicylate was derived from different ionic liquids, indicated that there was no significant influence of the cation of the ionic liquid on the stability of the complexes. The ABS based on 1-butyl-3-methylimidazolium salicylate has been applied as the sample preparation step prior to voltammetric determination of Cu(II). The effects of the aqueous sample and IL volumes and of the extraction time were investigated, and optimum extraction conditions were determined. The obtained detection limit was 8 ng dm⁻³. The optimized method was applied for the determination of Cu(II) in tap water, wastewater, and urine. The study indicated that the ABS based on 1-butyl-3-methylimidazolium salicylate ionic liquid could be successfully applied as the sample preparation method for the determination of Cu(II) in various environmental samples. Graphical abstract Aqueous biphasic system based on task-specific ionic liquid as a sample pretreatment for selective determination of Cu(II) in biological and

  8. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Lee, Jae Yong; KIm, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-08-15

    The chord length sampling method is used in Monte Carlo simulations to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, a correction method for the boundary effect in finite media is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
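
    To make the technique concrete, here is a minimal sketch of plain (uncorrected) chord length sampling through a slab of a binary stochastic medium. The boundary correction that is this paper's contribution is deliberately omitted, and the mean-chord formula and all parameter values are standard textbook assumptions, not the authors' own.

```python
import math
import random

def sample_matrix_path(r, p):
    """Distance to the next sphere along a ray: exponential flight with the
    standard CLS mean matrix chord lambda = 4*r*(1-p)/(3*p)."""
    lam = 4.0 * r * (1.0 - p) / (3.0 * p)
    return -lam * math.log(random.random())

def sample_sphere_chord(r):
    """Chord through a sphere for an isotropic ray: f(l) = l/(2r^2) on
    [0, 2r]; inverting the CDF l^2/(4r^2) gives l = 2r*sqrt(xi)."""
    return 2.0 * r * math.sqrt(random.random())

def in_sphere_fraction(slab_thickness, r, p):
    """Walk one ray through the slab, alternating matrix flights and sphere
    chords; return the fraction of the path spent inside spheres."""
    x, inside = 0.0, 0.0
    while x < slab_thickness:
        x += sample_matrix_path(r, p)            # flight through the matrix
        if x >= slab_thickness:
            break
        chord = sample_sphere_chord(r)           # traverse one sphere
        inside += min(chord, slab_thickness - x)
        x += chord
    return inside / slab_thickness

# Sanity check: averaged over many rays, the in-sphere path fraction should
# approach the packing fraction p (here 0.1) away from boundary effects.
est = sum(in_sphere_fraction(100.0, 0.5, 0.1) for _ in range(20000)) / 20000
print(est)
```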

  9. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Lee, Jae Yong; KIm, Do Hyun; Kim, Jong Kyung; Noh, Jae Man

    2015-01-01

    The chord length sampling method is used in Monte Carlo simulations to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, a correction method for the boundary effect in finite media is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.

  10. Sampling bee communities using pan traps: alternative methods increase sample size

    Science.gov (United States)

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  11. New adaptive sampling method in particle image velocimetry

    International Nuclear Information System (INIS)

    Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei

    2015-01-01

    This study proposes a new adaptive method to enable the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to become self-adapted according to the seeding density. The proposed method can relax the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with that of the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)
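
    The abstract does not spell out the spring model, so the following toy relaxation only illustrates the general idea: sampling points attract or repel their neighbours like springs whose rest length shrinks where the seeding density is high, so interrogation windows crowd into densely seeded regions. The density function and all constants are invented for illustration.

```python
import numpy as np

def relax_sampling_points(points, density, n_iter=50, step=0.05):
    """Nudge sampling points with pairwise Hooke forces whose rest length
    shrinks where the seeding density is high, so points concentrate in
    densely seeded regions."""
    pts = points.copy()
    for _ in range(n_iter):
        forces = np.zeros_like(pts)
        for i in range(len(pts)):
            rest = 1.0 / np.sqrt(density(pts[i, 0], pts[i, 1]))
            d = pts - pts[i]                     # vectors to every other point
            dist = np.linalg.norm(d, axis=1)
            near = (dist > 1e-9) & (dist < 3.0 * rest)
            # stretched springs pull together, compressed springs push apart
            f = (dist[near] - rest)[:, None] * d[near] / dist[near][:, None]
            forces[i] = f.sum(axis=0)
        pts += step * forces
    return pts

# Toy field: seeding density rises to the right of the unit square.
rng = np.random.default_rng(0)
initial = rng.uniform(0.0, 1.0, size=(100, 2))
adapted = relax_sampling_points(initial, lambda x, y: 5.0 + 50.0 * x)
```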

  12. Ordering decision-making methods on spare parts for a new aircraft fleet based on a two-sample prediction

    International Nuclear Information System (INIS)

    Yongquan, Sun; Xi, Chen; He, Ren; Yingchao, Jin; Quanwu, Liu

    2016-01-01

    Ordering decision-making on spare parts is crucial in maximizing aircraft utilization and minimizing total operating cost. Extensive research on spare parts inventory management and optimal allocation can be found based on large amounts of historical operation data or condition-monitoring data. However, it is challenging to make an ordering decision on spare parts when a fleet is established by introducing new aircraft with little historical data. In this paper, the spare parts support policy and ordering decision-making policy for a new aircraft fleet are analyzed first. Then, two-sample predictions for a Weibull distribution and a Weibull process are incorporated into forecasts of the first failure time and of the number of failures during a given time period, using Bayesian and classical methods respectively, from which the ordering time and ordering quantity for spare parts are identified. Finally, a case study is presented to illustrate the methods of identifying the ordering time and ordering number of engine-driven pumps through forecasting the failure time and failure number, followed by a discussion on the impact of various fleet sizes on the prediction results. This method has the potential to decide the ordering time and quantity of spare parts when a new aircraft fleet is established. - Highlights: • A modeling framework of ordering spare parts for a new fleet is proposed. • Models for ordering time and number are established based on two-sample prediction. • The computation of future failure time is simplified using Newtonian binomial law. • Comparison of the first failure time PDFs is used to identify process parameters. • Identification methods for spare parts are validated by Engine Driven Pump case study.
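
    The abstract does not give the authors' equations, so the sketch below only shows the shape such a calculation can take, assuming per-aircraft failures follow a power-law (Weibull) process with expected count N(t) = (t/eta)**beta; all names and parameter values are illustrative.

```python
def expected_failures(t, beta, eta, fleet_size):
    """Expected cumulative failures for a fleet whose per-aircraft failure
    counts follow a power-law (Weibull) process, N(t) = (t/eta)**beta."""
    return fleet_size * (t / eta) ** beta

def ordering_time(stock, beta, eta, fleet_size, lead_time):
    """Latest time to place an order so that expected demand up to arrival
    does not exceed the spares on hand: invert N(t), back off the lead time."""
    t_exhaust = eta * (stock / fleet_size) ** (1.0 / beta)
    return max(0.0, t_exhaust - lead_time)

# Illustrative parameters (not from the paper): 20 aircraft, shape 1.8,
# scale 4000 flight hours, 5 pumps in stock, 500 h procurement lead time.
print(ordering_time(stock=5, beta=1.8, eta=4000.0, fleet_size=20, lead_time=500.0))
```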

  13. A Novel Analysis Method for Paired-Sample Microbial Ecology Experiments.

    Science.gov (United States)

    Olesen, Scott W; Vora, Suhani; Techtmann, Stephen M; Fortney, Julian L; Bastidas-Oyanedel, Juan R; Rodríguez, Jorge; Hazen, Terry C; Alm, Eric J

    2016-01-01

    Many microbial ecology experiments use sequencing data to measure a community's response to an experimental treatment. In a common experimental design, two units, one control and one experimental, are sampled before and after the treatment is applied to the experimental unit. The four resulting samples contain information about the dynamics of organisms that respond to the treatment, but there are no analytical methods designed to extract exactly this type of information from this configuration of samples. Here we present an analytical method specifically designed to visualize and generate hypotheses about microbial community dynamics in experiments that have paired samples and few or no replicates. The method is based on the Poisson lognormal distribution, long studied in macroecology, which we found accurately models the abundance distribution of taxa counts from 16S rRNA surveys. To demonstrate the method's validity and potential, we analyzed an experiment that measured the effect of crude oil on ocean microbial communities in microcosm. Our method identified known oil degraders as well as two clades, Maricurvus and Rhodobacteraceae, that responded to amendment with oil but do not include known oil degraders. Our approach is sensitive to organisms that increased in abundance only in the experimental unit but less sensitive to organisms that increased in both control and experimental units, thus mitigating the role of "bottle effects".
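
    The paired-sample test itself is not reproduced here; the sketch below only shows how the Poisson lognormal distribution the method is built on can be evaluated, since it has no closed-form pmf and must be integrated numerically over the latent log-normal rate.

```python
import numpy as np
from scipy import integrate, stats

def poisson_lognormal_pmf(k, mu, sigma):
    """P(K = k) for the Poisson lognormal: a Poisson count whose log rate is
    normal(mu, sigma); no closed form, so integrate over the latent rate."""
    def integrand(x):
        return stats.poisson.pmf(k, np.exp(x)) * stats.norm.pdf(x, mu, sigma)
    val, _ = integrate.quad(integrand, mu - 10.0 * sigma, mu + 10.0 * sigma)
    return val

# Sanity check: the pmf should sum to ~1 over a generous range of counts.
print(sum(poisson_lognormal_pmf(k, mu=1.0, sigma=1.0) for k in range(200)))
```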

  14. Final Report for X-ray Diffraction Sample Preparation Method Development

    Energy Technology Data Exchange (ETDEWEB)

    Ely, T. M. [Hanford Site (HNF), Richland, WA (United States); Meznarich, H. K. [Hanford Site (HNF), Richland, WA (United States); Valero, T. [Hanford Site (HNF), Richland, WA (United States)

    2018-01-30

    WRPS-1500790, “X-ray Diffraction Saltcake Sample Preparation Method Development Plan/Procedure,” was originally prepared with the intent of improving the specimen preparation methodology used to generate saltcake specimens suitable for XRD-based solid phase characterization. At the time that this test plan was developed, packed powder in cavity supports with collodion binder was the established XRD specimen preparation method. An alternative specimen preparation method, less vulnerable, if not completely invulnerable, to preferred orientation effects, was desired as a replacement.

  15. A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases

    Science.gov (United States)

    Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.

    2013-01-01

    How reliable are results on spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent discrimination capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses. PMID:23326357

  16. An Integrated Approach Using Chaotic Map & Sample Value Difference Method for Electrocardiogram Steganography and OFDM Based Secured Patient Information Transmission.

    Science.gov (United States)

    Pandey, Anukul; Saini, Barjinder Singh; Singh, Butta; Sood, Neetu

    2017-10-18

    This paper presents a patient's confidential data hiding scheme in electrocardiogram (ECG) signals and its subsequent wireless transmission. The patient's confidential data is embedded in the ECG (called stego-ECG) using a chaotic map and the sample value difference approach. The sample value difference approach effectually hides the patient's confidential data in ECG sample pairs at predefined locations. The chaotic map generates these predefined locations through the use of selective control parameters. Subsequently, the wireless transmission of the stego-ECG is analyzed using the Orthogonal Frequency Division Multiplexing (OFDM) system in a Rayleigh fading scenario for telemedicine applications. Evaluation of the proposed method on all 48 records of the MIT-BIH arrhythmia ECG database demonstrates that the embedding does not alter the diagnostic features of the cover ECG. The secret data imperceptibility in the stego-ECG is evident through the statistical and clinical performance measures. Statistical measures comprise the Percentage Root-mean-square Difference (PRD), Peak Signal to Noise Ratio (PSNR), and Kullback-Leibler Divergence (KL-Div), while clinical metrics include the wavelet Energy Based Diagnostic Distortion (WEDD) and Wavelet based Weighted PRD (WWPRD). Various channel Signal-to-Noise Ratio scenarios are simulated for the wireless communication of the stego-ECG in the OFDM system. Over all 48 records of the MIT-BIH arrhythmia database, the proposed method resulted on average in PRD = 0.26, PSNR = 55.49, KL-Div = 3.34 × 10⁻⁶, WEDD = 0.02, and WWPRD = 0.10 with a secret data size of 21 kb. Further, a comparative analysis of the proposed method and recent existing works was also performed. The results clearly demonstrated the superiority of the proposed method.
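
    A toy stand-in for the two ingredients named above: a logistic chaotic map picks the embedding locations from a shared seed, and a bit is hidden in the parity of the difference of each selected sample pair. The actual embedding rule, control parameters, and key handling in the paper are more elaborate; everything below is illustrative.

```python
import numpy as np

def logistic_indices(n_bits, n_samples, x0=0.7, r=3.99):
    """Distinct embedding locations from a logistic chaotic map; x0 and r
    play the role of the shared secret (illustrative values)."""
    x, seen, idx = x0, set(), []
    while len(idx) < n_bits:
        x = r * x * (1.0 - x)
        i = int(x * (n_samples - 1)) & ~1        # even index -> pair (i, i+1)
        if i not in seen:
            seen.add(i)
            idx.append(i)
    return idx

def embed_bits(ecg, bits, key=(0.7, 3.99)):
    """Hide each bit in the parity of the difference of a sample pair, a
    simple stand-in for the paper's sample-value-difference rule."""
    sig = ecg.copy()
    for bit, i in zip(bits, logistic_indices(len(bits), len(sig), *key)):
        if ((int(sig[i + 1]) - int(sig[i])) & 1) != bit:
            sig[i + 1] += 1                      # nudge one sample by 1 LSB
    return sig

ecg = np.array([1200, 1205, 1190, 1188, 1210, 1230, 1215, 1201], dtype=np.int64)
stego = embed_bits(ecg, [1, 0, 1])
```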

  17. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations (with associated costs) to transform the time series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  18. Transformation-cost time-series method for analyzing irregularly sampled data

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations—with associated costs—to transform the time series segments, we determine a new time series, that is our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  19. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    Signal reconstruction methods based on inverse modeling for the signal reconstruction of multifunctional sensors have been widely studied in recent years. To improve the accuracy, the reconstruction methods have become more and more complicated because of the increase in the model parameters and sample points. However, there is another factor that affects the reconstruction accuracy, the position of the sample points, which has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving the accuracy and decreasing the workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering that distills groupings of the sample data and produces a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function generalizes the distance metric by mapping data that are non-separable in the original space into homogeneous groups in the high-dimensional space. The method obtained the best results compared with the other three methods in the simulation. (paper)
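
    The kernel-induced distance mentioned above replaces the Euclidean metric with a distance computed in the kernel's feature space. A minimal sketch with a Gaussian kernel follows; the paper's exact kernel choice and subclustering procedure are not reproduced here.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian kernel, a common choice for kernel-induced distances."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def kernel_distance(x, y, k=rbf):
    """Distance in the feature space induced by kernel k:
    d(x, y)^2 = k(x, x) - 2*k(x, y) + k(y, y)."""
    return np.sqrt(max(0.0, k(x, x) - 2.0 * k(x, y) + k(y, y)))

a, b = np.array([0.0, 0.0]), np.array([3.0, 4.0])
print(kernel_distance(a, b))   # saturates near sqrt(2) for far-apart points
```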

  20. Perilymph sampling from the cochlear apex: a reliable method to obtain higher purity perilymph samples from scala tympani.

    Science.gov (United States)

    Salt, Alec N; Hale, Shane A; Plontke, Stefan K R

    2006-05-15

    Measurements of drug levels in the fluids of the inner ear are required to establish kinetic parameters and to determine the influence of specific local delivery protocols. For most substances, this requires cochlear fluid samples to be obtained for analysis. When auditory function is of primary interest, the drug level in the perilymph of scala tympani (ST) is most relevant, since drug in this scala has ready access to the auditory sensory cells. In many prior studies, ST perilymph samples have been obtained from the basal turn, either by aspiration through the round window membrane (RWM) or through an opening in the bony wall. A number of studies have demonstrated that such samples are likely to be contaminated with cerebrospinal fluid (CSF). CSF enters the basal turn of ST through the cochlear aqueduct when the bony capsule is perforated or when fluid is aspirated. The degree of sample contamination has, however, not been widely appreciated. Recent studies have shown that perilymph samples taken through the round window membrane are highly contaminated with CSF, with samples greater than 2 µL in volume containing more CSF than perilymph. In spite of this knowledge, many groups continue to sample from the base of the cochlea, as it is a well-established method. We have developed an alternative, technically simple method to increase the proportion of ST perilymph in a fluid sample. The sample is taken from the apex of the cochlea, a site that is distant from the cochlear aqueduct. A previous problem with sampling through a perforation in the bone was that the native perilymph rapidly leaked out, driven by CSF pressure, and was lost to the middle ear space. We therefore developed a procedure to collect all the fluid that emerged from the apex after perforation. We evaluated the method using the marker ion trimethylphenylammonium (TMPA). TMPA was applied to the perilymph of guinea pigs either by RW irrigation or by microinjection into the apical turn. The

  1. Radioactive air sampling methods

    CERN Document Server

    Maiello, Mark L

    2010-01-01

    Although the field of radioactive air sampling has matured and evolved over decades, it has lacked a single resource that assimilates technical and background information on its many facets. Edited by experts and with contributions from top practitioners and researchers, Radioactive Air Sampling Methods provides authoritative guidance on measuring airborne radioactivity from industrial, research, and nuclear power operations, as well as naturally occurring radioactivity in the environment. Designed for industrial hygienists, air quality experts, and health physicists, the book delves into the applied research advancing and transforming practice with improvements to measurement equipment, human dose modeling of inhaled radioactivity, and radiation safety regulations. To present a wide picture of the field, it covers the international and national standards that guide the quality of air sampling measurements and equipment. It discusses emergency response issues, including radioactive fallout and the assets used ...

  2. The Recent Developments in Sample Preparation for Mass Spectrometry-Based Metabolomics.

    Science.gov (United States)

    Gong, Zhi-Gang; Hu, Jing; Wu, Xi; Xu, Yong-Jiang

    2017-07-04

    Metabolomics is a critical component of systems biology. Although great progress has been achieved in metabolomics, there are still some problems in sample preparation, data processing and data interpretation. In this review, we explore the roles, challenges and trends in sample preparation for mass spectrometry- (MS-) based metabolomics. Newly emerged sample preparation methods are also critically examined, including laser microdissection, in vivo sampling, dried blood spot, microwave, ultrasound and enzyme-assisted extraction, as well as microextraction techniques. Finally, we provide some conclusions and perspectives for sample preparation in MS-based metabolomics.

  3. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  4. Fluidics platform and method for sample preparation

    Science.gov (United States)

    Benner, Henry W.; Dzenitis, John M.

    2016-06-21

    Provided herein are fluidics platforms and related methods for performing integrated sample collection and solid-phase extraction of a target component of the sample all in one tube. The fluidics platform comprises a pump, particles for solid-phase extraction and a particle-holding means. The method comprises contacting the sample with one or more reagents in a pump, coupling a particle-holding means to the pump and expelling the waste out of the pump while the particle-holding means retains the particles inside the pump. The fluidics platform and methods herein described allow solid-phase extraction without pipetting and centrifugation.

  5. Estimation of creatinine in Urine sample by Jaffe's method

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Arunkumar, Suja; Sawant, Pramilla D.; Rao, B.B.

    2012-01-01

    In-vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. A truly representative bioassay sample is one having all the voids collected during a 24-h period; however, as this is technically difficult, overnight urine samples collected by the workers are analyzed. These overnight urine samples are collected over 10-16 h; however, in the absence of any specific information, a 12-h duration is assumed and the observed results are then corrected accordingly to obtain the daily excretion rate. To reduce the uncertainty due to the unknown duration of sample collection, the IAEA has recommended two methods, viz. measurement of the specific gravity and of the creatinine excretion rate in the urine sample. Creatinine is the final metabolic product of creatine phosphate in the body and is excreted at a steady rate by people with normally functioning kidneys. It is, therefore, often used as a normalization factor for estimating the duration of sample collection. The present study reports the chemical procedure standardized and its application for the estimation of creatinine in urine samples collected from occupational workers. A chemical procedure for the estimation of creatinine in bioassay samples was standardized and applied successfully to bioassay samples collected from the workers. The creatinine excretion rate observed for these workers is lower than reported in the literature. Further work is in progress to generate a data bank of creatinine excretion rates for most of the workers and also to study the variability in the creatinine coefficient for the same individual, based on the analysis of samples collected over different durations.
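
    The normalization idea is simple arithmetic: scale the activity found in a partial collection by the ratio of an assumed daily creatinine output to the creatinine actually measured in the sample. The sketch below uses a round reference value of 1.7 g creatinine per day, which is an assumed population figure, not one taken from this study.

```python
def daily_excretion(activity_bq, creatinine_g, daily_creatinine_g=1.7):
    """Scale the activity found in a partial urine collection to a 24-h
    excretion using creatinine as the normalization factor. The reference
    daily output of 1.7 g is an assumed round value, not from the study."""
    return activity_bq * daily_creatinine_g / creatinine_g

# Overnight sample containing 0.9 g creatinine and 0.5 Bq of activity:
print(daily_excretion(0.5, 0.9))   # ~0.94 Bq per 24 h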

  6. Method of separate determination of high-ohmic sample resistance and contact resistance

    Directory of Open Access Journals (Sweden)

    Vadim A. Golubiatnikov

    2015-09-01

    Full Text Available A method for the separate determination of the volume resistance and the contact resistance of a two-pole sample is suggested. The method is applicable to high-ohmic semiconductor samples: semi-insulating gallium arsenide, detector cadmium-zinc telluride (CZT), etc. The method is based on illuminating the near-contact region with monochromatic radiation of variable intensity from light-emitting diodes with quantum energies exceeding the band gap of the material. It is necessary to obtain the dependence of the sample photo-current upon the light-emitting-diode current and to find the linear portion of this dependence. Extrapolation of this linear portion to the Y-axis gives the cut-off current. As the bias voltage is known, it is easy to calculate the sample volume resistance. Then, using the dark current value, one can determine the total contact resistance. The method was tested for n-type semi-insulating GaAs. The contact resistance value was shown to be approximately equal to the sample volume resistance. Thus, the influence of contacts must be taken into account when electrophysical data are analyzed.
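
    A short numerical sketch of the two-step calculation described above, with invented measurement values: the linear tail of the photo-current curve is extrapolated to zero LED current to get the cut-off current, the bias voltage over that current gives the volume resistance, and the dark current then yields the contact contribution.

```python
import numpy as np

# Invented measurements: photo-current (A) versus LED drive current (A).
i_led   = np.array([2.0, 3.0, 4.0, 5.0, 6.0]) * 1e-3
i_photo = np.array([1.10, 1.35, 1.60, 1.85, 2.10]) * 1e-6

slope, i_cutoff = np.polyfit(i_led, i_photo, 1)  # extrapolate the linear tail
u_bias, i_dark = 100.0, 0.30e-6                  # known bias and dark current

r_volume = u_bias / i_cutoff                     # sample volume resistance
r_contact = u_bias / i_dark - r_volume           # remainder is the contacts
print(r_volume, r_contact)                       # here roughly equal, ~1.7e8 ohm
```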

  7. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia.

    Science.gov (United States)

    Hernandez-Valladares, Maria; Aasebø, Elise; Selheim, Frode; Berven, Frode S; Bruserud, Øystein

    2016-08-22

    Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  8. Magnetic Stirrer Method for the Detection of Trichinella Larvae in Muscle Samples.

    Science.gov (United States)

    Mayer-Scholl, Anne; Pozio, Edoardo; Gayda, Jennifer; Thaben, Nora; Bahn, Peter; Nöckler, Karsten

    2017-03-03

    Trichinellosis is a debilitating disease in humans and is caused by the consumption of raw or undercooked meat of animals infected with the nematode larvae of the genus Trichinella. The most important sources of human infections worldwide are game meat and pork or pork products. In many countries, the prevention of human trichinellosis is based on the identification of infected animals by means of the artificial digestion of muscle samples from susceptible animal carcasses. There are several methods based on the digestion of meat but the magnetic stirrer method is considered the gold standard. This method allows the detection of Trichinella larvae by microscopy after the enzymatic digestion of muscle samples and subsequent filtration and sedimentation steps. Although this method does not require special and expensive equipment, internal controls cannot be used. Therefore, stringent quality management should be applied throughout the test. The aim of the present work is to provide detailed handling instructions and critical control points of the method to analysts, based on the experience of the European Union Reference Laboratory for Parasites and the National Reference Laboratory of Germany for Trichinella.

  9. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam; Jacobs, Sam Ade; Sharma, Shishir; Amato, Nancy M.; Rauchwerger, Lawrence

    2014-01-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  10. Using Load Balancing to Scalably Parallelize Sampling-Based Motion Planning Algorithms

    KAUST Repository

    Fidel, Adam

    2014-05-01

    Motion planning, which is the problem of computing feasible paths in an environment for a movable object, has applications in many domains ranging from robotics, to intelligent CAD, to protein folding. The best methods for solving this PSPACE-hard problem are so-called sampling-based planners. Recent work introduced uniform spatial subdivision techniques for parallelizing sampling-based motion planning algorithms that scaled well. However, such methods are prone to load imbalance, as planning time depends on region characteristics and, for most problems, the heterogeneity of the subproblems increases as the number of processors increases. In this work, we introduce two techniques to address load imbalance in the parallelization of sampling-based motion planning algorithms: an adaptive work stealing approach and bulk-synchronous redistribution. We show that applying these techniques to representatives of the two major classes of parallel sampling-based motion planning algorithms, probabilistic roadmaps and rapidly-exploring random trees, results in a more scalable and load-balanced computation on more than 3,000 cores. © 2014 IEEE.

  11. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    Science.gov (United States)

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
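
    A minimal sketch of three of the probability designs named above, using an invented population of element ids and an assumed 60/40 split into two strata:

```python
import random

population = list(range(1000))            # element ids

# Simple random sampling: every element equally likely.
srs = random.sample(population, 50)

# Systematic sampling: every k-th element after a random start.
k = len(population) // 50
start = random.randrange(k)
systematic = population[start::k][:50]

# Stratified sampling: proportional draws from two strata (assumed 60/40).
stratum_a, stratum_b = population[:600], population[600:]
stratified = random.sample(stratum_a, 30) + random.sample(stratum_b, 20)
```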

  12. Improved mesh based photon sampling techniques for neutron activation analysis

    International Nuclear Information System (INIS)

    Relson, E.; Wilson, P. P. H.; Biondo, E. D.

    2013-01-01

    The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log(n)) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels, and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling - particularly with conformal unstructured meshes where the uniform sampling approach cannot be applied. (authors)
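
    The alias method itself is standard; a compact sketch of Vose's variant shows why sampling is O(1): one uniform table pick plus one coin flip. This is a generic implementation, not the MCNP subroutine described in the paper.

```python
import random

def build_alias(probs):
    """Vose's alias method: O(n) setup for O(1) sampling of a discrete
    distribution, the property exploited above for voxel/energy sampling."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l         # column s: keep s or jump to l
        scaled[l] -= 1.0 - scaled[s]             # l donates the slack
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                      # numerical leftovers
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias):
    i = random.randrange(len(prob))              # one uniform table pick...
    return i if random.random() < prob[i] else alias[i]   # ...plus one coin flip

prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
counts = [0] * 4
for _ in range(100000):
    counts[alias_sample(prob, alias)] += 1
print([c / 100000 for c in counts])              # ~[0.1, 0.2, 0.3, 0.4]
```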

  13. Ultrasonic-based membrane aided sample preparation of urine proteomes.

    Science.gov (United States)

    Jesus, Jemmyson Romário; Santos, Hugo M; López-Fernández, H; Lodeiro, Carlos; Arruda, Marco Aurélio Zezzi; Capelo, J L

    2018-02-01

    A new ultrafast ultrasonic-based method for shotgun proteomics as well as label-free protein quantification in urine samples is developed. The method first separates the urine proteins using nitrocellulose-based membranes, and the proteins are then digested in-membrane using trypsin. The enzymatic digestion process is accelerated from overnight to four minutes using a sonoreactor ultrasonic device. Overall, the sample treatment pipeline comprising protein separation, digestion and identification is done in just 3 h. The process is assessed using urine of healthy volunteers. The method shows that males can be differentiated from females using the protein content of urine in a fast, easy and straightforward way. 232 and 226 proteins are identified in the urine of males and females, respectively. Of these, 162 are common to both genders, whilst 70 are unique to males and 64 to females. From the 162 common proteins, 13 are present at levels statistically different between genders (p < 0.05). The method fits the minimalism concept as outlined by Halls, as each stage of this analysis is evaluated to minimize the time, cost, sample requirement, reagent consumption, energy requirements and production of waste products. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. A support vector density-based importance sampling for reliability assessment

    International Nuclear Information System (INIS)

    Dai, Hongzhe; Zhang, Hao; Wang, Wei

    2012-01-01

    An importance sampling method based on the adaptive Markov chain simulation and support vector density estimation is developed in this paper for efficient structural reliability assessment. The methodology involves the generation of samples that can adaptively populate the important region by the adaptive Metropolis algorithm, and the construction of importance sampling density by support vector density. The use of the adaptive Metropolis algorithm may effectively improve the convergence and stability of the classical Markov chain simulation. The support vector density can approximate the sampling density with fewer samples in comparison to the conventional kernel density estimation. The proposed importance sampling method can effectively reduce the number of structural analysis required for achieving a given accuracy. Examples involving both numerical and practical structural problems are given to illustrate the application and efficiency of the proposed methodology.
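
    The support-vector density construction itself is not reproduced here; the sketch below only shows the generic importance-sampling estimator that such a density h(x) is plugged into, using a hand-picked normal h centred near the design point of a toy limit state.

```python
import numpy as np
from scipy import stats

def importance_sampling_pf(g, draw_h, pdf_h, pdf_f, n=100000, seed=0):
    """Generic importance-sampling failure probability estimate:
    p_f = E_h[ I(g(x) <= 0) * f(x)/h(x) ], with its standard error."""
    x = draw_h(np.random.default_rng(seed), n)
    w = np.where(g(x) <= 0.0, pdf_f(x) / pdf_h(x), 0.0)
    return w.mean(), w.std(ddof=1) / np.sqrt(n)

# Toy limit state g(x) = 3 - x with standard normal f; h is shifted to the
# failure region around x = 3 (a stand-in for the support vector density).
pf, se = importance_sampling_pf(
    g=lambda x: 3.0 - x,
    draw_h=lambda rng, n: rng.normal(3.0, 1.0, n),
    pdf_h=lambda x: stats.norm.pdf(x, loc=3.0),
    pdf_f=lambda x: stats.norm.pdf(x))
print(pf, se)   # close to 1 - Phi(3) ~ 1.35e-3
```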

  15. Present status of NMCC and sample preparation method for bio-samples

    International Nuclear Information System (INIS)

    Futatsugawa, S.; Hatakeyama, S.; Saitou, S.; Sera, K.

    1993-01-01

    At NMCC (Nishina Memorial Cyclotron Center) we are conducting research on PET (positron emission computed tomography) in nuclear medicine and on PIXE (particle-induced X-ray emission) analysis, using a compactly designed small cyclotron. The NMCC facilities have been open to researchers of other institutions since April 1993. The present status of NMCC is described. Bio-samples (medical samples, plants, animals and environmental samples) have mainly been analyzed by PIXE at NMCC. Small amounts of bio-samples for PIXE are decomposed quickly and easily in a sealed PTFE (polytetrafluoroethylene) vessel with a microwave oven. This sample preparation method for bio-samples is also described. (author)

  16. Direct sampling methods for inverse elastic scattering problems

    Science.gov (United States)

    Ji, Xia; Liu, Xiaodong; Xi, Yingxia

    2018-03-01

    We consider the inverse elastic scattering of incident plane compressional and shear waves from the knowledge of the far field patterns. Specifically, three direct sampling methods for location and shape reconstruction are proposed using different components of the far field patterns. Only inner products are involved in the computation, so the novel sampling methods are very simple and fast to implement. With the help of the factorization of the far field operator, we give a lower bound of the proposed indicator functionals for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functionals decay like Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functionals depend continuously on the far field patterns, which further implies that the novel sampling methods are extremely stable with respect to data error. For the case when the observation directions are restricted to a limited aperture, we first introduce some data retrieval techniques to obtain the data that cannot be measured directly and then use the proposed direct sampling methods for location and shape reconstruction. Finally, some numerical simulations in two dimensions are conducted with noisy data, and the results further verify the effectiveness and robustness of the proposed sampling methods, even for multiple multiscale cases and limited-aperture problems.

  17. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground target detection performance of space-time adaptive processing (STAP) decreases when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and the training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the larger values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
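
    A minimal sketch of the mean (modified) Hausdorff distance that the similarity step relies on, treating each training snapshot as a set of scalar cell magnitudes; the paper's data model and projection steps are omitted, and the example values are invented.

```python
import numpy as np

def mean_hausdorff(a, b):
    """Mean (modified) Hausdorff distance between two training snapshots,
    each treated as a set of scalar cell magnitudes."""
    d = np.abs(a[:, None] - b[None, :])          # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

clean  = np.array([1.0, 1.2, 0.9, 1.1])
spoilt = np.array([1.0, 1.2, 5.0, 1.1])          # one cell holds a target-like signal
print(mean_hausdorff(clean, clean))              # 0.0: identical snapshots
print(mean_hausdorff(clean, spoilt))             # large: candidate for rejection
```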

  18. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia

    Directory of Open Access Journals (Sweden)

    Maria Hernandez-Valladares

    2016-08-01

    Full Text Available Global mass spectrometry (MS-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC or metal oxide affinity chromatography (MOAC. We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  19. A simple method to adapt time sampling of the analog signal

    International Nuclear Information System (INIS)

    Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.

    2004-01-01

    In this paper we briefly describe a time sampling method that adapts to the speed of the signal change. In principle, the method is based on a simple idea: the combination of discrete integration with differentiation of the analog signal. This method can be used in nuclear electronics research into the characteristics of detectors and the shape of the pulse signal, the pulse and transient characteristics of inertial signal-processing systems, etc.
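
    The abstract gives only the principle, so the sketch below illustrates derivative-driven sampling generically: the step size is halved while the signal increment exceeds a tolerance and relaxed again afterwards. It is a software analogue of the idea, not the analog circuit the paper describes.

```python
import math

def adaptive_sample(f, t0, t1, dt_min=1e-4, dt_max=1e-1, tol=5e-3):
    """Sample f over [t0, t1] with a step adapted to the local rate of
    change: halve the step while the increment exceeds tol, then let it
    relax back."""
    t, dt = t0, dt_max
    samples = [(t0, f(t0))]
    while t < t1:
        while abs(f(t + dt) - f(t)) > tol and dt > dt_min:
            dt /= 2.0                      # signal changing fast: refine
        t += dt
        samples.append((t, f(t)))
        dt = min(dt * 2.0, dt_max)         # signal quiet again: coarsen
    return samples

# A narrow Gaussian pulse gets dense samples only around its peak.
pts = adaptive_sample(lambda t: math.exp(-50.0 * (t - 0.5) ** 2), 0.0, 1.0)
print(len(pts))
```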

  20. Possible Overestimation of Surface Disinfection Efficiency by Assessment Methods Based on Liquid Sampling Procedures as Demonstrated by In Situ Quantification of Spore Viability

    Science.gov (United States)

    Grand, I.; Bellon-Fontaine, M.-N.; Herry, J.-M.; Hilaire, D.; Moriconi, F.-X.; Naïtali, M.

    2011-01-01

    The standard test methods used to assess the efficiency of a disinfectant applied to surfaces are often based on counting the microbial survivors sampled in a liquid, but total cell removal from surfaces is seldom achieved. One might therefore wonder whether evaluations of microbial survivors in liquid-sampled cells are representative of the levels of survivors in whole populations. The present study was thus designed to determine the “damaged/undamaged” status induced by a peracetic acid disinfection for Bacillus atrophaeus spores deposited on glass coupons directly on this substrate and to compare it to the status of spores collected in liquid by a sampling procedure. The method utilized to assess the viability of both surface-associated and liquid-sampled spores included fluorescence labeling with a combination of Syto 61 and Chemchrome V6 dyes and quantifications by analyzing the images acquired by confocal laser scanning microscopy. The principal result of the study was that the viability of spores sampled in the liquid was found to be poorer than that of surface-associated spores. For example, after 2 min of peracetic acid disinfection, less than 17% ± 5% of viable cells were detected among liquid-sampled cells compared to 79% ± 5% or 47% ± 4%, respectively, when the viability was evaluated on the surface after or without the sampling procedure. Moreover, assessments of the survivors collected in the liquid phase, evaluated using the microscopic method and standard plate counts, were well correlated. Evaluations based on the determination of survivors among the liquid-sampled cells can thus overestimate the efficiency of surface disinfection procedures. PMID:21742922

  1. Microbiological Sampling Methods and Sanitation of Edible Plants Grown on ISS

    Science.gov (United States)

    Parrish, Charles H. II; Khodadad, Christina L.; Garland, Nathaniel T.; Larson, Brian D.; Hummerick, Mary E.

    2013-01-01

    Pathogenic microbes on the surfaces of salad crops and growth chambers pose a threat to the health of the crew on the International Space Station. For astronauts to safely consume space-grown vegetables produced in NASA's new vegetable production unit, VEGGIE, three technical challenges must be overcome: real-time sampling, microbiological analysis, and sanitation. Raphanus sativus cultivar Cherry Bomb II and Lactuca sativa cultivar Outredgeous, two salad crops to be grown in VEGGIE, were inoculated with Salmonella enterica serovar Typhimurium (S. Typhimurium), a bacterium known to cause food-borne illness. Tape- and swab-based sampling techniques were optimized for use in microgravity and assessed for effectiveness in the recovery of bacteria from crop surfaces. Rapid pathogen detection and molecular analyses were performed via quantitative real-time polymerase chain reaction using the LightCycler® 480 and the RAZOR® EX, a scaled-down instrument that is undergoing evaluation and testing for future flight hardware. These methods were compared with conventional, culture-based methods for the recovery of S. Typhimurium colonies. A sterile wipe saturated with a citric acid-based, food-grade sanitizer was applied to two different surface materials used in VEGGIE flight hardware that had been contaminated with the bacterium Pseudomonas aeruginosa, another known human pathogen. To sanitize the surfaces, wipes saturated with either the sanitizer or sterile deionized water were applied to each surface. Colony-forming units of P. aeruginosa grown on tryptic soy agar plates were enumerated from surface samples after the sanitization treatments. Depending on the VEGGIE hardware material, 2- to 4.5-log10 reductions in colony-forming units were observed after sanitization. The difference in recovery of S. Typhimurium between tape- and swab-based sampling techniques was insignificant. The RAZOR® EX rapidly detected S. Typhimurium in both raw culture and extracted DNA samples.

  2. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
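
    As an illustration of the first technique listed, simulation of urn experiments, the following Python sketch draws a single univariate Wallenius noncentral hypergeometric variate by taking n balls one at a time without replacement, each draw weighted by the remaining weight of each color. It is a toy illustration, not Fog's implementation:

```python
import random

def wallenius_urn_sample(m1, m2, n, omega, rng=random):
    """One draw from Wallenius' noncentral hypergeometric distribution:
    an urn holds m1 red balls of weight omega and m2 white balls of
    weight 1; n balls are taken sequentially without replacement.
    Assumes n <= m1 + m2."""
    x = 0                                   # red balls drawn so far
    for drawn in range(n):
        red_w = (m1 - x) * omega            # remaining red weight
        white_w = m2 - (drawn - x)          # remaining white weight
        if rng.random() < red_w / (red_w + white_w):
            x += 1
    return x

# Monte Carlo use, as in the abstract: estimate the mean by simulation.
draws = [wallenius_urn_sample(10, 12, 8, 2.5) for _ in range(10000)]
print(sum(draws) / len(draws))
```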

  3. Pi sampling: a methodical and flexible approach to initial macromolecular crystallization screening

    International Nuclear Information System (INIS)

    Gorrec, Fabrice; Palmer, Colin M.; Lebon, Guillaume; Warne, Tony

    2011-01-01

    Pi sampling, derived from the incomplete factorial approach, is an effort to maximize the diversity of macromolecular crystallization conditions and to facilitate the preparation of 96-condition initial screens. The Pi sampling method is derived from the incomplete factorial approach to macromolecular crystallization screen design. The resulting ‘Pi screens’ have a modular distribution of a given set of up to 36 stock solutions. Maximally diverse conditions can be produced by taking into account the properties of the chemicals used in the formulation and the concentrations of the corresponding solutions. The Pi sampling method has been implemented in a web-based application that generates screen formulations and recipes. It is particularly adapted to screens consisting of 96 different conditions. The flexibility and efficiency of Pi sampling is demonstrated by the crystallization of soluble proteins and of an integral membrane-protein sample

  4. DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K.

    1993-03-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by the US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others

  5. DOE methods for evaluating environmental and waste management samples.

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S C; McCulloch, M; Thomas, B L; Riley, R G; Sklarew, D S; Mong, G M; Fadeff, S K (eds.) [Pacific Northwest Lab., Richland, WA (United States)]

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by the US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

  6. Micro-sampling method based on high-resolution continuum source graphite furnace atomic absorption spectrometry for calcium determination in blood and mitochondrial suspensions.

    Science.gov (United States)

    Gómez-Nieto, Beatriz; Gismera, Mª Jesús; Sevilla, Mª Teresa; Satrústegui, Jorgina; Procopio, Jesús R

    2017-08-01

    A straightforward micro-sampling method based on high-resolution continuum source atomic absorption spectrometry (HR-CS AAS) was developed to determine extracellular and intracellular Ca in samples of interest in clinical and biomedical analysis. Solid sampling platforms were used to introduce the micro-samples into the graphite furnace atomizer. The secondary absorption line for Ca, located at 239.856 nm, was selected to carry out the measurements. Experimental parameters such as pyrolysis and atomization temperatures and the amount of sample introduced for the measurements were optimized. Calibration was performed using aqueous standards, and measurement at the wings of the absorption lines was employed to expand the linear response range. The limit of detection was 0.02 mg L⁻¹ Ca (0.39 ng Ca) and the upper limit of the linear range was increased up to 8.0 mg L⁻¹ Ca (160 ng Ca). The proposed method was used to determine Ca in mitochondrial suspensions and whole blood samples with successful results. Adequate recoveries (within 91-107%) were obtained in the tests performed for validation purposes. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is the outcome of interest. Performance of the exact method is compared to its approximate large-sample counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
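
    The paper's hybrid-design formula is not reproduced here, but the flavor of an exact Poisson-based sample size calculation can be sketched generically: search for the smallest n whose one-sided exact Poisson test attains the desired power. All parameter values below are illustrative:

```python
from math import exp, lgamma, log

def poisson_sf(k, mu):
    # P(X >= k) for X ~ Poisson(mu), via the log-pmf for numerical stability
    if k <= 0:
        return 1.0
    cdf = sum(exp(i * log(mu) - mu - lgamma(i + 1)) for i in range(k))
    return max(0.0, 1.0 - cdf)

def exact_poisson_sample_size(rate0, rate1, alpha=0.05, power=0.8):
    """Smallest n such that a one-sided exact test of H0: rate = rate0,
    applied to the event count among n subjects, rejects with the desired
    power when the true event rate is rate1 > rate0."""
    for n in range(1, 200000):
        # smallest critical count k with P(X >= k | n * rate0) <= alpha
        k = 0
        while poisson_sf(k, n * rate0) > alpha:
            k += 1
        if poisson_sf(k, n * rate1) >= power:
            return n, k
    raise ValueError("no n found in search range")

n, k = exact_poisson_sample_size(0.001, 0.005)   # illustrative rare-event rates
print(f"n = {n} subjects, reject if {k} or more events are observed")
```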

  8. PhyloChip™ microarray comparison of sampling methods used for coral microbial ecology

    Science.gov (United States)

    Kellogg, Christina A.; Piceno, Yvette M.; Tom, Lauren M.; DeSantis, Todd Z.; Zawada, David G.; Andersen, Gary L.

    2012-01-01

    Interest in coral microbial ecology has been increasing steadily over the last decade, yet standardized methods of sample collection still have not been defined. Two methods were compared for their ability to sample coral-associated microbial communities: tissue punches and foam swabs, the latter being less invasive and preferred by reef managers. Four colonies of star coral, Montastraea annularis, were sampled in the Dry Tortugas National Park (two healthy and two with white plague disease). The PhyloChip™ G3 microarray was used to assess microbial community structure of amplified 16S rRNA gene sequences. Samples clustered based on methodology rather than coral colony. Punch samples from healthy and diseased corals were distinct. All swab samples clustered closely together with the seawater control and did not group according to the health state of the corals. Although more microbial taxa were detected by the swab method, there is a much larger overlap between the water control and swab samples than punch samples, suggesting some of the additional diversity is due to contamination from water absorbed by the swab. While swabs are useful for noninvasive studies of the coral surface mucus layer, these results show that they are not optimal for studies of coral disease.

  9. Comparison of Methods for Bifenthrin Residues Determination in Fermented Wheat Samples

    Directory of Open Access Journals (Sweden)

    Tijana Đorđević

    2012-01-01

    Full Text Available The efficiency of three different sample preparation methods for GC/MS determination of bifenthrin residues in wheat (Triticum spelta) samples fermented by Lactobacillus plantarum was tested. The first method was based on a methanol:acetone = 1:1 extraction followed by purification on columns containing a slurry-packed mixture of aluminium oxide and activated charcoal and elution with dichloromethane; the second was based on methanol:acetone = 1:1 extraction followed by purification on a florisil column and elution with ethyl acetate:acetone = 4:1; the third tested method was a combination of the first two, i.e. methanol:acetone = 1:1 extraction and clean-up through columns filled with a slurry-packed mixture of aluminium oxide and activated charcoal, eluted with ethyl acetate:acetone = 4:1. The second method was the most effective, giving satisfactory recoveries for bifenthrin in a range of 79-83% for four fortification levels, with good reproducibility, i.e. RSD% in a range of 2.2-7.4%. The chosen method was further optimized by assessing the optimum volume of elution solvent used during the clean-up procedure. The highest recovery of 82.1% was obtained after elution with 25 ml of solvent. Overall, two-step extraction with 25 ml of methanol:acetone = 1:1 solvent mix for 30 min, followed by a clean-up procedure through a glass column with florisil and elution with 25 ml of ethyl acetate:acetone = 4:1, allows simple, efficient and reliable GC/MS detection of bifenthrin residues in wheat grain fermented by L. plantarum.

  10. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version]

    Science.gov (United States)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
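
    The core of such an estimator can be illustrated with a toy least-squares version: under zero-mean Gaussian sensor noise, the maximum-likelihood estimate of total mass from paired force and acceleration records is the least-squares slope of force against acceleration. The numbers below are invented for illustration and are unrelated to the report's scenario:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical thrust profile: commanded accelerations and noisy force
# sensor readings for a craft of unknown total mass (base craft + sample).
m_true = 501.0                                    # kg, includes a 1 kg sample
a = rng.uniform(0.01, 0.05, 200)                  # accelerations, m/s^2
f = m_true * a + rng.normal(0.0, 0.2, size=200)   # measured forces, N

# With zero-mean Gaussian noise, the ML mass estimate is the
# least-squares slope of force against acceleration.
m_hat = np.sum(f * a) / np.sum(a * a)

# 95% confidence half-width from the residual variance.
sigma2 = np.var(f - m_hat * a, ddof=1)
half_width = 1.96 * np.sqrt(sigma2 / np.sum(a * a))
print(f"estimated mass: {m_hat:.1f} +/- {half_width:.1f} kg")
```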

  11. The Toggle Local Planner for sampling-based motion planning

    KAUST Repository

    Denny, Jory; Amato, Nancy M.

    2012-01-01

    Sampling-based solutions to the motion planning problem, such as the probabilistic roadmap method (PRM), have become commonplace in robotics applications. These solutions are the norm as the dimensionality of the planning space grows, i.e., d > 5
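
    For context, the sketch below shows the simplest local planner used inside PRM-style planners: a straight-line connection checked for collisions at interpolated configurations. The Toggle Local Planner builds on this basic component; the collision test here is a toy stand-in, not the authors' code:

```python
import numpy as np

def straight_line_local_planner(q1, q2, collision_free, step=0.05):
    """Minimal PRM-style local planner: report whether the straight
    segment between configurations q1 and q2 stays collision-free,
    checked at evenly spaced intermediate configurations."""
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    n_steps = max(1, int(np.ceil(np.linalg.norm(q2 - q1) / step)))
    for t in np.linspace(0.0, 1.0, n_steps + 1):
        if not collision_free((1.0 - t) * q1 + t * q2):
            return False
    return True

# Toy validity test: configurations must avoid a unit disc at the origin.
free = lambda q: np.linalg.norm(q) > 1.0
print(straight_line_local_planner([-2.0, -2.0], [2.0, 2.0], free))  # False
print(straight_line_local_planner([-2.0, 2.0], [2.0, 2.0], free))   # True
```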

  12. Comparative effectiveness and acceptability of home-based and clinic-based sampling methods for sexually transmissible infections screening in females aged 14-50 years: a systematic review and meta-analysis.

    Science.gov (United States)

    Odesanmi, Tolulope Y; Wasti, Sharada P; Odesanmi, Omolola S; Adegbola, Omololu; Oguntuase, Olubukola O; Mahmood, Sajid

    2013-12-01

    Home-based sampling is a strategy to enhance uptake of sexually transmissible infection (STI) screening. This review aimed to compare the screening uptake levels of home-based self-sampling and clinic-based specimen collection for STIs (chlamydia (Chlamydia trachomatis), gonorrhoea (Neisseria gonorrhoeae) and trichomoniasis) in females aged 14-50 years. Acceptability and effect on specimen quality were determined. Sixteen electronic databases were searched from inception to September 2012. Randomised controlled trials (RCTs) comparing the uptake levels of home-based self-sampling and clinic-based sampling for chlamydia, gonorrhoea and trichomoniasis in females aged 14-50 years were eligible for inclusion. The risk of bias in the trials was assessed. Risk ratios (RRs) for dichotomous outcomes were meta-analysed. Of 3065 papers, six studies with seven RCTs contributed to the final review. Compared with clinic-based methods, home-based screening increased uptake significantly (P=0.001-0.05) in five trials and was substantiated in a meta-analysis (RR: 1.55; 95% confidence interval: 1.30-1.85; P=0.00001) of two trials. In three trials, a significant preference for home-based testing (P=0.001-0.05) was expressed. No significant difference was observed in specimen quality. Sampling was rated as easy by a significantly higher number of women (P=0.01) in the clinic group in one trial. The review provides evidence that home-based testing results in greater uptake of STI screening in females (14-50 years) than clinic-based testing without compromising quality in the developed world. Home collection strategies should be added to clinic-based screening programs to enhance uptake.
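
    For readers unfamiliar with the pooling step, the sketch below performs a generic fixed-effect (inverse-variance) meta-analysis of risk ratios of the kind reported above. The two-trial counts are invented, not the review's data:

```python
import numpy as np

def pooled_risk_ratio(events_home, n_home, events_clinic, n_clinic):
    """Fixed-effect (inverse-variance) pooled risk ratio across trials."""
    a, n1 = np.asarray(events_home, float), np.asarray(n_home, float)
    c, n2 = np.asarray(events_clinic, float), np.asarray(n_clinic, float)
    log_rr = np.log((a / n1) / (c / n2))
    var = 1/a - 1/n1 + 1/c - 1/n2        # delta-method variance of log RR
    w = 1.0 / var                        # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
    return np.exp(pooled), (lo, hi)

# Hypothetical two-trial example (invented screening-uptake counts).
rr, ci = pooled_risk_ratio([120, 90], [300, 250], [80, 60], [300, 250])
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```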

  13. 40 CFR Appendix I to Part 261 - Representative Sampling Methods

    Science.gov (United States)

    2010-07-01

    40 CFR Part 261, Appendix I - Representative Sampling Methods: The methods and equipment used for sampling waste materials will vary with the form and consistency of the waste materials to be sampled. Samples collected using the sampling...

  14. Method for evaluation of radiative properties of glass samples

    Energy Technology Data Exchange (ETDEWEB)

    Mohelnikova, Jitka [Faculty of Civil Engineering, Brno University of Technology, Veveri 95, 602 00 Brno (Czech Republic)], E-mail: mohelnikova.j@fce.vutbr.cz

    2008-04-15

    The paper presents a simple calculation method for evaluating the radiative properties of window glasses. The method is based on a computer simulation model of the energy balance of a thermally insulated box containing selected glass samples. A temperature profile of the air inside the box with a glass sample exposed to incident radiation was determined for defined boundary conditions. The spectral range of the radiation was considered in the interval between 280 and 2500 nm, which corresponds to the spectral range of solar radiation affecting windows in building facades. The air temperature rise within the box was determined in response to the incident radiation, from the beginning of the radiation exposure until steady-state thermal conditions were reached. The steady-state temperature inside the insulated box serves for the evaluation of the box energy balance and determination of the glass sample's radiative properties. These properties are represented by glass characteristics as mean values of transmittance, reflectance and absorptance calculated for a defined spectral range. The data of the computer simulations were compared to experimental measurements on a real model of the insulated box. Results of the calculations and measurements are in good agreement. The method is recommended for preliminary evaluation of window glass radiative properties, which serve as data for the energy evaluation of buildings.

  15. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    Science.gov (United States)

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. Such populations are difficult to approach with standard survey methodology because the response rate is low and members are not entirely honest in their responses when probability sampling is used. The only alternative known to address the problems of previous approaches such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the bias of RDS chain-referral sampling tends to diminish as the sample gets bigger, and the estimate stabilizes as the waves progress. Therefore, the final sample information can be completely independent of the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative which can improve upon both key informant sampling and ethnographic surveys, and it should be applied to various cases domestically as well.
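
    A toy simulation in the spirit of the study (not the author's code) illustrates why RDS can approximate probability sampling: an equilibrium recruitment chain samples members roughly in proportion to network degree, and inverse-degree weighting (the Volz-Heckathorn estimator) corrects the resulting bias:

```python
import random

random.seed(1)

# Toy hidden population: each member has a network degree and a binary
# trait that is correlated with degree (an assumption for illustration).
N = 2000
degree = [random.randint(2, 30) for _ in range(N)]
trait = [random.random() < d / 40 for d in degree]
true_prev = sum(trait) / N

# At equilibrium, an RDS recruitment chain (a Markov chain on the network)
# visits members roughly proportionally to degree; the Volz-Heckathorn
# estimator corrects this by weighting each recruit by 1/degree.
def rds_estimate(n_sample):
    recruits = random.choices(range(N), weights=degree, k=n_sample)
    naive = sum(trait[i] for i in recruits) / n_sample
    vh = (sum(trait[i] / degree[i] for i in recruits)
          / sum(1.0 / degree[i] for i in recruits))
    return naive, vh

print("true prevalence:", round(true_prev, 3))
for n in (100, 400, 1600):
    naive, vh = rds_estimate(n)
    print(n, "naive:", round(naive, 3), "degree-weighted:", round(vh, 3))
```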

  16. Mixed Methods Sampling: A Typology with Examples

    Science.gov (United States)

    Teddlie, Charles; Yu, Fen

    2007-01-01

    This article presents a discussion of mixed methods (MM) sampling techniques. MM sampling involves combining well-established qualitative and quantitative techniques in creative ways to answer research questions posed by MM research designs. Several issues germane to MM sampling are presented including the differences between probability and…

  17. Log sampling methods and software for stand and landscape analyses.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...
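
    As one concrete instance of the line-intersect method mentioned above, Van Wagner's classic estimator gives downed-log volume per unit area from the diameters of logs crossing a transect. The sketch below uses illustrative values and is not the authors' software:

```python
import math

def line_intersect_volume(diams_m, transect_len_m):
    """Van Wagner's line-intersect estimator of downed-log volume per unit
    area (m^3 per m^2): V = pi^2 * sum(d_i^2) / (8 * L), where d_i are log
    diameters (m) at the points where logs cross a transect of length L (m)."""
    return math.pi ** 2 * sum(d * d for d in diams_m) / (8.0 * transect_len_m)

# Logs of 10-30 cm diameter crossing a 100 m transect (made-up field data).
diams = [0.10, 0.25, 0.30, 0.12, 0.18]
v = line_intersect_volume(diams, 100.0)
print(f"{v * 10000:.1f} m^3 per hectare")   # per-m^2 volume scaled to 1 ha
```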

  18. Sample Entropy-Based Approach to Evaluate the Stability of Double-Wire Pulsed MIG Welding

    Directory of Open Access Journals (Sweden)

    Ping Yao

    2014-01-01

    Full Text Available Based on sample entropy, this paper presents a quantitative method to evaluate the current stability in double-wire pulsed MIG welding. Firstly, the sample entropy of current signals with different stability but the same parameters is calculated. The results show that the more stable the current, the smaller the value and the standard deviation of the sample entropy. Secondly, four parameters (pulse width, peak current, base current, and frequency) are selected for a four-level three-factor orthogonal experiment. The calculation and analysis of the desired signals indicate that sample entropy values are affected by the welding current parameters. A quantitative method based on sample entropy is then proposed. The experiment results show that the method can reliably quantify the welding current stability.
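
    For reference, a minimal generic sample entropy implementation is sketched below (not the paper's code); the noisier, less stable signal receives the larger entropy value, matching the trend reported above:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (Chebyshev distance <= r) also match
    for m + 1 points; self-matches are excluded."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

# A noisier (less stable) synthetic current signal yields a larger SampEn.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
stable = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)
unstable = np.sin(2 * np.pi * t) + 0.5 * rng.normal(size=t.size)
print(sample_entropy(stable), sample_entropy(unstable))
```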

  19. New methods for sampling sparse populations

    Science.gov (United States)

    Anna Ringvall

    2007-01-01

    To improve surveys of sparse objects, methods that use auxiliary information have been suggested. Guided transect sampling uses prior information, e.g., from aerial photographs, for the layout of survey strips. Instead of being laid out straight, the strips will wind between potentially more interesting areas. 3P sampling (probability proportional to prediction) uses...

  20. 19 CFR 151.70 - Method of sampling by Customs.

    Science.gov (United States)

    2010-04-01

    19 CFR § 151.70 (Customs Duties; Examination, Sampling, and Testing of Merchandise; Wool and Hair) - Method of sampling by Customs: A general sample shall be taken from each sampling unit, unless it is not...

  1. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  2. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi'an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
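
    The sketch below shows plain Latin hypercube sampling, the building block that OA-MLHS refines; the orthogonal-array modification itself is not reproduced here:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Plain Latin hypercube sample on [0, 1)^d: each of the n
    equal-probability strata of every dimension is hit exactly once;
    stratum order is an independent random permutation per dimension."""
    strata = np.array([rng.permutation(n) for _ in range(d)]).T   # (n, d)
    return (strata + rng.random((n, d))) / n

rng = np.random.default_rng(42)
pts = latin_hypercube(20, 3, rng)

# Unlike simple random sampling, each marginal is evenly stratified:
# exactly one point falls in every interval [k/n, (k+1)/n).
print(np.sort(np.floor(pts[:, 0] * 20).astype(int)))
```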

  3. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with the urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and its reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10⁶ to 10⁰ leptospires/mL, with lower limits of detection ranging from ... . The optimized method for quantification of pathogenic Leptospira in environmental waters (river, pond and sewage) consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.
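
    Downstream of extraction, qPCR quantification of this kind rests on a standard curve relating the quantification cycle (Cq) to known concentrations. The sketch below uses invented Cq values for a hypothetical lipL32 dilution series, purely to illustrate the calculation:

```python
import numpy as np

# Hypothetical standard curve: Cq values for a 10-fold dilution series
# of known leptospire concentrations (cells/mL); numbers are invented.
std_conc = np.array([1e6, 1e5, 1e4, 1e3, 1e2, 1e1, 1e0])
std_cq = np.array([17.1, 20.5, 23.9, 27.4, 30.8, 34.2, 37.7])

# Fit Cq = slope * log10(conc) + intercept; a slope near -3.32
# corresponds to 100% amplification efficiency.
slope, intercept = np.polyfit(np.log10(std_conc), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.1%}")

# Quantify unknowns from their Cq values by inverting the curve.
unknown_cq = np.array([25.2, 31.9])
conc = 10 ** ((unknown_cq - intercept) / slope)
print(conc)   # estimated leptospires/mL
```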

  4. Generalized sample entropy analysis for traffic signals based on similarity measure

    Science.gov (United States)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy with surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, matches signal patterns in a different way and thereby reveals distinct complexity behaviors. Simulations are conducted on synthetic data and traffic signals to provide a comparative study demonstrating the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation on the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.

  5. Innovative methods for inorganic sample preparation

    Energy Technology Data Exchange (ETDEWEB)

    Essling, A.M.; Huff, E.A.; Graczyk, D.G.

    1992-04-01

    Procedures and guidelines are given for the dissolution of a variety of selected materials using fusion, microwave, and Parr bomb techniques. These materials include germanium glass, corium-concrete mixtures, and zeolites. Emphasis is placed on sample-preparation approaches that produce a single master solution suitable for complete multielement characterization of the sample. In addition, data are presented on the soil microwave digestion method approved by the Environmental Protection Agency (EPA). Advantages and disadvantages of each sample-preparation technique are summarized.

  6. Innovative methods for inorganic sample preparation

    International Nuclear Information System (INIS)

    Essling, A.M.; Huff, E.A.; Graczyk, D.G.

    1992-04-01

    Procedures and guidelines are given for the dissolution of a variety of selected materials using fusion, microwave, and Parr bomb techniques. These materials include germanium glass, corium-concrete mixtures, and zeolites. Emphasis is placed on sample-preparation approaches that produce a single master solution suitable for complete multielement characterization of the sample. In addition, data are presented on the soil microwave digestion method approved by the Environmental Protection Agency (EPA). Advantages and disadvantages of each sample-preparation technique are summarized

  7. Improved LC-MS/MS method for the quantification of hepcidin-25 in clinical samples.

    Science.gov (United States)

    Abbas, Ioana M; Hoffmann, Holger; Montes-Bayón, María; Weller, Michael G

    2018-06-01

    Mass spectrometry-based methods play a crucial role in the quantification of the main iron metabolism regulator hepcidin by singling out the bioactive 25-residue peptide from the other naturally occurring N-truncated isoforms (hepcidin-20, -22, -24), which seem to be inactive in iron homeostasis. However, several difficulties arise in the MS analysis of hepcidin due to the "sticky" character of the peptide and the lack of suitable standards. Here, we propose the use of amino- and fluoro-silanized autosampler vials to reduce hepcidin interaction with laboratory glassware surfaces after testing several types of vials for the preparation of stock solutions and serum samples for isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC-MS/MS). Furthermore, we have investigated two sample preparation strategies and two chromatographic separation conditions with the aim of developing a LC-MS/MS method for the sensitive and reliable quantification of hepcidin-25 in serum samples. A chromatographic separation based on usual acidic mobile phases was compared with a novel approach involving the separation of hepcidin-25 with solvents at high pH containing 0.1% of ammonia. Both methods were applied to clinical samples in an intra-laboratory comparison of two LC-MS/MS methods using the same hepcidin-25 calibrators with good correlation of the results. Finally, we recommend a LC-MS/MS-based quantification method with a dynamic range of 0.5-40 μg/L for the assessment of hepcidin-25 in human serum that uses TFA-based mobile phases and silanized glass vials. Graphical abstract Structure of hepcidin-25 (Protein Data Bank, PDB ID 2KEF).

  8. Comparison of a New Cobinamide-Based Method to a Standard Laboratory Method for Measuring Cyanide in Human Blood

    Science.gov (United States)

    Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.

    2013-01-01

    Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045

  9. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    Science.gov (United States)

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm, while image quality is well maintained. PMID:23424608
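
    The geometric core of the approach, finding the ray parameters at which a ray crosses a family of equidistant parallel planes, can be sketched as follows; this is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def plane_sampling_params(origin, direction, normal, spacing, k_range):
    """Parameters t at which the ray origin + t * direction crosses the
    family of equidistant parallel planes normal . x = k * spacing."""
    denom = np.dot(normal, direction)
    if abs(denom) < 1e-12:
        return np.array([])          # ray parallel to the plane cluster
    ks = np.arange(*k_range)
    t = (ks * spacing - np.dot(normal, origin)) / denom
    return t[t >= 0.0]               # keep intersections in front of the ray

o = np.array([0.0, 0.0, -1.0])
d = np.array([0.3, 0.1, 1.0]); d /= np.linalg.norm(d)
ts = plane_sampling_params(o, d, np.array([0.0, 0.0, 1.0]), 0.1, (0, 10))
print(ts)   # sample the volume at o + t * d for these t values
```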

  10. A novel digestion method based on a choline chloride–oxalic acid deep eutectic solvent for determining Cu, Fe, and Zn in fish samples

    Energy Technology Data Exchange (ETDEWEB)

    Habibi, Emadaldin [Department of Marine Chemistry, Faculty of Marine Science, Khorramshahr University of Marine Science and Technology, P.O. BOX 669, Khorramshahr (Iran, Islamic Republic of); Ghanemi, Kamal, E-mail: Kamal.ghanemi@kmsu.ac.ir [Department of Marine Chemistry, Faculty of Marine Science, Khorramshahr University of Marine Science and Technology, P.O. BOX 669, Khorramshahr (Iran, Islamic Republic of); Marine Science Research Institute, Khorramshahr University of Marine Science and Technology, Khorramshahr (Iran, Islamic Republic of); Fallah-Mehrjardi, Mehdi [Department of Marine Chemistry, Faculty of Marine Science, Khorramshahr University of Marine Science and Technology, P.O. BOX 669, Khorramshahr (Iran, Islamic Republic of); Marine Science Research Institute, Khorramshahr University of Marine Science and Technology, Khorramshahr (Iran, Islamic Republic of); Dadolahi-Sohrab, Ali [Department of Marine Environment, Faculty of marine natural resources, Khorramshahr University of Marine Science and Technology, Khorramshahr (Iran, Islamic Republic of)

    2013-01-31

    Highlights: ► A novel digestion method: lack of concentrated acids or oxidizing reagents. ► First report of using choline chloride–oxalic acid (ChCl–Ox) for digestion. ► Complete dissolution of biological samples in ChCl–Ox for solubilization of metals. ► Extraction recoveries greater than 95%: validated by the fish protein CRM. ► Successfully applied in different fish tissues (Muscle, Liver, and Gills). -- Abstract: A novel and efficient digestion method based on choline chloride–oxalic acid (ChCl–Ox) deep eutectic solvent (DES) was developed for flame atomic absorption spectrometry (FAAS) determination of Cu, Zn, and Fe in biological fish samples. Key parameters that influence analyte recovery were investigated and optimized, using the fish protein certified reference material (CRM, DORM-3) throughout the procedure. In this method, 100 mg of the sample was dissolved in ChCl–Ox (1:2, molar ratio) at 100 °C for 45 min. Then, 5.0 mL HNO₃ (1.0 M) was added. After centrifugation, the supernatant solution was filtered, diluted to a known volume, and analyzed by FAAS. Under optimized conditions, an excellent agreement between the obtained results and the certified values was observed, using Student's t-test (P = 0.05); the extraction recovery of all the elements was greater than 95.3%. The proposed method was successfully applied to the determination of analytes in different tissues (muscle, liver, and gills) having a broad concentration range in a marine fish sample. The reproducibility of the method was validated by analyzing all samples by our method in a different laboratory, using inductively coupled plasma optical emission spectrometry (ICP-OES). For comparison, a conventional acid digestion (CAD) method was also used for the determination of analytes in all studied samples. The simplicity of the proposed experimental procedure, high extraction efficiency, short analysis time, lack of concentrated acids and oxidizing agents, and the

  11. A novel digestion method based on a choline chloride–oxalic acid deep eutectic solvent for determining Cu, Fe, and Zn in fish samples

    International Nuclear Information System (INIS)

    Habibi, Emadaldin; Ghanemi, Kamal; Fallah-Mehrjardi, Mehdi; Dadolahi-Sohrab, Ali

    2013-01-01

    Highlights: ► A novel digestion method: lack of concentrated acids or oxidizing reagents. ► First report of using choline chloride–oxalic acid (ChCl–Ox) for digestion. ► Complete dissolution of biological samples in ChCl–Ox for solubilization of metals. ► Extraction recoveries greater than 95%: validated by the fish protein CRM. ► Successfully applied in different fish tissues (Muscle, Liver, and Gills). -- Abstract: A novel and efficient digestion method based on choline chloride–oxalic acid (ChCl–Ox) deep eutectic solvent (DES) was developed for flame atomic absorption spectrometry (FAAS) determination of Cu, Zn, and Fe in biological fish samples. Key parameters that influence analyte recovery were investigated and optimized, using the fish protein certified reference material (CRM, DORM-3) throughout the procedure. In this method, 100 mg of the sample was dissolved in ChCl–Ox (1:2, molar ratio) at 100 °C for 45 min. Then, 5.0 mL HNO₃ (1.0 M) was added. After centrifugation, the supernatant solution was filtered, diluted to a known volume, and analyzed by FAAS. Under optimized conditions, an excellent agreement between the obtained results and the certified values was observed, using Student's t-test (P = 0.05); the extraction recovery of all the elements was greater than 95.3%. The proposed method was successfully applied to the determination of analytes in different tissues (muscle, liver, and gills) having a broad concentration range in a marine fish sample. The reproducibility of the method was validated by analyzing all samples by our method in a different laboratory, using inductively coupled plasma optical emission spectrometry (ICP-OES). For comparison, a conventional acid digestion (CAD) method was also used for the determination of analytes in all studied samples. The simplicity of the proposed experimental procedure, high extraction efficiency, short analysis time, lack of concentrated acids and oxidizing agents, and the use of

  12. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for target template updating. In general, the holistic bounding boxes that contain the tracked results are selected as positive samples. However, when objects are occluded, this simple strategy easily introduces noise into the training data set and the target template, eventually causing the tracker to drift away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates uniformly into several patches and propose a score function to calculate the score of each patch independently. The average score is then adopted to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. Experimental results on the object tracking benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
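
    The representative-sample selection step can be illustrated with a minimal non-negative least squares sketch (hypothetical feature vectors, not the authors' tracker): candidates that receive the largest non-negative weights when reconstructing the current target are kept for template updating:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Candidate samples (columns) collected during tracking, and the current
# target appearance vector to be represented; all values are synthetic.
candidates = rng.random((64, 20))        # 20 candidate patches, 64-dim features
target = candidates[:, [2, 7, 11]] @ np.array([0.5, 0.3, 0.2])
target += 0.01 * rng.normal(size=64)     # mild observation noise

# Non-negative least squares: target ~ candidates @ w, with w >= 0.
w, resid = nnls(candidates, target)

# Samples with the largest nonzero weights are treated as representative
# and retained for updating the target template.
top = np.argsort(w)[::-1][:3]
print(top, np.round(w[top], 3))
```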

  13. Chapter 12. Sampling and analytical methods

    International Nuclear Information System (INIS)

    Busenberg, E.; Plummer, L.N.; Cook, P.G.; Solomon, D.K.; Han, L.F.; Groening, M.; Oster, H.

    2006-01-01

    When water samples are taken for the analysis of CFCs, regardless of the sampling method used, contamination of samples by contact with atmospheric air (with its 'high' CFC concentrations) is a major concern. This is because groundwaters usually have lower CFC concentrations than those waters which have been exposed to modern air. Some groundwaters might not contain CFCs and, therefore, are most sensitive to trace contamination by atmospheric air. Thus, extreme precautions are needed to obtain uncontaminated samples when groundwaters, particularly those with older ages, are sampled. It is recommended at the start of any CFC investigation that samples from a CFC-free source be collected and analysed, as a check upon the sampling equipment and methodology. The CFC-free source might be a deep monitoring well or, alternatively, CFC-free water could be carefully prepared in the laboratory. It is especially important that all tubing, pumps and connections that will be used in the sampling campaign be checked in this manner

  14. Method for fractional solid-waste sampling and chemical analysis

    DEFF Research Database (Denmark)

    Riber, Christian; Rodushkin, I.; Spliid, Henrik

    2007-01-01

    By combining four subsampling methods and five digestion methods, paying attention to the heterogeneity and the material characteristics of the waste fractions, it was possible to determine 61 substances with low detection limits, reasonable variance, and high accuracy. For most of the substances of environmental concern, subsampling constituted a significant portion of variance (20-85% of the overall variation); only by increasing the sample size significantly can this variance be reduced. The accuracy and short-term reproducibility of the chemical characterization were good, as determined by the analysis of several relevant certified reference materials. Typically, six to eight different certified reference materials representing a range of concentration levels and matrix characteristics were included. Based on the documentation provided, the methods introduced were considered satisfactory for characterization of the chemical composition of waste-material fractions...

  15. Silicon based ultrafast optical waveform sampling

    DEFF Research Database (Denmark)

    Ji, Hua; Galili, Michael; Pu, Minhao

    2010-01-01

    A 300 nm × 450 nm × 5 mm silicon nanowire is designed and fabricated for a four-wave-mixing-based non-linear optical gate. Based on this silicon nanowire, an ultra-fast optical sampling system is successfully demonstrated using a free-running fiber laser with a carbon nanotube-based mode-locker as the sampling source. A clear eye-diagram of a 320 Gbit/s data signal is obtained. The temporal resolution of the sampling system is estimated to be 360 fs.

  16. Sampling methods for low-frequency electromagnetic imaging

    International Nuclear Information System (INIS)

    Gebauer, Bastian; Hanke, Martin; Schneider, Christoph

    2008-01-01

    For the detection of hidden objects by low-frequency electromagnetic imaging the linear sampling method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfils the assumptions for the fully justified variant of the linear sampling method, the so-called factorization method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds

  17. Adaptive Sampling based 3D Profile Measuring Method for Free-Form Surface

    Science.gov (United States)

    Duan, Xianyin; Zou, Yu; Gao, Qiang; Peng, Fangyu; Zhou, Min; Jiang, Guozhang

    2018-03-01

    In order to solve the problem of adaptability and scanning efficiency of current surface profile detection devices, a high-precision and high-efficiency detection approach based on self-adaptability is proposed for the surface contours of free-form surface parts. The contact mechanical probe and the non-contact laser probe are integrated according to a sampling approach of adaptive front-end path detection. First, the front-end path is measured by the non-contact laser probe, and the detection path is planned by the internal algorithm of the measuring instrument. Then a reasonable measurement sampling is completed by the contact mechanical probe according to the planned path. The detection approach can effectively improve the measurement efficiency of free-form surface contours and can simultaneously detect the surface contours of unknown free-form surfaces with different curvatures and even different rates of curvature. The detection approach proposed in this paper also has important reference value for free-form surface contour detection.

  18. Modeling and experimental validation of sawing based lander anchoring and sampling methods for asteroid exploration

    Science.gov (United States)

    Zhang, Jun; Dong, Chengcheng; Zhang, Hui; Li, Song; Song, Aiguo

    2018-05-01

    This paper presents a novel lander anchoring system based on sawing method for asteroid exploration. The system is composed of three robotic arms, three cutting discs, and a control system. The discs mounted at the end of the arms are able to penetrate into the rock surface of asteroids. After the discs cut into the rock surface, the self-locking function of the arms provides forces to fix the lander on the surface. Modeling, trajectory planning, simulations, mechanism design, and prototype fabrication of the anchoring system are discussed, respectively. The performances of the system are tested on different kinds of rocks, at different sawing angles, locations, and speeds. Results show that the system can cut 15 mm deep into granite rock in 180 s at sawing angle of 60°, with the average power of 58.41 W, and the "weight on bit" (WOB) of 8.637 N. The 7.8 kg anchoring system is capable of providing omni-directional anchoring forces, at least 225 N normal and 157 N tangent to the surface of the rock. The system has the advantages of low-weight, low energy consumption and balance forces, high anchoring efficiency and reliability, and could enable the lander to move and sample or assist astronauts and robots in walking and sampling on asteroids.

  19. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then extrapolated to the entire set of record pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures from seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method is a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large linkage projects where the number of record pairs produced may be very large, often running into millions.
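
    A stylized version of the estimation logic, with invented stratum counts: clerical review results from each similarity-score stratum, including strata below the acceptance cut-off, are extrapolated to the whole stratum to estimate true positives, false positives and false negatives:

```python
# Hypothetical linkage run: record pairs grouped into similarity-score
# strata, including strata below the acceptance cut-off (here, < 0.8).
strata = [
    # (score range, total pairs, sampled, true matches in sample, accepted?)
    ((0.9, 1.0), 50000, 200, 199, True),
    ((0.8, 0.9), 12000, 200, 180, True),
    ((0.7, 0.8),  8000, 200,  60, False),
    ((0.6, 0.7), 20000, 200,   8, False),
]

tp = fp = fn = 0.0
for (_, total, sampled, matches, accepted) in strata:
    est_matches = total * matches / sampled   # extrapolate review to stratum
    if accepted:
        tp += est_matches
        fp += total - est_matches
    else:
        fn += est_matches                     # matches lost below the cut-off

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision = {precision:.3f}, recall = {recall:.3f}")
```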

  20. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  1. Sampling in interview-based qualitative research: A theoretical and practical guide

    OpenAIRE

    Robinson, Oliver

    2014-01-01

    Sampling is central to the practice of qualitative methods, but compared with data collection and analysis, its processes are discussed relatively little. A four-point approach to sampling in qualitative interview-based research is presented and critically discussed in this article, which integrates theory and process for the following: (1) Defining a sample universe, by way of specifying inclusion and exclusion criteria for potential participation; (2) Deciding upon a sample size, through th...

  2. Preview-based sampling for controlling gaseous simulations

    KAUST Repository

    Huang, Ruoguan

    2011-01-01

    In this work, we describe an automated method for directing the control of a high resolution gaseous fluid simulation based on the results of a lower resolution preview simulation. Small variations in accuracy between low and high resolution grids can lead to divergent simulations, which is problematic for those wanting to achieve a desired behavior. Our goal is to provide a simple method for ensuring that the high resolution simulation matches key properties from the lower resolution simulation. We first let a user specify a fast, coarse simulation that will be used for guidance. Our automated method samples the data to be matched at various positions and scales in the simulation, or allows the user to identify key portions of the simulation to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems, and can ensure consistency of not only the velocity field, but also advected scalar values. Because the final simulation is naturally similar to the preview simulation, only minor controlling adjustments are needed, allowing a simpler control method than that used in prior keyframing approaches. Copyright © 2011 by the Association for Computing Machinery, Inc.

  3. Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems

    Science.gov (United States)

    Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros

    2015-04-01

    In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, it can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term 'representative' implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the

  4. Risk-Based Sampling: I Don't Want to Weight in Vain.

    Science.gov (United States)

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
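
    The estimation-error phenomenon is easy to reproduce. In the hypothetical sketch below, minimum-variance weights computed from a short estimation window can end up with a higher true variance than naive equal allocation, even though they are 'optimal' for the estimated covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical "risk signal" series for 5 producers, drawn from
# a known true covariance (here nearly homogeneous, so equal allocation
# is close to the true optimum).
true_cov = np.diag([1.0, 1.2, 0.9, 1.1, 1.0])
returns = rng.multivariate_normal(np.zeros(5), true_cov, size=30)  # short window

# Minimum-variance allocation from the *estimated* covariance:
# w proportional to inv(S) @ 1, normalised to sum to one.
S = np.cov(returns, rowvar=False)
w = np.linalg.solve(S, np.ones(5))
w /= w.sum()

equal = np.full(5, 0.2)
print("optimized weights:", np.round(w, 2))
print("true variance, optimized:", w @ true_cov @ w)
print("true variance, equal:    ", equal @ true_cov @ equal)
```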

  5. Neonatal blood gas sampling methods | Goenka | South African ...

    African Journals Online (AJOL)

    There is little published guidance that systematically evaluates the different methods of neonatal blood gas sampling, where each method has its individual benefits and risks. This review critically surveys the available evidence to generate a comparison between arterial and capillary blood gas sampling, focusing on their ...

  6. An improved selective sampling method

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Iida, Nobuyuki; Watanabe, Tamaki

    1986-01-01

    The coincidence methods which are currently used for the accurate activity standardisation of radionuclides require dead time and resolving time corrections which tend to become increasingly uncertain as count rates exceed about 10 K. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method using a fast multichannel analyser (50 ns ch⁻¹) for measuring the count rates. It is, in many ways, more convenient and possibly potentially more reliable to replace the MCA with scalers, and a circuit is described employing five scalers, two of them serving to measure the background correction. Results of comparisons using our new method and the coincidence method for measuring the activity of ⁶⁰Co sources yielded agreement within statistical uncertainties. (author)

  7. Sample preparation method for scanning force microscopy

    CERN Document Server

    Jankov, I R; Szente, R N; Carreno, M N P; Swart, J W; Landers, R

    2001-01-01

    We present a method of sample preparation for studies of ion implantation on metal surfaces. The method, employing a mechanical mask, is specially adapted for samples analysed by Scanning Force Microscopy. It was successfully tested on polycrystalline copper substrates implanted with phosphorus ions at an acceleration voltage of 39 keV. The changes of the electrical properties of the surface were measured by Kelvin Probe Force Microscopy and the surface composition was analysed by Auger Electron Spectroscopy.

  8. Combining land use information and small stream sampling with PCR-based methods for better characterization of diffuse sources of human fecal pollution.

    Science.gov (United States)

    Peed, Lindsay A; Nietch, Christopher T; Kelty, Catherine A; Meckes, Mark; Mooney, Thomas; Sivaganesan, Mano; Shanks, Orin C

    2011-07-01

    Diffuse sources of human fecal pollution allow for the direct discharge of waste into receiving waters with minimal or no treatment. Traditional culture-based methods are commonly used to characterize fecal pollution in ambient waters; however, these methods do not discern between human and other animal sources of fecal pollution, making it difficult to identify diffuse pollution sources. Human-associated quantitative real-time PCR (qPCR) methods in combination with low-order headwatershed sampling, precipitation information, and high-resolution geographic information system land use data can be useful for identifying diffuse sources of human fecal pollution in receiving waters. To test this assertion, this study monitored nine headwatersheds, potentially impacted by faulty septic systems and leaky sanitary sewer lines, over a two-year period. Human fecal pollution was measured using three different human-associated qPCR methods, and a positive significant correlation was seen between the abundance of human-associated genetic markers and septic systems following wet weather events. In contrast, a negative correlation was observed with sanitary sewer line densities, suggesting septic systems are the predominant diffuse source of human fecal pollution in the study area. These results demonstrate the advantages of combining water sampling, climate information, land-use computer-based modeling, and molecular biology disciplines to better characterize diffuse sources of human fecal pollution in environmental waters.

  9. A study on the weather sampling method for probabilistic consequence analysis

    International Nuclear Information System (INIS)

    Oh, Hae Cheol

    1996-02-01

    The main task of a probabilistic accident consequence analysis model is to predict the radiological situation and to provide a reliable quantitative data base for making decisions on countermeasures. The magnitude of an accident's consequences depends on the characteristics of the accident and the coincident weather. In probabilistic accident consequence analysis, it is necessary to repeat the atmospheric dispersion calculation with several hundred weather sequences to predict the full distribution of consequences which may occur following a postulated accidental release. It is desirable to select a representative sample of weather sequences from a meteorological record which is typical of the area over which the released radionuclides will disperse and which spans a sufficiently long period. The selection is done by means of sampling techniques applied to a full year of hourly weather data characteristic of the plant site. In this study, the proposed weighted importance sampling method selects weather sequences in proportion to each bin size, to closely approximate the true frequency distribution of weather conditions at the site. The weighted importance sampling method results in substantially less sampling uncertainty than the previous technique and can improve confidence in risk estimates.

  10. Problems with sampling desert tortoises: A simulation analysis based on field data

    Science.gov (United States)

    Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.

    2005-01-01

    The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km² (1-mi²) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from 2 1-mi² plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi² data using 1-km² and 0.25-km² plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi² plots. We also tested distance sampling by saturating a 1-mi² site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km² plots did not differ significantly from the estimates derived at 1-mi². The 0.25-km² subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient-of-variation-to-density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that the poor performance and bias of both sampling procedures were driven by insufficient sample size, suggesting that all efforts must be directed to increasing the numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
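
    For reference, Schnabel's multi-occasion mark-recapture estimator used in the abstract can be sketched as follows (a minimal Python version with Chapman's +1 bias correction; the survey numbers are hypothetical):

        def schnabel(catches, recaptures):
            """Schnabel mark-recapture population estimate.
            catches[t]   : number of animals caught on occasion t
            recaptures[t]: how many of those were already marked
            """
            marked_at_large = 0
            num = 0
            for c, r in zip(catches, recaptures):
                num += c * marked_at_large
                marked_at_large += c - r     # newly marked animals released
            total_recaps = sum(recaptures)
            return num / (total_recaps + 1)  # +1: Chapman's bias correction

        # Hypothetical four-occasion survey of a single plot
        print(schnabel([30, 28, 25, 27], [0, 6, 9, 11]))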

  11. A Proposal of New Spherical Particle Modeling Method Based on Stochastic Sampling of Particle Locations in Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Kim, Do Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jea Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to its high computational efficiency and user convenience, the implicit method has received attention; however, the implicit method of previous studies has low accuracy at high packing fractions. In this study, a new implicit method for modeling media containing distributed spherical particles in MC simulations, which can be used at any packing fraction with high accuracy, is proposed. A new concept in spherical particle sampling was developed to solve the problems of the previous implicit methods. The sampling method was verified by simulation in infinite and finite media. The results show that the implicit particle modeling with the proposed method was performed accurately at all packing fractions. It is expected that the proposed method can be efficiently utilized for media with distributed spherical particles, such as fusion reactor blankets, VHTR reactors, and shielding analyses.
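
    The paper's implicit sampling scheme is not spelled out in the abstract; for contrast, the sketch below shows the explicit alternative it aims to replace: a naive random-sequential-addition placement of non-overlapping sphere centers up to a target packing fraction (all numbers illustrative):

        import numpy as np

        def rsa_sphere_centers(box, radius, packing_fraction, rng, max_tries=200000):
            # Explicit modeling: place non-overlapping spheres one by one
            vol_sphere = 4.0 / 3.0 * np.pi * radius**3
            target_n = int(packing_fraction * box**3 / vol_sphere)
            centers = []
            for _ in range(max_tries):
                if len(centers) >= target_n:
                    break
                c = rng.random(3) * (box - 2 * radius) + radius
                if all(np.linalg.norm(c - o) >= 2 * radius for o in centers):
                    centers.append(c)   # accept only overlap-free candidates
            return np.array(centers)

        rng = np.random.default_rng(1)
        pts = rsa_sphere_centers(box=10.0, radius=0.5, packing_fraction=0.2, rng=rng)
        print(len(pts), "spheres placed")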

  12. Method for analysing radium in powder samples and its application to uranium prospecting

    International Nuclear Information System (INIS)

    Gong Xinxi; Hu Minzhi.

    1987-01-01

    The decay daughters of Rn released from the powder sample (soil) in a sealed bottle were collected on a piece of copper, and the radium in the sample can be measured by counting α-particles with an alphameter for uranium prospecting; the technique is thus called the radium method. This method has many advantages, such as high sensitivity (the lowest limit of detection for radium is 2.7 × 10⁻¹⁵ g per gram of sample), high efficiency, low cost, and ease of use. On the basis of measurements of more than 700 samples taken along 20 sections in 8 deposits, the results show that the radium method is better than γ-measurement and equal to the ²¹⁰Po method in its capability to discover anomalies. The author also summarizes the anomaly intensities of the radium method, the ²¹⁰Po method, and γ-measurement at the surface above deep blind ores, with or without surficial mineralization, together with the shapes of their profiles and the variation of Ra/²¹⁰Po ratios. According to the above-mentioned distinguishing features, uranium mineralization located at depth and/or in shallow parts can be distinguished. The combined application of the radium, ²¹⁰Po, and γ-measurement methods may be regarded as one of the important methods for anomaly assessment. Based on radium measurements of 771 stream sediment samples in an area of 100 km², it is demonstrated that the radium method can be used in the uranium reconnaissance and prospecting stages.

  13. Sampling and analysis methods for geothermal fluids and gases

    Energy Technology Data Exchange (ETDEWEB)

    Watson, J.C.

    1978-07-01

    The sampling procedures for geothermal fluids and gases include: sampling hot springs, fumaroles, etc.; sampling condensed brine and entrained gases; sampling steam-lines; low pressure separator systems; high pressure separator systems; two-phase sampling; downhole samplers; and miscellaneous methods. The recommended analytical methods compiled here cover physical properties, dissolved solids, and dissolved and entrained gases. The sequences of methods listed for each parameter are: wet chemical, gravimetric, colorimetric, electrode, atomic absorption, flame emission, x-ray fluorescence, inductively coupled plasma-atomic emission spectroscopy, ion exchange chromatography, spark source mass spectrometry, neutron activation analysis, and emission spectrometry. Material on correction of brine component concentrations for steam loss during flashing is presented. (MHR)

  14. A direct sampling method for inverse electromagnetic medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-09-01

    In this paper, we study the inverse electromagnetic medium scattering problem of estimating the support and shape of medium scatterers from scattered electric/magnetic near-field data. We shall develop a novel direct sampling method based on an analysis of electromagnetic scattering and the behavior of the fundamental solution. It is applicable to a few incident fields and needs only to compute inner products of the measured scattered field with the fundamental solutions located at sampling points. Hence, it is strictly direct, computationally very efficient and highly robust to the presence of data noise. Two- and three-dimensional numerical experiments indicate that it can provide reliable support estimates for multiple scatterers in the case of both exact and highly noisy data. © 2013 IOP Publishing Ltd.

  15. Sample-Based Extreme Learning Machine with Missing Data

    Directory of Open Access Journals (Sweden)

    Hang Gao

    2015-01-01

    Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and its unification of classification, regression, and so forth. Despite these merits, existing ELM algorithms cannot efficiently handle the issue of missing data, which is relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real-world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms achieve more accurate and stable learning than other state-of-the-art ones, especially in the case of higher missing ratios.
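
    For orientation, a vanilla ELM (not the paper's sample-based missing-data variant) fits in a few lines: a random hidden layer followed by a least-squares solve for the output weights. A minimal numpy sketch:

        import numpy as np

        def elm_train(X, y, n_hidden, rng):
            # Random, untrained hidden layer; only the output weights are fitted
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)
            beta = np.linalg.pinv(H) @ y     # Moore-Penrose least-squares solution
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        rng = np.random.default_rng(0)
        X = rng.random((100, 4))
        y = np.sin(X.sum(axis=1))            # toy regression target
        W, b, beta = elm_train(X, y, n_hidden=50, rng=rng)
        print("train MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))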

  16. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    Science.gov (United States)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering that manipulates matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers with very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency for measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method makes use of Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimation of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining the position which is the most likely to lie outside the required tolerance zone among the candidates, and is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structure surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency in measuring large-area structured surfaces.
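
    A simplified version of such adaptive selection can be sketched with scikit-learn's GP regressor, here using maximum posterior standard deviation as the selection criterion (a stand-in for the paper's tolerance-zone criterion; the toy 1-D surface and kernel settings are assumptions of ours):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def next_sample_point(X_seen, y_seen, candidates):
            # Fit a GP to the points measured so far and pick the most
            # uncertain candidate location as the next measurement
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1))
            gp.fit(X_seen, y_seen)
            _, std = gp.predict(candidates, return_std=True)
            return candidates[np.argmax(std)]

        surface = lambda x: 0.01 * np.sin(20 * x[:, 0])   # toy 1-D height profile
        X = np.array([[0.1], [0.5], [0.9]])               # initial coarse scan
        cand = np.linspace(0, 1, 200).reshape(-1, 1)
        for _ in range(5):                                # iterative refinement
            x_new = next_sample_point(X, surface(X), cand)
            X = np.vstack([X, x_new])
        print(np.sort(X.ravel()))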

  17. A method for under-sampled ecological network data analysis: plant-pollination as case study

    Directory of Open Access Journals (Sweden)

    Peter B. Sorensen

    2012-01-01

    Full Text Available In this paper, we develop a method, termed the Interaction Distribution (ID) method, for the analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. The ID method can support assessment and inference of under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant-animal pollination data sets of flower visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) p_i,j, the probability for a visit made by the i'th pollinator species to take place on the j'th plant species; (2) q_i,j, the probability for a visit received by the j'th plant species to be made by the i'th pollinator. The method applies the Dirichlet distribution to estimate these two probabilities, based on a given empirical data set. The estimated mean values for p_i,j and q_i,j reflect the relative differences between recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of p_i,j and q_i,j decreases with higher numbers of recorded visits.
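
    Under a symmetric Dirichlet prior (an assumption of ours; the paper defines its own prior), the posterior means of p_i,j and q_i,j reduce to smoothed row- and column-normalized visit counts, e.g.:

        import numpy as np

        # Hypothetical visit counts: rows = pollinator species, cols = plant species
        counts = np.array([[12, 3, 0],
                           [ 1, 8, 2],
                           [ 0, 0, 5]])
        alpha = 1.0   # symmetric Dirichlet prior parameter (assumed)

        # p[i, j]: probability that a visit MADE BY pollinator i lands on plant j
        p = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)
        # q[i, j]: probability that a visit RECEIVED BY plant j comes from pollinator i
        q = (counts + alpha) / (counts + alpha).sum(axis=0, keepdims=True)

        print(p.round(2))
        print(q.round(2))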

  18. Towards a new method for the quantification of metabolites in the biological sample

    International Nuclear Information System (INIS)

    Neugnot, B.

    2005-03-01

    The quantification of metabolites is a key step in drug development. The aim of this Ph.D. work was to study the feasibility of a new method for this quantification, in the biological sample, without the drawbacks (cost, time, ethics) of the classical quantification methods based on metabolite synthesis or on administration of the radiolabelled drug to man. Our strategy consists in determining the response factor, in mass spectrometry, of the metabolites. This approach is based on tritium labelling of the metabolites, ex vivo, by isotopic exchange. The labelling step was studied with deuterium. Metabolites of a model drug, recovered from in vitro or urinary samples, were labelled in three ways (Crabtree's catalyst/D2, deuterated trifluoroacetic acid, or rhodium chloride/D2O). Then, the transposition to tritium labelling was studied, and the first results are very promising for the ultimate validation of the method. (author)

  19. A passive guard for low thermal conductivity measurement of small samples by the hot plate method

    International Nuclear Information System (INIS)

    Jannot, Yves; Godefroy, Justine; Degiovanni, Alain; Grigorova-Moutiers, Veneta

    2017-01-01

    Hot plate methods under steady state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T₀ and T₁ of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. On one hand, the latter method can be used only if the ratio thickness/width of the sample is sufficiently low; on the other hand, the guarded hot plate method requires large-width samples (typical cross section of 0.6 × 0.6 m²). That is why neither method can be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T₀ and T₁ compared to the ambient temperature Tₐ, enabling the estimation of the thermal conductivity with a centered hot plate method by applying the 1D heat flux model. It will be shown that these optimal values do not depend on the size or on the thermal conductivity of the samples (in the range 0.015–0.2 W m⁻¹ K⁻¹), but only on Tₐ. The experimental results obtained validate the method for several reference samples for values of the ratio thickness/width up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross-section, down to 0.045 × 0.045 m². (paper)

  20. Solvent-assisted dispersive solid-phase extraction: A sample preparation method for trace detection of diazinon in urine and environmental water samples.

    Science.gov (United States)

    Aladaghlo, Zolfaghar; Fakhari, Alireza; Behbahani, Mohammad

    2016-09-02

    In this research, a sample preparation method termed solvent-assisted dispersive solid-phase extraction (SA-DSPE) was applied. The method is based on dispersing the sorbent into the aqueous sample to maximize the interaction surface. In this approach, dispersion of the sorbent at a very low milligram level was achieved by injecting a solution of the sorbent and disperser solvent into the aqueous sample. A cloudy solution formed from the dispersion of the sorbent in the bulk aqueous sample. After preconcentration of the diazinon, the cloudy solution was centrifuged, and the diazinon in the sediment phase was dissolved in ethanol and determined by gas chromatography with flame ionization detection. Under the optimized conditions (solution pH 7.0; sorbent: benzophenone, 2%; disperser solvent: ethanol, 500 μL; centrifugation at 4000 rpm for 3 min), the method detection limit for diazinon was 0.2, 0.3, 0.3 and 0.3 μg L⁻¹ for distilled water, lake water, waste water and urine samples, respectively. Furthermore, the preconcentration factor was 363.8, 356.1, 360.7 and 353.38 in distilled water, waste water, lake water and urine samples, respectively. SA-DSPE was successfully used for trace monitoring of diazinon in urine, lake and waste water samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Application of WSP method in analysis of environmental samples

    International Nuclear Information System (INIS)

    Stacho, M.; Slugen, V.; Hinca, R.; Sojak, S.; Krnac, S.

    2014-01-01

    Detection of activity in natural samples is specific, especially because of its low level and high background interferences. Background interferences can be reduced using a low-background chamber. A measurement geometry in the shape of a Marinelli beaker is commonly used owing to the low level of activity in natural samples. The Peak Net Area (PNA) method is the widely accepted technique for the analysis of gamma-ray spectra. It is based on the net area calculation of the full energy peak; therefore, it takes into account only a fraction of the measured gamma-ray spectrum. On the other hand, the Whole Spectrum Processing (WSP) approach to gamma analysis makes it possible to use the entire information contained in the spectrum. This significantly raises the efficiency and improves the energy resolution of the analysis. A principal step for the WSP application is building up a suitable response operator. Problems appear when suitable standard calibration sources are unavailable. This may occur in the case of large-volume samples and/or in the analysis of a high energy range. A combined experimental and mathematical calibration may be a suitable solution. Many different detectors have been used to register gamma rays and their energy. HPGe detectors produce the highest resolution commonly available today; therefore, they are the detectors most often used in the activity analysis of natural samples. Scintillation detectors analysed using the PNA method can also be used in simple cases, but for complicated spectra they are practically inapplicable. The WSP approach improves the resolution of scintillation detectors and expands their applicability. The WSP method allowed a significant improvement of the energy resolution and the separation of the ¹³⁷Cs 661 keV peak from the ²¹⁴Bi 609 keV peak. On the other hand, the statistical fluctuations in the lower part of the spectrum, highlighted by background subtraction, mean that this part is still not reliably analyzable. (authors)

  2. NIM: A Node Influence Based Method for Cancer Classification

    Directory of Open Access Journals (Sweden)

    Yiwen Wang

    2014-01-01

    Full Text Available The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it is possible to classify different kinds of cancers using DNA microarrays. Our main idea is to confront the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of the training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART.

  3. Wet-digestion of environmental sample using silver-mediated electrochemical method

    International Nuclear Information System (INIS)

    Kuwabara, Jun

    2010-01-01

    An application of the silver-mediated electrochemical method to environmental samples as an effective digestion method for iodine analysis was tried. The usual digestion method for ¹²⁹I in many types of environmental sample is the combustion method using a quartz glass tube. The chemical yield of iodine in the combustion method decreases depending on the type of sample. The silver-mediated electrochemical method is expected to achieve very low loss of iodine. In this study, a dried kombu (Laminaria) sample was digested in an electrochemical cell. For 1 g of sample, digestion was completed in about 24 hours under electrical conditions of <10 V and <2 A. After the digestion, oxidized species of iodine were reduced to iodide by adding sodium sulfite, and the precipitate of silver iodide was then obtained. (author)

  4. Curvature computation in volume-of-fluid method based on point-cloud sampling

    Science.gov (United States)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

    This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may cause deterioration of the interface tension force estimates, often resulting in inaccurate results for interface-tension-dominated flows. Many techniques have been presented over the last years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® extending its standard VOF implementation, the interFoam solver.

  5. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness.

  6. LVQ-SMOTE - Learning Vector Quantization based Synthetic Minority Over-sampling Technique for biomedical data.

    Science.gov (United States)

    Nakamura, Munehiro; Kajiwara, Yusuke; Otsuka, Atsushi; Kimura, Haruhiko

    2013-10-02

    Over-sampling methods based on the Synthetic Minority Over-sampling Technique (SMOTE) have been proposed for classification problems with imbalanced biomedical data. However, the existing over-sampling methods achieve slightly better or sometimes worse results than the simplest SMOTE. In order to improve the effectiveness of SMOTE, this paper presents a novel over-sampling method using codebooks obtained by learning vector quantization. In general, even when an existing SMOTE is applied to a biomedical dataset, the empty feature space is still so huge that most classification algorithms would not perform well on estimating borderlines between classes. To tackle this problem, our over-sampling method generates synthetic samples which occupy more feature space than those of other SMOTE algorithms. In brief, our over-sampling method generates useful synthetic samples by referring to actual samples taken from real-world datasets. Experiments on eight real-world imbalanced datasets demonstrate that our proposed over-sampling method performs better than the simplest SMOTE on four of five standard classification algorithms. Moreover, the performance of our method increases further if the latest SMOTE variant, called MWMOTE, is used in our algorithm. Experiments on datasets for β-turn type prediction show some important patterns that have not been seen in previous analyses. In conclusion, the proposed over-sampling method generates useful synthetic samples for the classification of imbalanced biomedical data. Besides, the proposed over-sampling method is basically compatible with basic classification algorithms and existing over-sampling methods.
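
    For context, the "simplest SMOTE" baseline the abstract refers to interpolates between a minority sample and one of its k nearest minority neighbours; a minimal numpy sketch (not the authors' LVQ-based variant):

        import numpy as np

        def smote(X_min, n_new, k, rng):
            # Basic SMOTE: synthesize points on segments between minority
            # samples and their k nearest minority neighbours
            out = []
            for _ in range(n_new):
                i = rng.integers(len(X_min))
                d = np.linalg.norm(X_min - X_min[i], axis=1)
                nbrs = np.argsort(d)[1:k + 1]   # skip the point itself
                j = rng.choice(nbrs)
                lam = rng.random()              # random point on the segment
                out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
            return np.array(out)

        rng = np.random.default_rng(7)
        X_min = rng.normal(size=(20, 2))        # minority-class samples
        print(smote(X_min, n_new=10, k=5, rng=rng).shape)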

  7. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    Full Text Available In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using a polygon-based methodology. Our study is based on a simulation process which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) is the length of the polygons' perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when the sample size increases from 5 km to 100 km.
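
    The core comparison step can be mimicked with scipy: draw samples of increasing size from a surrogate "population" of positional errors and test each against the population with the two-sample Kolmogorov–Smirnov test (the toy distribution below is an assumption, not the study's data):

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(3)
        population = rng.lognormal(mean=0.0, sigma=0.5, size=100000)  # "true" errors

        for n in (50, 500, 5000):           # stand-ins for growing sample size
            sample = rng.choice(population, size=n, replace=False)
            stat, p = ks_2samp(sample, population)
            # Larger n -> smaller KS distance between sample and population CDFs
            print(f"n={n:5d}  KS statistic={stat:.4f}  p={p:.3f}")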

  8. Ultrasonic-energy enhance the ionic liquid-based dual microextraction to preconcentrate the lead in ground and stored rain water samples as compared to conventional shaking method.

    Science.gov (United States)

    Nizamani, Sooraj; Kazi, Tasneem G; Afridi, Hassan I

    2018-01-01

    An efficient preconcentration technique based on an ultrasonic-assisted ionic liquid-based dual microextraction (UA-ILDµE) method has been developed to preconcentrate lead (Pb²⁺) in ground and stored rain water. In the proposed method, Pb²⁺ was complexed with a chelating agent (dithizone), while an ionic liquid (1-butyl-3-methylimidazolium hexafluorophosphate) was used for extraction. Ultrasonic irradiation and an electrical shaking system were applied to enhance the dispersion and extraction of the Pb²⁺ complex in aqueous samples. In the second phase, dual microextraction (DµE), the enriched Pb²⁺ complex in the ionic liquid was back-extracted into an acidic aqueous solution and finally determined by flame atomic absorption spectrometry. Some major analytical parameters that influenced the extraction efficiency of the developed method, such as pH, concentration of ligand, volume of ionic liquid and samples, shaking time in the thermostatic electrical shaker and ultrasonic bath, back-extracting HNO₃ volume, matrix effect, and centrifugation time and rate, were optimized. At a sample volume of 25 mL, the calculated preconcentration factor was 62.2. The limit of detection of the proposed procedure for Pb²⁺ ions was found to be 0.54 μg L⁻¹. The validation of the developed method was performed by the analysis of the certified water sample SRM 1643e and by the standard addition method in a real water sample. The extraction recovery of Pb²⁺ was enhanced by ≥2% with a shaking time of 80 s in the ultrasonic bath compared with the thermostatic electrical shaker, which required up to 10 min for optimum recovery. The developed procedure was successfully used for the enrichment of Pb²⁺ in ground and stored rain water (surface water) samples of an endemic region of Pakistan. The resulting data indicated that the ground water samples were highly contaminated with Pb²⁺, while some of the surface water samples also had values of Pb²⁺ higher than the permissible limit of

  9. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available The texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g. a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that need to render large dynamic volumes in a low-resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  10. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or the internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete picture of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameter values are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
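
    The external and internal coefficients mentioned here are those of Bass-type diffusion models; a minimal sketch of the classic Bass backbone (without the paper's dynamic potential market or free-sample terms, and with invented parameter values) shows how they drive adoption:

        import numpy as np

        def bass_adoption(p, q, m, periods):
            # Classic Bass diffusion: new adopters per period are driven by an
            # external (p) and an internal/imitation (q) coefficient;
            # m is the potential market size
            cum = 0.0
            new = []
            for _ in range(periods):
                n_t = (p + q * cum / m) * (m - cum)
                new.append(n_t)
                cum += n_t
            return np.array(new)

        # Larger p or q speeds saturation, which lowers the value of sampling
        print(bass_adoption(p=0.03, q=0.38, m=10000, periods=10).round(0))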

  11. Fast and simple method for semiquantitative determination of calcium propionate in bread samples.

    Science.gov (United States)

    Phechkrajang, Chutima Matayatsuk; Yooyong, Surin

    2017-04-01

    Calcium propionate has been widely used as a preservative in bakery and in bread. It is sometimes not carefully used, or a high concentration is added to preserve products. High consumption of calcium propionate can lead to several health problems. This study aims to develop a fast and simple semiquantitative method based on color complex formation for the determination of calcium propionate in a bread sample. A red-brown complex was obtained from the reaction of ferric ammonium sulfate and propionate anion. The product was rapidly formed and easily observed with the concentration of propionate anion >0.4 mg/mL. A high-performance liquid chromatography (HPLC) method was also developed and validated for comparison. Twenty-two bread samples from three markets near Bangkok were randomly selected and assayed for calcium propionate using the above two developed methods. The results showed that 19/22 samples contained calcium propionate >2000 mg/kg. The results of the complex formation method agreed with the HPLC method. Copyright © 2016. Published by Elsevier B.V.

  12. Fast and simple method for semiquantitative determination of calcium propionate in bread samples

    Directory of Open Access Journals (Sweden)

    Chutima Matayatsuk Phechkrajang

    2017-04-01

    Full Text Available Calcium propionate has been widely used as a preservative in bakery and in bread. It is sometimes not carefully used, or a high concentration is added to preserve products. High consumption of calcium propionate can lead to several health problems. This study aims to develop a fast and simple semiquantitative method based on color complex formation for the determination of calcium propionate in a bread sample. A red–brown complex was obtained from the reaction of ferric ammonium sulfate and propionate anion. The product was rapidly formed and easily observed with the concentration of propionate anion >0.4 mg/mL. A high-performance liquid chromatography (HPLC) method was also developed and validated for comparison. Twenty-two bread samples from three markets near Bangkok were randomly selected and assayed for calcium propionate using the above two developed methods. The results showed that 19/22 samples contained calcium propionate >2000 mg/kg. The results of the complex formation method agreed with the HPLC method.

  13. A Highly Sensitive and Selective Method for the Determination of an Iodate in Table-salt Samples Using Malachite Green-based Spectrophotometry.

    Science.gov (United States)

    Konkayan, Mongkol; Limchoowong, Nunticha; Sricharoen, Phitchan; Chanthai, Saksit

    2016-01-01

    A simple, rapid, and sensitive malachite green-based spectrophotometric method for the selective trace determination of iodate has been developed and presented for the first time. The reaction specifically involves the liberation of iodine in the presence of an excess of iodide under acidic conditions, followed by an instantaneous reaction between the liberated iodine and malachite green dye. The optimum conditions were a buffer solution of pH 5.2 in the presence of 40 mg L⁻¹ potassium iodide and 1.5 × 10⁻⁵ M malachite green for a 5-min incubation time. The iodate contents in some table-salt samples were in the range of 26 to 45 mg kg⁻¹, while those of drinking water, tap water, canal water, and seawater samples were not detectable (below the limit of detection of 96 ng mL⁻¹), with satisfactory recoveries of between 93 and 108%. The results agreed with those obtained using ICP-OES for comparison.

  14. Background estimation in short-wave region during determination of total sample composition by x-ray fluorescence method

    International Nuclear Information System (INIS)

    Simakov, V.A.; Kordyukov, S.V.; Petrov, E.N.

    1988-01-01

    A method of background estimation in the short-wave spectral region during determination of total sample composition by the X-ray fluorescence method is described. Thirteen types of different rocks with considerable variations in base composition and Zr, Nb, Th, and U contents below 7×10⁻³ % are investigated. The suggested method of background accounting provides a smaller statistical error of the background estimation than a direct isolated measurement, and its determination in the short-wave region is reliable independently of the sample base. The possibilities of the suggested method are estimated for artificial mixtures whose main-component contents correspond to technological concentrates of niobium, zirconium, and tantalum.

  15. Development of a solid-phase microextraction-based method for sampling of persistent chlorinated hydrocarbons in an urbanized coastal environment.

    Science.gov (United States)

    Zeng, Eddy Y; Tsukada, David; Diehl, Dario W

    2004-11-01

    Solid-phase microextraction (SPME) has been used as an in situ sampling technique for a wide range of volatile organic chemicals, but SPME field sampling of nonvolatile organic pollutants has not been reported. This paper describes the development of an SPME-based sampling method employing a poly(dimethylsiloxane) (PDMS)-coated (100-μm thickness) fiber as the sorbent phase. The laboratory-calibrated PDMS-coated fibers were used to construct SPME samplers, and field tests were conducted at three coastal locations off southern California to determine the equilibrium sampling time and compare the efficacy of the SPME samplers with that of an Infiltrex 100 water pumping system (Axys Environmental Systems Ltd., Sidney, British Columbia, Canada). p,p'-DDE and o,p'-DDE were the components consistently detected in the SPME samples among the 42 polychlorinated biphenyl congeners and 17 chlorinated pesticides targeted. SPME samplers deployed at two locations with moderate and high levels of contamination for 18 and 30 d, respectively, attained statistically identical concentrations of p,p'-DDE and o,p'-DDE. In addition, SPME samplers deployed for 23 and 43 d, respectively, at a location of low contamination also contained statistically identical concentrations of p,p'-DDE. These results indicate that equilibrium could be reached within 18 to 23 d. The concentrations of p,p'-DDE, o,p'-DDE, or p,p'-DDD obtained with the SPME samplers and the Infiltrex 100 system were virtually identical. In particular, two water column concentration profiles of p,p'-DDE and o,p'-DDE acquired by the SPME samplers at a highly contaminated site on the Palos Verdes Shelf overlapped with the profiles obtained by the Infiltrex 100 system in 1997. The field tests not only reveal the advantages of the SPME samplers compared to the Infiltrex 100 system and other integrative passive devices but also indicate the need to improve the sensitivity of the SPME-based sampling technique.

  16. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.

  17. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
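
    A minimal sketch of the multiplier estimator and a delta-method approximation of its random error (our simplification with an assumed design effect, not the authors' full variance method; all numbers hypothetical):

        import numpy as np

        def multiplier_estimate(M, p_hat, n, deff=2.0):
            """Population size N = M / P with a delta-method standard error.
            M     : count of unique objects distributed (assumed known exactly)
            p_hat : proportion reporting receipt in the RDS survey
            n     : survey sample size; deff: assumed RDS design effect
            """
            N_hat = M / p_hat
            var_p = deff * p_hat * (1 - p_hat) / n
            se_N = M * np.sqrt(var_p) / p_hat**2   # |dN/dP| * se(P)
            return N_hat, se_N

        N, se = multiplier_estimate(M=3000, p_hat=0.4, n=400, deff=2.0)
        print(f"N = {N:.0f} +/- {1.96 * se:.0f} (95% CI half-width)")

    The sketch makes the abstract's point concrete: the standard error grows as p_hat shrinks (via the 1/p_hat² factor), which is why a higher P, from longer reference periods or more distributed objects, tightens the estimate.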

  18. Comparing two sampling methods to engage hard-to-reach communities in research priority setting.

    Science.gov (United States)

    Valerio, Melissa A; Rodriguez, Natalia; Winkler, Paula; Lopez, Jaime; Dennison, Meagen; Liang, Yuanyuan; Turner, Barbara J

    2016-10-28

    Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention, as well as the two communities' stakeholder priorities, based on mean ratings of their ideas by importance and feasibility for implementation in their community. Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85%) consented, 52 (95%) attended the first meeting, and 36 (65%) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90%) consented, 36 (58%) attended the first meeting, and 26 (42%) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). … research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was higher for the purposive/convenience sampling group, and on city improvements/transportation services (P = 0.004), which was higher for the snowball sampling group. In each of the two similar hard-to-reach communities, a community advisory board partnered with researchers

  19. Comparison of three methods for recovery of Brucella canis DNA from canine blood samples.

    Science.gov (United States)

    Batinga, Maria Cryskely A; Dos Santos, Jaíne C; Lima, Julia T R; Bigotto, Maria Fernanda D; Muner, Kerstin; Faita, Thalita; Soares, Rodrigo M; da Silva, David A V; Oliveira, Trícia M F S; Ferreira, Helena L; Diniz, Jaqueline A; Keid, Lara B

    2017-12-01

    Brucella canis, a gram-negative, facultative intracellular and zoonotic bacterium, causes canine brucellosis. Direct methods are the most appropriate for the detection of canine brucellosis, and bacterial isolation from blood samples has been employed as the gold-standard method. However, due to the delay in obtaining results and the biological risk of bacterial culturing, the polymerase chain reaction (PCR) has been successfully used as an alternative method for the diagnosis of the infection. Sample preparation is a key step for successful PCR, and protocols that provide high DNA yield and purity are recommended to ensure high diagnostic sensitivity. The objective of this study was to evaluate the performance of PCR for the diagnosis of B. canis infection in 36 dogs by testing DNA from whole blood obtained through different extraction and purification protocols. Methods 1 and 2 were based on a commercial kit, using protocols recommended for DNA purification from whole blood and tissue samples, respectively. Method 3 was an in-house method based on enzymatic lysis and purification using organic solvents. The results of the PCR on samples obtained through the three different DNA extraction protocols were compared to blood culture. Of the 36 dogs, 13 (36.1%) were positive by blood culturing, while nine (25.0%), 14 (38.8%), and 15 (41.6%) were positive by PCR after DNA extraction using methods 1, 2 and 3, respectively. PCR performed on DNA purified by method 2 was as efficient as blood culturing and as PCR performed on DNA purified with the in-house method, but had the advantage of being less laborious and is, therefore, a suitable alternative for direct B. canis detection in dogs. Copyright © 2017. Published by Elsevier B.V.

  20. Quantitative cerebral H₂¹⁵O perfusion PET without arterial blood sampling, a method based on washout rate

    International Nuclear Information System (INIS)

    Treyer, Valerie; Jobin, Mathieu; Burger, Cyrill; Buck, Alfred; Teneggi, Vincenzo

    2003-01-01

    The quantitative determination of regional cerebral blood flow (rCBF) is important in certain clinical and research applications. The disadvantage of most quantitative methods using H₂¹⁵O positron emission tomography (PET) is the need for arterial blood sampling. In this study a new non-invasive method for rCBF quantification was evaluated. The method is based on the washout rate of H₂¹⁵O following intravenous injection. All results were obtained with Alpert's method, which yields maps of the washin parameter K1 (rCBF_K1) and the washout parameter k2 (rCBF_k2). Maps of rCBF_K1 were computed with measured arterial input curves. Maps of rCBF_k2* were calculated with a standard input curve which was the mean of eight individual input curves. The mean of grey matter rCBF_k2* (CBF_k2*) was then compared with the mean of rCBF_K1 (CBF_K1) in ten healthy volunteer smokers who underwent two PET sessions on day 1 and day 3. Each session consisted of three serial H₂¹⁵O scans. Reproducibility was analysed using the rCBF difference scan 3-scan 2 in each session. The perfusion reserve (PR = rCBF_acetazolamide - rCBF_baseline) following acetazolamide challenge was calculated with rCBF_k2* (PR_k2*) and rCBF_K1 (PR_K1) in ten patients with cerebrovascular disease. The difference CBF_k2* - CBF_K1 was 5.90±8.12 ml/min/100 ml (mean±SD, n=55). The SD of the scan 3-scan 1 difference was 6.1% for rCBF_k2* and rCBF_K1, demonstrating a high reproducibility. Perfusion reserve values determined with rCBF_K1 and rCBF_k2* were in high agreement (difference PR_k2* - PR_K1 = -6.5±10.4%, PR expressed as percentage increase from baseline). In conclusion, a new non-invasive method for the quantitative determination of rCBF is presented. The method is in good agreement with Alpert's original method and the reproducibility is high. It does not require arterial blood sampling, yields quantitative voxel-by-voxel maps of rCBF, and is computationally efficient and easy to implement.
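
    The washout idea can be caricatured in a few lines: in a one-tissue compartment model the late tissue curve decays roughly as exp(-k2*t), so k2 can be read off a log-linear fit and converted to flow via an assumed partition coefficient (a heavily simplified sketch; all numbers below are illustrative, not from the study):

        import numpy as np

        # Synthetic late washout phase: C(t) ~ exp(-k2 * t); flow f = k2 * p,
        # with p the blood/tissue partition coefficient (~0.9 ml/ml, assumed)
        t = np.linspace(60, 180, 25)              # seconds after the bolus peak
        k2_true = 0.0096                          # 1/s, illustrative value
        rng = np.random.default_rng(5)
        C = 100 * np.exp(-k2_true * t) * rng.normal(1.0, 0.01, t.size)

        slope = np.polyfit(t, np.log(C), 1)[0]    # log-linear fit of the washout
        k2_est = -slope
        p = 0.9
        # Convert ml/ml/s to the conventional ml/min/100 ml (factor 60 * 100)
        print("rCBF ~", k2_est * p * 6000, "ml/min/100 ml")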

  1. Sampling Key Populations for HIV Surveillance: Results From Eight Cross-Sectional Studies Using Respondent-Driven Sampling and Venue-Based Snowball Sampling.

    Science.gov (United States)

    Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan

    2017-10-20

    In using regularly collected or existing surveillance data to characterize engagement in human immunodeficiency virus (HIV) services among marginalized populations, differences in sampling methods may produce different pictures of the target population and may therefore result in different priorities for response. The objective of this study was to use existing data to evaluate the sample distribution of eight studies of female sex workers (FSW) and men who have sex with men (MSM), who were recruited using different sampling approaches in two locations within Sub-Saharan Africa: Manzini, Swaziland and Yaoundé, Cameroon. MSM and FSW participants were recruited using either respondent-driven sampling (RDS) or venue-based snowball sampling. Recruitment took place between 2011 and 2016. Participants at each study site were administered a face-to-face survey to assess sociodemographics, along with the prevalence of self-reported HIV status, frequency of HIV testing, stigma, and other HIV-related characteristics. Crude and RDS-adjusted prevalence estimates were calculated. Crude prevalence estimates from the venue-based snowball samples were compared with the overlap of the RDS-adjusted prevalence estimates, between both FSW and MSM in Cameroon and Swaziland. RDS samples tended to be younger (MSM aged 18-21 years in Swaziland: 47.6% [139/310] in RDS vs 24.3% [42/173] in Snowball; in Cameroon: 47.9% [99/306] in RDS vs 20.1% [52/259] in Snowball; FSW aged 18-21 years in Swaziland: 42.5% [82/325] in RDS vs 8.0% [20/249] in Snowball; in Cameroon: 15.6% [75/576] in RDS vs 8.1% [25/306] in Snowball). They were less educated (MSM, primary school completed or less, in Swaziland: 42.6% [109/310] in RDS vs 4.0% [7/173] in Snowball; in Cameroon: 46.2% [138/306] in RDS vs 14.3% [37/259] in Snowball; FSW, primary school completed or less, in Swaziland: 86.6% [281/325] in RDS vs 23.9% [59/247] in Snowball; in Cameroon: 87.4% [520/576] in RDS vs 77.5% [238/307] in Snowball) than the snowball

  2. A novel method for single sample multi-axial nanoindentation of hydrated heterogeneous tissues based on testing great white shark jaws.

    Science.gov (United States)

    Ferrara, Toni L; Boughton, Philip; Slavich, Eve; Wroe, Stephen

    2013-01-01

    Nanomechanical testing methods that are suitable for a range of hydrated tissues are crucial for understanding biological systems. Nanoindentation of tissues can provide valuable insights into biology, tissue engineering and biomimetic design. However, testing hydrated biological samples still remains a significant challenge. Shark jaw cartilage is an ideal substrate for developing a method to test hydrated tissues because it is a unique heterogeneous composite of both mineralized (hard) and non-mineralized (soft) layers and possesses a jaw geometry that is challenging to test mechanically. The aim of this study is to develop a novel method for obtaining multidirectional nanomechanical properties for both layers of jaw cartilage from a single sample, taken from the great white shark (Carcharodon carcharias). A method for obtaining multidirectional data from a single sample is necessary for examining tissue mechanics in this shark because it is a protected species and hence samples may be difficult to obtain. Results show that this method maintains hydration of samples that would otherwise rapidly dehydrate. Our study is the first analysis of nanomechanical properties of great white shark jaw cartilage. Variations in nanomechanical properties were detected in different orthogonal directions for both layers of jaw cartilage in this species. The data further suggest that the mineralized layer of shark jaw cartilage is less stiff than previously posited. Our method allows multidirectional nanomechanical properties to be obtained from a single, small, hydrated heterogeneous sample. Our technique is therefore suitable for use when specimens are rare, valuable or limited in quantity, such as samples obtained from endangered species or pathological tissues. We also outline a method for tip-to-optic calibration that facilitates nanoindentation of soft biological tissues. Our technique may help address the critical need for a nanomechanical testing method that is applicable

  3. A novel method for single sample multi-axial nanoindentation of hydrated heterogeneous tissues based on testing great white shark jaws.

    Directory of Open Access Journals (Sweden)

    Toni L Ferrara

    Full Text Available Nanomechanical testing methods that are suitable for a range of hydrated tissues are crucial for understanding biological systems. Nanoindentation of tissues can provide valuable insights into biology, tissue engineering and biomimetic design. However, testing hydrated biological samples remains a significant challenge. Shark jaw cartilage is an ideal substrate for developing a method to test hydrated tissues because it is a unique heterogeneous composite of both mineralized (hard) and non-mineralized (soft) layers and possesses a jaw geometry that is challenging to test mechanically. The aim of this study is to develop a novel method for obtaining multidirectional nanomechanical properties for both layers of jaw cartilage from a single sample, taken from the great white shark (Carcharodon carcharias). A method for obtaining multidirectional data from a single sample is necessary for examining tissue mechanics in this shark because it is a protected species and hence samples may be difficult to obtain. Results show that this method maintains hydration of samples that would otherwise rapidly dehydrate. Our study is the first analysis of nanomechanical properties of great white shark jaw cartilage. Variation in nanomechanical properties was detected in different orthogonal directions for both layers of jaw cartilage in this species. The data further suggest that the mineralized layer of shark jaw cartilage is less stiff than previously posited. Our method allows multidirectional nanomechanical properties to be obtained from a single, small, hydrated heterogeneous sample. Our technique is therefore suitable for use when specimens are rare, valuable or limited in quantity, such as samples obtained from endangered species or pathological tissues. We also outline a method for tip-to-optic calibration that facilitates nanoindentation of soft biological tissues. Our technique may help address the critical need for a nanomechanical testing method

  4. Validation and comparison of two sampling methods to assess dermal exposure to drilling fluids and crude oil.

    Science.gov (United States)

    Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez

    2014-06-01

    Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment. © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  5. [Establishment and assessment of QA/QC method for sampling and analysis of atmosphere background CO2].

    Science.gov (United States)

    Liu, Li-xin; Zhou, Ling-xi; Xia, Ling-jun; Wang, Hong-yang; Fang, Shuang-xi

    2014-12-01

    To strengthen scientific management and sharing of greenhouse gas data obtained from atmospheric background stations in China, it is important to ensure the standardization of quality assurance and quality control methods for background CO2 sampling and analysis. Based on the greenhouse gas sampling and observation experience of CMA, and using portable sampling observation and the WS-CRDS analysis technique as an example, the quality assurance measures for atmospheric CO2 sampling and observation at the Waliguan station (Qinghai), the glass bottle quality assurance measures and the systematic quality control method during sample analysis, the correction method during data processing, as well as the data grading quality markers and the data fitting interpolation method were systematically introduced. Finally, using this research method, the CO2 sampling and observation data at the atmospheric background stations in 3 typical regions were processed and the concentration variation characteristics were analyzed, indicating that this research method can effectively capture the influences of regional and local environmental factors on the observation results, and reflect the characteristics of natural and human activities in an objective and accurate way.

  6. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF

    Energy Technology Data Exchange (ETDEWEB)

    Baz-Lomba, J.A., E-mail: jba@niva.no [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway); Faculty of Medicine, University of Oslo, PO box 1078 Blindern, 0316, Oslo (Norway); Reid, Malcolm J.; Thomas, Kevin V. [Norwegian Institute for Water Research, Gaustadalléen 21, NO-0349, Oslo (Norway)

    2016-03-31

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS^E mode. Quantification and matrix-effect correction were achieved with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng/L). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge since their market is highly dynamic, with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (for the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MS^E data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types: influent wastewater, passive sampler extracts and pooled urine samples, resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. - Highlights: • A novel reiterative workflow
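
    The matching criteria quoted above (5 ppm mass accuracy plus a 2.5% retention-time tolerance) translate directly into a simple screening rule. The sketch below shows that rule on a hypothetical two-entry suspect list; the real workflow runs on the commercial UNIFI platform against over 1000 compounds and additionally requires two matching fragments, which the sketch omits.

        # Minimal suspect-screening match: accept a detected feature when its m/z is
        # within 5 ppm of a suspect's exact mass and its retention time is within
        # 2.5% of the library value. Suspect entries are invented placeholders.

        SUSPECTS = [
            {"name": "suspect_A", "exact_mass": 236.1757, "rt_min": 6.42},
            {"name": "suspect_B", "exact_mass": 312.1943, "rt_min": 8.10},
        ]

        def match_feature(mz, rt_min, ppm_tol=5.0, rt_tol=0.025):
            hits = []
            for s in SUSPECTS:
                ppm_error = abs(mz - s["exact_mass"]) / s["exact_mass"] * 1e6
                rt_error = abs(rt_min - s["rt_min"]) / s["rt_min"]
                if ppm_error <= ppm_tol and rt_error <= rt_tol:
                    hits.append((s["name"], round(ppm_error, 1)))
            return hits

        print(match_feature(236.1765, 6.50))   # ~3.4 ppm, RT within 2.5% -> hit
        print(match_feature(236.1850, 6.50))   # ~39 ppm off -> no hit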

  7. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF

    International Nuclear Information System (INIS)

    Baz-Lomba, J.A.; Reid, Malcolm J.; Thomas, Kevin V.

    2016-01-01

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches in monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method, using Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS^E mode. Quantification and matrix-effect correction were achieved with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60% and the limits of quantification were in the low nanogram-per-liter range (0.4–187 ng/L). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge since their market is highly dynamic, with new compounds continuously entering the market. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (for the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MS^E data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types: influent wastewater, passive sampler extracts and pooled urine samples, resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. - Highlights: • A novel reiterative workflow based on three

  8. Bio-sample detection on paper-based devices with inkjet printer-sprayed reagents.

    Science.gov (United States)

    Liang, Wun-Hong; Chu, Chien-Hung; Yang, Ruey-Jen

    2015-12-01

    The reagent required for bio-sample detection on paper-based analytical devices is generally introduced manually using a pipette. Such an approach is time-consuming, particularly if a large number of devices are required. Automated methods provide a far more convenient solution for large-scale production, but incur a substantial cost. Accordingly, the present study proposes a low-cost method for producing paper-based analytical devices in which the biochemical reagents are sprayed onto the device directly using a modified commercial inkjet printer. The feasibility of the proposed method is demonstrated by performing aspartate aminotransferase (AST) and alanine aminotransferase (ALT) tests using simple two-dimensional (2D) paper-based devices. In both cases, the reaction process is analyzed using an image-processing-based colorimetric method. The experimental results show that for AST detection within the 0-105 U/l concentration range, the optimal observation time is around four minutes, while for ALT detection in the 0-125 U/l concentration range, the optimal observation time is approximately one minute. Finally, for both samples, the detection performance of the sprayed-reagent analytical devices is insensitive to the glucose concentration. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Rapid-viability PCR method for detection of live, virulent Bacillus anthracis in environmental samples.

    Science.gov (United States)

    Létant, Sonia E; Murphy, Gloria A; Alfaro, Teneile M; Avila, Julie R; Kane, Staci R; Raber, Ellen; Bunt, Thomas M; Shah, Sanjiv R

    2011-09-01

    In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples.
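
    The core of RV-PCR is the decision rule hinted at above: growth of live organisms during incubation lowers the cycle threshold (Ct) of the post-incubation aliquot relative to the pre-incubation one. A minimal sketch of that rule follows; the 6-cycle cutoff is an assumed illustrative value, not the threshold validated in the study.

        # RV-PCR style viability call from pre-/post-incubation Ct values.
        DELTA_CT_CUTOFF = 6.0   # assumed cutoff; tune against validation data

        def rv_pcr_call(ct_before, ct_after, cutoff=DELTA_CT_CUTOFF):
            """Return True when the Ct drop after incubation indicates growth of
            live organisms. A Ct of None means no amplification was observed."""
            if ct_after is None:
                return False                 # nothing amplifies -> no live target
            if ct_before is None:
                return True                  # signal appeared only after growth
            return (ct_before - ct_after) >= cutoff

        print(rv_pcr_call(35.2, 24.8))   # large Ct drop -> viable organisms indicated
        print(rv_pcr_call(34.9, 34.1))   # residual DNA only -> not viable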

  10. Comparison of Standard Culture-Based Method to Culture-Independent Method for Evaluation of Hygiene Effects on the Hand Microbiome

    Science.gov (United States)

    Leff, J.; Henley, J.; Tittl, J.; De Nardo, E.; Butler, M.; Griggs, R.; Fierer, N.

    2017-01-01

    Hands play a critical role in the transmission of microbiota on one’s own body, between individuals, and on environmental surfaces. Effectively measuring the composition of the hand microbiome is important to hand hygiene science, which has implications for human health. Hand hygiene products are evaluated using standard culture-based methods, but standard test methods for culture-independent microbiome characterization are lacking. We sampled the hands of 50 participants using swab-based and glove-based methods prior to and following four hand hygiene treatments (using a nonantimicrobial hand wash, alcohol-based hand sanitizer [ABHS], a 70% ethanol solution, or tap water). We compared results among culture plate counts, 16S rRNA gene sequencing of DNA extracted directly from hands, and sequencing of DNA extracted from culture plates. Glove-based sampling yielded higher numbers of unique operational taxonomic units (OTUs) but had less diversity in bacterial community composition than swab-based sampling. We detected treatment-induced changes in diversity only by using swab-based samples (P < 0.001); we were unable to detect changes with glove-based samples. Bacterial cell counts significantly decreased with use of the ABHS (P < 0.05) and ethanol control (P < 0.05). Skin hydration at baseline correlated with bacterial abundances, bacterial community composition, pH, and redness across subjects. The importance of the method choice was substantial. These findings are important to ensure improvement of hand hygiene industry methods and for future hand microbiome studies. On the basis of our results and previously published studies, we propose recommendations for best practices in hand microbiome research. PMID:28351915

  11. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and bridges the gap between prior- and posterior-sampling frameworks.
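
    A minimal sketch of the idea, assuming a 1-D toy forward model: a Gaussian Process surrogate stands in for the expensive forward operator inside a Metropolis sampler and is periodically refined with exact evaluations. The model, kernel, and tuning constants are all illustrative, not the authors' implementation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)

        def forward(m):                    # stand-in for an expensive simulation
            return np.sin(3.0 * m) + 0.5 * m

        d_obs, sigma = forward(0.8), 0.1   # synthetic observed datum

        X = rng.uniform(-2, 2, 8).reshape(-1, 1)          # initial design points
        y = np.array([forward(m) for m in X.ravel()])
        gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(X, y)

        def log_like(pred):
            return -0.5 * ((pred - d_obs) / sigma) ** 2

        m, samples = 0.0, []
        for i in range(2000):
            prop = m + 0.3 * rng.standard_normal()        # random-walk proposal
            cur = log_like(gp.predict([[m]])[0])          # cheap surrogate solves
            new = log_like(gp.predict([[prop]])[0])
            if np.log(rng.uniform()) < new - cur:
                m = prop
            if i % 200 == 0:               # refine surrogate with one exact solve
                X = np.vstack([X, [[m]]])
                y = np.append(y, forward(m))
                gp.fit(X, y)
            samples.append(m)

        print("posterior mean estimate:", np.mean(samples[500:]))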

  12. Standard methods for sampling North American freshwater fishes

    Science.gov (United States)

    Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.

    2009-01-01

    This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing coldwater and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. The book also provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent the transfer of invasive species while sampling.

  13. Solution-based targeted genomic enrichment for precious DNA samples

    Directory of Open Access Journals (Sweden)

    Shearer Aiden

    2012-05-01

    Full Text Available Abstract Background Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: (1) modifying or eliminating time-consuming steps; (2) increasing yield to reduce input DNA and excessive PCR cycling; and (3) enhancing reproducibility. Results We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. Conclusions This solution-based TGE method for Illumina sequencing is optimized for small- or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.

  14. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and the uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate the response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with the point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling-based reliability sensitivity analysis method is employed to further reduce the computational effort when design variables are distributional parameters of input random variables. The proposed methodology is applied to two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
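
    A compressed sketch of the response-surface step, under simplifying assumptions: supporting points are taken from probabilists' Gauss-Hermite nodes, a quadratic surface with one cross term is fitted by least squares, and the failure probability is then estimated by Monte Carlo on the surrogate. The limit-state function and polynomial form are invented for illustration and omit the paper's order-selection and nesting logic.

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss

        def g(x1, x2):                     # toy limit state: failure when g < 0
            return 3.0 - x1**2 - 0.5 * x2

        pts, _ = hermegauss(5)             # probabilists' Gauss-Hermite nodes
        X1, X2 = np.meshgrid(pts, pts)
        design = np.column_stack([X1.ravel(), X2.ravel()])
        resp = g(design[:, 0], design[:, 1])

        def basis(x):                      # quadratic surface with one cross term
            x1, x2 = x[:, 0], x[:, 1]
            return np.column_stack(
                [np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

        coef, *_ = np.linalg.lstsq(basis(design), resp, rcond=None)

        rng = np.random.default_rng(1)
        mc = rng.standard_normal((200_000, 2))     # standard normal inputs
        pf = np.mean(basis(mc) @ coef < 0.0)
        print(f"estimated failure probability: {pf:.4f}")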

  15. Comparing Microbiome Sampling Methods in a Wild Mammal: Fecal and Intestinal Samples Record Different Signals of Host Ecology, Evolution.

    Science.gov (United States)

    Ingala, Melissa R; Simmons, Nancy B; Wultsch, Claudia; Krampis, Konstantinos; Speer, Kelly A; Perkins, Susan L

    2018-01-01

    The gut microbiome is a community of host-associated symbiotic microbes that fulfills multiple key roles in host metabolism, immune function, and tissue development. Given the ability of the microbiome to impact host fitness, there is increasing interest in studying the microbiome of wild animals to better understand these communities in the context of host ecology and evolution. Human microbiome research protocols are well established, but wildlife microbiome research is still a developing field. Currently, there is no standardized set of best practices guiding the collection of microbiome samples from wildlife. Gut microflora are typically sampled either by fecal collection, rectal swabbing, or by destructively sampling the intestinal contents of the host animal. Studies rarely include more than one sampling technique and no comparison of these methods currently exists for a wild mammal. Although some studies have hypothesized that the fecal microbiome is a nested subset of the intestinal microbiome, this hypothesis has not been formally tested. To address these issues, we examined guano (feces) and distal intestinal mucosa from 19 species of free-ranging bats from Lamanai, Belize, using 16S rRNA amplicon sequencing to compare microbial communities across sample types. We found that the diversity and composition of intestine and guano samples differed substantially. In addition, we conclude that signatures of host evolution are retained by studying gut microbiomes based on mucosal tissue samples, but not fecal samples. Conversely, fecal samples retained more signal of host diet than intestinal samples. These results suggest that fecal and intestinal sampling methods are not interchangeable, and that these two microbiotas record different information about the host from which they are isolated.

  16. Comparing Microbiome Sampling Methods in a Wild Mammal: Fecal and Intestinal Samples Record Different Signals of Host Ecology, Evolution

    Directory of Open Access Journals (Sweden)

    Melissa R. Ingala

    2018-05-01

    Full Text Available The gut microbiome is a community of host-associated symbiotic microbes that fulfills multiple key roles in host metabolism, immune function, and tissue development. Given the ability of the microbiome to impact host fitness, there is increasing interest in studying the microbiome of wild animals to better understand these communities in the context of host ecology and evolution. Human microbiome research protocols are well established, but wildlife microbiome research is still a developing field. Currently, there is no standardized set of best practices guiding the collection of microbiome samples from wildlife. Gut microflora are typically sampled either by fecal collection, rectal swabbing, or by destructively sampling the intestinal contents of the host animal. Studies rarely include more than one sampling technique and no comparison of these methods currently exists for a wild mammal. Although some studies have hypothesized that the fecal microbiome is a nested subset of the intestinal microbiome, this hypothesis has not been formally tested. To address these issues, we examined guano (feces) and distal intestinal mucosa from 19 species of free-ranging bats from Lamanai, Belize, using 16S rRNA amplicon sequencing to compare microbial communities across sample types. We found that the diversity and composition of intestine and guano samples differed substantially. In addition, we conclude that signatures of host evolution are retained by studying gut microbiomes based on mucosal tissue samples, but not fecal samples. Conversely, fecal samples retained more signal of host diet than intestinal samples. These results suggest that fecal and intestinal sampling methods are not interchangeable, and that these two microbiotas record different information about the host from which they are isolated.

  17. Comparing two sampling methods to engage hard-to-reach communities in research priority setting

    Directory of Open Access Journals (Sweden)

    Melissa A. Valerio

    2016-10-01

    Full Text Available Abstract Background Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. Methods In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: (1) snowball sampling, a chain-referral method, or (2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities’ stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Results Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85%) consented, 52 (95%) attended the first meeting, and 36 (65%) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90%) consented, 36 (58%) attended the first meeting, and 26 (42%) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was higher for the purposive/convenience sampling group, and for city improvements

  18. A Study of Assimilation Bias in Name-Based Sampling of Migrants

    Directory of Open Access Journals (Sweden)

    Schnell Rainer

    2014-06-01

    Full Text Available The use of personal names for screening is an increasingly popular sampling technique for migrant populations. Although this is often an effective sampling procedure, very little is known about the properties of this method. Based on a large German survey, this article compares characteristics of respondents whose names have been correctly classified as belonging to a migrant population with respondents who are migrants and whose names have not been classified as belonging to a migrant population. Although significant differences were found for some variables, even with some large effect sizes, the overall bias introduced by name-based sampling (NBS) is small as long as procedures with small false-negative rates are employed.

  19. Observer-Based Stabilization of Spacecraft Rendezvous with Variable Sampling and Sensor Nonlinearity

    Directory of Open Access Journals (Sweden)

    Zhuoshi Li

    2013-01-01

    Full Text Available This paper addresses the observer-based control problem of spacecraft rendezvous with nonuniform sampling period. The relative dynamic model is based on the classical Clohessy-Wiltshire equation, and sensor nonlinearity and sampling are considered together in a unified framework. The purpose of this paper is to perform an observer-based controller synthesis by using sampled and saturated output measurements, such that the resulting closed-loop system is exponentially stable. A time-dependent Lyapunov functional is developed which depends on time and the upper bound of the sampling period and also does not grow along the input update times. The controller design problem is solved in terms of the linear matrix inequality method, and the obtained results are less conservative than using the traditional Lyapunov functionals. Finally, a numerical simulation example is built to show the validity of the developed sampled-data control strategy.

  20. Method of plasma etching Ga-based compound semiconductors

    Science.gov (United States)

    Qiu, Weibin; Goddard, Lynford L.

    2012-12-25

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl4 gas into the chamber, flowing Ar gas into the chamber, and flowing H2 gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.

  1. Do sampling methods differ in their utility for ecological monitoring? Comparison of line-point intercept, grid-point intercept, and ocular estimate methods

    Science.gov (United States)

    This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...
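
    Two of the comparison criteria named above, interchangeability (rank correlation between methods) and precision (coefficient of variation), reduce to short calculations. The sketch below runs both on synthetic paired transect covers; the numbers are not the study's data.

        import numpy as np
        from scipy.stats import spearmanr

        line_point = np.array([12.1, 30.4, 22.8, 41.0, 18.3, 27.5])  # % cover
        grid_point = np.array([13.0, 28.9, 24.1, 39.2, 17.7, 29.0])  # same transects

        rho, p = spearmanr(line_point, grid_point)       # interchangeability
        print(f"rank correlation: rho={rho:.2f} (p={p:.3f})")

        for name, data in [("line-point", line_point), ("grid-point", grid_point)]:
            cv = data.std(ddof=1) / data.mean() * 100    # precision
            print(f"{name}: CV = {cv:.1f}%")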

  2. Passive sampling methods for contaminated sediments

    DEFF Research Database (Denmark)

    Peijnenburg, Willie J.G.M.; Teasdale, Peter R.; Reible, Danny

    2014-01-01

    “Dissolved” concentrations of contaminants in sediment porewater (Cfree) provide a more relevant exposure metric for risk assessment than do total concentrations. Passive sampling methods (PSMs) for estimating Cfree offer the potential for cost-efficient and accurate in situ characterization...

  3. Field evaluation of personal sampling methods for multiple bioaerosols.

    Science.gov (United States)

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  4. [Sampling methods for PM2.5 from stationary sources: a review].

    Science.gov (United States)

    Jiang, Jing-Kun; Deng, Jian-Guo; Li, Zhen; Li, Xing-Hua; Duan, Lei; Hao, Ji-Ming

    2014-05-01

    The new China national ambient air quality standard was published in 2012 and will be implemented in 2016. To meet the requirements of this new standard, monitoring and controlling PM2.5 emissions from stationary sources are very important. However, so far there is no national standard method for sampling PM2.5 from stationary sources. Different sampling methods for PM2.5 from stationary sources and relevant international standards were reviewed in this study. This includes methods for PM2.5 sampling in flue gas and methods for PM2.5 sampling after dilution. Both advantages and disadvantages of these sampling methods were discussed. For environmental management, methods for PM2.5 sampling in flue gas, such as the impactor and virtual impactor, were suggested as a standard to determine filterable PM2.5. To evaluate the environmental and health effects of PM2.5 from stationary sources, a standard dilution method for sampling of total PM2.5 should be established.

  5. Comparison of Standard Culture-Based Method to Culture-Independent Method for Evaluation of Hygiene Effects on the Hand Microbiome

    Directory of Open Access Journals (Sweden)

    C. Zapka

    2017-03-01

    Full Text Available Hands play a critical role in the transmission of microbiota on one’s own body, between individuals, and on environmental surfaces. Effectively measuring the composition of the hand microbiome is important to hand hygiene science, which has implications for human health. Hand hygiene products are evaluated using standard culture-based methods, but standard test methods for culture-independent microbiome characterization are lacking. We sampled the hands of 50 participants using swab-based and glove-based methods prior to and following four hand hygiene treatments (using a nonantimicrobial hand wash, alcohol-based hand sanitizer [ABHS], a 70% ethanol solution, or tap water). We compared results among culture plate counts, 16S rRNA gene sequencing of DNA extracted directly from hands, and sequencing of DNA extracted from culture plates. Glove-based sampling yielded higher numbers of unique operational taxonomic units (OTUs) but had less diversity in bacterial community composition than swab-based sampling. We detected treatment-induced changes in diversity only by using swab-based samples (P < 0.001); we were unable to detect changes with glove-based samples. Bacterial cell counts significantly decreased with use of the ABHS (P < 0.05) and ethanol control (P < 0.05). Skin hydration at baseline correlated with bacterial abundances, bacterial community composition, pH, and redness across subjects. The importance of the method choice was substantial. These findings are important to ensure improvement of hand hygiene industry methods and for future hand microbiome studies. On the basis of our results and previously published studies, we propose recommendations for best practices in hand microbiome research.

  6. An empirical comparison of isolate-based and sample-based definitions of antimicrobial resistance and their effect on estimates of prevalence.

    Science.gov (United States)

    Humphry, R W; Evans, J; Webster, C; Tongue, S C; Innocent, G T; Gunn, G J

    2018-02-01

    Antimicrobial resistance is primarily a problem in human medicine, but there are unquantified links of transmission in both directions between animal and human populations. Quantitative assessment of the costs and benefits of reduced antimicrobial usage in livestock requires robust quantification of the transmission of resistance between animals, the environment and the human population. This in turn requires appropriate measurement of resistance. To tackle this we selected two different methods for determining whether a sample is resistant: one based on screening a sample, the other on testing individual isolates. Our overall objective was to explore the differences arising from the choice of measurement. A literature search demonstrated the widespread use of testing of individual isolates. The first aim of this study was to compare, quantitatively, sample-level and isolate-level screening. Cattle or sheep faecal samples (n=41) submitted for routine parasitology were tested for antimicrobial resistance in two ways: (1) "streak" direct culture onto plates containing the antimicrobial of interest; (2) determination of the minimum inhibitory concentration (MIC) of 8-10 isolates per sample compared to published MIC thresholds. Two antibiotics (ampicillin and nalidixic acid) were tested. With ampicillin, direct culture resulted in more than double the number of resistant samples than the MIC method based on eight individual isolates. The second aim of this study was to demonstrate the utility of the observed relationship between these two measures of antimicrobial resistance to re-estimate the prevalence of antimicrobial resistance from a previous study, in which we had used "streak" cultures. Bootstrap methods were used to estimate the proportion of samples that would have tested resistant in the historic study, had we used the isolate-based MIC method instead. Our bootstrap results indicate that our estimates of the prevalence of antimicrobial resistance would have been
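
    The bootstrap re-estimation step described above can be sketched in a few lines: resample the paired (streak, isolate-MIC) results with replacement and record the isolate-method prevalence in each replicate. The paired outcomes below are synthetic stand-ins chosen to mirror the reported ratio, not the study's data.

        import numpy as np

        rng = np.random.default_rng(42)
        # Rows: (resistant by streak culture, resistant by 8-isolate MIC method).
        pairs = np.array([(1, 1)] * 8 + [(1, 0)] * 12 + [(0, 0)] * 21)

        boot = []
        for _ in range(10_000):
            resample = pairs[rng.integers(0, len(pairs), len(pairs))]
            boot.append(resample[:, 1].mean())   # isolate-based prevalence

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"isolate-based prevalence {pairs[:, 1].mean():.2f} "
              f"(95% bootstrap CI {lo:.2f}-{hi:.2f})")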

  7. 40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.

    Science.gov (United States)

    2010-07-01

    40 CFR, Protection of Environment (revised 2010-07-01), PROGRAMS (CONTINUED), REGULATION OF FUELS AND FUEL ADDITIVES, General Provisions, § 80.8 Sampling methods for gasoline and diesel fuel. The sampling methods specified in this section shall be used to collect samples...

  8. Characterization of hazardous waste sites: a methods manual. Volume 2. Available sampling methods (second edition)

    International Nuclear Information System (INIS)

    Ford, P.J.; Turina, P.J.; Seely, D.E.

    1984-12-01

    Investigations at hazardous waste sites and sites of chemical spills often require on-site measurements and sampling activities to assess the type and extent of contamination. This document is a compilation of sampling methods and materials suitable to address most needs that arise during routine waste site and hazardous spill investigations. The sampling methods presented in this document are organized by medium and were selected on the basis of practicality, economics, representativeness, compatibility with analytical considerations, and safety, as well as other criteria. In addition to sampling procedures, sample handling and shipping, chain-of-custody procedures, instrument certification, equipment fabrication, and equipment decontamination procedures are described. Sampling methods for soil, sludges, sediments, and bulk materials cover the solids medium. Ten methods are detailed for surface waters, groundwater and containerized liquids; twelve are presented for ambient air, soil gases and vapors, and headspace gases. A brief discussion of ionizing radiation survey instruments is also provided

  9. Adaptive sampling method in deep-penetration particle transport problem

    International Nuclear Information System (INIS)

    Wang Ruihong; Ji Zhicheng; Pei Lucheng

    2012-01-01

    Deep-penetration problems have been among the most difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle-transport random-walk system that treats the emission point as a sampling station is built. An adaptive sampling scheme is then derived to obtain better solutions from the accumulated information. The main advantage of the adaptive scheme is that it chooses the most suitable number of samples from the emission-point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define an importance function of the particle state and to make the number of emission-particle samples proportional to the importance function. The numerical results show that the adaptive scheme can, to some degree, overcome the tendency to underestimate the result, and that the adaptive importance sampling method gives satisfactory results as well. (authors)
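
    The stated principle, that the number of emission samples should be proportional to an importance function, is the standard importance-sampling re-weighting idea. A generic sketch on a toy deep-penetration surrogate (the probability that an exponential path length exceeds 10 mean free paths) follows; it is not the paper's transport scheme.

        import numpy as np

        rng = np.random.default_rng(7)
        threshold, n = 10.0, 100_000

        # Analog sampling: the rare event almost never scores.
        analog = rng.exponential(1.0, n)
        print("analog estimate:   ", np.mean(analog > threshold))

        # Importance sampling: draw from a stretched density q and multiply each
        # score by the likelihood ratio f(x)/q(x) to keep the estimator unbiased.
        q_mean = 10.0
        xs = rng.exponential(q_mean, n)
        weights = np.exp(-xs) / (np.exp(-xs / q_mean) / q_mean)
        est = np.mean((xs > threshold) * weights)
        print("importance-sampled:", est, " exact:", np.exp(-threshold))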

  10. Using the sampling method to propagate uncertainties of physical parameters in systems with fissile material

    International Nuclear Information System (INIS)

    Campolina, Daniel de Almeida Magalhães

    2015-01-01

    There is an uncertainty for all the components that comprise the model of a nuclear system. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for realistic calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. By analyzing the uncertainty propagated to the effective neutron multiplication factor (keff), the effects of the sample size, computational uncertainty and efficiency of a random number generator to represent the distributions that characterize physical uncertainty in a light water reactor were investigated. A program entitled GBsample was implemented to enable the application of the random sampling method, which requires an automated process and robust statistical tools. The program was based on a black-box model, and the MCNPX code was used with parallel processing for the calculation of particle transport. The uncertainties considered were taken from a benchmark experiment in which the effects on keff due to physical uncertainties were assessed through a conservative method. The GBsample script automates the sampling-based method, uses multiprocessing and assures the necessary robustness. It was found that the efficiency of the random sampling method can be improved by selecting distributions obtained from a random number generator in order to obtain a better representation of uncertainty figures. After the convergence of the method is achieved, in order to reduce the variance of the propagated uncertainty without increasing computational time, the best number of components to be sampled was found. It was also observed that if the sampling method is used to calculate the effect on keff due to physical uncertainties reported by
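
    The black-box pattern the abstract describes, sampling each uncertain input from its distribution and running the transport code once per sample, can be sketched as below. A placeholder function stands in for an MCNPX run, and the distribution parameters and 28 pcm computational noise are illustrative assumptions.

        import numpy as np
        from multiprocessing import Pool

        def run_transport(radius_cm):
            """Placeholder for one MCNPX run; returns a fake k_eff with noise."""
            local = np.random.default_rng(abs(hash(radius_cm)) % 2**32)
            return 1.0 + 0.02 * (radius_cm - 0.5) + local.normal(0.0, 28e-5)

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            # 93 sampled burnable-poison radii (1-sigma uncertainty assumed).
            radii = rng.normal(loc=0.5, scale=0.005, size=93)
            with Pool() as pool:
                keff = np.array(pool.map(run_transport, radii))
            print(f"k_eff mean = {keff.mean():.5f}, "
                  f"propagated std = {keff.std(ddof=1) * 1e5:.0f} pcm")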

  11. Field evaluation of personal sampling methods for multiple bioaerosols.

    Directory of Open Access Journals (Sweden)

    Chi-Hsun Wang

    Full Text Available Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  12. Methods for sampling geographically mobile female traders in an East African market setting

    Science.gov (United States)

    Achiro, Lillian; Kwena, Zachary A.; McFarland, Willi; Neilands, Torsten B.; Cohen, Craig R.; Bukusi, Elizabeth A.; Camlin, Carol S.

    2018-01-01

    Background The role of migration in the spread of HIV in sub-Saharan Africa is well-documented. Yet migration and HIV research have often focused on HIV risks to male migrants and their partners, or migrants overall, often failing to measure the risks to women via their direct involvement in migration. Inconsistent measures of mobility, gender biases in those measures, and limited data sources for sex-specific population-based estimates of mobility have contributed to a paucity of research on the HIV prevention and care needs of migrant and highly mobile women. This study addresses an urgent need for novel methods for developing probability-based, systematic samples of highly mobile women, focusing on a population of female traders operating out of one of the largest open-air markets in East Africa. Our method involves three stages: (1) identification and mapping of all market stall locations using Global Positioning System (GPS) coordinates; (2) using female market vendor stall GPS coordinates to build the sampling frame using replicates; and (3) using maps and GPS data for recruitment of study participants. Results The locations of 6,390 vendor stalls were mapped using GPS. Of these, 4,064 stalls occupied by women (63.6%) were used to draw four replicates of 128 stalls each, and a fifth replicate of 15 pre-selected random alternates, for a total of 527 stalls assigned to one of five replicates. Staff visited 323 stalls from the first three replicates and from these successfully recruited 306 female vendors into the study, for a participation rate of 94.7%. Mobilization strategies and involving traders association representatives in participant recruitment were critical to the study’s success. Conclusion The study’s high participation rate suggests that this geospatial sampling method holds promise for development of probability-based samples in other settings that serve as transport hubs for highly mobile populations. PMID:29324780
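
    Stage 2, drawing replicates from the frame of female-occupied stalls, amounts to one random shuffle followed by fixed-size slices that are released sequentially during fieldwork. The sketch below reproduces the abstract's counts (4 replicates of 128 plus 15 alternates) with synthetic stall IDs; it is an illustration of the design, not the study's code.

        import random

        random.seed(11)
        frame = [f"stall_{i:04d}" for i in range(4064)]   # female-occupied stalls

        random.shuffle(frame)                             # one random ordering
        replicates = [frame[i * 128:(i + 1) * 128] for i in range(4)]
        alternates = frame[4 * 128: 4 * 128 + 15]         # random alternates

        print(len(replicates), "replicates of", len(replicates[0]), "stalls,",
              len(alternates), "alternates")              # 4 x 128 + 15 = 527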

  13. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, is accompanied by growing model complexity and parameter counts, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which combines Monte Carlo sampling with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms that use iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted genetic algorithms, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
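
    A compact GLUE sketch in the spirit of the abstract, with a differential-evolution-style population search in place of purely random draws: every parameter set visited is archived with its Nash-Sutcliffe likelihood, behavioral sets above a threshold are kept, and likelihood weights summarize them. The one-parameter "model", the 0.7 threshold, and all constants are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.linspace(0, 1, 50)
        obs = np.exp(-2.0 * t) + rng.normal(0, 0.02, t.size)  # synthetic data

        def model(k):
            return np.exp(-k * t)

        def nse(sim):                      # Nash-Sutcliffe efficiency as likelihood
            return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        pop, archive = rng.uniform(0.1, 5.0, 40), []
        for _ in range(50):                # DE-style mutation/selection loop
            for i in range(pop.size):
                a, b, c = pop[rng.choice(pop.size, 3, replace=False)]
                trial = np.clip(a + 0.8 * (b - c), 0.1, 5.0)
                if nse(model(trial)) > nse(model(pop[i])):
                    pop[i] = trial
                archive.append((pop[i], nse(model(pop[i]))))

        params, likes = np.array(archive).T
        keep = likes > 0.7                 # GLUE behavioral threshold (assumed)
        w = likes[keep] / likes[keep].sum()
        print("likelihood-weighted parameter estimate:", np.sum(w * params[keep]))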

  14. MStern Blotting-High Throughput Polyvinylidene Fluoride (PVDF) Membrane-Based Proteomic Sample Preparation for 96-Well Plates.

    Science.gov (United States)

    Berger, Sebastian T; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno

    2015-10-01

    We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented a useful 96-well plate implementation of FASP, a widely used membrane-based proteomic sample processing method. We validated our approach on whole-cell lysate, as well as on urine and cerebrospinal fluid as clinically relevant body fluids. Without compromising peptide and protein identification, our method uses a vacuum manifold and circumvents the need for digest desalting, making our processing method compatible with standard liquid handling robots. In summary, our new method maintains the strengths of FASP and simultaneously overcomes one of its major limitations without compromising protein identification and quantification. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  15. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

    This study explored classification rules based on data mining methodologies for use in defining strata in stratified sampling of healthcare providers, with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics, then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
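
    The two-step procedure described above maps naturally onto standard tooling: k-means to cluster providers, then a shallow decision tree fitted on the cluster labels to turn them into explicit stratification rules. The sketch below uses synthetic provider features in place of the claims-derived profiles.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(9)
        # Columns: inpatients per specialist, number of beds (synthetic).
        X = np.column_stack([rng.gamma(2.0, 50.0, 500),
                             rng.integers(0, 120, 500).astype(float)])

        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
        print(export_text(tree,
                          feature_names=["inpatients_per_specialist", "beds"]))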

  16. A method for three-dimensional quantitative observation of the microstructure of biological samples

    Science.gov (United States)

    Wang, Pengfei; Chen, Dieyan; Ma, Wanyun; Wu, Hongxin; Ji, Liang; Sun, Jialin; Lv, Danyu; Zhang, Lu; Li, Ying; Tian, Ning; Zheng, Jinggao; Zhao, Fengying

    2009-07-01

    Contemporary biology has entered the era of cell and molecular biology, and researchers now seek to study the mechanisms of all kinds of biological phenomena at the microscopic level. Accurate description of the microstructure of biological samples is an urgent need in many biomedical experiments. This paper introduces a method for 3-dimensional quantitative observation of the microstructure of vital biological samples based on two-photon laser scanning microscopy (TPLSM). TPLSM is a novel kind of fluorescence microscopy that offers low optical damage, high resolution, deep penetration depth and suitability for 3-dimensional (3D) imaging. Fluorescently stained samples were observed by TPLSM, and their original shapes were then obtained through 3D image reconstruction. The spatial distribution of all objects in the samples, as well as their volumes, could be derived by image segmentation and mathematical calculation. Thus a 3-dimensional, quantitative depiction of the sample microstructure was finally derived. We applied this method to quantitative analysis of the spatial distribution of chromosomes in meiotic mouse oocytes at metaphase, with promising results.
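
    The quantification step, image segmentation followed by volume calculation, can be sketched with standard tools: threshold the 3-D stack, label connected components, and convert voxel counts to volumes via the voxel size. The random stack and voxel dimensions below are placeholders for real TPLSM data.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(2)
        stack = ndimage.gaussian_filter(rng.random((40, 128, 128)), sigma=3)

        binary = stack > stack.mean() + stack.std()   # simple global threshold
        labels, n = ndimage.label(binary)             # 3-D connected components
        voxel_um3 = 0.3 * 0.3 * 1.0                   # x*y*z voxel size (assumed)

        sizes = ndimage.sum_labels(binary, labels, index=range(1, n + 1))
        print(f"{n} objects segmented; first volumes (um^3):",
              [round(s * voxel_um3, 1) for s in sizes[:5]])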

  17. Method and apparatus for sampling atmospheric mercury

    Science.gov (United States)

    Trujillo, Patricio E.; Campbell, Evan E.; Eutsler, Bernard C.

    1976-01-20

    A method of simultaneously sampling particulate mercury, organic mercurial vapors, and metallic mercury vapor in the working and occupational environment and determining the amount of mercury derived from each such source in the sampled air. A known volume of air is passed through a sampling tube containing a filter for particulate mercury collection, a first adsorber for the selective adsorption of organic mercurial vapors, and a second adsorber for the adsorption of metallic mercury vapor. Carbon black molecular sieves are particularly useful as the selective adsorber for organic mercurial vapors. The amount of mercury adsorbed or collected in each section of the sampling tube is readily quantitatively determined by flameless atomic absorption spectrophotometry.

  18. Appreciating the difference between design-based and model-based sampling strategies in quantitative morphology of the nervous system.

    Science.gov (United States)

    Geuna, S

    2000-11-20

    Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterize the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.

  19. History based batch method preserving tally means

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Choi, Sung Hoon

    2012-01-01

    In the Monte Carlo (MC) eigenvalue calculations, the sample variance of a tally mean calculated from its cycle-wise estimates is biased because of the inter-cycle correlations of the fission source distribution (FSD). Recently, we proposed a new real variance estimation method named the history-based batch method, in which a MC run is treated as multiple runs with a small number of histories per cycle to generate independent tally estimates. In this paper, the history-based batch method based on the weight correction is presented to preserve the tally mean from the original MC run. The effectiveness of the new method is examined for the weakly coupled fissile array problem as a function of the dominance ratio and the batch size, in comparison with other available schemes.
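
    The batching idea behind the method can be illustrated generically: grouping correlated cycle-wise tallies into batches before estimating the variance removes most of the correlation-induced bias. The sketch below uses synthetic AR(1) tallies and does not reproduce the paper's weight correction.

```python
# Minimal sketch of a batch-means variance estimate for correlated
# cycle-wise tallies (illustrative; the paper's weight correction that
# preserves the original tally mean is not reproduced here).
import numpy as np

def batch_means_variance(tallies, batch_size):
    """Real-variance estimate of the mean from autocorrelated tallies."""
    n = len(tallies) // batch_size
    batches = tallies[:n * batch_size].reshape(n, batch_size).mean(axis=1)
    # Batch means are nearly independent if batch_size spans the
    # correlation length, so the usual estimator applies to them.
    return batches.var(ddof=1) / n

# AR(1) toy tallies mimic inter-cycle correlation of the fission source.
rng = np.random.default_rng(1)
x = np.empty(10000); x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()
naive = x.var(ddof=1) / len(x)   # biased low for positively correlated data
print(f"naive: {naive:.2e}, batched: {batch_means_variance(x, 100):.2e}")
```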

  20. Sample preparation and biomass determination of SRF model mixture using cryogenic milling and the adapted balance method

    Energy Technology Data Exchange (ETDEWEB)

    Schnöller, Johannes, E-mail: johannes.schnoeller@chello.at; Aschenbrenner, Philipp; Hahn, Manuel; Fellner, Johann; Rechberger, Helmut

    2014-11-15

    Highlights: • An alternative sample comminution procedure for SRF is tested. • Proof of principle is shown on an SRF model mixture. • The biogenic content of the SRF is analyzed with the adapted balance method. • The novel method combines combustion analysis and a data reconciliation algorithm. • Factors for the variance of the analysis results are statistically quantified. - Abstract: The biogenic fraction of a simple solid recovered fuel (SRF) mixture (80 wt% printer paper/20 wt% high density polyethylene) is analyzed with the in-house developed adapted balance method (aBM). This fairly new approach is a combination of combustion elemental analysis (CHNS) and a data reconciliation algorithm based on successive linearisation for evaluation of the analysis results. The method shows great potential as an alternative way to determine the biomass content in SRF. However, the employed analytical technique (CHNS elemental analysis) restricts the probed sample mass to low amounts in the range of a few hundred milligrams. This requires sample comminution to small grain sizes (<200 μm) to generate representative SRF specimens, which is not easily accomplished for certain material mixtures (e.g. SRF with rubber content) by conventional means of sample size reduction. This paper presents a proof-of-principle investigation of the sample preparation and analysis of an SRF model mixture using cryogenic impact milling (final sample comminution) and the adapted balance method (determination of biomass content). The resulting sample preparation methodology (cutting mills and cryogenic impact milling) shows better performance in accuracy and precision for the determination of the biomass content than one based solely on cutting mills. The results for the determination of the biogenic fraction are within 1–5% of the data obtained by the reference methods, the selective dissolution method (SDM) and the ¹⁴C-method (¹⁴C-M).
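
    The balance principle, inferring component mass fractions from measured elemental composition, can be sketched as a small least-squares problem. The component compositions and the measurement below are assumed values, not data from the study, and the full reconciliation algorithm with successive linearisation is not reproduced.

```python
# Illustrative least-squares version of the balance idea: estimate the
# biogenic mass fraction of an SRF from its measured C, H, O content.
# Component compositions and the measurement are assumed values, not
# data from the study; the paper's reconciliation algorithm is richer.
import numpy as np

# Mass fractions of C, H, O per component (cellulose ~ paper, PE ~ plastic).
cellulose = np.array([0.444, 0.062, 0.494])
polyethylene = np.array([0.857, 0.143, 0.0])
A = np.column_stack([cellulose, polyethylene])

measured = 0.8 * cellulose + 0.2 * polyethylene            # synthetic 80/20 mix
measured += np.random.default_rng(2).normal(0, 0.002, 3)   # analysis noise

frac, *_ = np.linalg.lstsq(A, measured, rcond=None)
frac = frac / frac.sum()                                   # renormalize to 1
print(f"estimated biogenic fraction: {frac[0]:.3f}")
```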

  1. Evaluation of Legionella Air Contamination in Healthcare Facilities by Different Sampling Methods: An Italian Multicenter Study.

    Science.gov (United States)

    Montagna, Maria Teresa; De Giglio, Osvalda; Cristina, Maria Luisa; Napoli, Christian; Pacifico, Claudia; Agodi, Antonella; Baldovin, Tatjana; Casini, Beatrice; Coniglio, Maria Anna; D'Errico, Marcello Mario; Delia, Santi Antonino; Deriu, Maria Grazia; Guida, Marco; Laganà, Pasqualina; Liguori, Giorgio; Moro, Matteo; Mura, Ida; Pennino, Francesca; Privitera, Gaetano; Romano Spica, Vincenzo; Sembeni, Silvia; Spagnolo, Anna Maria; Tardivo, Stefano; Torre, Ida; Valeriani, Federica; Albertini, Roberto; Pasquarella, Cesira

    2017-06-22

    Healthcare facilities (HF) represent an at-risk environment for legionellosis transmission occurring after inhalation of contaminated aerosols. In general, the control of water is preferred to that of air because, to date, there are no standardized sampling protocols. Legionella air contamination was investigated in the bathrooms of 11 HF by active sampling (Surface Air System and Coriolis®μ) and passive sampling using settling plates. During the 8-hour sampling, hot tap water was sampled three times. All air samples were evaluated using culture-based methods, whereas liquid samples collected using the Coriolis®μ were also analyzed by real-time PCR. Legionella presence in the air and water was then compared by sequence-based typing (SBT) methods. Air contamination was found in four HF (36.4%) by at least one of the culturable methods. The culturable investigation by Coriolis®μ did not yield Legionella in any enrolled HF. However, molecular investigation using Coriolis®μ resulted in eight HF testing positive for Legionella in the air. Comparison of Legionella air and water contamination indicated that Legionella water concentration could be predictive of its presence in the air. Furthermore, a molecular study of 12 L. pneumophila strains confirmed a match between the Legionella strains from air and water samples by SBT for three out of four HF that tested positive for Legionella by at least one of the culturable methods. Overall, our study shows that Legionella air detection cannot replace water sampling because the absence of microorganisms from the air does not necessarily represent their absence from water; nevertheless, air sampling may provide useful information for risk assessment. The liquid impingement technique appears to have the greatest capacity for collecting airborne Legionella if combined with molecular investigations.

  2. Evaluation of Legionella Air Contamination in Healthcare Facilities by Different Sampling Methods: An Italian Multicenter Study

    Science.gov (United States)

    Montagna, Maria Teresa; De Giglio, Osvalda; Cristina, Maria Luisa; Napoli, Christian; Pacifico, Claudia; Agodi, Antonella; Baldovin, Tatjana; Casini, Beatrice; Coniglio, Maria Anna; D’Errico, Marcello Mario; Delia, Santi Antonino; Deriu, Maria Grazia; Guida, Marco; Laganà, Pasqualina; Liguori, Giorgio; Moro, Matteo; Mura, Ida; Pennino, Francesca; Privitera, Gaetano; Romano Spica, Vincenzo; Sembeni, Silvia; Spagnolo, Anna Maria; Tardivo, Stefano; Torre, Ida; Valeriani, Federica; Albertini, Roberto; Pasquarella, Cesira

    2017-01-01

    Healthcare facilities (HF) represent an at-risk environment for legionellosis transmission occurring after inhalation of contaminated aerosols. In general, the control of water is preferred to that of air because, to date, there are no standardized sampling protocols. Legionella air contamination was investigated in the bathrooms of 11 HF by active sampling (Surface Air System and Coriolis®μ) and passive sampling using settling plates. During the 8-hour sampling, hot tap water was sampled three times. All air samples were evaluated using culture-based methods, whereas liquid samples collected using the Coriolis®μ were also analyzed by real-time PCR. Legionella presence in the air and water was then compared by sequence-based typing (SBT) methods. Air contamination was found in four HF (36.4%) by at least one of the culturable methods. The culturable investigation by Coriolis®μ did not yield Legionella in any enrolled HF. However, molecular investigation using Coriolis®μ resulted in eight HF testing positive for Legionella in the air. Comparison of Legionella air and water contamination indicated that Legionella water concentration could be predictive of its presence in the air. Furthermore, a molecular study of 12 L. pneumophila strains confirmed a match between the Legionella strains from air and water samples by SBT for three out of four HF that tested positive for Legionella by at least one of the culturable methods. Overall, our study shows that Legionella air detection cannot replace water sampling because the absence of microorganisms from the air does not necessarily represent their absence from water; nevertheless, air sampling may provide useful information for risk assessment. The liquid impingement technique appears to have the greatest capacity for collecting airborne Legionella if combined with molecular investigations. PMID:28640202

  3. A modified FOX-1 method for Micro-determination of hydrogen peroxide in honey samples.

    Science.gov (United States)

    Li, Dan; Wang, Meng; Cheng, Ni; Xue, Xiaofeng; Wu, Liming; Cao, Wei

    2017-12-15

    Hydrogen peroxide (H2O2) is a major antibacterial activity-associated biomarker in honey. Measurement of endogenous H2O2 in honey is of great value in predicting H2O2-dependent antibacterial activity and in characterizing or selecting honey samples for use as an antibacterial agent or natural food preservative. Because current methods for H2O2 determination are either time-consuming or complicated and costly, a study was conducted to modify and validate the spectrophotometry-based ferrous oxidation-xylenol orange (FOX-1) method for micro-determination of H2O2 in honey samples. The results suggested that the proposed FOX-1 method is fast, sensitive, precise, and repeatable. The method was successfully applied to the analysis of a total of 35 honey samples from 5 floral origins and 33 geographical origins. The proposed method is low-cost and easy to run, and it can be considered by researchers and industry for routine analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this kind of study is important for health workforce planners to know if they want to apply the method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from each participant. We investigated the impact of the number of participants and the frequency of measurements per participant on the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from 1 to 50. Beyond that point, precision continued to increase, but the gain per additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the resulting estimates depends on both the number of participants and the measurement frequency.
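
    The participants-versus-measurements trade-off follows from a two-level variance model, with CI half-width ≈ z·sqrt(σ²_between/n + σ²_within/(n·m)). A minimal sketch, with variance components that are illustrative assumptions rather than estimates from the study:

```python
# Illustrative two-level precision calculation: the CI half-width for a
# mean shrinks with both the number of participants (n) and the number
# of measurements per participant (m). Variance components are assumed.
import math

def ci_halfwidth(n, m, var_between=25.0, var_within=100.0, z=1.645):
    """One-tailed 95% CI half-width for the estimated weekly mean (hours)."""
    return z * math.sqrt(var_between / n + var_within / (n * m))

# m = 56 is one measurement per 3-h slot for a week; m = 168 is hourly.
for n, m in [(1, 56), (50, 56), (100, 168), (300, 56)]:
    print(f"n={n:3d} GPs, m={m:3d} measurements: ±{ci_halfwidth(n, m):.1f} h")
```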

  5. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Science.gov (United States)

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on the optimal seasons in which to use each gear when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated no appreciable benefit over relatively few gears (e.g., two to four) used in optimal seasons. Specifically, over 90% of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative for the monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

  6. Comparative analysis of five DNA isolation protocols and three drying methods for leaf samples of Nectandra megapotamica (Spreng.) Mez.

    Directory of Open Access Journals (Sweden)

    Leonardo Severo da Costa

    2016-06-01

    The aim of the study was to establish a DNA isolation protocol for Nectandra megapotamica (Spreng.) Mez. able to obtain samples of high yield and quality for use in genomic analysis. A commercial kit and four classical methods of DNA extraction were tested, including three cetyltrimethylammonium bromide (CTAB)-based methods and one sodium dodecyl sulfate (SDS)-based method. Three drying methods for leaf samples were also evaluated: drying at room temperature (RT), in an oven at 40°C (S40), and in a microwave oven (FMO). The DNA solutions obtained from the different types of leaf samples using the five protocols were assessed in terms of cost, execution time, and quality and yield of extracted DNA. The commercial kit did not extract DNA of sufficient quantity or quality for successful PCR reactions. Among the classical methods, only the protocols of Dellaporta and of Khanuja yielded DNA extractions for all three types of foliar samples that resulted in successful PCR reactions and subsequent enzyme restriction assays. Based on the evaluated variables, the most appropriate DNA extraction method for Nectandra megapotamica (Spreng.) Mez. was that of Dellaporta, regardless of the method used to dry the samples. The selected method has a relatively low cost and total execution time. Moreover, the quality and quantity of DNA extracted using this method were sufficient for DNA sequence amplification using PCR reactions and to obtain restriction fragments.

  7. Optical methods for microstructure determination of doped samples

    Science.gov (United States)

    Ciosek, Jerzy F.

    2008-12-01

    The optical methods used to determine the refractive index profile of layered materials are commonly based on spectroscopic ellipsometry or transmittance/reflectance spectrometry. Measurements of spectral reflection and transmission usually permit characterization of optical materials and determination of their refractive index. However, it is also possible to characterize samples with dopants, impurities, and defects using optical methods. The microstructures of a hydrogenated crystalline Si wafer and a layer of SiO2-ZrO2 composition are investigated. The first sample is a Si(001):H Czochralski-grown single-crystalline wafer with a 50 nm thick surface SiO2 layer. Hydrogen dose implantation continues to be an important issue in microelectronic device and sensor fabrication. Hydrogen-implanted silicon (Si:H) has become a topic of remarkable interest, mostly because of the potential of implantation-induced platelets and micro-cavities for the creation of gettering-active areas and for Si layer splitting. Oxygen precipitation and atmospheric impurity are analysed. The second sample is a layer of co-evaporated SiO2 and ZrO2, deposited using two electron beam guns simultaneously in a reactive evaporation method. The composition structure was investigated by X-ray photoelectron spectroscopy (XPS) and spectroscopic ellipsometry. The non-uniformity and composition of the layer are analysed using an average density method.

  8. A simple and rapid cultural method for detection of Enterobacter sakazakii in environmental samples

    NARCIS (Netherlands)

    Guillaume-Gentil, O.; Sonnard, V.; Kandhai, M.C.; Marugg, J.; Joosten, H.

    2005-01-01

    A method was developed to detect and identify Enterobacter sakazakii in environmental samples. The method is based on selective enrichment at 45 ± 0.5°C in lauryl sulfate tryptose broth supplemented with 0.5 M NaCl and 10 mg/liter vancomycin (mLST) for 22 to 24 h followed by streaking on tryptone

  9. Reconciling PM10 analyses by different sampling methods for Iron King Mine tailings dust.

    Science.gov (United States)

    Li, Xu; Félix, Omar I; Gonzales, Patricia; Sáez, Avelino Eduardo; Ela, Wendell P

    2016-03-01

    The overall project objective at the Iron King Mine Superfund site is to determine the level and potential risk associated with heavy metal exposure of the proximate population emanating from the site's tailings pile. To provide sufficient size-fractioned dust for multi-discipline research studies, a dust generator was built and is now being used to generate size-fractioned dust samples for toxicity investigations using in vitro cell culture and animal exposure experiments, as well as studies on geochemical characterization and bioassay solubilization with simulated lung and gastric fluid extractants. The objective of this study is to provide a robust method for source identification by comparing the tailings sample produced by the dust generator with that collected by the MOUDI sampler. As and Pb concentrations of the PM10 fraction in the MOUDI sample were much lower than in tailings samples produced by the dust generator, indicating a dilution of Iron King tailings dust by dust from other sources. For source apportionment purposes, a single-element concentration method was used, based on the assumption that the PM10 fraction comes from a background source plus the Iron King tailings source. The method's conclusion that nearly all arsenic and lead in the PM10 dust fraction originated from the tailings substantiates the conclusion of our previous Pb and Sr isotope study. As and Pb showed a similar mass fraction from Iron King for all sites, suggesting that As and Pb have the same major emission source. Further validation of this simple source apportionment method, based on other elements and sites, is needed.
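
    Under the stated two-source assumption, the apportionment is a one-line mass balance; the concentrations below are invented placeholders, not measurements from the site.

```python
# Two-source mass balance: the fraction of PM10 attributable to the
# tailings follows from one element's concentrations in the ambient
# sample, the background, and the tailings source. Values are made up.
def source_fraction(c_sample, c_background, c_source):
    """Mass fraction of the sample originating from the source."""
    return (c_sample - c_background) / (c_source - c_background)

as_ambient, as_background, as_tailings = 120.0, 5.0, 2400.0  # mg/kg, assumed
f = source_fraction(as_ambient, as_background, as_tailings)
print(f"fraction of PM10 from tailings: {f:.3f}")
```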

  10. A Non-Uniformly Under-Sampled Blade Tip-Timing Signal Reconstruction Method for Blade Vibration Monitoring

    Directory of Open Access Journals (Sweden)

    Zheng Hu

    2015-01-01

    High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be suitably monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which results in a non-uniform sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a major challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. First, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Second, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset, and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired from only two probes.
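
    As a simpler stand-in for the paper's interpolating-function construction, the sketch below fits a sinusoid of known frequency to non-uniformly sampled tip deflections by linear least squares; probe angles, rotor speed, and vibration parameters are all assumed values.

```python
# A simpler stand-in for non-uniform BTT reconstruction: least-squares
# fit of a sinusoid to tip deflections sampled at two non-equally spaced
# probes. This is not the paper's interpolating-function method; probe
# angles, rotor speed, and vibration parameters are assumed values.
import numpy as np

rev_period = 1.0 / 100.0                      # 100 rev/s rotor, assumed
probe_angles = np.array([0.0, 1.13])          # rad, non-equally spaced
n_revs = 200
t = np.concatenate([k * rev_period + probe_angles / (2 * np.pi) * rev_period
                    for k in range(n_revs)])  # non-uniform sample times

f_vib, amp, phase = 317.0, 1e-4, 0.6          # blade vibration, assumed
y = amp * np.sin(2 * np.pi * f_vib * t + phase)

# Knowing f_vib (e.g., from an engine-order hypothesis), solve for the
# in-phase/quadrature amplitudes by linear least squares.
A = np.column_stack([np.sin(2 * np.pi * f_vib * t),
                     np.cos(2 * np.pi * f_vib * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("recovered amplitude:", np.hypot(*coef))
```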

  11. 40 CFR 80.582 - What are the sampling and testing methods for the fuel marker?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What are the sampling and testing... commercially available diesel fuel which meets the applicable industry consensus and federal regulatory...) Qualification of test methods approved by voluntary consensus-based standards bodies. Any standard test method...

  12. Field Sample Preparation Method Development for Isotope Ratio Mass Spectrometry

    International Nuclear Information System (INIS)

    Leibman, C.; Weisbrod, K.; Yoshida, T.

    2015-01-01

    Non-proliferation and International Security (NA-241) established a working group of researchers from Los Alamos National Laboratory (LANL), Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) to evaluate the utilization of in-field mass spectrometry for safeguards applications. A survey of commercial off-the-shelf (COTS) mass spectrometers (MS) revealed that no existing instrumentation was capable of meeting all the potential safeguards requirements for performance, portability, and ease of use. Additionally, fieldable instruments are unlikely to meet the International Target Values (ITVs) for accuracy and precision for isotope ratio measurements achieved with laboratory methods. The major gaps identified for in-field actinide isotope ratio analysis were in the areas of: 1. sample preparation and/or sample introduction, 2. size reduction of mass analyzers and ionization sources, 3. system automation, and 4. decreased system cost. Development work on items 2 through 4 above continues in the private and public sectors. LANL is focusing on developing sample preparation/sample introduction methods for use with the different sample types anticipated for safeguards applications. Addressing sample handling and sample preparation methods for MS analysis will enable use of new MS instrumentation as it becomes commercially available. As one example, we have developed a rapid sample preparation method for dissolution of uranium and plutonium oxides using ammonium bifluoride (ABF). ABF is a significantly safer and faster alternative to digestion with boiling combinations of highly concentrated mineral acids. Actinides digested with ABF yield fluorides, which can then be analyzed directly or chemically converted and separated using established column chromatography techniques as needed prior to isotope analysis. The reagent volumes and the sample processing steps associated with ABF sample digestion lend themselves to automation and field deployment.

  13. Comparative Evaluation of Veriflow® Salmonella Species to USDA and FDA Culture-Based Methods for the Detection of Salmonella spp. in Food and Environmental Samples.

    Science.gov (United States)

    Puri, Amrita; Joelsson, Adam C; Terkhorn, Shawn P; Brown, Ashley S; Gaudioso, Zara E; Siciliano, Nicholas A

    2017-09-01

    Veriflow® Salmonella species (Veriflow SS) is a molecular-based assay for the presumptive detection of Salmonella spp. from environmental surfaces (stainless steel, sealed concrete, plastic, and ceramic tile), dairy (2% milk), raw meat (20% fat ground beef), chicken carcasses, and ready-to-eat (RTE) food (hot dogs). The assay utilizes a PCR detection method coupled with a rapid, visual, flow-based assay that develops in 3 min post-PCR amplification and requires only an 18 h enrichment for maximum sensitivity. The Veriflow SS system eliminates the need for sample purification, gel electrophoresis, or fluorophore-based detection of target amplification and does not require complex data analysis. This Performance Tested MethodSM validation study demonstrated the ability of the Veriflow SS method to detect low levels of artificially inoculated or naturally occurring Salmonella spp. in eight distinct environmental and food matrixes. In each reference comparison study, probability of detection analysis indicated that there was no significant difference between the Veriflow SS method and the U.S. Department of Agriculture Food Safety and Inspection Service Microbiology Laboratory Guidebook Chapter 4.06 and the U.S. Food and Drug Administration Bacteriological Analytical Manual Chapter 5 reference methods. A total of 104 Salmonella strains were detected in the inclusivity study, and 35 nonspecific organisms went undetected in the exclusivity study. The study results show that the Veriflow SS method is a sensitive, selective, and robust assay for the presumptive detection of Salmonella spp. sampled from environmental surfaces (stainless steel, sealed concrete, plastic, and ceramic tile), dairy (2% milk), raw meat (20% fat ground beef), chicken carcasses, and RTE food (hot dogs).

  14. Radiochemistry methods in DOE methods for evaluating environmental and waste management samples

    International Nuclear Information System (INIS)

    Fadeff, S.K.; Goheen, S.C.

    1994-08-01

    Current standard sources of radiochemistry methods are often inappropriate for use in evaluating US Department of Energy environmental and waste management (DOE/EM) samples. Examples of current sources include EPA, ASTM, Standard Methods for the Examination of Water and Wastewater, and HASL-300. Applicability of these methods is limited to specific matrices (usually water), radiation levels (usually environmental levels), and analytes (a limited number). The radiochemistry methods in DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) attempt to fill the applicability gap that exists between standard methods and those needed for DOE/EM activities. The Radiochemistry chapter in DOE Methods includes an "analysis and reporting" guidance section as well as radiochemistry methods. A basis for identifying the DOE/EM radiochemistry needs is discussed. Within this needs framework, the applicability of standard methods and targeted new methods is identified. Sources of new methods (consolidated methods from DOE laboratories and submissions from individuals) and the methods review process are discussed, and the processes involved in generating consolidated methods and editing individually submitted methods are compared. DOE Methods is a living document and continues to expand by adding various kinds of methods; radiochemistry methods are highlighted in this paper. DOE Methods is intended to be a resource for methods applicable to DOE/EM problems. Although it is intended to support DOE, the guidance and methods are not necessarily exclusive to DOE. The document is available at no cost through the Laboratory Management Division of DOE, Office of Technology Development

  15. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Science.gov (United States)

    2010-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the numerical... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51...

  16. Analytical Method to Estimate the Complex Permittivity of Oil Samples

    Directory of Open Access Journals (Sweden)

    Lijuan Su

    2018-03-01

    In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient of an embedded microstrip line loaded with a complementary split ring resonator (CSRR) etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by the LUT and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated by measuring the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.
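
    Whatever the final inversion formulas, extraction starts from the notch in the measured transmission coefficient. The sketch below locates the resonance frequency and estimates the loaded Q from a synthetic |S21| trace; the analytical step from (f0, Q) to (εr, tan δ) in the paper is not reproduced.

```python
# Minimal sketch: locate the CSRR notch frequency and estimate the
# loaded Q from a transmission-coefficient magnitude trace. The trace
# here is synthetic; the paper's analytical inversion to the complex
# permittivity from these quantities is not reproduced.
import numpy as np

f = np.linspace(1.0e9, 3.0e9, 2001)            # Hz
f0_true, q_true = 1.8e9, 60.0
# Synthetic notch: simple resonator response in |S21| (illustrative).
s21 = 1.0 - 0.95 / np.sqrt(1.0 + (2.0 * q_true * (f / f0_true - 1.0))**2)

i0 = np.argmin(s21)                            # notch location
f0 = f[i0]
half = (s21[i0] + 1.0) / 2.0                   # half-depth level
band = f[s21 <= half]                          # half-depth bandwidth
# For this line shape the half-depth width is sqrt(3) * f0 / Q.
q_est = np.sqrt(3.0) * f0 / (band[-1] - band[0])
print(f"f0 = {f0 / 1e9:.3f} GHz, Q ≈ {q_est:.1f}")
```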

  17. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods.

    Science.gov (United States)

    Coes, Alissa L; Paretti, Nicholas V; Foreman, William T; Iverson, Jana L; Alvarez, David A

    2014-03-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19-23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all three methods. TOC detections and concentrations obtained by the three sampling methods, however, depend on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method. Published by Elsevier B.V.

  18. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods

    Science.gov (United States)

    Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.

    2014-01-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all three methods. TOC detections and concentrations obtained by the three sampling methods, however, depend on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.

  19. Comparison of Standard Culture-Based Method to Culture-Independent Method for Evaluation of Hygiene Effects on the Hand Microbiome.

    Science.gov (United States)

    Zapka, C; Leff, J; Henley, J; Tittl, J; De Nardo, E; Butler, M; Griggs, R; Fierer, N; Edmonds-Wilson, S

    2017-03-28

    Hands play a critical role in the transmission of microbiota on one's own body, between individuals, and on environmental surfaces. Effectively measuring the composition of the hand microbiome is important to hand hygiene science, which has implications for human health. Hand hygiene products are evaluated using standard culture-based methods, but standard test methods for culture-independent microbiome characterization are lacking. We sampled the hands of 50 participants using swab-based and glove-based methods prior to and following four hand hygiene treatments (using a nonantimicrobial hand wash, alcohol-based hand sanitizer [ABHS], a 70% ethanol solution, or tap water). We compared results among culture plate counts, 16S rRNA gene sequencing of DNA extracted directly from hands, and sequencing of DNA extracted from culture plates. Glove-based sampling yielded higher numbers of unique operational taxonomic units (OTUs) but less diversity in bacterial community composition than swab-based sampling. Treatment-induced changes in diversity were detected only with swab-based samples, a finding relevant to hand hygiene industry methods and to future hand microbiome studies. On the basis of our results and previously published studies, we propose recommendations for best practices in hand microbiome research. IMPORTANCE The hand microbiome is a critical area of research for diverse fields, such as public health and forensics. The suitability of culture-independent methods for assessing effects of hygiene products on microbiota has not been demonstrated. This is the first controlled laboratory clinical hand study to have compared traditional hand hygiene test methods with newer culture-independent characterization methods typically used by skin microbiologists. This study resulted in recommendations for hand hygiene product testing, development of methods, and future hand skin microbiome research. It also demonstrated the importance of including skin physiological metadata in hand microbiome studies.

  20. Chromium speciation in environmental samples using a solid phase spectrophotometric method

    Science.gov (United States)

    Amin, Alaa S.; Kassem, Mohammed A.

    2012-10-01

    A solid phase extraction technique is proposed for the preconcentration and speciation of chromium in natural waters using spectrophotometric analysis. The procedure is based on sorption of chromium(III) as its 4-(2-benzothiazolylazo)-2,2'-biphenyldiol complex on a dextran-type anion-exchange gel (Sephadex DEAE A-25). After reduction of Cr(VI) with 0.5 ml of concentrated (96%) H2SO4 and ethanol, the system was applied to total chromium. The concentration of Cr(VI) was calculated as the difference between the total Cr and the Cr(III) content. The influence of analytical parameters such as the pH of the aqueous solution, the amount of 4-(2-benzothiazolylazo)-2,2'-biphenyldiol (BTABD), and the sample volume was investigated. The absorbance of the gel, packed in a 1.0 mm cell, is measured directly at 628 and 750 nm. The molar absorptivities were found to be 2.11 × 10⁷ and 3.90 × 10⁷ L mol⁻¹ cm⁻¹ for 500 and 1000 ml samples, respectively. Calibration is linear over the range 0.05-1.45 μg L⁻¹ with an RSD of <1.85% (n = 8). Using 35 mg of exchanger, the detection and quantification limits were 13 and 44 ng L⁻¹ for a 500 ml sample, and 8.0 and 27 ng L⁻¹ for a 1000 ml sample, respectively. Increasing the sample volume can enhance the sensitivity. No considerable interferences from other investigated anions and cations on the chromium speciation have been observed. The proposed method was applied to the speciation of chromium in natural waters and to total chromium preconcentration in microwave-digested tobacco, coffee, tea, and soil samples. The results were compared with those obtained using an ET AAS method, whereby the validity of the method was confirmed.

  1. Multi-frequency direct sampling method in inverse scattering problem

    Science.gov (United States)

    Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang

    2017-10-01

    We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed in terms of a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest a multi-frequency DSM to overcome the limitations of the traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.
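
    A minimal sketch of a multi-frequency DSM indicator on synthetic single-scatterer (Born-type) data; the geometry, wavenumbers, and noise level are assumptions, and the 2D Green's function supplies the Bessel/Hankel structure mentioned above.

```python
# Illustrative direct sampling method (DSM) indicator for 2D inverse
# scattering: correlate the measured scattered field at the receivers
# with the 2D Green's function at each sampling point, then average the
# normalized maps over frequencies. Geometry and data are synthetic.
import numpy as np
from scipy.special import hankel1

angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
rx = 5.0 * np.column_stack([np.cos(angles), np.sin(angles)])  # receivers
target = np.array([0.4, -0.3])                                # scatterer

def green(k, x, z):
    """2D Helmholtz Green's function (up to a constant factor)."""
    return hankel1(0, k * np.linalg.norm(x - z, axis=-1))

rng = np.random.default_rng(3)
ks = [4.0, 6.0, 8.0]                       # wavenumbers (multi-frequency)
grid = np.mgrid[-1:1:81j, -1:1:81j].reshape(2, -1).T
indicator = np.zeros(len(grid))
for k in ks:
    u_sc = green(k, rx, target)            # Born-type synthetic data
    u_sc += 0.05 * (rng.normal(size=32) + 1j * rng.normal(size=32))
    G = np.array([green(k, rx, z) for z in grid])   # (npix, nrx)
    corr = np.abs(G.conj() @ u_sc) / (np.linalg.norm(G, axis=1)
                                      * np.linalg.norm(u_sc))
    indicator += corr / corr.max()         # normalize per frequency
print("peak at grid point:", grid[np.argmax(indicator)])
```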

  2. Method validation for control determination of mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry.

    Science.gov (United States)

    Torres, Daiane Placido; Martins-Teixeira, Maristela Braga; Cadore, Solange; Queiroz, Helena Müller

    2015-01-01

    A method for the determination of total mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS) has been validated following international foodstuff protocols in order to fulfill the Brazilian National Residue Control Plan. The experimental parameters were previously studied and optimized according to specific legislation on validation and inorganic contaminants in foodstuff. Linearity, sensitivity, specificity, detection and quantification limits, precision (repeatability and within-laboratory reproducibility), robustness, and accuracy of the method were evaluated. Linearity of response was satisfactory for the two concentration ranges available on the TDA AAS equipment, between approximately 25.0 and 200.0 μg kg(-1) (quadratic regression) and 250.0 and 2000.0 μg kg(-1) (linear regression) of mercury. The residues for both ranges were homoscedastic and independent, with normal distribution. Correlation coefficients obtained for these ranges were higher than 0.995. The limit of quantification (LOQ) and the limit of detection of the method (LDM), based on the signal standard deviation (SD) for a low-mercury sample, were 3.0 and 1.0 μg kg(-1), respectively. Repeatability of the method was better than 4%. Within-laboratory reproducibility achieved a relative SD better than 6%. Robustness of the current method was evaluated and pointed to sample mass as a significant factor. Accuracy (assessed as the analyte recovery) was calculated on the basis of the repeatability and ranged from 89% to 99%. The obtained results showed the suitability of the present method for direct mercury measurement in fresh fish and shrimp samples and the importance of monitoring the analysis conditions for food control purposes. Additionally, the competence of this method was recognized by accreditation under the standard ISO/IEC 17025.

  3. Evaluation of Sampling Methods for Bacillus Spore ...

    Science.gov (United States)

    Journal Article Following a wide area release of biological materials, mapping the extent of contamination is essential for orderly response and decontamination operations. HVAC filters process large volumes of air and therefore collect highly representative particulate samples in buildings. HVAC filter extraction may have great utility in rapidly estimating the extent of building contamination following a large-scale incident. However, until now, no studies have been conducted comparing the two most appropriate sampling approaches for HVAC filter materials: direct extraction and vacuum-based sampling.

  4. Molecular Weights of Bovine and Porcine Heparin Samples: Comparison of Chromatographic Methods and Results of a Collaborative Survey

    Directory of Open Access Journals (Sweden)

    Sabrina Bertini

    2017-07-01

    In a collaborative study involving six laboratories in the USA, Europe, and India, the molecular weight distributions of a panel of heparin sodium samples were determined in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. Porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within these limits, and others fell above the upper limits. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination described in the USP monograph uses a single, broad standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns. These columns may be short-lived in some laboratories. Using the panel of samples described above, methods based on the use of robust polymer-based columns have been developed. In addition to the use of the USP's broad standard calibrant for heparin sodium with these columns, a set of conditions has been devised that allows light-scattering detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.

  5. Molecular Weights of Bovine and Porcine Heparin Samples: Comparison of Chromatographic Methods and Results of a Collaborative Survey.

    Science.gov (United States)

    Bertini, Sabrina; Risi, Giulia; Guerrini, Marco; Carrick, Kevin; Szajek, Anita Y; Mulloy, Barbara

    2017-07-19

    In a collaborative study involving six laboratories in the USA, Europe, and India, the molecular weight distributions of a panel of heparin sodium samples were determined in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. Porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within these limits, and others fell above the upper limits. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination described in the USP monograph uses a single, broad standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns. These columns may be short-lived in some laboratories. Using the panel of samples described above, methods based on the use of robust polymer-based columns have been developed. In addition to the use of the USP's broad standard calibrant for heparin sodium with these columns, a set of conditions has been devised that allows light-scattering detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.
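
    Once a calibration has assigned a molecular weight to each elution slice, the averages are weighted sums over the chromatogram. A minimal sketch with synthetic slice data (the broad-standard calibration itself is assumed):

```python
# Minimal sketch: number- and weight-average molecular weights from GPC
# slice data. Slice heights and calibrated slice MWs are synthetic; the
# broad-standard calibration that assigns M_i to each slice is assumed.
import numpy as np

h = np.array([1.0, 4.0, 9.0, 12.0, 9.0, 4.0, 1.0])        # detector heights
M = np.array([40e3, 30e3, 22e3, 16e3, 12e3, 9e3, 6e3])    # slice MWs (Da)

Mn = h.sum() / (h / M).sum()          # number average
Mw = (h * M).sum() / h.sum()          # weight average
print(f"Mn = {Mn:.0f} Da, Mw = {Mw:.0f} Da, PDI = {Mw / Mn:.2f}")
```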

  6. Inside-sediment partitioning of PAH, PCB and organochlorine compounds and inferences on sampling and normalization methods

    International Nuclear Information System (INIS)

    Opel, Oliver; Palm, Wolf-Ulrich; Steffen, Dieter; Ruck, Wolfgang K.L.

    2011-01-01

    Comparability of sediment analyses for semivolatile organic substances is still low. Neither screening of the sediments nor organic-carbon-based normalization is sufficient to obtain comparable results. We show the interdependency of grain-size effects with inside-sediment organic-matter distribution for PAH, PCB and organochlorine compounds. Surface sediment samples collected by Van-Veen grab were sieved and analyzed for 16 PAH, 6 PCB and 18 organochlorine pesticides (OCP) as well as organic-matter content. Since bulk concentrations are influenced by grain-size effects themselves, we used a novel normalization method based on the sum of concentrations in the separate grain-size fractions of the sediments. By calculating relative normalized concentrations, it was possible to clearly show the underlying mechanisms throughout a heterogeneous set of samples. Furthermore, we were able to show that, for comparability, screening at <125 μm is best suited and can be further improved by additional organic-carbon normalization. - Research highlights: → New method for the comparison of heterogeneous sets of sediment samples. → Assessment of organic pollutant partitioning mechanisms in sediments. → Proposed method for more comparable sediment sampling. - Inside-sediment partitioning mechanisms are shown using a new mathematical approach and discussed in terms of sediment sampling and normalization.
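
    The normalization can be sketched directly: divide each grain-size fraction's concentration by the sum over all fractions of the same sample, so that samples with different grain-size distributions become comparable. The data below are invented.

```python
# Relative normalized concentrations across grain-size fractions:
# each fraction's concentration is divided by the sum over all
# fractions of the same sample. Concentrations are invented (ng/g).
import numpy as np

fractions = ["<20 um", "20-63 um", "63-125 um", "125-2000 um"]
pah = np.array([[820.0, 410.0, 150.0, 40.0],     # sample A
                [400.0, 230.0,  90.0, 25.0]])    # sample B

rel = pah / pah.sum(axis=1, keepdims=True)       # rows now sum to 1
for name, row in zip("AB", rel):
    print(name, dict(zip(fractions, row.round(3))))
```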

  7. Influences of different sample preparation methods on tooth enamel ESR signals

    International Nuclear Information System (INIS)

    Zhang Wenyi; Jiao Ling; Zhang Liang'an; Pan Zhihong; Zeng Hongyu

    2005-01-01

    Objective: To study the influences of different sample preparation methods on tooth enamel ESR signals in order to reduce the effect of dentine on their sensitivities to radiation. Methods: The enamel was separated from dentine of non-irradiated adult teeth by mechanical, chemical, or both methods. The samples of different preparations were scanned by an ESR spectrometer before and after irradiation. Results: The response of ESR signals of samples prepared with different methods to radiation dose was significantly different. Conclusion: The selection of sample preparation method is very important for dose reconstruction by tooth enamel ESR dosimetry, especially in the low dose range. (authors)

  8. Comparison of soil sampling and analytical methods for asbestos at the Sumas Mountain Asbestos Site-Working towards a toolbox for better assessment.

    Science.gov (United States)

    Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel

    2017-01-01

    Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites, due to difficulties in detecting asbestos at low concentrations and in extrapolating soil concentrations to air concentrations. The U.S. Environmental Protection Agency's (EPA) Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than the others. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods, or a combination of the two. The data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations where it was not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating that its results may be more variable than those of other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding a preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than in discrete samples.

  9. X-ray spectrum analysis of multi-component samples by a method of fundamental parameters using empirical ratios

    International Nuclear Information System (INIS)

    Karmanov, V.I.

    1986-01-01

    A variant of the fundamental parameter method is suggested, based on empirical relations between the corrections for absorption and additional excitation and the absorbing characteristics of samples. The method is used for X-ray fluorescence analysis of multi-component charge samples for welding electrodes. It is shown that application of the method is justified for determination of titanium, calcium and silicon content in charges, taking into account only the corrections for absorption. Iron and manganese content can be calculated by the simple external standard method.

  10. Ionic liquid-based dispersive microextraction of nitro toluenes in water samples

    International Nuclear Information System (INIS)

    Berton, Paula; Regmi, Bishnu P.; Spivak, David A.; Warner, Isiah M.

    2014-01-01

    We describe a method for dispersive liquid-liquid microextraction of nitrotoluene-based compounds. This method is based on use of the room temperature ionic liquid (RTIL) 1-hexyl-4-methylpyridinium bis(trifluoromethylsulfonyl)imide as the accepting phase, and is shown to work well for extraction of 4-nitrotoluene, 2,4-dinitrotoluene, and 2,4,6-trinitrotoluene. Separation and subsequent detection of analytes were accomplished via HPLC with UV detection. Several parameters that influence the efficiency of the extraction were optimized using experimental design. In this regard, a Plackett–Burman design was used for initial screening, followed by a central composite design to further optimize the influencing variables. For a 5-mL water sample, the optimized IL-DLLME procedure requires 26 mg of the RTIL as extraction solvent and 680 μL of methanol as the dispersant. Under optimum conditions, limits of detection (LODs) are lower than 1.05 μg L⁻¹. Relative standard deviations for 6 replicate determinations at a 4 μg L⁻¹ analyte level are <4.3% (calculated using peak areas). Correlation coefficients of >0.998 were achieved. This method was successfully applied to extraction and determination of nitrotoluene-based compounds in spiked tap and lake water samples. (author)
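
    Screening designs of the Plackett–Burman type are built from Hadamard matrices; a sketch of the 8-run, 7-factor case follows. Which extraction variables map to which columns, and the responses, are left as assumptions.

```python
# Two-level screening design via a Hadamard construction (the basis of
# Plackett-Burman designs): 8 runs can screen up to 7 factors. Which
# extraction variables map to which columns is an assumption left open.
import numpy as np

h2 = np.array([[1, 1], [1, -1]])
h8 = np.kron(h2, np.kron(h2, h2))    # Sylvester Hadamard matrix, 8x8
design = h8[:, 1:]                   # drop the all-ones column -> 7 factors
print(design)                        # rows = runs, +1/-1 = high/low level

# Main-effect estimate for each factor from responses y: contrast / (runs/2).
y = np.arange(8.0)                   # placeholder responses
effects = design.T @ y / (len(y) / 2)
print(effects.round(2))
```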

  11. Ion Sensor Properties of Fluorescent Schiff Bases Carrying Dipicolylamine Groups. A Simple Spectrofluorimetric Method to Determine Cu (II) in Water Samples.

    Science.gov (United States)

    Vanlı, Elvan; Mısır, Miraç Nedim; Alp, Hakan; Ak, Tuğba; Özbek, Nurhayat; Ocak, Ümmühan; Ocak, Miraç

    2017-09-01

    Four fluorescent Schiff bases carrying dipicolylamine groups were designed and synthesized to determine their ion sensor properties in partially aqueous solution. The corresponding amine compound and aldehyde compounds such as 1-naphthaldehyde, 9-anthraldehyde, phenanthrene-9-carboxaldehyde and 1-pyrenecarboxaldehyde were used to prepare the new Schiff bases. The influence of many metal cations and anions on the spectroscopic properties of the ligands was investigated in ethanol-water (1:1) by means of emission spectrometry. From the spectrofluorimetric titrations, the complexation stoichiometry and complex stability constants of the ligands with Cd²⁺, Zn²⁺, Cu²⁺ and Hg²⁺ ions were determined. The ligands did not interact with the anions. However, the Schiff base derived from phenanthrene-9-carboxaldehyde showed sensitivity for Cu²⁺ among the tested metal ions. The phenanthrene-based Schiff base was used as an analytical ligand for the simple and fast determination of Cu²⁺ ion in water samples. A modified standard addition method was used to eliminate matrix effects. The linear range was from 0.3 mg/L to 3.8 μg/L. Detection and quantification limits were 0.14 and 0.43 mg/L, respectively. The maximum contaminant level goal (MCLG) for copper in drinking water according to the EPA is 1.3 mg/L. The proposed method has high sensitivity for determining copper in drinking water.
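
    Standard-addition quantification reduces to a linear regression whose x-intercept gives the sample concentration; a minimal sketch with invented data (the study's modification of the procedure is not reproduced):

```python
# Standard-addition quantification: spike the sample with known amounts
# of Cu(II), regress signal on added concentration, and read the sample
# concentration from the x-intercept. Data below are invented.
import numpy as np

added = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # mg/L Cu added
signal = np.array([0.21, 0.33, 0.46, 0.57, 0.70])  # fluorescence (a.u.)

slope, intercept = np.polyfit(added, signal, 1)
c_sample = intercept / slope                       # |x-intercept|
print(f"Cu in sample: {c_sample:.2f} mg/L")
```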

  12. Highly selective ionic liquid-based microextraction method for sensitive trace cobalt determination in environmental and biological samples

    International Nuclear Information System (INIS)

    Berton, Paula; Wuilloud, Rodolfo G.

    2010-01-01

    A simple and rapid dispersive liquid-liquid microextraction procedure based on an ionic liquid (IL-DLLME) was developed for selective determination of cobalt (Co) with electrothermal atomic absorption spectrometry (ETAAS) detection. Cobalt was initially complexed with the 1-nitroso-2-naphthol (1N2N) reagent at pH 4.0. The IL-DLLME procedure was then performed using a few microliters of the room temperature ionic liquid (RTIL) 1-hexyl-3-methylimidazolium hexafluorophosphate [C6mim][PF6] as extractant, while methanol was the dispersant solvent. After the microextraction procedure, the Co-enriched RTIL phase was solubilized in methanol and directly injected into the graphite furnace. The effect of several variables on Co-1N2N complex formation, extraction with the dispersed RTIL phase, and analyte detection with ETAAS was carefully studied in this work. An enrichment factor of 120 was obtained with only 6 mL of sample solution under optimal experimental conditions. The resulting limit of detection (LOD) was 3.8 ng L⁻¹, while the relative standard deviation (RSD) was 3.4% (at the 1 μg L⁻¹ Co level, n = 10), calculated from the peak height of absorbance signals. The accuracy of the proposed methodology was tested by analysis of a certified reference material. The method was successfully applied to the determination of Co in environmental and biological samples.

  13. Interlaboratory diagnostic accuracy of a Salmonella specific PCR-based method

    DEFF Research Database (Denmark)

    Malorny, B.; Hoorfar, Jeffrey; Hugas, M.

    2003-01-01

    A collaborative study involving four European laboratories was conducted to investigate the diagnostic accuracy of a Salmonella-specific PCR-based method, which was evaluated within the European FOOD-PCR project (http://www.pcr.dk). Each laboratory analysed by PCR a set of independently obtained, presumably naturally contaminated samples and compared the results with the microbiological culture method. The PCR-based method comprised a preenrichment step in buffered peptone water followed by a thermal cell lysis using a closed-tube resin-based method. Artificially contaminated minced beef and whole ... -based diagnostic methods and is currently proposed as an international standard document.

  14. A NEW METHOD FOR NON DESTRUCTIVE ESTIMATION OF Jc IN YBaCuO CERAMIC SAMPLES

    Directory of Open Access Journals (Sweden)

    Giancarlo Cordeiro Costa

    2014-12-01

    This work presents a new method for the estimation of Jc as a bulk characteristic of YBCO blocks. The experimental magnetic interaction force between a SmCo permanent magnet and a YBCO block was compared to finite element method (FEM) simulation results, allowing a best-fit value of the critical current of the superconducting sample to be found. As the FEM simulations were based on the Bean model, the critical current density was taken as an unknown parameter. This is a non-destructive estimation method, since there is no need to break off even a small piece of the sample for analysis.

  15. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    Science.gov (United States)

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.

  16. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small-sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, ranging from robust nonparametric rank-based procedures through Bayesian and big-data rank-based analyses. Areas of application include biostatistics and spatial statistics. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice, including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  17. Gap-Filling of Landsat 7 Imagery Using the Direct Sampling Method

    KAUST Repository

    Yin, Gaohong

    2016-12-28

    The failure of the Scan Line Corrector (SLC) on Landsat 7 imposed systematic data gaps on retrieved imagery and removed the capacity to provide spatially continuous fields. While a number of methods have been developed to fill these gaps, most of the proposed techniques are only applicable over relatively homogeneous areas. When they are applied to heterogeneous landscapes, retrieving image features and elements can become challenging. Here we present a gap-filling approach based on the adoption of the Direct Sampling multiple-point geostatistical method. The method employs conditional stochastic resampling of known areas in a training image to simulate unknown locations. The approach is assessed across a range of both homogeneous and heterogeneous regions. Simulation results show that for homogeneous areas, satisfactory results can be obtained by simply adopting non-gap locations in the target image as baseline training data. For heterogeneous landscapes, bivariate simulations using an auxiliary variable acquired at a different date provide more accurate results than univariate simulations, especially as land cover complexity increases. Apart from recovering spatially continuous fields, one of the key advantages of the Direct Sampling is its relatively straightforward implementation, which relies on relatively few parameters.
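
    The core of the Direct Sampling idea above can be sketched in a few lines. The following is a minimal univariate illustration, not the authors' implementation: it assumes a 2-D numpy array `image` with NaN gaps and a gap-free `training` array, and the neighbourhood radius, distance threshold and scan budget are illustrative parameters.

        import numpy as np

        def direct_sampling_fill(image, training, radius=2, threshold=0.05,
                                 max_scan=1000, seed=0):
            """Fill NaN gaps in `image` by conditional resampling of `training`.

            For each gap pixel, the informed neighbours around it define a data
            event; random locations in the training image are scanned until a
            neighbourhood matches the event within `threshold` (mean absolute
            difference), and the central training value is pasted in.
            """
            rng = np.random.default_rng(seed)
            out = image.astype(float).copy()
            H, W = training.shape
            gaps = np.argwhere(np.isnan(out))
            rng.shuffle(gaps)
            for i, j in gaps:
                offs, vals = [], []
                for di in range(-radius, radius + 1):
                    for dj in range(-radius, radius + 1):
                        ii, jj = i + di, j + dj
                        if ((di, dj) != (0, 0) and 0 <= ii < out.shape[0]
                                and 0 <= jj < out.shape[1]
                                and not np.isnan(out[ii, jj])):
                            offs.append((di, dj))
                            vals.append(out[ii, jj])
                vals = np.asarray(vals)
                best_val, best_dist = training.mean(), np.inf
                for _ in range(max_scan):
                    r = rng.integers(radius, H - radius)
                    c = rng.integers(radius, W - radius)
                    if len(offs) == 0:           # no conditioning data nearby
                        best_val = training[r, c]
                        break
                    cand = np.array([training[r + di, c + dj] for di, dj in offs])
                    dist = np.mean(np.abs(cand - vals))
                    if dist < best_dist:         # keep the best pattern so far
                        best_val, best_dist = training[r, c], dist
                    if dist <= threshold:        # good enough: stop scanning
                        break
                out[i, j] = best_val
            return out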

  18. Methods for Sampling and Measurement of Compressed Air Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Stroem, L

    1976-10-15

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  19. Methods for Sampling and Measurement of Compressed Air Contaminants

    International Nuclear Information System (INIS)

    Stroem, L.

    1976-10-01

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  20. Examination of Hydrate Formation Methods: Trying to Create Representative Samples

    Energy Technology Data Exchange (ETDEWEB)

    Kneafsey, T.J.; Rees, E.V.L.; Nakagawa, S.; Kwon, T.-H.

    2011-04-01

    Forming representative gas hydrate-bearing laboratory samples is important so that the properties of these materials may be measured while controlling the composition and other variables. Natural samples are rare, and have often experienced pressure and temperature changes that may affect the property to be measured [Waite et al., 2008]. Forming methane hydrate samples in the laboratory has been done a number of ways, each having advantages and disadvantages. The ice-to-hydrate method [Stern et al., 1996] contacts melting ice with methane at the appropriate pressure to form hydrate. The hydrate can then be crushed and mixed with mineral grains under controlled conditions, and then compacted to create laboratory samples of methane hydrate in a mineral medium. The hydrate in these samples will be part of the load-bearing frame of the medium. In the excess gas method [Handa and Stupin, 1992], water is distributed throughout a mineral medium (e.g., packed moist sand, drained sand, moistened silica gel, other porous media) and the mixture is brought to hydrate-stable conditions (chilled and pressurized with gas), allowing hydrate to form. This method typically produces grain-cementing hydrate from pendular water in sand [Waite et al., 2004]. In the dissolved gas method [Tohidi et al., 2002], water with sufficient dissolved guest molecules is brought to hydrate-stable conditions where hydrate forms. In the laboratory, this can be done by pre-dissolving the gas of interest in water and then introducing it to the sample under the appropriate conditions. With this method, it is easier to form hydrate from more soluble gases such as carbon dioxide. It is thought that this method more closely simulates the way most natural gas hydrate has formed. Laboratory implementation, however, is difficult, and sample formation is prohibitively time consuming [Minagawa et al., 2005; Spangenberg and Kulenkampff, 2005]. In another version of this technique, a specified quantity of gas

  1. Development of a LC-MS/MS Method for the Multi-Mycotoxin Determination in Composite Cereal-Based Samples

    Directory of Open Access Journals (Sweden)

    Barbara De Santis

    2017-05-01

    The analytical scenario for determining contaminants in the food and feed sector is constantly prompted by the progress and improvement of the knowledge and expertise of researchers and by the technical innovation of the instrumentation available. Mycotoxins are agricultural contaminants of fungal origin, occurring at all latitudes worldwide and characterized by acute and chronic effects on human health and animal wellness, depending on the species' sensitivity. The major mycotoxins of food concern are aflatoxin B1 and ochratoxin A, the first for its toxicity and the second for its recurrent occurrence. European legislation sets maximum limits for mycotoxins such as aflatoxin B1, ochratoxin A, deoxynivalenol, fumonisins, and zearalenone, and indicative limits for T-2 and HT-2 toxins. Given the real probability that co-occurring mycotoxins are present in a food or feed product, the availability of reliable, sensitive, and versatile multi-mycotoxin methods is nowadays assuming relevant importance. Because of the wide range of matrices susceptible to mycotoxin contamination and the possible co-occurrence, a multi-mycotoxin and multi-matrix method was validated by liquid chromatography-tandem mass spectrometry (LC-MS/MS) with the purpose of overcoming specific matrix effects and analyzing complex cereal-based samples within the Italian Total Diet Study project.

  2. Turbidity threshold sampling: Methods and instrumentation

    Science.gov (United States)

    Rand Eads; Jack Lewis

    2001-01-01

    Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...

  3. Method of quantitative analysis of fluorine in environmental samples using a pure-Ge detector

    International Nuclear Information System (INIS)

    Sera, K.; Terasaki, K.; Saitoh, Y.; Itoh, J.; Futatsugawa, S.; Murao, S.; Sakurai, S.

    2004-01-01

    We recently developed and reported a three-detector measuring system making use of a pure-Ge detector combined with two Si(Li) detectors. The efficiency curve of the pure-Ge detector was determined as efficiencies relative to those of the existing Si(Li) detectors, and its accuracy was confirmed by analyzing a few samples whose elemental concentrations were known. It was found that detection of fluorine becomes possible by analyzing prompt γ-rays, with a detection limit of less than 0.1 ppm for water samples. In this work, a method for the quantitative analysis of fluorine has been established in order to investigate environmental contamination by fluorine. The method is based on the fact that both characteristic x-rays from many elements and the 110 keV prompt γ-rays from fluorine can be detected in the same spectrum. The present method is applied to analyses of a few environmental samples such as tea leaves, feed for domestic animals and human bone. The results are consistent with those obtained by other methods, and the present method proves quite useful and convenient for studies of regional fluorine pollution. (author)

  4. A TIMS-based method for the high precision measurements of the three-isotope potassium composition of small samples

    DEFF Research Database (Denmark)

    Wielandt, Daniel Kim Peel; Bizzarro, Martin

    2011-01-01

    A novel thermal ionization mass spectrometry (TIMS) method for the three-isotope analysis of K has been developed, and ion chromatographic methods for the separation of K have been adapted for the processing of small samples. The precise measurement of K-isotopes is challenged by the presence of ...

  5. Target discrimination method for SAR images based on semisupervised co-training

    Science.gov (United States)

    Wang, Yan; Du, Lan; Dai, Hui

    2018-01-01

    Synthetic aperture radar (SAR) target discrimination is usually performed in a supervised manner. However, supervised methods for SAR target discrimination may need many labeled training samples, whose acquisition is costly, time consuming, and sometimes impossible. This paper proposes an SAR target discrimination method based on semisupervised co-training, which utilizes a limited number of labeled samples and an abundant number of unlabeled samples. First, Lincoln features, widely used in SAR target discrimination, are extracted from the training samples and partitioned into two sets according to their physical meanings. Second, two support vector machine classifiers are iteratively co-trained with the two extracted feature sets based on the co-training algorithm. Finally, the trained classifiers are exploited to classify the test data. The experimental results on real SAR image data not only validate the effectiveness of the proposed method compared with traditional supervised methods, but also demonstrate the superiority of co-training over self-training, which uses only one feature set.
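
    The co-training loop described above is straightforward to prototype. The sketch below is a generic illustration with scikit-learn SVMs, not the authors' SAR pipeline; `X1` and `X2` stand for the two feature sets (views) of the same training samples, and the per-round pseudo-labelling budget is an assumed parameter.

        import numpy as np
        from sklearn.svm import SVC

        def co_train(X1, X2, y, labeled, rounds=10, per_round=5):
            """Co-train two SVMs on two feature views of the same samples.

            y holds labels trusted only where `labeled` is True; each round,
            each view's classifier pseudo-labels its most confident unlabeled
            samples, which then enter the shared labeled pool.
            """
            y, lab = y.copy(), labeled.copy()
            clf1 = SVC(probability=True).fit(X1[lab], y[lab])
            clf2 = SVC(probability=True).fit(X2[lab], y[lab])
            for _ in range(rounds):
                for clf, X in ((clf1, X1), (clf2, X2)):
                    unl = np.where(~lab)[0]
                    if unl.size == 0:
                        break
                    conf = clf.predict_proba(X[unl]).max(axis=1)
                    pick = unl[np.argsort(conf)[-per_round:]]
                    y[pick] = clf.predict(X[pick])       # pseudo-labels
                    lab[pick] = True
                clf1 = SVC(probability=True).fit(X1[lab], y[lab])
                clf2 = SVC(probability=True).fit(X2[lab], y[lab])
            return clf1, clf2

        def co_predict(clf1, clf2, X1, X2):
            # fuse the two views by averaging class probabilities
            p = (clf1.predict_proba(X1) + clf2.predict_proba(X2)) / 2
            return clf1.classes_[p.argmax(axis=1)]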

  6. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    Science.gov (United States)

    Xun-Ping, W; An, Z

    2017-07-27

    Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region. First, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated with the Hammond-McCullagh equation; third, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method proposed in this study (SOPA) had the minimal absolute error of 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
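
    The Hammond-McCullagh allocation used in the paper is not reproduced here; the sketch below illustrates the same underlying idea (placing more sampling points in strata where the auxiliary plant-abundance variable suggests more variability) with the standard Neyman allocation, using made-up stratum sizes and standard deviations.

        import numpy as np

        def neyman_allocation(stratum_sizes, stratum_sds, n_total):
            """Allocate n_total points across strata in proportion to
            N_h * S_h (stratum size times within-stratum std deviation)."""
            w = np.asarray(stratum_sizes, float) * np.asarray(stratum_sds, float)
            return np.rint(n_total * w / w.sum()).astype(int)

        # five plant-abundance layers: hypothetical sizes and SDs
        sizes = [400, 300, 150, 100, 50]
        sds = [0.2, 0.5, 0.9, 1.4, 2.0]
        print(neyman_allocation(sizes, sds, n_total=100))   # -> [13 25 22 23 17]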

  7. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Scheid Anika

    2012-07-01

    Background: Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied, which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. Results: In this work, we consider the SCFG-based approach in order to analyze how the quality of generated sample sets and the corresponding prediction accuracy change when different degrees of disturbance are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst

  8. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    Science.gov (United States)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method into the inner loop. We derive the optimal values of the method parameters for which the average computational cost is minimized for a desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
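
    For orientation, the classical double-loop Monte Carlo estimator that the paper improves upon can be written compactly. The sketch below is a generic toy version (scalar Gaussian noise, log-sum-exp to tame the underflow in the inner loop), not the authors' Laplace-based importance sampler; the forward model and prior are illustrative.

        import numpy as np

        def eig_dlmc(g, sample_prior, sigma, n_outer=500, n_inner=500, seed=0):
            """Double-loop Monte Carlo estimate of expected information gain
            for observations y = g(theta) + N(0, sigma^2):
            EIG = E[ log p(y|theta) - log p(y) ]."""
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(n_outer):
                theta = sample_prior(rng)
                y = g(theta) + sigma * rng.standard_normal()
                log_like = -0.5 * ((y - g(theta)) / sigma) ** 2
                # inner loop: estimate the evidence p(y) with fresh prior draws
                thetas = sample_prior(rng, n_inner)
                inner = -0.5 * ((y - g(thetas)) / sigma) ** 2
                # log-sum-exp guards against the underflow noted above
                log_evid = np.logaddexp.reduce(inner) - np.log(n_inner)
                total += log_like - log_evid    # shared constants cancel
            return total / n_outer

        def prior(rng, n=None):
            # toy standard-normal prior
            return rng.standard_normal() if n is None else rng.standard_normal(n)

        # toy nonlinear scalar model
        print(eig_dlmc(g=np.sin, sample_prior=prior, sigma=0.1))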

  9. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES_hat) and a 95% CI (ES_hat_L, ES_hat_U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ES_hat_U), n_U(ES_hat_L)] were obtained on a post hoc sample size reflecting the uncertainty in ES_hat. Sample size calculations were based on a one-sample t-test, as the number of patients needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ES_hat, ES = ES_hat_L and ES = ES_hat_U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study's effect-size estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using the ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
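
    The post hoc sample-size calculation described (one-sample t-test, 80% power, α = 0.05, evaluated at the effect-size estimate and at its CI bounds) can be reproduced with standard power routines. A sketch with statsmodels follows; the effect-size numbers are illustrative placeholders, not the study's estimates.

        import math
        from statsmodels.stats.power import TTestPower

        def posthoc_n(es_hat, es_lower, es_upper, alpha=0.05, power=0.80):
            """Sample size at the effect-size point estimate, with an interval
            from the CI bounds (larger ES -> smaller n, so the interval flips:
            the lower n uses es_upper, the upper n uses es_lower)."""
            solve = TTestPower().solve_power
            n = lambda es: math.ceil(solve(effect_size=es, alpha=alpha,
                                           power=power))
            return n(es_hat), (n(es_upper), n(es_lower))

        # illustrative values only
        print(posthoc_n(es_hat=0.55, es_lower=0.30, es_upper=0.90))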

  10. Diagnosis of clinical samples spotted on FTA cards using PCR-based methods.

    Science.gov (United States)

    Jamjoom, Manal; Sultan, Amal H

    2009-04-01

    The broad clinical presentation of leishmaniasis makes the diagnosis of current and past cases of this disease rather difficult. Differential diagnosis is important because diseases of other aetiologies with a clinical spectrum similar to that of leishmaniasis (e.g., leprosy, skin cancers and tuberculosis for CL; malaria and schistosomiasis for VL) are often present in areas of endemicity. Presently, a variety of methods have been developed and tested to aid the identification and diagnosis of Leishmania. The advent of PCR technology has opened new channels for the diagnosis of leishmaniasis in a variety of clinical materials. PCR is a simple, rapid procedure that has been adapted for the diagnosis of leishmaniasis. A range of tools is currently available for the diagnosis of leishmaniasis and the identification of Leishmania species. However, none of these diagnostic tools had been examined and tested using samples spotted on FTA cards. Three different PCR-based approaches were examined: the kDNA minicircle, the Leishmania 18S rRNA gene, and PCR-RFLP of the intergenic region of ribosomal protein genes. PCR primers were designed to sit within the coding sequences of genes (relatively well conserved) but to amplify across the intervening intergenic sequence (relatively variable). These were used in PCR-RFLP on reference isolates of 10 of the most important Leishmania species, including L. donovani, L. infantum, L. major and L. tropica. Digestion of the PCR products with restriction enzymes produced species-specific restriction patterns that allowed discrimination of the reference isolates. The kDNA minicircle primers are highly sensitive for the diagnosis of both bone marrow and skin smears from FTA cards. The conserved region of the Leishmania 18S rRNA gene is sensitive for identification in bone marrow smears but less sensitive for skin smears. The intergenic nested PCR-RFLP using the newly designed P5 & P6 and P1 & P2 primers showed a high level of reproducibility and sensitivity

  11. Validation of method in instrumental NAA for food products sample

    International Nuclear Information System (INIS)

    Alfian; Siti Suprapti; Setyo Purwanto

    2010-01-01

    NAA is a testing method that has not been standardized. To confirm that the method is valid, it must be validated with various standard reference materials. In this work, validation was carried out for food product samples using NIST SRM 1567a (wheat flour) and NIST SRM 1568a (rice flour). The results show that the method for testing nine elements (Al, K, Mg, Mn, Na, Ca, Fe, Se and Zn) in SRM 1567a and eight elements (Al, K, Mg, Mn, Na, Ca, Se and Zn) in SRM 1568a passes the tests of accuracy and precision. It can be concluded that this method gives valid results in the elemental determination of food product samples. (author)

  12. Validation of EIA sampling methods - bacterial and biochemical analysis

    Digital Repository Service at National Institute of Oceanography (India)

    Sheelu, G.; LokaBharathi, P.A.; Nair, S.; Raghukumar, C.; Mohandass, C.

    to temporal factors. Paired t-tests between pre- and post-disturbance samples suggested that the above methods of sampling, and variables like TC, protein and TOC, could be used for monitoring disturbance....

  13. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model from the spectra of the samples measured on two instruments, termed the master and the slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from spectra measured on different instruments similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments were used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
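
    The constrained optimization used in LMC is not detailed in the abstract; the sketch below captures the underlying idea in its simplest form: since the master and slave coefficient vectors share a profile, a scale and offset fitted on a few slave-measured samples suffice to correct the master model. The variable names and the plain least-squares fit are assumptions of this illustration, not the authors' algorithm.

        import numpy as np

        def lmc_transfer(b_master, b0_master, X_slave, y_slave):
            """Correct a master regression model (slope vector b, intercept b0)
            for use on a slave instrument, assuming the slave coefficients share
            the master's profile up to scale:
                y ≈ a * (X_slave @ b_master + b0_master) + c
            """
            z = X_slave @ b_master + b0_master
            A = np.column_stack([z, np.ones_like(z)])
            (a, c), *_ = np.linalg.lstsq(A, y_slave, rcond=None)
            return a * b_master, a * b0_master + c

        # usage on new slave spectra:  y_pred = X_new @ b_corr + b0_corr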

  14. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within a segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the statistics and probability values of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers: k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in overall classification accuracy and produced more accurate classification maps when compared to the ground truth image.
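
    The in-segment multiple sampling idea can be illustrated per spectral band with a few lines of scipy. The sketch below is a single-band simplification (the study works on multi-band WorldView-2 data); the number of draws and draw size are assumed parameters.

        import numpy as np
        from scipy.stats import ks_2samp

        def classify_segment(segment_pixels, class_refs, n_draws=20,
                             draw_size=50, seed=0):
            """Assign a segment to the class whose reference pixels it resembles
            most: draw random in-segment samples repeatedly and average the
            two-sample Kolmogorov-Smirnov statistic against each class."""
            rng = np.random.default_rng(seed)
            m = min(draw_size, len(segment_pixels))
            scores = {}
            for label, ref in class_refs.items():
                stats = [ks_2samp(rng.choice(segment_pixels, m, replace=False),
                                  ref).statistic for _ in range(n_draws)]
                scores[label] = float(np.mean(stats))
            return min(scores, key=scores.get), scores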

  15. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    Science.gov (United States)

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

    Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aims of this study were (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively, and (3) to calculate the sample size required to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young-stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected both along a double-crossed W-transect with samples taken every 10 steps (method 1) and from four randomly located plots of 0.16 m(2) with collection of all herbage within the plot (method 2). The average (± standard deviation, SD) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in the larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance

  16. Sequential injection titration method using second-order signals: determination of acidity in plant oils and biodiesel samples.

    Science.gov (United States)

    del Río, Vanessa; Larrechi, M Soledad; Callao, M Pilar

    2010-06-15

    A new concept of flow titration is proposed and demonstrated for the determination of total acidity in plant oils and biodiesel. We use sequential injection analysis (SIA) with a diode array spectrophotometric detector linked to chemometric tools such as multivariate curve resolution-alternating least squares (MCR-ALS). The system is based on the evolution of the basic species of an acid-base indicator, alizarine, when it comes into contact with a sample that contains free fatty acids. The gradual pH change in the reactor coil due to diffusion and reaction phenomena allows the sequential appearance of both species of the indicator in the detector coil, recording a data matrix for each sample. The SIA-MCR-ALS method helps to reduce the amounts of sample and reagents and the time consumed. Each determination consumes 0.413 ml of sample, 0.250 ml of indicator and 3 ml of carrier (ethanol), and generates 3.333 ml of waste. The analysis frequency is high (12 samples h(-1), including all steps, i.e., cleaning, preparation and analysis). The reagents used are common in the laboratory, and it is not necessary to use reagents of precisely known concentration. The method was applied to determine acidity in plant oil and biodiesel samples. Results obtained by the proposed method compare well with those obtained by the official European Community method, which is time consuming and uses large amounts of organic solvents.
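
    The MCR-ALS decomposition at the heart of the method alternates two least-squares steps under non-negativity. The sketch below is a bare-bones version (random initialization, non-negativity by clipping); real applications add closure and selectivity constraints and a proper initial estimate.

        import numpy as np

        def mcr_als(D, n_comp, n_iter=200, seed=0):
            """Factor D (samples x wavelengths) as D ≈ C @ S.T with
            non-negative concentration profiles C and pure spectra S."""
            rng = np.random.default_rng(seed)
            S = rng.random((D.shape[1], n_comp))
            for _ in range(n_iter):
                C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)
                S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)
            return C, S    # reconstruction error: np.linalg.norm(D - C @ S.T)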

  17. Computational methods and modeling. 1. Sampling a Position Uniformly in a Trilinear Hexahedral Volume

    International Nuclear Information System (INIS)

    Urbatsch, Todd J.; Evans, Thomas M.; Hughes, H. Grady

    2001-01-01

    peaks form a line at the center of the cube along the z direction. The second problem consists of six pyramids, each with its base on a unit cube face and peak at the center of the unit cube. The third problem consists of six tetrahedrons. The fourth problem has five side-by-side, long, slender hexahedrons that generally run in the z direction. The hexahedron in the middle looks like a French fry (French-fried potato stick) that has been twisted, stretched, and skewed. The remaining four cells in the unit cube are blocks whose interior faces are somewhat twisted and skewed. Of these test problems, the French-fry problem probably best represents cells in actual simulations. We used the Python scripting language to code the sampling methods. The frequency of sampling a cell in a test is proportional to its fractional volume. The observed pdfs, expected to be uniform, were obtained by binning each sampled x, y, and z and normalizing by the number of samples and multiplying by the number of bins. The exact method samples all problems correctly. The approximate method exactly samples the wedge problem because the wedge problem's pdfs are only uniform or linear. However, the approximate method does not correctly sample the pyramids, tetrahedrons, or the French fry, as shown in Fig. 1. For the pyramids and tetrahedrons, the assumption of separable variables breaks down; the approximate method fails to capture the second degree of the pdf, which in these cases is due to a two-dimensional degeneracy. The French-fry problem has no actual degeneracies, only tendencies toward two-dimensional degeneracies. The approximate sampling method, as expected, fails to capture the second degree of the pdfs in the French-fry problem, but the portion of the pdfs due to the second degree is smaller than it is in problems with actual degeneracies, so the sampling error is smaller. The approximate sampling method is about twice as fast as the exact method. For the wedge problem, both methods
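
    One general way to sample a position uniformly in a trilinear hexahedron is rejection on the Jacobian of the trilinear map, which is in the spirit of the exact method discussed above but is a generic technique, not the paper's algorithm. The grid-based bound on the Jacobian and its safety margin are simplifications of this illustration.

        import numpy as np

        # unit-cube corner flags for the 8 hexahedron nodes
        CORNERS = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),
                   (0,0,1),(1,0,1),(1,1,1),(0,1,1)]

        def point(nodes, u, v, w):
            # trilinear interpolation of the 8 node positions
            a, b, c = (1-u, u), (1-v, v), (1-w, w)
            s = np.array([a[i]*b[j]*c[k] for i, j, k in CORNERS])
            return s @ nodes

        def jac(nodes, u, v, w):
            # |det| of the trilinear map's Jacobian at (u, v, w)
            a, b, c = (1-u, u), (1-v, v), (1-w, w)
            d = (-1.0, 1.0)                  # derivative of (1-t, t)
            J = np.zeros((3, 3))
            for n, (i, j, k) in zip(nodes, CORNERS):
                J[:, 0] += d[i] * b[j] * c[k] * n
                J[:, 1] += a[i] * d[j] * c[k] * n
                J[:, 2] += a[i] * b[j] * d[k] * n
            return abs(np.linalg.det(J))

        def sample_hex(nodes, n, seed=0):
            """Uniform points inside a trilinear hexahedron: draw (u,v,w)
            uniformly on the unit cube and accept with probability
            |J(u,v,w)| / J_max, which removes the mapping's distortion."""
            rng = np.random.default_rng(seed)
            grid = np.linspace(0, 1, 11)
            jmax = 1.1 * max(jac(nodes, u, v, w)   # crude bound + margin
                             for u in grid for v in grid for w in grid)
            pts = []
            while len(pts) < n:
                u, v, w = rng.random(3)
                if rng.random() * jmax <= jac(nodes, u, v, w):
                    pts.append(point(nodes, u, v, w))
            return np.array(pts)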

  18. Bioanalytical methods for the determination of cocaine and metabolites in human biological samples.

    Science.gov (United States)

    Barroso, M; Gallardo, E; Queiroz, J A

    2009-08-01

    The determination of cocaine and its metabolites in biological specimens is of great importance, not only in clinical and forensic toxicology, but also in workplace drug testing. These compounds are normally screened for using sensitive immunological methods. However, screening methods are nonspecific and, therefore, subsequent confirmation of presumptively positive samples by a specific technique is mandatory. Although GC-MS-based techniques are still the most commonly used for confirmation of cocaine and its metabolites in biological specimens, the advent of LC-MS and LC-MS/MS has enabled the detection of even lower amounts of these drugs, which is of particular importance when the available sample volume is small, as frequently occurs with oral fluid. This paper reviews recently published papers that describe procedures for the detection of cocaine and metabolites, not only in the most commonly used specimens, such as blood and urine, but also in other 'alternative' matrices (e.g., oral fluid and hair), with a special focus on sample preparation and chromatographic analysis.

  19. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  20. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
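
    Two of the three clustering methods compared in the simulations are available directly in scikit-learn (latent class analysis is not, and is omitted here). The sketch below generates binary "code" data for 50 simulated participants from two latent profiles and checks how well each method recovers them; all sizes and probabilities are invented for illustration.

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering, KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(0)
        # 50 participants, 12 binary codes, two latent profiles
        profiles = np.array([rng.uniform(0.1, 0.4, 12),
                             rng.uniform(0.5, 0.9, 12)])
        truth = rng.integers(0, 2, 50)
        X = (rng.random((50, 12)) < profiles[truth]).astype(int)

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        # Manhattan distance on 0/1 data equals the Hamming mismatch count
        # (scikit-learn >= 1.2; use affinity= instead of metric= on older versions)
        hc = AgglomerativeClustering(n_clusters=2, metric="manhattan",
                                     linkage="average").fit_predict(X)

        print(adjusted_rand_score(truth, km), adjusted_rand_score(truth, hc))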

  1. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  2. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns sample size: since the statistical methods used in inferential analyses are asymptotic, a small sample may compromise the analysis because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not require guessing or knowing the probability distribution that generated the original sample. In this work we used a small-sample dataset of soybean yield and physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, to construct the confidence intervals of the parameters, and to identify the points that had great influence on the estimated parameters. (Author)
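
    A non-parametric case-resampling bootstrap for regression coefficients, the kind of procedure the study relies on for confidence intervals, takes only a few lines of numpy. This is a generic sketch, not the authors' exact variable-selection or influence diagnostics.

        import numpy as np

        def bootstrap_coef_ci(X, y, n_boot=2000, alpha=0.05, seed=0):
            """Percentile bootstrap CIs for multiple linear regression
            coefficients; no distributional assumption is made, so the
            procedure remains usable with small samples."""
            rng = np.random.default_rng(seed)
            n = len(y)
            Xd = np.column_stack([np.ones(n), X])    # add intercept column
            boots = np.empty((n_boot, Xd.shape[1]))
            for b in range(n_boot):
                idx = rng.integers(0, n, n)          # resample cases
                boots[b], *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
            lo, hi = np.percentile(boots, [100 * alpha / 2,
                                           100 * (1 - alpha / 2)], axis=0)
            return boots.mean(axis=0), lo, hi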

  3. Comparison of address-based sampling and random-digit dialing methods for recruiting young men as controls in a case-control study of testicular cancer susceptibility.

    Science.gov (United States)

    Clagett, Bartholt; Nathanson, Katherine L; Ciosek, Stephanie L; McDermoth, Monique; Vaughn, David J; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A

    2013-12-01

    Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-phone numbers and address-based sampling (ABS), to recruit primarily white men aged 18-55 years into a study of testicular cancer susceptibility conducted in the Philadelphia, Pennsylvania, metropolitan area between 2009 and 2012. With few exceptions, eligible and enrolled controls recruited by means of RDD and ABS were similar with regard to characteristics for which data were collected on the screening survey. While we find ABS to be a comparably effective method of recruiting young males compared with landline RDD, we acknowledge the potential impact that selection bias may have had on our results because of poor overall response rates, which ranged from 11.4% for landline RDD to 1.7% for ABS.

  4. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    Science.gov (United States)

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for (1) planning, (2) conducting, and (3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used the example of a cross-sectional study with the aim of quantifying the association between antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe; therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: (1) planning: using data from meat inspection to decide at which abattoirs and how many farms to sample; (2) conducting: sampling was carried out at five abattoirs; (3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period in which the actual data collection was carried out. In the cross-sectional study, samples were taken from 681 Danish pig farms during five weeks from February to March 2015. The evaluation showed that the sampling

  5. A method for sampling microbial aerosols using high altitude balloons.

    Science.gov (United States)

    Bryan, N C; Stewart, M; Granger, D; Guzik, T G; Christner, B C

    2014-12-01

    Owing to the challenges posed by microbial aerosol sampling at high altitudes, very little is known about the abundance, diversity, and extent of microbial taxa in the Earth-atmosphere system. To directly address this knowledge gap, we designed, constructed, and tested a system that passively samples aerosols during ascent through the atmosphere while tethered to a helium-filled latex sounding balloon. The sampling payload weighs ~2.7 kg and comprises an electronics box and three sampling chambers (one serving as a procedural control). Each chamber is sealed with retractable doors that can be commanded to open and close at designated altitudes. The payload is deployed together with radio beacons that transmit GPS coordinates (latitude, longitude and altitude) in real time for tracking and recovery. A cut mechanism separates the payload string from the balloon at any desired altitude, returning all equipment safely to the ground on a parachute. When the chambers are opened, aerosol sampling is performed using the Rotorod® collection method (40 rods per chamber), with each rod passing through 0.035 m3 per km of altitude sampled. Based on quality control measurements, the collection of ~100 cells rod(-1) provided a 3-sigma confidence level of detection. The payload system described can be mated with any type of balloon platform and provides a tool for characterizing the vertical distribution of microorganisms in the troposphere and stratosphere. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Radiochemistry methods in DOE Methods for Evaluating Environmental and Waste Management Samples: Addressing new challenges

    International Nuclear Information System (INIS)

    Fadeff, S.K.; Goheen, S.C.; Riley, R.G.

    1994-01-01

    Radiochemistry methods in Department of Energy Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) add to the repertoire of other standard methods in support of U.S. Department of Energy environmental restoration and waste management (DOE/EM) radiochemical characterization activities. Current standard sources of radiochemistry methods are not always applicable for evaluating DOE/EM samples. Examples of current sources include those provided by the US Environmental Protection Agency, the American Society for Testing and Materials, Standard Methods for the Examination of Water and Wastewater, and Environmental Measurements Laboratory Procedures Manual (HASL-300). The applicability of these methods is generally limited to specific matrices (usually water), low-level radioactive samples, and a limited number of analytes. DOE Methods complements these current standard methods by addressing the complexities of EM characterization needs. The process for determining DOE/EM radiochemistry characterization needs is discussed. In this context of DOE/EM needs, the applicability of other sources of standard radiochemistry methods is defined, and gaps in methodology are identified. Current methods in DOE Methods and the EM characterization needs they address are discussed. Sources of new methods and the methods incorporation process are discussed. The means for individuals to participate in (1) identification of DOE/EM needs, (2) the methods incorporation process, and (3) submission of new methods are identified

  7. A comprehensive comparison of perpendicular distance sampling methods for sampling downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams

    2013-01-01

    Many new methods for sampling down coarse woody debris have been proposed in the last dozen or so years. One of the most promising in terms of field application, perpendicular distance sampling (PDS), has several variants that have been progressively introduced in the literature. In this study, we provide an overview of the different PDS variants and comprehensive...

  8. Cassette-based in-situ TEM sample inspection in the dual-beam FIB

    International Nuclear Information System (INIS)

    Kendrick, A B; Moore, T M; Zaykova-Feldman, L; Amador, G; Hammer, M

    2008-01-01

    A novel method is presented, combining site-specific TEM sample preparation and in-situ STEM analysis in a dual-beam microscope (FIB/SEM) fitted with a chamber mounted nano-manipulator. TEM samples are prepared using a modified in-situ, lift-out method, whereby the samples are thinned and oriented for immediate in-situ STEM analysis using the tilt, translation, and rotation capabilities of a FIB/SEM sample stage, a nano-manipulator, and a novel cassette. This cassette can provide a second tilt axis, orthogonal to the stage tilt axis, so that the STEM image contrast can be optimized to reveal the structural features of the sample (true STEM imaging in the FIB/SEM). The angles necessary for stage rotation and probe shaft rotation are calculated based on the position of the nano-manipulator relative to the stage and door and the stage tilt angle. A FIB/SEM instrument, equipped with a high resolution scanning electron column, can provide sufficiently high image resolution to enable many failure analysis and process control applications to be successfully carried out without requiring the use of a separate dedicated TEM/STEM instrument. The benefits of this novel approach are increased throughput and reduced cost per sample. Comparative analysis of different sample preparation methods is provided, and the STEM images obtained are shown.

  9. Final LDRD report : development of sample preparation methods for ChIPMA-based imaging mass spectrometry of tissue samples.

    Energy Technology Data Exchange (ETDEWEB)

    Maharrey, Sean P.; Highley, Aaron M.; Behrens, Richard, Jr.; Wiese-Smith, Deneille

    2007-12-01

    The objective of this short-term LDRD project was to acquire the tools needed to use our chemical imaging precision mass analyzer (ChIPMA) instrument to analyze tissue samples. This effort was an outgrowth of discussions with oncologists on the need to find the cellular origin of signals in mass spectra of serum samples, which provide biomarkers for ovarian cancer. The ultimate goal would be to collect chemical images of biopsy samples, allowing the chemical images of diseased and non-diseased sections of a sample to be compared. The equipment needed to prepare tissue samples has been acquired and built. This equipment includes a cryo-ultramicrotome for preparing thin sections of samples and a coating unit. The coating unit uses an electrospray system to deposit small droplets of a UV-photoabsorbing compound on the surface of the tissue samples. Both units are operational. The tissue sample must be coated with the organic compound to enable matrix-assisted laser desorption/ionization (MALDI) and matrix-enhanced secondary ion mass spectrometry (ME-SIMS) measurements with the ChIPMA instrument. Initial plans to test the sample preparation using human tissue samples required development of administrative procedures beyond the scope of this LDRD. Hence, it was decided to make two types of measurements: (1) testing the spatial resolution of ME-SIMS by preparing a substrate coated with a mixture of an organic matrix and a bio standard and etching a defined pattern in the coating using a liquid metal ion beam, and (2) preparing and imaging C. elegans worms. Difficulties arose in sectioning the C. elegans for analysis, and the funds and time to overcome these difficulties were not available in this project. The facilities are now available for preparing biological samples for analysis with the ChIPMA instrument. Some further investment of time and resources in sample preparation should make this a useful tool for chemical imaging applications.

  10. A multi-dimensional sampling method for locating small scatterers

    International Nuclear Information System (INIS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-01-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built from the most stable part of the signal subspace of the multi-static response matrix, evaluated on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages over conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence, even for multiple scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)
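
    The MUSIC-type indicator underlying the method is compact to write down. The sketch below is the classical pointwise (single-node) version in 2-D with a far-field Green's function and Born-approximated point scatterers, a simplification of the paper's 3-D combinatorial multi-node sampling; the geometry, wavenumber and scatterer strengths are invented for illustration.

        import numpy as np

        k = 2 * np.pi                       # wavenumber (unit wavelength)

        def green(rx, pts):
            # far-field form of the 2-D free-space Green's function
            d = np.linalg.norm(rx[:, None, :] - pts[None, :, :], axis=2)
            return np.exp(1j * k * d) / np.sqrt(d)

        rx = np.column_stack([np.linspace(-5, 5, 16), np.full(16, -10.0)])
        scat = np.array([[0.5, 1.0], [-1.2, 0.3]])    # true positions
        G = green(rx, scat)
        K = G @ np.diag([1.0, 0.8]) @ G.T             # multi-static response

        U, s, _ = np.linalg.svd(K)
        Us = U[:, :len(scat)]                         # stable signal subspace
        P = np.eye(len(rx)) - Us @ Us.conj().T        # noise-space projector

        xs, ys = np.linspace(-3, 3, 61), np.linspace(-2, 2, 41)
        grid = np.array([[x, y] for y in ys for x in xs])
        indicator = 1.0 / np.linalg.norm(P @ green(rx, grid), axis=0)
        print(grid[indicator.argmax()])               # peaks near a scatterer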

  11. Research on test of product based on spatial sampling criteria and variable step sampling mechanism

    Science.gov (United States)

    Li, Ruihong; Han, Yueping

    2014-09-01

    This paper presents an effective approach for online testing of the assembly structures inside products using a multiple-views technique and an X-ray digital radiography system, based on spatial sampling criteria and a variable-step sampling mechanism. Although several objects inside one product may need testing, there is a maximal rotary step for each object within which the least structural size to be tested remains resolvable. In the offline learning process, the object is rotated by this step and imaged repeatedly until a complete cycle is finished, yielding an image sequence that contains the full structural information for recognition. The maximal rotary step is restricted by the least structural size and the inherent resolution of the imaging system. During online inspection, the program first finds the optimum matches for all the different target parts in the standard sequence, i.e., their exact angles within one cycle. Because most of the structures inside the product are larger than the least structure, a variable step-size sampling mechanism is adopted: the product is rotated through different specific angles, with different steps for the different objects inside it, before matching. Experimental results show that the variable-step method greatly saves time compared with the traditional fixed-step inspection method while recognition accuracy is maintained.

  12. Multi-volatile method for aroma analysis using sequential dynamic headspace sampling with an application to brewed coffee.

    Science.gov (United States)

    Ochiai, Nobuo; Tsunokawa, Jun; Sasamoto, Kikuo; Hoffmann, Andreas

    2014-12-05

    A novel multi-volatile method (MVM) using sequential dynamic headspace (DHS) sampling for the analysis of aroma compounds in aqueous samples was developed. The MVM consists of three different DHS method parameter sets, including the choice of the replaceable adsorbent trap. The first DHS sampling at 25 °C using a carbon-based adsorbent trap targets very volatile solutes with high vapor pressure (>20 kPa). The second DHS sampling at 25 °C using the same type of carbon-based adsorbent trap targets volatile solutes with moderate vapor pressure (1-20 kPa). The third DHS sampling, using a Tenax TA trap at 80 °C, targets solutes with low vapor pressure (<1 kPa). The method showed good linearity (0.9910 or better) and high sensitivity (limit of detection: 1.0-7.5 ng mL(-1)) even with MS scan mode. The feasibility and benefit of the method were demonstrated with the analysis of a wide variety of aroma compounds in brewed coffee. Ten potent aroma compounds from top-note to base-note (acetaldehyde, 2,3-butanedione, 4-ethyl guaiacol, furaneol, guaiacol, 3-methyl butanal, 2,3-pentanedione, 2,3,5-trimethyl pyrazine, vanillin, and 4-vinyl guaiacol) could be identified, together with an additional 72 aroma compounds. Thirty compounds, including 9 potent aroma compounds, were quantified in the range of 74-4300 ng mL(-1) (RSD<10%, n=5). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Methods for Characterisation of unknown Suspect Radioactive Samples

    International Nuclear Information System (INIS)

    Sahagia, M.; Grigorescu, E.L.; Luca, A.; Razdolescu, A.C.; Ivan, C.

    2001-01-01

    Full text: The paper presents various identification and measurement methods used in the examination of a wide variety of suspect radioactive materials whose circulation was not legally declared. The main types of examined samples were: radioactive sources, illegally trafficked; suspect radioactive materials or radioactively contaminated devices; uranium tablets; fire detectors containing 241 Am sources; osmium samples containing radioactive 185 Os or enriched 187 Os. The types of analyses and determination methods were as follows: the chemical composition was determined by using identification reagents or by neutron activation analysis; the radionuclide composition was determined by using gamma-ray spectrometry; the activity and particle emission rates were determined by using calibrated radiometric equipment; the absorbed dose rate at the wall of all types of containers and samples was determined by using calibrated dose ratemeters. The radiation exposure risk for the population due to these radioactive materials was evaluated for every case. (author)

  14. A genetic algorithm-based framework for wavelength selection on sample categorization.

    Science.gov (United States)

    Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F

    2017-08-01

    In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to insert samples into proper classes. In the next step, selected intervals were refined through the genetic algorithm (GA) by identifying a limited number of wavelengths from the intervals previously selected, aimed at maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information towards monitoring illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checking events, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than the current methods relying on interval approaches, which tend to insert irrelevant wavelengths in the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.
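    As a sketch of the framework's refinement stage, the toy example below pairs synthetic "spectra", a plain leave-one-out k-NN fitness and a bare-bones genetic algorithm with a mild size penalty; every name, value and GA parameter here is an illustrative assumption, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "spectra": two classes that differ only at a few informative wavelengths
    n, p, informative = 120, 200, [30, 31, 95, 160]
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, p))
    X[:, informative] += y[:, None] * 1.5

    def knn_accuracy(mask, k=3):
        """Leave-one-out accuracy of k-NN restricted to the selected wavelengths."""
        Xs = X[:, mask.astype(bool)]
        if Xs.shape[1] == 0:
            return 0.0
        d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)                # leave-one-out: exclude self
        nearest = np.argsort(d, axis=1)[:, :k]
        pred = (y[nearest].mean(axis=1) > 0.5).astype(int)
        return float((pred == y).mean())

    def fitness(c):
        return knn_accuracy(c) - 0.0005 * c.sum()  # mild penalty favours fewer wavelengths

    # Bare-bones GA: binary chromosome = wavelength subset
    pop = rng.integers(0, 2, (30, p))
    for gen in range(30):
        order = np.argsort([fitness(c) for c in pop])[::-1]
        parents = pop[order[:10]]                  # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = parents[rng.integers(0, len(parents), 2)]
            cut = rng.integers(1, p)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(p) < 0.01            # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])

    best = pop[int(np.argmax([fitness(c) for c in pop]))]
    print("wavelengths kept:", best.sum(), "LOO accuracy:", knn_accuracy(best))
    ```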

  15. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    Science.gov (United States)

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods: sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of

  16. A rapid and sensitive analytical method for the determination of 14 pyrethroids in water samples.

    Science.gov (United States)

    Feo, M L; Eljarrat, E; Barceló, D

    2010-04-09

    A simple, efficient and environmentally friendly analytical methodology is proposed for extracting and preconcentrating pyrethroids from water samples prior to gas chromatography-negative ion chemical ionization mass spectrometry (GC-NCI-MS) analysis. Fourteen pyrethroids were selected for this work: bifenthrin, cyfluthrin, lambda-cyhalothrin, cypermethrin, deltamethrin, esfenvalerate, fenvalerate, fenpropathrin, tau-fluvalinate, permethrin, phenothrin, resmethrin, tetramethrin and tralomethrin. The method is based on ultrasound-assisted emulsification-extraction (UAEE) of a water-immiscible solvent in an aqueous medium. Chloroform was used as the extraction solvent in the UAEE technique. Target analytes were quantitatively extracted, achieving an enrichment factor of 200 when a 20 mL aliquot of pure water spiked with pyrethroid standards was extracted. The method was also evaluated with tap water and river water samples. Method detection limits (MDLs) ranged from 0.03 to 35.8 ng L(-1), with satisfactory RSD values and correlation coefficients ≥0.998. Recovery values were in the range of 45-106%, showing satisfactory robustness of the method for analyzing pyrethroids in water samples. The proposed methodology was applied to the analysis of river water samples. Cypermethrin was detected at concentration levels ranging from 4.94 to 30.5 ng L(-1). Copyright 2010 Elsevier B.V. All rights reserved.

  17. A DOE manual: DOE Methods for Evaluating Environmental and Waste Management Samples

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Riley, R.G.

    1994-01-01

    Waste management inherently requires knowledge of the waste's chemical composition. The waste can often be analyzed by established methods; however, if the samples are radioactive, or are plagued by other complications, established methods may not be feasible. The US Department of Energy (DOE) has been faced with managing some waste types that are not amenable to standard or available methods, so new or modified sampling and analysis methods are required. These methods are incorporated into DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods), which is a guidance/methods document for sampling and analysis activities in support of DOE sites. It is a document generated by consensus of the DOE laboratory staff and is intended to fill the gap within existing guidance documents (e.g., the Environmental Protection Agency's (EPA's) Test Methods for Evaluating Solid Waste, SW-846), which apply to low-level or non-radioactive samples. DOE Methods fills the gap by including methods that take into account the complexities of DOE site matrices. The most recent update, distributed in October 1993, contained quality assurance (QA), quality control (QC), safety, sampling, organic analysis, inorganic analysis, and radioanalytical guidance as well as 29 methods. The next update, which will be distributed in April 1994, will contain 40 methods and will therefore have greater applicability. All new methods are either peer reviewed or labeled ''draft'' methods. Draft methods were added to speed the release of methods to field personnel.

  18. Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge

    Science.gov (United States)

    Kumar, Ashutosh; Zhang, Kam Y. J.

    2012-05-01

    The SAMPL3 fragment-based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods and screening protocols in a blind testing environment. We participated in the SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component, by screening a 500-fragment Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach would be in a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved if a set of known active compounds is available and parameters and methods that yield better enrichment are selected. Our study also highlighted that to achieve accurate orientation and conformation of ligands within a binding site, selecting an appropriate method to calculate partial charges is important. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from the SAMPL3 fragment screening challenge, we anticipate that the chances of success in a fragment screening process could be increased significantly with careful selection of receptor structures, protein flexibility, sufficient conformational sampling within the binding pocket and accurate assignment of ligand and protein partial charges.

  19. Sampling and sample preparation methods for the analysis of trace elements in biological material

    International Nuclear Information System (INIS)

    Sansoni, B.; Iyengar, V.

    1978-05-01

    The authors attempt to give as systematic a treatment as possible of the sample taking and sample preparation of biological material (particularly in human medicine) for trace analysis (e.g. neutron activation analysis, atomic absorption spectrometry). Contamination and loss problems are discussed, as well as the manifold problems arising from the different consistency of solid and liquid biological materials and the stabilization of the sample material. The processes of dry and wet ashing are dealt with in particular, and new methods are also described. (RB) [de

  20. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    KAUST Repository

    Beck, Joakim

    2018-02-19

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized for a specified error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a single-loop Monte Carlo method that uses the Laplace approximation of the return value of the inner loop. The first demonstration example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
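    The core idea (replace the prior-sampled inner loop with a Laplace-style posterior proposal plus importance weights) can be shown on a toy one-dimensional linear-Gaussian model, where the Laplace approximation happens to be exact and the expected information gain (EIG) has a closed form. This is an assumption-laden illustration, not the paper's estimator or its optimized parameter choices.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    sp, se = 2.0, 0.5              # prior std of theta and observation noise std
    N, M = 2000, 50                # outer and inner Monte Carlo sample sizes

    def evidence_prior(y):
        """Classical inner loop: average the likelihood over prior samples."""
        th = rng.normal(0.0, sp, M)
        return norm.pdf(y, th, se).mean()

    def evidence_laplace_is(y):
        """Inner loop with importance sampling from the Laplace (here: exact
        posterior) approximation; weights are prior density over proposal density."""
        var_post = 1.0 / (1.0 / sp**2 + 1.0 / se**2)
        mu_post = var_post * y / se**2
        th = rng.normal(mu_post, np.sqrt(var_post), M)
        w = norm.pdf(th, 0.0, sp) / norm.pdf(th, mu_post, np.sqrt(var_post))
        return (w * norm.pdf(y, th, se)).mean()

    def eig(evidence):
        total = 0.0
        for _ in range(N):                     # outer loop over (theta, y) from the joint
            th = rng.normal(0.0, sp)
            yy = rng.normal(th, se)
            total += np.log(norm.pdf(yy, th, se)) - np.log(evidence(yy))
        return total / N

    print("closed-form EIG:", 0.5 * np.log(1.0 + sp**2 / se**2))
    print("DLMC, prior inner loop:", eig(evidence_prior))
    print("DLMC, Laplace-IS inner loop:", eig(evidence_laplace_is))
    ```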

  1. Quadrature demodulation based circuit implementation of pulse stream for ultrasonic signal FRI sparse sampling

    International Nuclear Information System (INIS)

    Shoupeng, Song; Zhou, Jiang

    2017-01-01

    Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms; no hardware circuit that can achieve it has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. After FRI sparse sampling theory is elaborated, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extracting techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, oscillator, low pass filter (LPF), and root of square sum module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert an ultrasonic signal to an ultrasonic pulse stream and retrieve the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry. (paper)
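    A minimal sketch of the demodulation chain the record describes (mixing, low-pass filtering, root of square sum) on a synthetic two-echo ultrasonic trace follows; the centre frequency, filter order and cut-off are illustrative assumptions, not the paper's circuit values.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs, f0 = 100e6, 5e6                     # sampling rate, transducer centre frequency
    t = np.arange(0, 20e-6, 1 / fs)

    def echo(t0, amp):                      # Gaussian-windowed burst at time-of-arrival t0
        return amp * np.exp(-((t - t0) / 0.4e-6) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

    x = echo(4e-6, 1.0) + echo(11e-6, 0.6)
    x += 0.02 * np.random.default_rng(3).standard_normal(t.size)

    # Quadrature demodulation: mix with in-phase/quadrature carriers, low-pass both
    # rails, then take the root of the square sum to obtain the pulse stream (envelope)
    i = x * np.cos(2 * np.pi * f0 * t)
    q = x * np.sin(2 * np.pi * f0 * t)
    b, a = butter(4, 1e6 / (fs / 2))        # cut-off well below 2*f0 rejects mixing images
    env = 2.0 * np.sqrt(filtfilt(b, a, i) ** 2 + filtfilt(b, a, q) ** 2)

    print(f"strongest echo at {t[np.argmax(env)] * 1e6:.2f} us, amplitude {env.max():.2f}")
    ```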

  2. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    Science.gov (United States)

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.

  3. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Kugelman

    Full Text Available Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10(-5)) of all compared methods.

  4. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    Science.gov (United States)

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10(-5)) of all compared methods. PMID:28182717

  5. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    Quality of the analytical data obtained for large-scale and long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies, applied to large-scale samples and using liquid chromatography coupled with different detector types as the core analytical technique. The main sample preparation methods included in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed by analytical performance, fields of application, advantages and disadvantages. The cited literature covers mainly the analytical achievements of the last decade, although several previous papers became more valuable in time and they are included in this review. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Comparison of alkali fusion and acid digestion methods for radiochemical separation of Uranium from dietary samples

    International Nuclear Information System (INIS)

    Kamesh Viswanathan, B.; Arunachalam, Kantha D.; Sathesh Kumar, A.; Jayakrishana, K.; Shanmugamsundaram, H.; Rao, D.D.

    2014-01-01

    Several methods exist for the separation and measurement of uranium in dietary samples, such as neutron activation analysis (NAA), alpha spectrometric determination, inductively coupled plasma mass spectrometry (ICP-MS) and fluorimetry. For qualitative determination of activity, NAA and alpha spectrometry are said to be superior for evaluating the isotopes of uranium ( 238 U, 234 U and 235 U). In the case of alpha spectrometry, the samples have to undergo radiochemical analysis to separate uranium from other elements for detection. In our studies, uranium was determined in food matrices by acid digestion (AD) and alkali fusion (AF) methods. The recovery yield of uranium in food matrices was compared in order to get a consistent yield. The average activity levels of 238 U and 234 U in food samples were calculated based on the recovery yield of 232 U in the samples. The average recovery of 232 U in the AD method was 22 ± 8% and in the AF method it was 14.9 ± 1.3%. The spread about the mean was larger for the AD method than for the AF method. The lowest recovery of 232 U was found in the AF method; this is due to the interference of other elements in the sample during electroplating. Experimental results showed that uranium separation by the AD method has better recovery than the AF method, while the consistency of the 232 U recovery was better for the AF method. However, overall for both methods the recovery can be termed poor, and rigorous follow-up studies are needed to obtain consistently higher recoveries (>50%) in these types of biological samples. There are reports indicating satisfactory recoveries of around 80% with 232 U as tracer in food matrices

  7. A Statistic-Based Calibration Method for TIADC System

    Directory of Open Access Journals (Sweden)

    Kuojun Yang

    2015-01-01

    Full Text Available Time-interleaved technique is widely used to increase the sampling rate of analog-to-digital converters (ADC). However, channel mismatches degrade the performance of the time-interleaved ADC (TIADC). Therefore, a statistic-based calibration method for TIADC is proposed in this paper. The average value of sampling points is utilized to calculate the offset error, and the summation of sampling points is used to calculate the gain error. After the offset and gain errors are obtained, they are calibrated by offset and gain adjustment elements in the ADC. Timing skew is calibrated by an iterative method. The product of sampling points of two adjacent subchannels is used as a metric for calibration. The proposed method is employed to calibrate mismatches in a four-channel 5 GS/s TIADC system. Simulation results show that the proposed method can estimate mismatches accurately in a wide frequency range. It is also proved that an accurate estimation can be obtained even if the signal-to-noise ratio (SNR) of the input signal is 20 dB. Furthermore, the results obtained from a real four-channel 5 GS/s TIADC system demonstrate the effectiveness of the proposed method. We can see that the spectral spurs due to mismatches have been effectively eliminated after calibration.
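    The offset and gain statistics can be sketched in a few lines: per-channel means estimate offsets, and a per-channel summation statistic (here the mean absolute value after offset removal) estimates relative gains. Timing-skew calibration, which the record handles iteratively via products of adjacent-channel samples, is omitted, and all mismatch values below are invented for illustration.

    ```python
    import numpy as np

    M, N = 4, 1 << 16                        # number of channels, total samples
    n = np.arange(N)
    x = np.sin(2 * np.pi * 0.1234 * n)       # input as seen by the interleaved ADC

    # Impose per-channel offset and gain mismatches (sample n goes to channel n mod M)
    offset = np.array([0.02, -0.05, 0.01, 0.03])
    gain = np.array([1.00, 1.04, 0.97, 1.02])
    y = gain[n % M] * x + offset[n % M]

    # Offsets from per-channel averages of the sampling points
    est_off = np.array([y[n % M == m].mean() for m in range(M)])
    y0 = y - est_off[n % M]

    # Relative gains from a per-channel summation statistic (mean |sample|),
    # referenced to channel 0
    est_gain = np.array([np.abs(y0[n % M == m]).mean() for m in range(M)])
    est_gain = est_gain / est_gain[0]

    y_cal = y0 / est_gain[n % M]             # calibrated output
    print("max offset estimation error:", np.abs(est_off - offset).max())
    print("max gain estimation error:", np.abs(est_gain - gain / gain[0]).max())
    ```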

  8. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    Science.gov (United States)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.

  9. Use of Language Sample Analysis by School-Based SLPs: Results of a Nationwide Survey

    Science.gov (United States)

    Pavelko, Stacey L.; Owens, Robert E., Jr.; Ireland, Marie; Hahs-Vaughn, Debbie L.

    2016-01-01

    Purpose: This article examines use of language sample analysis (LSA) by school-based speech-language pathologists (SLPs), including characteristics of language samples, methods of transcription and analysis, barriers to LSA use, and factors affecting LSA use, such as American Speech-Language-Hearing Association certification, number of years'…

  10. Theory of sampling and its application in tissue based diagnosis

    Directory of Open Access Journals (Sweden)

    Kayser Gian

    2009-02-01

    Full Text Available Abstract Background A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on the search for the localization of specific compartments within the basic space, and the search for the presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes, two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as the size of the searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation-invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationship. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, numerical, boundary and surface densities. Stratified sampling requires the knowledge of objects (and their features) and evaluates spatial features in relation to

  11. Single- versus multiple-sample method to measure glomerular filtration rate.

    Science.gov (United States)

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

    There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having their own strengths and limitations. However, not only the marker, but also the methodology may vary in many ways, including the use of urinary or plasma clearance, and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight (or more) blood samples. We collected the results obtained from 5106 plasma clearances (iohexol or 51Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation using the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method showed within-10% concordance with the multiple-sample method of 66.4%, 83.6%, 91.4% and 96.0% at the time points 120, 180, 240 and ≥300 min, respectively. Concordance was poorer at lower GFR levels, and this trend ran in parallel with increasing age. Results were similar in males and females. Some discordance was found in obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
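    For concreteness, a slope-intercept calculation with the Bröchner-Mortensen correction can be sketched as below; the plasma values are invented, and the correction coefficients are the commonly quoted adult values, assumed here rather than taken from the record (the Jacobsson single-sample formula itself is omitted).

    ```python
    import numpy as np

    # Late plasma samples after a bolus injection (invented values): times in minutes,
    # concentrations in activity per mL, dose in the same activity units
    t = np.array([120.0, 180.0, 240.0])
    c = np.array([1.20e-3, 0.80e-3, 0.54e-3])
    dose = 40.0

    # Slope-intercept method: mono-exponential fit to the late samples
    slope, intercept = np.polyfit(t, np.log(c), 1)
    lam, c0 = -slope, np.exp(intercept)
    cl_si = dose * lam / c0            # clearance = dose / AUC, with AUC = c0 / lambda

    # Broechner-Mortensen correction for the missed early (mixing) exponential;
    # coefficients are the widely quoted adult values -- an assumption here
    gfr = 0.990778 * cl_si - 0.001218 * cl_si ** 2
    print(f"slope-intercept: {cl_si:.1f} mL/min  corrected GFR: {gfr:.1f} mL/min")
    ```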

  12. Surveying immigrants without sampling frames - evaluating the success of alternative field methods.

    Science.gov (United States)

    Reichel, David; Morales, Laura

    2017-01-01

    This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sample frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples of the survey in these five countries are compared to results of official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. It can be shown that alternative sampling methods can provide meaningful results in terms of core demographic characteristics although some estimates differ to some extent from the census results.

  13. Assessing the composition of fragmented agglutinated foraminiferal assemblages in ancient sediments: comparison of counting and area-based methods in Famennian samples (Late Devonian)

    Science.gov (United States)

    Girard, Catherine; Dufour, Anne-Béatrice; Charruault, Anne-Lise; Renaud, Sabrina

    2018-01-01

    Benthic foraminifera have been used as proxies for various paleoenvironmental variables such as food availability, carbon flux from surface waters, microhabitats, and indirectly water depth. Estimating assemblage composition based on morphotypes, as opposed to genus- or species-level identification, potentially loses important ecological information but opens the way to the study of ancient time periods. However, the ability to accurately constrain benthic foraminiferal assemblages has been questioned when the most abundant foraminifera are fragile agglutinated forms, particularly prone to fragmentation. Here we test an alternate method for accurately estimating the composition of fragmented assemblages. The cumulated area per morphotype method is assessed, i.e., the sum of the area of all tests or fragments of a given morphotype in a sample. The percentage of each morphotype is calculated as a portion of the total cumulated area. Percentages of different morphotypes based on counting and cumulated area methods are compared one by one and analyzed using principal component analyses, a co-inertia analysis, and Shannon diversity indices. Morphotype percentages are further compared to an estimate of water depth based on microfacies description. Percentages of the morphotypes are not related to water depth. In all cases, counting and cumulated area methods deliver highly similar results, suggesting that the less time-consuming traditional counting method may provide robust estimates of assemblages. The size of each morphotype may deliver paleobiological information, for instance regarding biomass, but should be considered carefully due to the pervasive issue of fragmentation.
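    The cumulated-area bookkeeping is straightforward; the toy sketch below, with invented morphotype labels and test/fragment areas, shows how the two sets of percentages are formed and why the area-based one is insensitive to fragmentation.

    ```python
    import numpy as np

    # One entry per test or fragment: morphotype label and measured area (mm^2); invented
    morphotype = np.array(["tubular", "tubular", "globular", "globular", "globular", "coiled"])
    area = np.array([0.40, 0.35, 0.10, 0.12, 0.08, 0.25])

    for m in np.unique(morphotype):
        sel = morphotype == m
        pct_count = 100.0 * sel.sum() / len(morphotype)    # counting method
        pct_area = 100.0 * area[sel].sum() / area.sum()    # cumulated-area method
        print(f"{m:9s} count: {pct_count:5.1f}%   area: {pct_area:5.1f}%")
    ```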

  14. Sampling and analysis methods for geothermal fluids and gases

    Energy Technology Data Exchange (ETDEWEB)

    Shannon, D. W.

    1978-01-01

    The data obtained for the first round robin sample collected at Mesa 6-2 wellhead, East Mesa Test Site, Imperial Valley are summarized. Test results are listed by method used for cross reference to the analytic methods section. Results obtained for radioactive isotopes present in the brine sample are tabulated. The data obtained for the second round robin sample collected from the Woolsey No. 1 first stage flash unit, San Diego Gas and Electric Niland Test Facility are presented in the same manner. Lists of the participants of the two round robins are given. Data from miscellaneous analyses are included. Summaries of values derived from the round robin raw data are presented. (MHR)

  15. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields in the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137 Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  16. The use of Geographic Information System (GIS) and non-GIS methods to assess the external validity of samples postcollection.

    Science.gov (United States)

    Richardson, Esther; Good, Margaret; McGrath, Guy; More, Simon J

    2009-09-01

    External validity is fundamental to veterinary diagnostic investigation, reflecting the accuracy with which sample results can be extrapolated to a broader population of interest. Probability sampling methods are routinely used during the collection of samples from populations, specifically to maximize external validity. Nonprobability sampling (e.g., of blood samples collected as part of routine surveillance programs or laboratory submissions) may provide useful data for further posthoc epidemiological analysis, adding value to the collection and submission of samples. As the sample has already been submitted, the analyst or investigator does not have any control over the sampling methodology, and hence external validity may be compromised, as routine probability sampling methods may not have been employed. The current study describes several Geographic Information System (GIS) and non-GIS methods, applied posthoc, to assess the external validity of samples collected using both probability and nonprobability sampling methods. These methods could equally be employed for inspecting other datasets. Mapping was conducted using ArcView 9.1. Based on this posthoc assessment, results from the random field sample could provide an externally valid, albeit relatively imprecise, estimate of national disease prevalence, of disease prevalence in 3 of the 4 provinces (all but Ulster, in the north and northwest, where sample size was small), and in beef and dairy herds. This study provides practical methods for examining the external validity of samples postcollection.

  17. Method for Measuring Thermal Conductivity of Small Samples Having Very Low Thermal Conductivity

    Science.gov (United States)

    Miller, Robert A.; Kuczmarski, Maria A.

    2009-01-01

    This paper describes the development of a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of air. As with other approaches, care is taken to ensure that the heat flow through the test sample is essentially one-dimensional. However, unlike other approaches, no attempt is made to use heated guards to block the flow of heat from the hot plate to the surroundings. It is argued that since large correction factors must be applied to account for guard imperfections when sample dimensions are small, it may be preferable to simply measure and correct for the heat that flows from the heater disc to directions other than into the sample. Experimental measurements taken in a prototype apparatus, combined with extensive computational modeling of the heat transfer in the apparatus, show that sufficiently accurate measurements can be obtained to allow determination of the thermal conductivity of low thermal conductivity materials. Suggestions are made for further improvements in the method based on results from regression analyses of the generated data.
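    In essence the measurement reduces to Fourier's law applied to the net (loss-corrected) heat flow; a sketch with invented numbers, assuming the parasitic flow from the heater disc to the surroundings has been quantified separately as the authors propose:

    ```python
    # Steady-state estimate via Fourier's law on the net heat flow; all values invented
    q_total = 0.150          # W supplied to the heater disc
    q_loss = 0.137           # W estimated to flow to the surroundings, not into the sample
    area = 1.0e-4            # m^2, sample cross-section
    thickness = 2.0e-3       # m
    delta_t = 10.0           # K, temperature drop across the sample

    k = (q_total - q_loss) * thickness / (area * delta_t)
    print(f"thermal conductivity: {k:.4f} W/(m*K)")   # ~0.026, the order of still air
    ```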

  18. Gap-Filling of Landsat 7 Imagery Using the Direct Sampling Method

    Directory of Open Access Journals (Sweden)

    Gaohong Yin

    2016-12-01

    Full Text Available The failure of the Scan Line Corrector (SLC) on Landsat 7 imposed systematic data gaps on retrieved imagery and removed the capacity to provide spatially continuous fields. While a number of methods have been developed to fill these gaps, most of the proposed techniques are only applicable over relatively homogeneous areas. When they are applied to heterogeneous landscapes, retrieving image features and elements can become challenging. Here we present a gap-filling approach that is based on the adoption of the Direct Sampling multiple-point geostatistical method. The method employs a conditional stochastic resampling of known areas in a training image to simulate unknown locations. The approach is assessed across a range of both homogeneous and heterogeneous regions. Simulation results show that for homogeneous areas, satisfactory results can be obtained by simply adopting non-gap locations in the target image as baseline training data. For heterogeneous landscapes, bivariate simulations using an auxiliary variable acquired at a different date provide more accurate results than univariate simulations, especially as land cover complexity increases. Apart from recovering spatially continuous fields, one of the key advantages of the Direct Sampling is the relatively straightforward implementation process that relies on relatively few parameters.
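    A stripped-down, univariate version of the Direct Sampling idea (visit each gap node along a random path, randomly scan a training image for a compatible neighborhood, and paste the matched value) can be sketched as follows; the window radius, distance threshold and scan limit are illustrative assumptions, and the bivariate variant the record recommends for heterogeneous scenes is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Training image; the gappy target is a copy of it purely for illustration
    ti = np.add.outer(np.sin(np.linspace(0, 6, 60)), np.cos(np.linspace(0, 6, 60)))
    target = ti.copy()
    target[20:24, :] = np.nan                      # SLC-off-like missing stripe

    def window(img, i, j, r):
        """Flattened (2r+1)^2 neighborhood around (i, j), NaN-padded at the borders."""
        pad = np.pad(img, r, constant_values=np.nan)
        return pad[i:i + 2 * r + 1, j:j + 2 * r + 1].ravel()

    def ds_fill(target, ti, r=3, thresh=0.05, max_scan=2000):
        out = target.copy()
        gaps = np.argwhere(np.isnan(out))
        rng.shuffle(gaps)                          # simulate the gap nodes on a random path
        lo, hi = r, ti.shape[0] - r
        for i, j in gaps:
            pat = window(out, i, j, r)
            known = ~np.isnan(pat)
            best_val, best_d = ti[rng.integers(lo, hi), rng.integers(lo, hi)], np.inf
            for _ in range(max_scan):              # random scan of the training image
                u, v = rng.integers(lo, hi), rng.integers(lo, hi)
                cand = window(ti, u, v, r)
                d = np.mean((pat[known] - cand[known]) ** 2) if known.any() else np.inf
                if d < best_d:
                    best_val, best_d = ti[u, v], d
                if best_d <= thresh:               # accept the first compatible pattern
                    break
            out[i, j] = best_val                   # paste the value at the matched node
        return out

    filled = ds_fill(target, ti)
    print("RMSE over the gap:", np.sqrt(np.mean((filled - ti)[20:24, :] ** 2)))
    ```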

  19. Testing an Impedance Non-destructive Method to Evaluate Steel-Fiber Concrete Samples

    Science.gov (United States)

    Komarkova, Tereza; Fiala, Pavel; Steinbauer, Miloslav; Roubal, Zdenek

    2018-02-01

    Steel-fiber reinforced concrete is a composite material characterized by outstanding tensile properties and resistance to the development of cracks. The concrete, however, exhibits such characteristics only on the condition that the steel fibers in the final, hardened composite have been distributed evenly. The current methods to evaluate the distribution and concentration of a fiber composite are either destructive or exhibit a limited capability of evaluating the concentration and orientation of the fibers. In this context, the paper discusses tests related to the evaluation of the density and orientation of fibers in a composite material. Compared to the approaches used to date, the proposed technique is based on the evaluation of the electrical impedance Z in the band close to the resonance of the sensor-sample configuration. Using analytically expressed equations, we can evaluate the monitored part of the composite and its density at various depths of the tested sample. The method employs test blocks of composites, utilizing the resonance of the measuring device and the measured sample set; the desired state occurs within the interval between f = 3 kHz and f = 400 kHz.

  20. Efficient free energy calculations by combining two complementary tempering sampling methods.

    Science.gov (United States)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden-barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the rest of the DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems whose processes involve hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times, even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.

  1. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Science.gov (United States)

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  2. Chapter 11. Community analysis-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  3. Silver nanoparticles plasmon resonance-based method for the determination of uric acid in human plasma and urine samples

    International Nuclear Information System (INIS)

    Amjadi, M.; Rahimpour, E.

    2012-01-01

    We have developed a simple and sensitive colorimetric procedure for the quantification of trace amounts of uric acid. It is based on the finding that uric acid in a medium containing ammonia and sodium hydroxide at 65 °C can reduce silver ions to form yellow silver nanoparticles (Ag NPs). These are stabilized in solution by using poly(vinyl alcohol) as a capping agent. The yellow color of the solution that results from the localized surface plasmon resonance of the Ag NPs can be observed by the bare eye. The absorbance at 415 nm is proportional to the concentration of uric acid, which therefore can be determined quantitatively. The calibration curve is linear in the concentration range from 10 to 200 nM, with a limit of detection of 3.3 nM. The method was successfully applied to the determination of uric acid in human plasma and urine samples. (author)

  4. MStern Blotting–High Throughput Polyvinylidene Fluoride (PVDF) Membrane-Based Proteomic Sample Preparation for 96-Well Plates*

    OpenAIRE

    Berger, Sebastian T.; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno

    2015-01-01

    We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented the useful 96-well plate implementation of FASP as a widely used mem...

  5. Evaluation of factor for one-point venous blood sampling method based on the causality model

    International Nuclear Information System (INIS)

    Matsutomo, Norikazu; Onishi, Hideo; Kobara, Kouichi; Sasaki, Fumie; Watanabe, Haruo; Nagaki, Akio; Mimura, Hiroaki

    2009-01-01

    The one-point venous blood sampling method (Mimura, et al.) can evaluate the regional cerebral blood flow (rCBF) value with a high degree of accuracy. However, the method is technically complex because it requires a venous blood octanol value, and its accuracy is affected by the factors entering the input function. Therefore, we evaluated these factors in order to determine an accurate input function and to simplify the technique. Input functions were created that use the time-dependent brain counts at 5, 15, and 25 minutes after administration, together with an input function whose objective variable is the arterial octanol value, so as to exclude the venous blood octanol value. The correlation between these functions and the rCBF value obtained by the microsphere (MS) method was then evaluated. Creation of a high-accuracy input function and simplification of the technique proved possible. The rCBF value obtained with the input function whose factor is the time-dependent brain count at 5 minutes after administration, and whose objective variable is the arterial octanol value, had a high correlation with the MS method (y=0.899x+4.653, r=0.842). (author)

  6. Soil separator and sampler and method of sampling

    Science.gov (United States)

    O'Brien, Barry H [Idaho Falls, ID; Ritter, Paul D [Idaho Falls, ID

    2010-02-16

    A soil sampler includes a fluidized bed for receiving a soil sample. The fluidized bed may be in communication with a vacuum for drawing air through the fluidized bed and suspending particulate matter of the soil sample in the air. In a method of sampling, the air may be drawn across a filter, separating the particulate matter. Optionally, a baffle or a cyclone may be included within the fluidized bed for disentrainment, or dedusting, so only the finest particulate matter, including asbestos, will be trapped on the filter. The filter may be removable, and may be tested to determine the content of asbestos and other hazardous particulate matter in the soil sample.

  7. Modification of a two blood sample method used for measurement of GFR with 99mTc-DTPA.

    Science.gov (United States)

    Surma, Marian J; Płachcińska, Anna; Kuśmierek, Jacek

    2018-01-01

    Measurements of GFR may be performed with a slope/intercept method (S/I), using only two blood samples taken at strictly defined time points. The aim of the study was to modify this method in order to extend the time intervals suitable for blood sampling. The modification was based on a variation of a parameter of the Russell et al. model, selection of time intervals suitable for blood sampling, and assessment of the uncertainty of the calculated results. Archived values of GFR measurements of 169 patients with different renal function, from 5.5 to 179 mL/min, calculated with a multiple blood sample method, were used. Concentrations of the radiopharmaceutical in consecutive minutes, from the 60th to the 190th after injection, were calculated theoretically, using archived parameters of the biexponential functions describing the decrease in 99mTc-DTPA concentration in blood plasma with time. These values, together with injected activities, were treated as measurements and used for S/I clearance calculations. Next, values of S/I clearance were compared with the multiple blood sample method in order to calculate suitable values of the exponent present in the Russell model, for every combination of two blood sampling time points. A model was considered accurately fitted to measured values when SEE ≤ 3.6 mL/min. Assessments of the uncertainty of the obtained results were based on the law of error superposition, taking into account the mean square prediction error and also errors introduced by pipetting, time measurement and stochastic radioactive decay. The accepted criteria resulted in extension of the time intervals suitable for blood sampling to: between 60 and 90 minutes after injection for the first sample, and between 150 and 180 minutes for the second sample. The uncertainty of the results was assessed as between 4 mL/min for GFR = 5-10 mL/min and 8 mL/min for GFR = 180 mL/min. The time intervals accepted for blood sampling fully satisfy nuclear medicine staff and ensure proper determination of GFR. Uncertainty of results is entirely

  8. Extraction method based on emulsion breaking for the determination of Cu, Fe and Pb in Brazilian automotive gasoline samples by high-resolution continuum source flame atomic absorption spectrometry

    Science.gov (United States)

    Leite, Clarice C.; de Jesus, Alexandre; Kolling, Leandro; Ferrão, Marco F.; Samios, Dimitrios; Silva, Márcia M.

    2018-04-01

    This work reports a new method for the extraction of Cu, Fe and Pb from Brazilian automotive gasoline and their determination by high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). The method was based on the formation of a water-in-oil emulsion by mixing 2.0 mL of extraction solution constituted by 12% (w/v) Triton X-100 and 5% (v/v) HNO3 with 10 mL of sample. After heating at 90 °C for 10 min, two well-defined phases were formed. The bottom phase (approximately 3.5 mL), composed of acidified water and part of the ethanol originally present in the gasoline sample, containing the extracted analytes, was analyzed. The surfactant and HNO3 concentrations and the heating temperature employed in the process were optimized by Doehlert design, using a Brazilian gasoline sample spiked with Cu, Fe and Pb (organometallic compounds). The efficiency of extraction was investigated and ranged from 80 to 89%. The calibration was accomplished using the matrix matching method: the standards were obtained by performing the same extraction procedure used for the sample, using emulsions obtained with a gasoline sample free of analytes and the addition of inorganic standards. Limits of detection were 3.0, 5.0 and 14.0 μg L-1 for Cu, Fe and Pb, respectively. These limits were estimated for the original sample, taking into account the preconcentration factor obtained. The accuracy of the proposed method was assured by recovery tests spiking the samples with organometallic standards; the obtained values ranged from 98 to 105%. Ten gasoline samples were analyzed; Fe was found in four samples (0.04-0.35 mg L-1), while Cu (0.28 mg L-1) and Pb (0.60 mg L-1) were each found in just one sample.

  9. A new method for automatic discontinuity traces sampling on rock mass 3D model

    Science.gov (United States)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

    A new automatic method for discontinuity traces mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, that usually characterize the trace mapping on images, are eliminated. Also trace sampling procedures based on circular windows and circular scanlines have been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained applying the automatic procedure on the DSM of a rock face are compared to those obtained performing a manual sampling on the orthophotograph of the same rock face.

  10. Culture methods of allograft musculoskeletal tissue samples in Australian bacteriology laboratories.

    Science.gov (United States)

    Varettas, Kerry

    2013-12-01

    Samples of allograft musculoskeletal tissue are cultured by bacteriology laboratories to determine the presence of bacteria and fungi. In Australia, this testing is performed by six TGA-licensed clinical bacteriology laboratories with samples received from 10 tissue banks. Culture methods of swab and tissue samples employ a combination of solid agar and/or broth media to enhance micro-organism growth and maximise recovery. All six Australian laboratories receive Amies transport swabs and, except for one laboratory, a corresponding biopsy sample for testing. Three of the six laboratories culture at least one allograft sample directly onto solid agar. Only one laboratory did not use a broth culture for any sample received. An international literature review found that a similar combination of musculoskeletal tissue samples was cultured onto solid agar and/or broth media. Although variations in allograft musculoskeletal tissue samples, culture media and methods exist across Australian and international bacteriology laboratories, validation studies and method evaluations have challenged and supported their use in recovering fungi and aerobic and anaerobic bacteria.

  11. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  12. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  13. Development of analytical methods for the separation of plutonium, americium, curium and neptunium from environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Salminen, S.

    2009-07-01

    In this work, separation methods have been developed for the analysis of the anthropogenic transuranium elements plutonium, americium, curium and neptunium in environmental samples contaminated by global nuclear weapons testing and the Chernobyl accident. The analytical methods utilized in this study are based on extraction chromatography. Highly varying atmospheric plutonium isotope concentrations and activity ratios were found at both Kurchatov (Kazakhstan), near the former Semipalatinsk test site, and Sodankylae (Finland). The origin of plutonium is almost impossible to identify at Kurchatov, since hundreds of nuclear tests were performed at the Semipalatinsk test site. In Sodankylae, the plutonium in the surface air originated from nuclear weapons testing, conducted mostly by the USSR and the USA before the sampling year 1963. The variation in americium, curium and neptunium concentrations was also great in peat samples collected in southern and central Finland in 1986 immediately after the Chernobyl accident. The main source of transuranium contamination in the peats was global nuclear test fallout, although there are wide regional differences in the fraction of Chernobyl-derived activity (of the total activity) for americium, curium and neptunium. The separation methods developed in this study yielded good chemical recovery for the elements investigated and adequately pure fractions for radiometric activity determination. The extraction chromatographic methods were faster than older methods based on ion exchange chromatography. In addition, extraction chromatography is a more environmentally friendly separation method than ion exchange, because less acidic waste solution is produced during the analytical procedures. (orig.)

  14. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

    Based on experiences of a traffic flow performance meter, this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates, for example, time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling, are evaluated with respect to accuracy and precision. Packet delay traces have been ...

  15. rCBF measurement by one-point venous sampling with the ARG method

    International Nuclear Information System (INIS)

    Yoshida, Nobuhiro; Okamoto, Toshiaki; Takahashi, Hidekado; Hattori, Teruo

    1997-01-01

    We investigated the possibility of using venous blood sampling instead of arterial blood sampling for the current method of ARG (autoradiography) used to determine regional cerebral blood flow (rCBF) on the basis of one session of arterial blood sampling and SPECT. For this purpose, the ratio of the arterial blood radioactivity count to the venous blood radioactivity count, the coefficient of variation, and the correlation and differences between arterial blood-based rCBF and venous blood-based rCBF were analyzed. The coefficient of variation was lowest (4.1%) 20 minutes after injection into the dorsum manus. When the relationship between venous and arterial blood counts was analyzed, arterial blood counts correlated well with venous blood counts collected at the dorsum manus 20 or 30 minutes after intravenous injection and with venous blood counts collected at the wrist 20 minutes after intravenous injection (r=0.97 or higher). The difference from rCBF determined on the basis of arterial blood was smallest (0.7) for rCBF determined on the basis of venous blood collected at the dorsum manus 20 minutes after intravenous injection. (author)

  16. Algorithm/Architecture Co-design of the Generalized Sampling Theorem Based De-Interlacer.

    NARCIS (Netherlands)

    Beric, A.; Haan, de G.; Sethuraman, R.; Meerbergen, van J.

    2005-01-01

    De-interlacing is a major determinant of image quality in a modern display processing chain. The de-interlacing method based on the generalized sampling theorem (GST) applied to motion estimation and motion compensation provides the best de-interlacing results. With HDTV interlaced input material

  17. Fast Reduction Method in Dominance-Based Information Systems

    Science.gov (United States)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real-world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves the efficiency of the traditional method, especially for large-scale data.
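
    For readers unfamiliar with dominance classes, the following is a naive baseline sketch of the computation the paper accelerates: for each object, collect the objects that weakly dominate it on all (gain-type) condition attributes. The paper's faster algorithm is not reproduced here; this O(n^2) version only fixes the definition.

```python
import numpy as np

def dominating_sets(X):
    """For each object i, return the set of objects j that dominate it,
    i.e. X[j, a] >= X[i, a] on every (gain-type) attribute a.
    Naive O(n^2) baseline -- the paper's contribution is a faster scheme."""
    n = X.shape[0]
    return [set(np.flatnonzero((X >= X[i]).all(axis=1))) for i in range(n)]

# Toy decision table: 5 objects, 3 preference-ordered condition attributes
X = np.array([[3, 2, 1],
              [2, 2, 1],
              [3, 3, 2],
              [1, 1, 1],
              [2, 3, 2]])
for i, dom in enumerate(dominating_sets(X)):
    print(f"objects dominating x{i}: {sorted(dom)}")
```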

  18. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach was practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency.
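
    The generic simulated annealing loop behind such an optimization can be sketched as follows. The neighbour move, the cooling schedule and the spatial-spread objective are illustrative assumptions, not the paper's actual objective for soil-landscape representativeness or its road-distance constraints.

```python
import math, random

def anneal_sampling_scheme(candidates, n_sites, objective,
                           t0=1.0, cooling=0.995, steps=5000):
    """Pick n_sites locations from road-accessible candidate points by
    simulated annealing. 'objective' scores a configuration (lower is
    better); the schedule values here are illustrative defaults."""
    current = random.sample(candidates, n_sites)
    best, best_cost = current[:], objective(current)
    cost, t = best_cost, t0
    for _ in range(steps):
        # Propose a neighbour: swap one chosen site for an unused candidate
        proposal = current[:]
        proposal[random.randrange(n_sites)] = random.choice(
            [c for c in candidates if c not in current])
        new_cost = objective(proposal)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
            current, cost = proposal, new_cost
            if cost < best_cost:
                best, best_cost = current[:], cost
        t *= cooling
    return best

# Illustrative objective: negative mean pairwise distance (favours spread)
def spread(cfg):
    d = [math.dist(a, b) for i, a in enumerate(cfg) for b in cfg[i + 1:]]
    return -sum(d) / len(d)

pts = [(random.random(), random.random()) for _ in range(200)]
scheme = anneal_sampling_scheme(pts, n_sites=13, objective=spread)
```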

  19. Assessment of dust sampling methods for the study of cultivable-microorganism exposure in stables.

    NARCIS (Netherlands)

    Normand, A.C.; Vacheyrou, M.; Sudre, B.; Heederik, D.|info:eu-repo/dai/nl/072910542; Piarroux, R.

    2009-01-01

    Studies have shown a link between living on a farm, exposure to microbial components (e.g., endotoxins or beta-d-glucans), and a lower risk for allergic diseases and asthma. Due to the lack of validated sampling methods, studies of asthma and atopy have not relied on exposure assessment based on

  20. Sensitive spectrophotometric methods for determination of some organophosphorus pesticides in vegetable samples

    Directory of Open Access Journals (Sweden)

    MAGDA A. AKL

    2010-03-01

    Full Text Available Three rapid, simple, reproducible and sensitive spectrophotometric methods (A, B and C) are described for the determination of two organophosphorus pesticides (malathion and dimethoate) in formulations and vegetable samples. Methods A and B involve the addition of an excess of Ce4+ in sulphuric acid medium and the determination of the unreacted oxidant by the decrease in the red color of chromotrope 2R (C2R) at λmax = 528 nm for method A, or the decrease in the orange-pink color of rhodamine 6G (Rh6G) at λmax = 525 nm for method B. Method C is based on the oxidation of malathion or dimethoate with a slight excess of N-bromosuccinimide (NBS) and the determination of the unreacted oxidant by reacting it with amaranth dye (AM) in hydrochloric acid medium at λmax = 520 nm. Regression analysis of the Beer-Lambert plots showed good correlation in the concentration range of 0.1-4.2 μg mL−1. The apparent molar absorptivity, Sandell sensitivity, and the detection and quantification limits were calculated. For more accurate analysis, the Ringbom optimum concentration range is 0.25-4.0 μg mL−1. The developed methods were successfully applied to the determination of malathion and dimethoate in their formulations and in environmental vegetable samples.

  1. Detection of protozoa in water samples by formalin/ether concentration method.

    Science.gov (United States)

    Lora-Suarez, Fabiana; Rivera, Raul; Triviño-Valencia, Jessica; Gomez-Marin, Jorge E

    2016-09-01

    Methods to detect protozoa in water samples are expensive and laborious. We evaluated the formalin/ether concentration method to detect Giardia sp., Cryptosporidium sp. and Toxoplasma in water. In order to test the properties of the method, we spiked water samples with different amounts of each protozoan (0, 10 and 50 cysts or oocysts) in a volume of 10 L of water. An immunofluorescence assay was used for detection of Giardia and Cryptosporidium. Toxoplasma oocysts were identified by morphology. The mean percent recovery over 10 repetitions of the entire method, in 10 samples spiked with ten parasites and read by three different observers, was 71.3 ± 12 for Cryptosporidium, 63 ± 10 for Giardia and 91.6 ± 9 for Toxoplasma, and the relative standard deviation of the method was 17.5, 17.2 and 9.8, respectively. Intraobserver variation, as measured by the intraclass correlation coefficient, was fair for Toxoplasma, moderate for Cryptosporidium and almost perfect for Giardia. The method was then applied to 77 samples of raw and drinkable water from three different water treatment plants. Cryptosporidium was found in 28 of 77 samples (36%) and Giardia in 31 of 77 samples (40%). These results identified significant differences in the treatment processes to reduce the presence of Giardia and Cryptosporidium. In conclusion, the formalin/ether method to concentrate protozoa in water is a new alternative for low-resource countries, where there is an urgent need to monitor and follow the presence of these protozoa in drinkable water. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Materials and Methods for Streamlined Laboratory Analysis of Environmental Samples, FY 2016 Report

    Energy Technology Data Exchange (ETDEWEB)

    Addleman, Raymond S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Naes, Benjamin E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McNamara, Bruce K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Olsen, Khris B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chouyyok, Wilaiwan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Willingham, David G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Spigner, Angel C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-11-30

    The International Atomic Energy Agency (IAEA) relies upon laboratory analysis of environmental samples (typically referred to as “swipes”) collected during on-site inspections of safeguarded facilities to support the detection and deterrence of undeclared activities. Unfortunately, chemical processing and assay of the samples is slow and expensive. A rapid, effective, and simple extraction process and analysis method is needed to provide certified results with improved timeliness at reduced costs (principally in the form of reduced labor), while maintaining or improving sensitivity and efficacy. To address these safeguards needs, the Pacific Northwest National Laboratory (PNNL) explored and demonstrated improved methods for environmental sample (ES) analysis. Improvements for both bulk and particle analysis were explored. To facilitate continuity and adoption, the new sampling materials and processing methods are compatible with existing IAEA protocols for ES analysis. PNNL collaborated with Oak Ridge National Laboratory (ORNL), which performed independent validation of the new bulk analysis methods and compared their performance to the traditional protocol of the IAEA's Network of Analytical Laboratories (NWAL). ORNL efforts are reported separately. This report describes PNNL's FY 2016 progress, which was focused on analytical applications supporting environmental monitoring of uranium enrichment plants and nuclear fuel processing. In the future, the technology could be applied to other safeguards applications and analytes related to fuel manufacturing, reprocessing, etc. PNNL's FY 2016 efforts were broken into two tasks, and a summary of progress, accomplishments and highlights is provided below. Principal progress and accomplishments on Task 1, Optimize Materials and Methods for ICP-MS Environmental Sample Analysis, are listed below. • Completed initial procedure for rapid uranium extraction from ES swipes based upon carbonate-peroxide chemistry (delivered to ORNL for

  3. Novel sample preparation method for surfactant containing suppositories: effect of micelle formation on drug recovery.

    Science.gov (United States)

    Kalmár, Éva; Ueno, Konomi; Forgó, Péter; Szakonyi, Gerda; Dombi, György

    2013-09-01

    Rectal drug delivery is currently a focus of attention. Surfactants promote drug release from suppository bases and enhance the formulation properties. The aim of our work was to develop a sample preparation method for HPLC analysis of a suppository base containing 95% hard fat, 2.5% Tween 20 and 2.5% Tween 60. A conventional sample preparation method did not provide successful results, as the recovery of the drug failed to fulfil the validation criterion of 95-105%. This was caused by the non-ionic surfactants in the suppository base incorporating some of the drug and preventing its release. As guidance for the formulation from an analytical aspect, we suggest a well-defined surfactant content based on the turbidimetric determination of the CMC (critical micelle formation concentration) in the applied methanol-water solvent. Our CMC data correlate well with the results of previous studies. As regards the sample preparation procedure, the effects of ionic strength and pH on drug recovery were studied while avoiding degradation of the drug during the procedure. Aminophenazone and paracetamol were used as model drugs. The optimum conditions for drug release from the molten suppository base were found to be 100 mM NaCl, 20-40 mM NaOH and a 30 min ultrasonic treatment of the final sample solution. As these conditions could cause degradation of the drugs in solution, this was followed by NMR spectroscopy, and the results indicated that degradation did not take place. The determined CMCs were 0.08 mM for Tween 20, 0.06 mM for Tween 60 and 0.04 mM for a combined Tween 20/Tween 60 system. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Comparison of sampling methods for radiocarbon dating of carbonyls in air samples via accelerator mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Schindler, Matthias, E-mail: matthias.schindler@physik.uni-erlangen.de; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander

    2016-05-15

    Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline semicarbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).

  5. Comparison of sampling methods for radiocarbon dating of carbonyls in air samples via accelerator mass spectrometry

    Science.gov (United States)

    Schindler, Matthias; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander

    2016-05-01

    Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline semicarbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).

  6. Assessment of reagent effectiveness and preservation methods for equine faecal samples

    Directory of Open Access Journals (Sweden)

    Eva Vavrouchova

    2015-03-01

    Full Text Available The aim of our study was to identify the most suitable flotation solution and an effective preservation method for the examination of equine faeces samples using the FLOTAC technique. Samples from naturally infected horses were transported to the laboratory and analysed accordingly. The sample from each horse was homogenized and divided into four parts: one was frozen, another two were preserved in different reagents, namely sodium acetate-acetic-acid-formalin (SAF) or 5% formalin. The last part was examined as a fresh sample in three different flotation solutions (Sheather's solution, sodium chloride solution and sodium nitrate solution, all with a specific gravity of 1.200). The preserved samples were examined in the period from 14 to 21 days after collection. According to our results, the sucrose solution was the most suitable flotation solution for fresh samples (small strongyle eggs per gram: 706, compared to 360 in sodium chloride and 507 in sodium nitrate) and the sodium nitrate solution was the most efficient for the preserved samples (eggs per gram: 382, compared to 295 in salt solution and 305 in sucrose solution). Freezing appears to be the most effective method of sample preservation, resulting in minimal damage to fragile strongyle eggs, and therefore it is the simplest and most effective preservation method for the examination of large numbers of faecal samples without the necessity of examining them all within 48 hours of collection. Deep freezing as a preservation method for equine faeces samples has, to our knowledge, not yet been published.

  7. SPR based immunosensor for detection of Legionella pneumophila in water samples

    Science.gov (United States)

    Enrico, De Lorenzis; Manera, Maria G.; Montagna, Giovanni; Cimaglia, Fabio; Chiesa, Maurizio; Poltronieri, Palmiro; Santino, Angelo; Rella, Roberto

    2013-05-01

    Detection of legionellae in water samples is an important factor in epidemiological investigations of Legionnaires' disease and its prevention. To avoid the labor-intensive problems of conventional methods, an alternative, highly sensitive and simple method is proposed for detecting L. pneumophila in aqueous samples. A compact Surface Plasmon Resonance (SPR) instrumentation prototype, provided with proper microfluidic tools, was built. The developed immunosensor is capable of dynamically following the binding between antigens and the corresponding antibody molecules immobilized on the SPR sensor surface. The immobilization strategy used in this work includes an efficient step aimed at orienting the antibodies on the sensor surface. The feasibility of integrating SPR-based biosensing setups with microfluidic technologies, resulting in a low-cost and portable biosensor, is demonstrated.

  8. A long-term validation of the modernised DC-ARC-OES solid-sample method.

    Science.gov (United States)

    Flórián, K; Hassler, J; Förster, O

    2001-12-01

    The validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid sample spectrometric method and the main validation criteria of the method. In calculating the validation characteristics that depend on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. In order to decide whether there are any trends in the time-variation of the analytical signal, the Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data of the ETV-ICP-OES method was carried out.

  9. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    2000-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR, and requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCFs) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000)

  10. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    Full Text Available As a popular simulation of photon propagation in turbid media, the main problem of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering into a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast algorithm for the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of the fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.

  11. Extending the alias Monte Carlo sampling method to general distributions

    International Nuclear Information System (INIS)

    Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.

    1991-01-01

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
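
    The discrete case described above is commonly implemented with Vose's alias construction, sketched below; the paper's extension to piecewise-linear continuous distributions is not shown. After an O(n) setup, each draw costs one uniform index and one comparison.

```python
import random

def build_alias(probs):
    """Vose's alias construction for a discrete distribution.
    Returns (prob, alias) tables; sampling is then O(1) per draw."""
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    prob, alias = [0.0] * n, [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # donate mass to fill column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers are 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias):
    i = random.randrange(len(prob))           # pick a column uniformly
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
counts = [0] * 4
for _ in range(100_000):
    counts[alias_draw(prob, alias)] += 1
print(counts)   # roughly proportional to 0.1 : 0.2 : 0.3 : 0.4
```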

  12. Research and application of sampling and analysis method of sodium aerosol

    International Nuclear Information System (INIS)

    Yu Xiaochen; Guo Qingzhou; Wen Ximeng

    1998-01-01

    A sampling and analysis method for sodium aerosol is investigated. Vacuum sampling technology is used in the sampling process, and the analysis methods adopted are volumetric analysis and atomic absorption. When the absolute content of sodium is in the range of 0.1 mg to 1.0 mg, the deviation between the results of volumetric analysis and atomic absorption is less than 2%. The method has been applied successfully in a sodium aerosol removal device. The analysis range, accuracy and precision can meet the requirements of sodium aerosol research.

  13. Comparing hair-morphology and molecular methods to identify fecal samples from Neotropical felids.

    Directory of Open Access Journals (Sweden)

    Carlos C Alberts

    Full Text Available To avoid certain problems encountered with more-traditional and invasive methods in behavioral-ecology studies of mammalian predators, such as felids, molecular approaches have been employed to identify feces found in the field. However, this method requires a complete molecular biology laboratory, and usually also requires very fresh fecal samples to avoid DNA degradation. Both conditions are normally absent in the field. To address these difficulties, identification based on the morphological characters of hairs found in feces (length, color, banding, scale and medullar patterns) could be employed as an alternative. In this study we constructed a morphological identification key for guard hairs of eight Neotropical felids (jaguar, oncilla, Geoffroy's cat, margay, ocelot, Pampas cat, puma and jaguarundi) and compared its efficiency to that of a molecular identification method, using the ATP6 region as a marker. For this molecular approach, we simulated some field conditions by postponing sample-conservation procedures. A blind test of the identification key obtained a nearly 70% overall success rate, which we considered equivalent to or better than the results of some molecular methods (probably due to DNA degradation) found in other studies. The jaguar, puma and jaguarundi could be unequivocally discriminated from any other Neotropical felid. On a scale ranging from inadequate to excellent, the key proved poor only for the margay, with only 30% of its hairs successfully identified; success rates for the remaining species, the oncilla, Geoffroy's cat, ocelot and Pampas cat, were intermediate. Complementary information about the known distributions of felid populations may be necessary to substantially improve the results obtained with the key. Our own molecular results were even better, since all blind-tested samples were correctly identified. Part of these identifications were made from samples kept in suboptimal

  14. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice, from populations that are accessible and available. Some of the non-probabilit...

  15. Statistical methods for mass spectrometry-based clinical proteomics

    NARCIS (Netherlands)

    Kakourou, A.

    2018-01-01

    The work presented in this thesis focuses on methods for the construction of diagnostic rules based on clinical mass spectrometry proteomic data. Mass spectrometry has become one of the key technologies for jointly measuring the expression of thousands of proteins in biological samples.

  16. A GPU code for analytic continuation through a sampling method

    Directory of Open Access Journals (Sweden)

    Johan Nordström

    2016-01-01

    Full Text Available We here present a code for performing analytic continuation of fermionic Green's functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.

  17. SYBR green-based detection of Leishmania infantum DNA using peripheral blood samples.

    Science.gov (United States)

    Ghasemian, Mehrdad; Gharavi, Mohammad Javad; Akhlaghi, Lame; Mohebali, Mehdi; Meamar, Ahmad Reza; Aryan, Ehsan; Oormazdi, Hormozd; Ghayour, Zahra

    2016-03-01

    Parasitological methods for the diagnosis of visceral leishmaniasis (VL) require invasive sampling procedures. The aim of this study was to detect Leishmania infantum (L. infantum) DNA by a real-time PCR method in the peripheral blood of symptomatic VL patients and to compare its performance with nested PCR, an established molecular method with very high diagnostic indices. 47 parasitologically confirmed VL patients, diagnosed by direct agglutination test (DAT > 3200) and bone marrow aspiration and presenting characteristic clinical features (fever, hepatosplenomegaly, and anemia), and 40 controls (non-endemic healthy control-30, Malaria-2, Toxoplasma gondii-2, Mycobacterium tuberculosis-2, HBV-1, HCV-1, HSV-1 and CMV-1) were enrolled in this study. SYBR green-based real-time PCR and nested PCR were performed to amplify the kinetoplast DNA minicircle gene using DNA extracted from the buffy coat. Of the 47 patients, 45 (95.7%) were positive by both nested PCR and real-time PCR. These results indicate that real-time PCR is not only as sensitive as a nested PCR assay for detection of Leishmania kDNA in clinical samples, but also more rapid. The advantages of real-time PCR based methods over nested PCR are that they are simpler to perform and faster, since nested PCR requires post-PCR processing, and that they reduce the contamination risk.

  18. Different methods for volatile sampling in mammals.

    Directory of Open Access Journals (Sweden)

    Marlen Kücklich

    Full Text Available Previous studies showed that olfactory cues are important for mammalian communication. However, many specific compounds that convey information between conspecifics are still unknown. To understand the mechanisms and functions of olfactory cues, olfactory signals such as volatile compounds emitted from individuals need to be assessed. Sampling of animals with and without scent glands was typically conducted using cotton swabs rubbed over the skin or fur and analysed by gas chromatography-mass spectrometry (GC-MS). However, this method has various drawbacks, including a high level of contamination. Thus, we adapted two methods of volatile sampling from other research fields and compared them to sampling with cotton swabs. To do so we assessed the body odor of common marmosets (Callithrix jacchus) using cotton swabs, thermal desorption (TD) tubes and, alternatively, a mobile GC-MS device containing a thermal desorption trap. Overall, TD tubes comprised the most compounds (N = 113), with half of those compounds being volatile (N = 52). The mobile GC-MS captured the fewest compounds (N = 35), of which all were volatile. Cotton swabs contained an intermediate number of compounds (N = 55), but very few volatiles (N = 10). Almost all compounds found with the mobile GC-MS were also captured with TD tubes (94%). Hence, we recommend TD tubes for state-of-the-art sampling of the body odor of mammals or other vertebrates, particularly for field studies, as they can be easily transported, stored and analysed with high-performance instruments in the lab. Nevertheless, cotton swabs capture compounds which may still contribute to the body odor, e.g. after bacterial fermentation, while profiles from the mobile GC-MS include only the most abundant volatiles of the body odor.

  19. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen

    Purpose: Design and validation of an MR-based method that allows the calculation of the water content in cartilage tissue. Methods and Materials: Cartilage tissue T1 map based water content MR sequences were used on a system stable at 37 degrees Celsius. The T1 map intensity signal was analyzed on 6 cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis a T1 intensity signal map software analyzer was used. Finally, the method was validated by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained T1 map based water content sequences can provide information that, after being analyzed using T1-map analysis software, can be interpreted as the water contained inside a cartilage tissue. The amount of water estimated using this method was similar to the one obtained with the freeze-dry procedure...

  20. Non-uniform sampling and wide range angular spectrum method

    International Nuclear Information System (INIS)

    Kim, Yong-Hae; Byun, Chun-Won; Oh, Himchan; Lee, JaeWon; Pi, Jae-Eun; Heon Kim, Gi; Lee, Myung-Lae; Ryu, Hojun; Chu, Hye-Yong; Hwang, Chi-Sun

    2014-01-01

    A novel method is proposed for simulating free-space field propagation from a source plane to a destination plane that is applicable for both small and large propagation distances. The angular spectrum method (ASM) is widely used for simulating near-field propagation, but it causes a numerical error when the propagation distance is large because of aliasing due to under-sampling. The band-limited ASM satisfies the Nyquist condition on sampling by limiting the bandwidth of the propagation field to avoid aliasing errors, so it extends the applicable propagation distance of the ASM. However, the band-limited ASM also introduces an error, due to the decrease of the effective sampling number in Fourier space, when the propagation distance is large. In the proposed wide-range ASM, we use non-uniform sampling in Fourier space to keep the effective sampling number constant even when the propagation distance is large. As a result, the wide-range ASM can produce simulation results with high accuracy for both far- and near-field propagation. For non-paraxial wave propagation, we applied the wide-range ASM to a shifted destination plane as well. (paper)
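
    For context, a minimal uniformly sampled ASM propagator is sketched below; it exhibits exactly the long-distance aliasing the paper addresses, and the non-uniform Fourier sampling of the wide-range ASM is not implemented here. The grid size, wavelength and distance in the example are arbitrary.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Plain (uniformly sampled) angular spectrum propagation of a complex
    source field u0 over distance z. This is the baseline ASM that the
    paper improves on; it aliases when z is large for the given sampling."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Keep propagating components only; evanescent ones are suppressed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = k * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: propagate a square aperture by 10 mm at 633 nm on a 10 um grid
u0 = np.zeros((512, 512), dtype=complex)
u0[236:276, 236:276] = 1.0
u1 = angular_spectrum_propagate(u0, wavelength=633e-9, dx=10e-6, z=0.01)
```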

  1. Acute Pain Perception During Different Sampling Methods for Respiratory Culture in Cystic Fibrosis Patients.

    Science.gov (United States)

    Eyns, Hanneke; De Wachter, Elke; Malfroot, Anne; Vaes, Peter

    2018-03-01

    .798], respectively). A relatively large range of pain experiences was observed in patients with CF during respiratory culture sampling, which underlines the importance of individual pain assessment. Nevertheless, clinicians can confidently choose the sampling method based on validity over patients' preference. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  2. Alkaline Peptone Water-Based Enrichment Method for mcr-3 From Acute Diarrheic Outpatient Gut Samples

    Directory of Open Access Journals (Sweden)

    Qiaoling Sun

    2018-05-01

    Full Text Available A third plasmid-mediated colistin resistance gene, mcr-3, is increasingly being reported in Enterobacteriaceae and Aeromonas spp. from animals and humans. To investigate the molecular epidemiology of mcr in the gut flora of Chinese outpatients, 152 stool specimens were randomly collected from outpatients in our hospital from May to June, 2017. Stool specimens enriched in alkaline peptone water or Luria-Bertani (LB) broth were screened for mcr-1, mcr-2, and mcr-3 using polymerase chain reaction (PCR)-based assays. Overall, 19.1% (29/152) and 5.3% (8/152) of the stool samples enriched in alkaline peptone water were PCR-positive for mcr-1 and mcr-3, respectively, while 2.7% (4/152) of samples were positive for both mcr-1 and mcr-3. Strains isolated from the samples that were both mcr-1- and mcr-3-positive were subjected to antimicrobial susceptibility testing by broth microdilution. They were also screened for the presence of other resistance genes by PCR, while multilocus sequence typing and whole-genome sequencing were used to investigate the molecular epidemiology and genetic environment, respectively, of the resistance genes. mcr-3-positive Aeromonas veronii strain 126-14, containing a mcr-3.8-mcr-3-like2 segment, and mcr-1-positive Escherichia coli strain 126-1, belonging to sequence type 1485, were isolated from the sample from a diarrheic butcher with no history of colistin treatment. A. veronii 126-14 had a colistin minimum inhibitory concentration (MIC) of 2 µg/mL and was susceptible to antibiotics in common use, while E. coli 126-1 produced TEM-1, CTX-M-55, and CTX-M-14 β-lactamases and was resistant to colistin, ceftazidime, and cefotaxime. Overall, there was a higher detection rate of mcr-3-carrying strains with low colistin MICs from the samples enriched in alkaline peptone water than from samples grown in LB broth.

  3. A Discrete-Time Chattering Free Sliding Mode Control with Multirate Sampling Method for Flight Simulator

    Directory of Open Access Journals (Sweden)

    Yunjie Wu

    2013-01-01

    Full Text Available In order to improve the tracking accuracy of a flight simulator and extend its frequency response, a multirate-sampling-method-based discrete-time chattering-free sliding mode control is developed and applied to the system. By constructing the multirate sampling sliding mode controller, the flight simulator can accurately track a given reference signal with an arbitrarily small dynamic tracking error, and the problems caused by the contradiction between the reference signal period and the control period in the traditional design method are eliminated. It is proved by theoretical analysis that extremely high dynamic tracking precision can be obtained. Meanwhile, robustness is guaranteed by the sliding mode control even in the presence of modeling mismatch, external disturbances and measurement noise. The validity of the proposed method is confirmed by experiments on a flight simulator.

  4. A new turn-on fluorimetric method for the rapid speciation of Cr(III)/Cr(VI) species in tea samples with rhodamine-based fluorescent reagent

    Science.gov (United States)

    Özyol, Esra; Saçmacı, Şerife; Saçmacı, Mustafa; Ülgen, Ahmet

    2018-02-01

    A new fluorimetric method with a rhodamine-based fluorescent agent was developed for the rapid speciation of Cr(III)/Cr(VI) in tea, soil and water samples. The system utilizes the fluorescent reagent 3‧,6‧-bis(diethylamino)-2-{[(1E)-(2,4-dimethoxyphenyl)methylene]amino}spiro[isoindole-1,9‧-xanthen]-3(2H)-one (BDAS), used here for the first time after its synthesis and characterization. The reagent responds instantaneously at room temperature, in a 1:1 stoichiometric manner, to the amount of Cr(III). The selectivity of this system for Cr(III) over other metal ions is remarkably high, and its sensitivity is below 0.01 mg L-1 in aqueous solutions, which enables a simplified procedure without any pretreatment of the real sample. The method has a wide linear range of 0.1-10 mg L-1 and a detection limit of 0.15 μg L-1 for Cr(III), while the relative standard deviation was 0.1% for a 0.1 mg L-1 Cr(III) concentration. The results of detection and recovery experiments for Cr(III) in tea, soil and water were satisfactory, indicating that the method has good feasibility and application potential in the routine determination and speciation of Cr(III)/Cr(VI). The results of the analysis of certified reference materials (INCT-TL-1 tea and CWW-TM-D waste water) are in good agreement with the certified values.

  5. Rapid, sensitive and cost-effective method for isolation of viral DNA from faecal samples of dogs

    Directory of Open Access Journals (Sweden)

    Savi.

    2010-06-01

    Full Text Available A simple method for viral DNA extraction using chelex resin was developed. The method is eco-friendly and cost-effective compared to other methods, such as the phenol-chloroform method, which use hazardous organic reagents. Further, a polymerase chain reaction (PCR) based detection of canine parvovirus (CPV) using primers from a conserved region of the VP2 gene was developed. To increase the sensitivity and specificity of the reaction, a nested PCR was designed. The PCR reaction was optimized to amplify a 747 bp product of the VP2 gene. The assay can be completed in a few hours and does not need hazardous chemicals. Thus, sample preparation using chelating resin along with nested PCR seems to be a sensitive, specific and practical method for the detection of CPV in diarrhoeal faecal samples. [Vet. World 2010; 3(3): 105-106]

  6. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice, from populations that are accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  7. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice, from populations that are accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  8. Synchronization of Hierarchical Time-Varying Neural Networks Based on Asynchronous and Intermittent Sampled-Data Control.

    Science.gov (United States)

    Xiong, Wenjun; Patel, Ragini; Cao, Jinde; Zheng, Wei Xing

    In this brief, our purpose is to apply asynchronous and intermittent sampled-data control methods to achieve the synchronization of hierarchical time-varying neural networks. The asynchronous and intermittent sampled-data controllers are proposed for two reasons: 1) the controllers may not transmit the control information simultaneously and 2) the controllers cannot always exist at any time. The synchronization is then discussed for a kind of hierarchical time-varying neural networks based on the asynchronous and intermittent sampled-data controllers. Finally, the simulation results are given to illustrate the usefulness of the developed criteria.

  9. Passive Acoustic Source Localization at a Low Sampling Rate Based on a Five-Element Cross Microphone Array

    Directory of Open Access Journals (Sweden)

    Yue Kan

    2015-06-01

    Full Text Available Accurate acoustic source localization at a low sampling rate (less than 10 kHz) is still a challenging problem for small portable systems, especially for a multitasking micro-embedded system. A modification of the generalized cross-correlation (GCC) method with the up-sampling (US) theory is proposed and defined as the US-GCC method, which can improve the accuracy of the time delay of arrival (TDOA) and source location at a low sampling rate. In this work, through the US operation, an input signal with a certain sampling rate can be converted into another signal with a higher frequency. Furthermore, the optimal interpolation factor for the US operation is derived according to the localization computation time and the standard deviation (SD) of the target location estimations. On the one hand, simulation results show that absolute errors of the source locations based on the US-GCC method with an interpolation factor of 15 are approximately 1/15- to 1/12-times those based on the GCC method, when the initial sampling rate of both methods is 8 kHz. On the other hand, a simple and small portable passive acoustic source localization platform composed of a five-element cross microphone array has been designed and set up in this paper. Experiments on the established platform, which accurately locates a three-dimensional (3D) near-field target at a low sampling rate, demonstrate that the proposed method is workable.
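
    The core up-sample-then-correlate idea can be sketched in a few lines. This is a simplified stand-in for the US-GCC method: plain cross-correlation is used in place of a weighted GCC, and the five-element array geometry and the location solver are omitted; the function name and test values are illustrative.

```python
import numpy as np
from scipy.signal import resample

def tdoa_us_gcc(x1, x2, fs, up=15):
    """Estimate the time delay of arrival between two microphone signals by
    cross-correlating after up-sampling by the interpolation factor 'up'."""
    x1u = resample(x1, len(x1) * up)          # interpolate to fs * up
    x2u = resample(x2, len(x2) * up)
    corr = np.correlate(x1u, x2u, mode="full")
    lag = np.argmax(corr) - (len(x2u) - 1)    # peak lag in up-sampled samples
    return lag / (fs * up)                    # delay in seconds

# Synthetic check: second channel delayed by 5 samples at 8 kHz (0.625 ms)
fs = 8000
t = np.arange(1024) / fs
x1 = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(t.size)
x2 = np.roll(x1, 5)
print(tdoa_us_gcc(x1, x2, fs))                # close to -6.25e-4 s
```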

  10. Sampling methods for terrestrial amphibians and reptiles.

    Science.gov (United States)

    Paul Stephen Corn; R. Bruce. Bury

    1990-01-01

    Methods described for sampling amphibians and reptiles in Douglas-fir forests in the Pacific Northwest include pitfall trapping, time-constrained collecting, and surveys of coarse woody debris. The herpetofauna of this region differ in breeding and nonbreeding habitats and vagility, so that no single technique is sufficient for a community study. A combination of...

  11. Sampling and Pooling Methods for Capturing Herd Level Antibiotic Resistance in Swine Feces using qPCR and CFU Approaches

    DEFF Research Database (Denmark)

    Schmidt, Gunilla Veslemøy; Mellerup, Anders; Christiansen, Lasse Engbo

    2015-01-01

    The aim of this article was to define the sampling level and method combination that captures antibiotic resistance at pig herd level, utilizing qPCR antibiotic resistance gene quantification and culture-based quantification of antibiotic-resistant coliform indicator bacteria. Fourteen qPCR assays for commonly detected antibiotic resistance genes were developed and used to quantify antibiotic resistance genes in total DNA from swine fecal samples that were obtained using different sampling and pooling methods. In parallel, the number of antibiotic-resistant coliform indicator bacteria was determined when comparing individual sampling and pooling methods. qPCR on pooled samples was found to be a good representative of the general resistance level in a pig herd compared to the coliform CFU counts. It had significantly reduced relative standard deviations compared to coliform CFU counts in the same

  12. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background: In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two-group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods: A log-normal distribution for the outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (the Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results: The method attained the nominal power value in simulation studies and compared favourably with the Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions: We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two-group comparison on a log-transformed scale is planned. An advantage of this method over the usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
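
    A sketch of the kind of calculation described is given below, assuming log-normal outcomes: the raw-scale median and variance are converted to a log-scale variance, the effect is specified as the difference of log-medians, and a standard normal-approximation two-sample formula is applied. The helper name and the closed-form conversion are our assumptions, not the paper's exact formulae.

```python
import math
from scipy.stats import norm

def n_per_group(m1, m2, var_raw, alpha=0.05, power=0.9):
    """Approximate per-group sample size for a two-sample t-test on
    log-transformed data, with the effect given as a difference in medians
    (m1 vs m2) and a variance estimate on the raw (untransformed) scale.
    Assumes log-normal outcomes with a common raw-scale variance."""
    # Log-scale variance implied by median m1 and raw variance var_raw:
    # var_raw = (exp(s2) - 1) * exp(s2) * m1**2  =>  solve for exp(s2)
    u = (1 + math.sqrt(1 + 4 * var_raw / m1**2)) / 2
    s2 = math.log(u)
    delta = math.log(m2 / m1)                  # difference of log-medians
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * s2 * z**2 / delta**2)

# Example: medians 10 vs 14, raw-scale variance 36 -> n per group
print(n_per_group(m1=10.0, m2=14.0, var_raw=36.0))
```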

  13. New kinetic-spectrophotometric method for monitoring the concentration of iodine in river and city water samples.

    Science.gov (United States)

    Farmany, A; Khosravi, A; Abbasi, S; Cheraghi, J; Hushmandfar, R; Sobhanardakani, S; Noorizadeh, H; Mortazavi, S S

    2013-01-01

    A new kinetic method has been developed for the determination of iodine in water samples. The method is based on the catalytic effect of I(-) on the oxidation of Indigo Carmine (IC) by KBrO(3) in sulfuric acid medium. The optimum conditions obtained are 0.16 M sulfuric acid, 1 × 10(-3) M IC, 1 × 10(-2) M KBrO(3), a reaction temperature of 35°C, and a reaction time of 80 s at 612 nm. Under the optimized conditions, the method allowed the quantification of I(-) in a range of 12-375 ng/mL with a detection limit of 0.46 ng/mL. The method was applied to the determination of iodine in river and city water samples with satisfactory results.

  14. Universal nucleic acids sample preparation method for cells, spores and their mixture

    Science.gov (United States)

    Bavykin, Sergei [Darien, IL

    2011-01-18

    The present invention relates to a method for extracting nucleic acids from biological samples. More specifically, the invention relates to a universal method for extracting nucleic acids from unidentified biological samples. An advantage of the presently invented method is its ability to effectively and efficiently extract nucleic acids from a variety of different cell types, including but not limited to prokaryotic or eukaryotic cells and/or recalcitrant organisms (i.e. spores). Unlike prior art methods, which are focused on extracting nucleic acids from vegetative cells or spores, the present invention effectively extracts nucleic acids from spores, multiple cell types or mixtures thereof using a single method. Importantly, the invented method has demonstrated an ability to extract nucleic acids from spores and vegetative bacterial cells with similar levels of effectiveness. The invented method employs a multi-step protocol which erodes the cell structure of the biological sample, then isolates, labels and fragments the nucleic acids, and purifies the labeled samples from excess dye.

  15. Comparison of Address-based Sampling and Random-digit Dialing Methods for Recruiting Young Men as Controls in a Case-Control Study of Testicular Cancer Susceptibility

    OpenAIRE

    Clagett, Bartholt; Nathanson, Katherine L.; Ciosek, Stephanie L.; McDermoth, Monique; Vaughn, David J.; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A.

    2013-01-01

    Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-...

  16. Fluidics platform and method for sample preparation and analysis

    Science.gov (United States)

    Benner, W. Henry; Dzenitis, John M.; Bennet, William J.; Baker, Brian R.

    2014-08-19

    Herein provided are a fluidics platform and method for sample preparation and analysis. The fluidics platform is capable of analyzing DNA from blood samples using amplification assays such as polymerase-chain-reaction assays and loop-mediated-isothermal-amplification assays. The fluidics platform can also be used for other types of assays and analyses. In some embodiments, a sample in a sealed tube can be inserted directly. The subsequent isolation, detection, and analysis steps can then be performed without user intervention. The disclosed platform may also comprise a sample preparation system with a magnetic actuator, a heater, and an air-drying mechanism, as well as fluid manipulation processes for extraction, washing, elution, assay assembly, assay detection, and cleaning after reactions and between samples.

  17. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements, placed in the original spatial configuration. The advantages of local random measurements are twofold: (1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; (2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
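
    The encoder-side idea (pre-filter with a local random binary kernel, then keep one polyphase component) is compact enough to sketch. The following is a rough illustration under assumed parameters (kernel size, down-sampling factor), not the authors' codec; the sparsity-based soft decoder at the receiver is omitted.

        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(0)

        def cs_encode(image, factor=2, ksize=3):
            # Local random binary convolution kernel replaces the low-pass pre-filter.
            kernel = rng.integers(0, 2, size=(ksize, ksize)).astype(float)
            kernel /= max(kernel.sum(), 1.0)  # normalize to preserve dynamic range
            filtered = convolve2d(image, kernel, mode="same", boundary="symm")
            # Keep one polyphase component: the result is still a conventional
            # image, so any standardized codec can compress it further.
            return filtered[::factor, ::factor]

        image = rng.random((64, 64))
        print(cs_encode(image).shape)  # (32, 32)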

  18. Two media method for linear attenuation coefficient determination of irregular soil samples

    International Nuclear Information System (INIS)

    Vici, Carlos Henrique Georges

    2004-01-01

    In several situations involving nuclear applications, such as soil physics and geology, knowledge of the gamma-ray linear attenuation coefficient of irregular samples is necessary. This work presents the validation of a methodology for the determination of the linear attenuation coefficient (μ) of irregularly shaped samples, in such a way that it is not necessary to know the thickness of the sample. With this methodology, irregular soil samples (undeformed field samples) from the Londrina region, north of Paraná, were studied. The two-media method was employed for the determination of μ. It consists of determining μ through the measurement of the attenuation of a gamma-ray beam by the sample sequentially immersed in two different media with known and appropriately chosen attenuation coefficients. For comparison, the theoretical value of μ was calculated as the product of the mass attenuation coefficient, obtained with the WinXcom code, and the measured density of the sample. This software employs the chemical composition of the samples and supplies a table of mass attenuation coefficients versus photon energy. To verify the validity of the two-media method, compared with the simple gamma-ray transmission method, regular pumice stone samples were used. With these results for the attenuation coefficients and their respective deviations, it was possible to compare the two methods. We conclude that the two-media method is a good tool for the determination of the linear attenuation coefficient of irregular materials, particularly in the study of soil samples. (author)
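
    The algebra behind the two-media idea is short. Assuming the usual fixed-total-path setup (a sample of unknown thickness t inside a cell of known internal path L, filled in turn with media 1 and 2), the two transmission measurements give two equations in the two unknowns t and μ, so the thickness drops out. The sketch below illustrates this under those assumptions; the variable names and the synthetic check are ours, not the author's.

        import math

        def two_media_mu(I0, I1, I2, mu1, mu2, L):
            # Model: ln(I0/Ii) = mu_s*t + mu_i*(L - t), for media i = 1, 2.
            A1 = math.log(I0 / I1)
            A2 = math.log(I0 / I2)
            medium_path = (A1 - A2) / (mu1 - mu2)  # = L - t
            t = L - medium_path                    # sample thickness, recovered
            mu_s = (A1 - mu1 * medium_path) / t    # linear attenuation coefficient
            return mu_s, t

        # Synthetic check: a sample with mu_s = 0.20 /cm, t = 2 cm, in a 5 cm cell
        mu_s, t, L, I0 = 0.20, 2.0, 5.0, 1000.0
        I1 = I0 * math.exp(-(mu_s * t + 0.10 * (L - t)))
        I2 = I0 * math.exp(-(mu_s * t + 0.30 * (L - t)))
        print(two_media_mu(I0, I1, I2, 0.10, 0.30, L))  # ~ (0.20, 2.0)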

  19. Sampling and examination methods used for TMI-2 samples

    International Nuclear Information System (INIS)

    Marley, A.W.; Akers, D.W.; McIsaac, C.V.

    1988-01-01

    The purpose of this paper is to summarize the sampling and examination techniques that were used in the collection and analysis of TMI-2 samples. Samples ranging from auxiliary building air to core debris were collected and analyzed. Handling of the larger samples, and many of the smaller ones, had to be done remotely, and many standard laboratory analytical techniques were modified to accommodate the extremely high radiation fields associated with these samples. The TMI-2 samples presented unique problems in the sampling and laboratory analysis of previously molten fuel debris. 14 refs., 8 figs

  20. [Sample preparation methods for chromatographic analysis of organic components in atmospheric particulate matter].

    Science.gov (United States)

    Hao, Liang; Wu, Dapeng; Guan, Yafeng

    2014-09-01

    The determination of the organic composition of atmospheric particulate matter (PM) is of great importance in understanding how PM affects human health, environment, climate, and ecosystems. Organic components are also the scientific basis for emission source tracking, PM regulation and risk management. Therefore, the molecular characterization of the organic fraction of PM has become one of the priority research issues in the field of environmental analysis. Due to the extreme complexity of PM samples, chromatographic methods have been the methods of choice. The common procedure for the analysis of organic components in PM includes several steps: sample collection on fiber filters, sample preparation (transforming the sample into a form suitable for chromatographic analysis), and analysis by chromatographic methods. Among these steps, the sample preparation methods largely determine the throughput and the data quality. Solvent extraction methods followed by sample pretreatment (e.g. pre-separation, derivatization, pre-concentration) have long been used for PM sample analysis, while thermal desorption methods have mainly focused on the analysis of non-polar organic components in PM. In this paper, the sample preparation methods used prior to chromatographic analysis of organic components in PM are reviewed comprehensively, and the corresponding merits and limitations of each method are briefly discussed.

  1. Sampling methods for amphibians in streams in the Pacific Northwest.

    Science.gov (United States)

    R. Bruce Bury; Paul Stephen. Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  2. A general assignment method for oriented sample (OS) solid-state NMR of proteins based on the correlation of resonances through heteronuclear dipolar couplings in samples aligned parallel and perpendicular to the magnetic field.

    Science.gov (United States)

    Lu, George J; Son, Woo Sung; Opella, Stanley J

    2011-04-01

    A general method for assigning oriented sample (OS) solid-state NMR spectra of proteins is demonstrated. In principle, this method requires only a single sample of a uniformly ¹⁵N-labeled membrane protein in magnetically aligned bilayers, and a previously assigned isotropic chemical shift spectrum obtained either from solution NMR on micelle or isotropic bicelle samples or from magic angle spinning (MAS) solid-state NMR on unoriented proteoliposomes. The sequential isotropic resonance assignments are transferred to the OS solid-state NMR spectra of aligned samples by correlating signals from the same residue observed in protein-containing bilayers aligned with their normals parallel and perpendicular to the magnetic field. The underlying principle is that the resonances from the same residue have heteronuclear dipolar couplings that differ by exactly a factor of two between parallel and perpendicular alignments. The method is demonstrated on the membrane-bound form of Pf1 coat protein in phospholipid bilayers, whose assignments have been previously made using an earlier generation of methods that relied on the preparation of many selectively labeled (by residue type) samples. The new method provides the correct resonance assignments using only a single uniformly ¹⁵N-labeled sample, two solid-state NMR spectra, and a previously assigned isotropic spectrum. Significantly, this approach is equally applicable to residues in alpha helices, beta sheets, loops, and any other elements of tertiary structure. Moreover, the strategy bridges between OS solid-state NMR of aligned samples and solution NMR or MAS solid-state NMR of unoriented samples. In combination with the development of complementary experimental methods, it provides a step towards unifying these apparently different NMR approaches. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Acceptability of self-collection sampling for HPV-DNA testing in low-resource settings: a mixed methods approach.

    Science.gov (United States)

    Bansil, Pooja; Wittet, Scott; Lim, Jeanette L; Winkler, Jennifer L; Paul, Proma; Jeronimo, Jose

    2014-06-12

    Vaginal self-sampling with HPV-DNA tests is a promising primary screening method for cervical cancer. However, women's experiences, concerns and the acceptability of such tests in low-resource settings remain unknown. In India, Nicaragua, and Uganda, a mixed-method design was used to collect data from surveys (N = 3,863), qualitative interviews (N = 72; 20 providers and 52 women) and focus groups (N = 30 women) on women's and providers' experiences with self-sampling, women's opinions of sampling at home, and their future needs. Among surveyed women, 90% provided a self-collected sample. Of these, 75% reported it was easy, although 52% were initially concerned about hurting themselves and 24% were worried about not getting a good sample. Most surveyed women preferred self-sampling (78%). However, it was not clear whether they were responding to the privacy of self-sampling, the convenience of avoiding a pelvic examination, or both. In follow-up interviews, most women reported that they didn't mind self-sampling, but many preferred to have a provider collect the vaginal sample. Most women also preferred clinic-based screening (as opposed to home-based self-sampling), because the sample could be collected by a provider, women could receive treatment if needed, and the clinic was sanitary and provided privacy. Self-sampling acceptability was higher when providers prepared women through education, allowed women to examine the collection brush, and were present during the self-collection process. Among survey respondents, aids that would facilitate self-sampling in the future were: staff help (53%), additional images in the illustrated instructions (31%), and a chance to practice beforehand with a doll/model (26%). Self- and vaginal sampling are widely acceptable among women in low-resource settings. Providers have a unique opportunity to educate and prepare women for self-sampling and to be flexible in accommodating women's preference for self-sampling.

  4. The impact of sample non-normality on ANOVA and alternative methods.

    Science.gov (United States)

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
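
    As a quick illustration of the practical takeaway, the snippet below runs two of the four tests discussed (classical ANOVA and the rank-based Kruskal-Wallis test) on distinctly non-normal, log-normal samples. It is a toy comparison under assumed distribution parameters, not a reproduction of the paper's simulation design.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Three distinctly non-normal (log-normal) groups; the third is shifted
        groups = [rng.lognormal(0.0, 1.0, 30),
                  rng.lognormal(0.0, 1.0, 30),
                  rng.lognormal(0.5, 1.0, 30)]

        _, p_anova = stats.f_oneway(*groups)  # classical one-way ANOVA
        _, p_kw = stats.kruskal(*groups)      # rank-based Kruskal-Wallis test
        print(f"ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kw:.3f}")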

  5. CdTe detector based PIXE mapping of geological samples

    Energy Technology Data Exchange (ETDEWEB)

    Chaves, P.C., E-mail: cchaves@ctn.ist.utl.pt [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Taborda, A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal); Oliveira, D.P.S. de [Laboratório Nacional de Energia e Geologia (LNEG), Apartado 7586, 2611-901 Alfragide (Portugal); Reis, M.A. [Centro de Física Atómica da Universidade de Lisboa, Av. Prof. Gama Pinto 2, 1649-003 Lisboa (Portugal); IST/ITN, Instituto Superior Técnico, Universidade Técnica de Lisboa, Campus Tecnológico e Nuclear, EN10, 2686-953 Sacavém (Portugal)

    2014-01-01

    A sample collected from a borehole drilled approximately 10 km ESE of Bragança, Trás-os-Montes, was analysed by standard and high energy PIXE at both CTN (previously ITN) PIXE setups. The sample is a fine-grained metapyroxenite, grading to coarse-grained at the base, with disseminated sulphides and fine veinlets of pyrrhotite and pyrite. The matrix composition was obtained at the standard PIXE setup using a 1.25 MeV H{sup +} beam at three different spots. Medium and high Z elemental concentrations were then determined using the DT2fit and DT2simul codes (Reis et al., 2008, 2013 [1,2]) on the spectra obtained in the High Resolution and High Energy (HRHE)-PIXE setup (Chaves et al., 2013 [3]) by irradiation of the sample with a 3.8 MeV proton beam provided by the CTN 3 MV Tandetron accelerator. In this paper we present the results, discuss the detection limits of the method, and assess the added value of the CdTe detector in this context.

  6. Sampling Methods and the Accredited Population in Athletic Training Education Research

    Science.gov (United States)

    Carr, W. David; Volberding, Jennifer

    2009-01-01

    Context: We describe methods of sampling the widely-studied, yet poorly defined, population of accredited athletic training education programs (ATEPs). Objective: There are two purposes to this study; first to describe the incidence and types of sampling methods used in athletic training education research, and second to clearly define the…

  7. Comparison of fine particle measurements from a direct-reading instrument and a gravimetric sampling method.

    Science.gov (United States)

    Kim, Jee Young; Magari, Shannon R; Herrick, Robert F; Smith, Thomas J; Christiani, David C

    2004-11-01

    Particulate air pollution, specifically the fine particle fraction (PM2.5), has been associated with increased cardiopulmonary morbidity and mortality in general population studies. Occupational exposure to fine particulate matter can exceed ambient levels by a large factor. Due to increased interest in the health effects of particulate matter, many particle sampling methods have been developed. In this study, two such measurement methods were used simultaneously and compared. PM2.5 was sampled using a filter-based gravimetric sampling method and a direct-reading instrument, the TSI Inc. model 8520 DUSTTRAK aerosol monitor. Both sampling methods were used to determine the PM2.5 exposure in a group of boilermakers exposed to welding fumes and residual fuel oil ash. The geometric mean PM2.5 concentration was 0.30 mg/m3 (GSD 3.25) and 0.31 mg/m3 (GSD 2.90) from the DUSTTRAK and the gravimetric method, respectively. The Spearman rank correlation coefficient for the gravimetric and DUSTTRAK PM2.5 concentrations was 0.68. Linear regression models indicated that loge DUSTTRAK PM2.5 concentrations significantly predicted loge gravimetric PM2.5 concentrations, and this relationship was found to be modified by surrogate measures for seasonal variation and type of aerosol. PM2.5 measurements from the DUSTTRAK are well correlated with and highly predictive of measurements from the gravimetric sampling method for the aerosols in these work environments. However, results from this study suggest that aerosol particle characteristics may affect the relationship between the gravimetric and DUSTTRAK PM2.5 measurements. Recalibration of the DUSTTRAK for the specific aerosol, as recommended by the manufacturer, may be necessary to produce valid measures of airborne particulate matter.

  8. Target and suspect screening of psychoactive substances in sewage-based samples by UHPLC-QTOF.

    Science.gov (United States)

    Baz-Lomba, J A; Reid, Malcolm J; Thomas, Kevin V

    2016-03-31

    The quantification of illicit drug and pharmaceutical residues in sewage has been shown to be a valuable tool that complements existing approaches to monitoring the patterns and trends of drug use. The present work delineates the development of a novel analytical tool and dynamic workflow for the analysis of a wide range of substances in sewage-based samples. The validated method can simultaneously quantify 51 target psychoactive substances and pharmaceuticals in sewage-based samples using an off-line automated solid phase extraction (SPE-DEX) method with Oasis HLB disks, followed by ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF) in MS(e). Quantification and matrix-effect corrections were achieved with the use of 25 isotopically labeled internal standards (ILIS). Recoveries were generally greater than 60%, and the limits of quantification were in the low nanogram-per-liter range (0.4-187 ng L(-1)). The emergence of new psychoactive substances (NPS) on the drug scene poses a specific analytical challenge, since their market is highly dynamic, with new compounds continuously entering it. Suspect screening using high-resolution mass spectrometry (HRMS) simultaneously allowed the unequivocal identification of NPS based on a mass accuracy criterion of 5 ppm (for the molecular ion and at least two fragments) and retention time (2.5% tolerance) using the UNIFI screening platform. Applying MS(e) data against a suspect screening database of over 1000 drugs and metabolites, this method becomes a broad and reliable tool to detect and confirm NPS occurrence. This was demonstrated through the HRMS analysis of three different sewage-based sample types (influent wastewater, passive sampler extracts and pooled urine samples), resulting in the concurrent quantification of known psychoactive substances and the identification of NPS and pharmaceuticals. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study. PMID:27688438
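
    The distinction between probability and non-probability sampling is easy to make concrete in code. The toy sketch below (our illustration, with made-up strata and sizes) draws a simple random sample, a stratified random sample, and a convenience sample from the same hypothetical population.

        import random

        random.seed(42)
        population = [{"id": i, "clinic": "A" if i % 3 else "B"} for i in range(900)]

        # Probability sampling 1: simple random sample of n = 90
        srs = random.sample(population, k=90)

        # Probability sampling 2: stratified random sample, 10% from each clinic
        strata = {}
        for person in population:
            strata.setdefault(person["clinic"], []).append(person)
        stratified = [p for members in strata.values()
                      for p in random.sample(members, k=len(members) // 10)]

        # Non-probability sampling: convenience sample (whoever is seen first)
        convenience = population[:90]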

  10. Validated method for the determination of perfluorinated compounds in placental tissue samples based on a simple extraction procedure followed by ultra-high performance liquid chromatography-tandem mass spectrometry analysis.

    Science.gov (United States)

    Martín, J; Rodríguez-Gómez, R; Zafra-Gómez, A; Alonso, E; Vílchez, J L; Navalón, A

    2016-04-01

    Xenobiotic exposure during pregnancy is inevitable. Determination of perfluorinated compounds (PFCs), chemicals described as environmental contaminants by public health authorities due to their persistence, bioaccumulation and toxicity, is a challenge. In the present work, a method based on a simplified sample treatment, involving freeze-drying, solvent extraction and dispersive clean-up of the extracts using C18 sorbents, followed by ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) analysis, was developed and validated for the determination of five perfluorinated carboxylic acids (C4-C8) and perfluorooctane sulfonate (PFOS) in placental tissue samples. The most influential parameters affecting the extraction method and clean-up were optimized using Design of Experiments (DOE). The method was validated using matrix-matched calibration. The limits of detection (LODs) ranged from 0.03 to 2 ng g(-1) and the limits of quantification (LOQs) from 0.08 to 6 ng g(-1), while inter- and intra-day variability was under 14% in all cases. Recovery rates for spiked samples ranged from 94% to 113%. The method was satisfactorily applied to the determination of these compounds in human placental tissue samples collected at delivery from 25 randomly selected women. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Method and apparatus for continuous sampling

    International Nuclear Information System (INIS)

    Marcussen, C.

    1982-01-01

    An apparatus and method for continuously sampling a pulverous material flow include means for extracting a representative subflow from the flow. A screw conveyor is provided to push the extracted subflow upwardly through a duct to an overflow. Means for transmitting a radiation beam transversely through the subflow in the duct, and means for sensing the transmitted beam through opposite pairs of windows in the duct, are provided to measure the concentration of one or more constituents in the subflow. (author)

  12. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of
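
    For readers who want to see what the standard (and, per this study, biased) spectral-slope estimate looks like on irregular samples, the sketch below generates an irregularly sampled white-noise series (true β = 0), computes a Lomb-Scargle periodogram with SciPy, and fits β as the negative slope of log power against log frequency. The series length and frequency grid are arbitrary choices, and this implements only the generic Lomb-Scargle approach, not the wavelet-based correction the authors favour.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(2)

        # Irregularly sampled white-noise series (true spectral slope beta = 0)
        t = np.sort(rng.uniform(0.0, 1000.0, size=500))
        y = rng.standard_normal(500)

        freqs = 2 * np.pi * np.logspace(-2.5, -0.5, 100)  # angular frequencies
        power = lombscargle(t, y - y.mean(), freqs)

        # beta is the negative slope of log(power) versus log(frequency)
        slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
        print(f"estimated beta = {-slope:.2f}")  # bias relative to 0 is the issue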

  13. Detection of sunn pest-damaged wheat samples using visible/near-infrared spectroscopy based on pattern recognition.

    Science.gov (United States)

    Basati, Zahra; Jamshidi, Bahareh; Rasekh, Mansour; Abbaspour-Gilandeh, Yousef

    2018-05-30

    The presence of sunn pest-damaged grains in a wheat mass reduces the quality of the flour and bread produced from it. Therefore, it is essential to assess the quality of the samples at wheat collection and storage centers and at flour mills. In this research, the capability of visible/near-infrared (Vis/NIR) spectroscopy combined with pattern recognition methods was investigated for the discrimination of wheat samples with different percentages of sunn pest-damaged grains. To this end, various samples belonging to five classes (healthy and 5%, 10%, 15% and 20% unhealthy) were analyzed using Vis/NIR spectroscopy (wavelength range of 350-1000 nm) with both supervised and unsupervised pattern recognition methods. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were used as the unsupervised techniques, and soft independent modeling of class analogies (SIMCA) and partial least squares-discriminant analysis (PLS-DA) as the supervised methods. The results showed that the Vis/NIR spectra of healthy samples were correctly clustered by both PCA and HCA. Due to the high overlap between the four unhealthy classes (5%, 10%, 15% and 20%), it was not possible to discriminate all the unhealthy samples into individual classes. However, when considering only the two main categories of healthy and unhealthy, an acceptable degree of separation between the classes was obtained after classification with the supervised pattern recognition methods SIMCA and PLS-DA. SIMCA based on PCA modeling correctly classified samples into the two classes of healthy and unhealthy with a classification accuracy of 100%. Moreover, the wavelengths of 839 nm, 918 nm and 995 nm had greater discriminating power than other wavelengths for the two classes of healthy and unhealthy. It was also concluded that PLS-DA provides excellent classification results for healthy and unhealthy samples (R2 = 0.973 and RMSECV = 0.057). Therefore, Vis/NIR spectroscopy based on pattern recognition techniques

  14. Salmonella detection in poultry samples. Comparison of two commercial real-time PCR systems with culture methods for the detection of Salmonella spp. in environmental and fecal samples of poultry.

    Science.gov (United States)

    Sommer, D; Enderlein, D; Antakli, A; Schönenbrücher, H; Slaghuis, J; Redmann, T; Lierz, M

    2012-01-01

    The efficiency of two commercial real-time PCR methods, the foodproof® Salmonella detection system and the BAX® PCR Assay Salmonella system, was compared to standardized culture methods (EN ISO 6579:2002 - Annex D) for the detection of Salmonella spp. in poultry samples. Four sample matrices (feed, dust, boot swabs, feces) obtained directly from poultry flocks, as well as artificially spiked samples of the same matrices, were used. All samples were first tested for Salmonella spp. using culture methods as the gold standard. In addition, samples spiked with Salmonella Enteritidis were tested to evaluate the sensitivity of both PCR methods. Furthermore, all methods were evaluated in an annual ring trial of the National Salmonella Reference Laboratory of Germany. Salmonella detection in the feed, dust and boot swab matrices was comparable between the two PCR systems, whereas the results for feces differed markedly. The quality, especially the freshness, of the fecal samples influenced the sensitivity of the real-time PCR and the results of the culture methods. In fresh fecal samples, an initial spiking level of 100 cfu/25 g Salmonella Enteritidis was detected. Fecal samples dried for two days allowed the detection of 14 cfu/25 g. Both real-time PCR protocols appear to be suitable for the detection of Salmonella spp. in all four matrices. The foodproof® system detected eight more samples as positive compared to the BAX® system, but had a potentially false positive result in one case. In samples dried for seven days, none of the methods was able to detect Salmonella, likely because of lethal cell damage. In general, the advantage of PCR analyses over the culture method is the reduction of working time from 4-5 days to only 2 days. However, especially for the analysis of fecal samples, official validation should be conducted according to the requirements of EN ISO 6579:2002 - Annex D.

  15. Soil sampling in emergency situations

    International Nuclear Information System (INIS)

    Carvalho, Zenildo Lara de; Ramos Junior, Anthenor Costa

    1997-01-01

    The soil sampling methods used in the Goiania accident (1987) by the environmental team of the Brazilian Nuclear Energy Commission (CNEN) are described. The development of this soil sampling method into an emergency sampling method, used in a nuclear emergency exercise at the Angra dos Reis reactor site (1991), is presented. A new method for soil sampling, based on Chernobyl environmental monitoring experience (1995), is suggested. (author)

  16. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than in other countries. Thus, multiunit accidents are a more important consideration than elsewhere and should be addressed appropriately. Currently, there are many issues related to multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of minimal cut sets. However, such methods have limitations when rare-event approximations cannot be used effectively or when a very small truncation limit must be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations at a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
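
    The failure mode of the rare-event approximation that motivates this work can be shown in a few lines. The sketch below (a hypothetical two-cut-set model of our own, not the authors' six-unit example) samples basic events as Bernoulli variables, evaluates the top event exactly by Monte Carlo, and compares the result with the sum-of-cut-sets approximation, which overestimates when basic-event probabilities are high.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical model: top event = (A and B) or (A and C), where A is a
        # shared high-probability (e.g. seismic) failure affecting both units.
        p = {"A": 0.3, "B": 0.05, "C": 0.04}

        n = 1_000_000
        s = {name: rng.random(n) < prob for name, prob in p.items()}
        monte_carlo = ((s["A"] & s["B"]) | (s["A"] & s["C"])).mean()

        # Rare-event approximation: sum of minimal cut set probabilities
        rare_event = p["A"] * p["B"] + p["A"] * p["C"]
        print(f"Monte Carlo: {monte_carlo:.4f}  rare-event approx: {rare_event:.4f}")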

  17. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    Science.gov (United States)

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques used for dictionary learning and of the pooling methods used for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. For summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
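
    Pooling over frame-level activations is simple to illustrate. Below is a toy NumPy sketch (our own, with made-up array shapes) of the three pooling operators compared in the paper, plus proportional random sampling of frames; the sparse dictionary learning step itself is not shown.

        import numpy as np

        rng = np.random.default_rng(4)
        activations = rng.random((100, 64))  # 100 frames x 64 feature activations

        # Frame sampling: proportional random sampling keeps a fixed fraction
        n_keep = max(1, int(0.1 * len(activations)))
        kept = activations[rng.choice(len(activations), size=n_keep, replace=False)]

        # Pooling: aggregate frame-level activations into one clip-level vector
        max_pool = kept.max(axis=0)   # max-pooling
        avg_pool = kept.mean(axis=0)  # average-pooling
        std_pool = kept.std(axis=0)   # standard-deviation pooling
        clip_feature = np.concatenate([avg_pool, std_pool])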

  18. Revisiting random walk based sampling in networks: evasion of burn-in period and frequent regenerations.

    Science.gov (United States)

    Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K

    2018-01-01

    In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make the strong assumption that the samples are in the stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without a burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting the regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations, either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT) estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of the RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS), which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of the RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) make it possible to avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time. Our
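
    The tour idea is straightforward to sketch. Each walk that leaves a fixed seed node and returns to it is an independent regeneration cycle, so no burn-in is needed, and weighting each visited node by the reciprocal of its degree corrects the walk's bias toward high-degree nodes. The following toy implementation (our own graph and function values, in the RDS-with-tours spirit rather than the paper's full algorithm) estimates the average of a node attribute over the non-seed nodes.

        import random

        graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
        age = {1: 34, 2: 45, 3: 27, 4: 60}  # f(v) on the non-seed nodes

        def tour_estimate(seed, n_tours):
            num = den = 0.0
            for _ in range(n_tours):
                v = random.choice(graph[seed])  # first step of a fresh tour
                while v != seed:                # tour ends on return to the seed
                    num += age[v] / len(graph[v])
                    den += 1.0 / len(graph[v])
                    v = random.choice(graph[v])
            return num / den                    # ratio estimator over all tours

        random.seed(5)
        print(tour_estimate(seed=0, n_tours=5000))  # compare with true mean 41.5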

  19. Active Search on Carcasses versus Pitfall Traps: a Comparison of Sampling Methods.

    Science.gov (United States)

    Zanetti, N I; Camina, R; Visciarelli, E C; Centeno, N D

    2016-04-01

    The study of insect succession on cadavers and the classification of arthropods have mostly been done by placing a carcass in a cage, protected from vertebrate scavengers, which is then visited periodically. An alternative is to use specific traps. Few studies on carrion ecology and forensic entomology involving the carcasses of large vertebrates have employed pitfall traps. The aims of this study were to compare both sampling methods (active search on a carcass and pitfall trapping) for each coleopteran family, and to establish whether there is a discrepancy (underestimation and/or overestimation) in the presence of each family by either method. A great discrepancy was found for almost all families, with some being more abundant in samples obtained through active search on carcasses and others in samples from traps, whereas two families did not show any bias towards a given sampling method. The fact that families may be underestimated or overestimated depending on the sampling technique highlights the importance of combining both methods, active search on carcasses and pitfall traps, in order to obtain more complete information on decomposition, carrion habitat and cadaveric families or species. Furthermore, a hypothesis is advanced on the reasons why either sampling method shows biases towards certain families. Information is provided about which sampling technique would be more appropriate to detect a particular family.

  20. The role of graphene-based sorbents in modern sample preparation techniques.

    Science.gov (United States)

    de Toffoli, Ana Lúcia; Maciel, Edvaldo Vasconcelos Soares; Fumes, Bruno Henrique; Lanças, Fernando Mauro

    2018-01-01

    The application of graphene-based sorbents in sample preparation techniques has increased significantly since 2011. These materials have good physicochemical properties for use as sorbents and have shown excellent results in different sample preparation techniques. Graphene and its precursor graphene oxide are considered good candidates to improve the extraction and concentration of different classes of target compounds (e.g., parabens, polycyclic aromatic hydrocarbons, pyrethroids, triazines, and so on) present in complex matrices. They have been employed in the analysis of different matrices (e.g., environmental, biological and food). In this review, we highlight the most important characteristics of graphene-based materials, their properties, synthesis routes, and the most important applications in both off-line and on-line sample preparation techniques. The discussion of the off-line approaches includes methods derived from conventional solid-phase extraction, focusing on the miniaturized magnetic and dispersive modes, as well as the microextraction techniques called stir bar sorptive extraction, solid phase microextraction, and microextraction by packed sorbent. The on-line approaches focus on the use of graphene-based materials mainly in on-line solid phase extraction, its variation called in-tube solid-phase microextraction, and on-line microdialysis systems. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. A combined method for DNA analysis and radiocarbon dating from a single sample.

    Science.gov (United States)

    Korlević, Petra; Talamo, Sahra; Meyer, Matthias

    2018-03-07

    Current protocols for ancient DNA and radiocarbon analysis of ancient bones and teeth call for multiple destructive samplings of a given specimen, thereby increasing the extent of undesirable damage to precious archaeological material. Here we present a method that makes it possible to obtain both ancient DNA sequences and radiocarbon dates from the same sample material. This is achieved by releasing DNA from the bone matrix through incubation with either EDTA or phosphate buffer prior to complete demineralization and collagen extraction, utilizing the acid-base-acid-gelatinization and ultrafiltration procedure established in most radiocarbon dating laboratories. Using a set of 12 bones of different ages and preservation conditions, we demonstrate that on average 89% of the DNA can be released from sample powder with minimal detectable collagen loss, or 38% without any. We also detect no skews in radiocarbon dates compared to untreated samples. Given the different material demands for radiocarbon dating (500 mg of bone/dentine) and DNA analysis (10-100 mg), combined DNA and collagen extraction not only streamlines the sampling process but also drastically increases the amount of DNA that can be recovered from limited sample material.

  2. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality

    Science.gov (United States)

    Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

    Introduction Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in

  3. Toward decentralized analysis of mercury (II) in real samples. A critical review on nanotechnology-based methodologies

    International Nuclear Information System (INIS)

    Botasini, Santiago; Heijo, Gonzalo; Méndez, Eduardo

    2013-01-01

    Highlights: • Several methods based on nanotechnology achieve limits of detection in the pM and nM ranges for mercury (II) analysis. • Most of these methods are validated in filtered water samples and/or spiked samples. • Thiols in real samples constitute real competition for any sensor based on the binding of mercury (II) ions. • Future research should include the study of matrix interferences, including thiols and dissolved organic matter. -- Abstract: In recent years, the number of works focused on the development of novel nanoparticle-based sensors for mercury detection has increased, mainly motivated by the need for low cost portable devices capable of giving a fast and reliable analytical response, thus contributing to analytical decentralization. Methodologies employing colorimetric, fluorometric, magnetic, and electrochemical output signals have reached detection limits within the pM and nM ranges. Most of these developments proved their suitability for detecting and quantifying mercury (II) ions in synthetic solutions or spiked water samples. However, the state of the art in these technologies is still behind the standard methods of mercury quantification, such as cold vapor atomic absorption spectrometry and inductively coupled plasma techniques, in terms of reliability and sensitivity. This is mainly because the response of nanoparticle-based sensors is highly affected by the sample matrix. The developed analytical nanosystems may fail in real samples because of the negative influence of the ionic strength and the presence of exchangeable ligands. The aim of this review is to critically consider the recently published innovations in this area, and to highlight the need to include more realistic assays in future research in order to make these advances suitable for on-site analysis

  4. A Mixed Methods Sampling Methodology for a Multisite Case Study

    Science.gov (United States)

    Sharp, Julia L.; Mobley, Catherine; Hammond, Cathy; Withington, Cairen; Drew, Sam; Stringfield, Sam; Stipanovic, Natalie

    2012-01-01

    The flexibility of mixed methods research strategies makes such approaches especially suitable for multisite case studies. Yet the utilization of mixed methods to select sites for these studies is rarely reported. The authors describe their pragmatic mixed methods approach to select a sample for their multisite mixed methods case study of a…

  5. Race and Research Methods Anxiety in an Undergraduate Sample: The Potential Effects of Self-Perception

    Science.gov (United States)

    Eckberg, Deborah A.

    2015-01-01

    This study explores race as a potential predictor of research methods anxiety among a sample of undergraduates. While differences in academic achievement based on race and ethnicity have been well documented, few studies have examined racial differences in anxiety with regard to specific subject matter in undergraduate curricula. This exploratory…

  6. [Standard sample preparation method for quick determination of trace elements in plastic].

    Science.gov (United States)

    Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa

    2011-08-01

    A reference sample containing heavy metals at known concentrations for electronic information products (plastic) was prepared by the masterbatch method; its repeatability and precision were determined, and reference sample preparation procedures were established. An X-ray fluorescence (XRF) spectroscopy method was used to determine the repeatability and uncertainty in the analysis of the heavy metals and bromine in the sample. Working curves and measurement methods for the reference sample were established. The results showed that the method, over the 200-2000 mg x kg(-1) concentration range for the Hg, Pb, Cr and Br elements and the 20-200 mg x kg(-1) range for Cd, exhibited a very good linear relationship, and the repeatability over six replicate analyses was good. In testing the circuit boards ICB288G and ICB288 from Mitsubishi Heavy Industries, the results agreed with the recommended values.

  7. A simplified method to recover urinary vesicles for clinical applications, and sample banking.

    Science.gov (United States)

    Musante, Luca; Tataruch, Dorota; Gu, Dongfeng; Benito-Martin, Alberto; Calzaferri, Giulio; Aherne, Sinead; Holthofer, Harry

    2014-12-23

    Urinary extracellular vesicles provide a novel source of valuable biomarkers for kidney and urogenital diseases. Current isolation protocols include laborious, sequential centrifugation steps, which hampers their widespread research and clinical use. Furthermore, when large individual urine sample volumes or sizable target cohorts are to be processed (e.g. for biobanking), storage capacity is an additional problem. Thus, alternative methods are necessary to overcome such limitations. We have developed a practical vesicle isolation technique that yields easily manageable sample volumes in an exceptionally cost-efficient way, to facilitate full utilization of vesicles in less privileged environments and to maximize the benefit of biobanking. Urinary vesicles were isolated by hydrostatic dialysis with minimal interference from soluble proteins and minimal vesicle loss. Large volumes of urine were concentrated to up to 1/100 of the original volume, and the dialysis step allowed equalization of the physico-chemical characteristics of the urine. The vesicle fractions were found suitable for any application, including RNA analysis. In yield, our hydrostatic filtration dialysis system outperforms conventional ultracentrifugation-based methods, and the labour-intensive and potentially hazardous ultracentrifugation steps are eliminated. Likewise, the need for trained laboratory personnel and heavy initial investment is avoided. Thus, our method is well suited for laboratories working with urinary vesicles and for biobanking.

  8. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    Science.gov (United States)

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
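
    The BAR step at the heart of this combination can be sketched compactly. The code below is a generic self-consistent Bennett solver run on synthetic Gaussian work distributions (all names and numbers are our own illustration; it does not include the SGLD reweighting or the EDS reference state described in the paper).

        import numpy as np

        def bar_delta_f(w_f, w_r, tol=1e-10, max_iter=10_000):
            # Solve the self-consistent BAR equation (energies in units of kT).
            # w_f: forward work values sampled in state 0 (U1 - U0);
            # w_r: reverse work values sampled in state 1 (U0 - U1).
            M = np.log(len(w_f) / len(w_r))
            fermi = lambda x: 1.0 / (1.0 + np.exp(x))
            dF = 0.0
            for _ in range(max_iter):
                step = np.log(fermi(M + w_f - dF).sum()
                              / fermi(-M + w_r + dF).sum())
                dF += step
                if abs(step) < tol:
                    return dF
            raise RuntimeError("BAR iteration did not converge")

        rng = np.random.default_rng(6)
        w_f = rng.normal(2.5, 1.0, 5000)   # Gaussian toy model with true dF = 2 kT
        w_r = rng.normal(-1.5, 1.0, 5000)
        print(bar_delta_f(w_f, w_r))       # close to 2.0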

  9. Some refinements on the comparison of areal sampling methods via simulation

    Science.gov (United States)

    Jeffrey Gove

    2017-01-01

    The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...

  10. Preparation Of Deposited Sediment Sample By Casting Method For Environmental Study

    International Nuclear Information System (INIS)

    Hutabarat, Tommy; Ristin PI, Evarista

    2000-01-01

    The preparation of deposited sediment samples by the casting method for environmental study has been carried out. This method comprises the separation of size fractions and a casting process. The deposited sediment samples were wet sieved to separate the size fractions of >500 μm, (250-500) μm, (125-250) μm and (63-125) μm, and settling procedures were followed for the separation of the (40-63) μm, (20-40) μm, (10-20) μm and finer fractions; the samples were subsequently dried and then ashed at 450 °C. In the casting process, polyester rapid-cure resin and methyl ethyl ketone peroxide (MEKP) hardener were used. The moulded sediment sample was poured into the caster and allowed to cure for 60 hours. The aim of this method is to obtain casted samples that can be used effectively and efficiently, while avoiding cross-contamination between samples. Before casting, the samples were ground to a fine powder. The result shows that the casting product is ready to be used for natural radionuclide analysis

  11. A liquid chromatography-mass spectrometry method based on class characteristic fragmentation pathways to detect the class of indole-derivative synthetic cannabinoids in biological samples.

    Science.gov (United States)

    Mazzarino, Monica; de la Torre, Xavier; Botrè, Francesco

    2014-07-21

    This article describes a liquid chromatographic/tandem mass spectrometric method, based on the use of precursor ion scan as the acquisition mode, specifically developed to detect indole-derived cannabinoids (phenylacetylindoles, naphthoylindoles and benzoylindoles) in biological fluids (saliva, urine and blood). The method is designed to recognize one or more common "structural markers", corresponding to mass spectral fragments originating from the specific portion of the molecular structure that is common to the aminoalkylindole analogues and that is fundamental for their pharmacological classification. As such, the method is also suitable for detecting unknown substances, provided they contain the targeted portion of the molecular structure. The pre-treatment procedure consists of a liquid/liquid extraction step carried out at neutral pH: this is the only pretreatment in the case of analyses carried out in saliva, while it follows an enzymatic hydrolysis procedure in the case of urine samples, or a protein precipitation step in the case of blood samples. The chromatographic separation is achieved using an octadecyl reverse-phase 5 μm fused-core particle column, while the mass spectrometric detection is carried out by a triple-quadrupole instrument in positive electrospray ionization with precursor ion scan as the acquisition mode, selecting, as mass spectral fragments, the indole (m/z 144), the carbonylnaphthalenyl (m/z 155) and the naphthalenyl (m/z 127) moieties. Once developed and optimized, the analytical procedure was validated in terms of sensitivity (lower limits of detection in the range of 0.1-0.5 ng mL(-1)), specificity (no interference was detected at the retention times of the analytes under investigation), recovery (higher than 65%, with satisfactory repeatability: CV% lower than 10), matrix effect (lower than 30% for all the biological specimens tested), repeatability of the retention times (CV% lower than 0.1), robustness, and carry over (the positive

  12. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Uncertainty evaluation with the statistical method is performed by repeating the transport calculation with directly sampled perturbations of the nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. One known problem in uncertainty analysis with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as sampled negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed, and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors, and a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross sections were sampled from both the normal and lognormal distributions. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.

  13. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2013-01-01

    Uncertainty evaluation with the statistical method is performed by repeating the transport calculation with directly sampled perturbations of the nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. One known problem in uncertainty analysis with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as sampled negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed, and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors, and a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross sections were sampled from both the normal and lognormal distributions. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
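
    The following sketch illustrates the central idea shared by both versions of this record: matching the mean and variance of a lognormal distribution to an evaluated cross section and its uncertainty, so that sampled values can never be negative. The cross-section value and its 60% relative uncertainty are hypothetical, chosen only to make the normal-sampling failure visible.

```python
import numpy as np

def sample_lognormal_xs(mean, rel_std, n, rng):
    """Sample cross sections from a lognormal distribution whose mean and
    standard deviation match the evaluated value and its uncertainty."""
    var = (rel_std * mean) ** 2
    s2 = np.log(1.0 + var / mean**2)   # log-space variance
    mu = np.log(mean) - 0.5 * s2       # log-space mean
    return rng.lognormal(mean=mu, sigma=np.sqrt(s2), size=n)

rng = np.random.default_rng(1)
mean_xs, rel_std = 0.05, 0.60   # hypothetical cross section, 60% rel. unc.
normal = rng.normal(mean_xs, rel_std * mean_xs, 100_000)
lognorm = sample_lognormal_xs(mean_xs, rel_std, 100_000, rng)
print((normal < 0).mean())      # ~5% unphysical negative samples
print((lognorm < 0).mean())     # 0.0 -- lognormal support is positive
```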

  14. Evaluation of preparation methods for MS-based analysis of intestinal epithelial cell proteomes

    DEFF Research Database (Denmark)

    Hesselager, Marianne Overgaard; Codrea, Marius Cosmin; Bendixen, Emøke

    2015-01-01

    Intestinal epithelial cells are low in abundance, and large amounts of sample are needed for their preparation and for undertaking MS-based analysis. The aim of this study was to evaluate three different methods for isolation and preparation of pig intestinal epithelial cells for MS-based analysis of the proteome. Samples were analyzed by LC and electrospray QTOF-MS. The methods were evaluated according to efficiency, purity and transmembrane protein recovery, as well as for suitability for large-scale preparations. Our data clearly demonstrate that mucosal shaving is by far the best-suited method for in-depth MS analysis in terms of ease and speed of sample preparation, as well as protein recovery. In comparison, more gentle methods where intestinal epithelial cells are harvested by shaking are more time consuming, result in lower protein yield, and are prone to increased technical variation due to the multiple steps involved.

  15. Global metabolite analysis of yeast: evaluation of sample preparation methods

    DEFF Research Database (Denmark)

    Villas-Bôas, Silas Granato; Højer-Pedersen, Jesper; Åkesson, Mats Fredrik

    2005-01-01

    Sample preparation is considered one of the limiting steps in microbial metabolome analysis. Eukaryotes and prokaryotes behave very differently during the several steps of classical sample preparation methods for analysis of metabolites. Even within the eukaryote kingdom there is a vast diversity...

  16. An adaptive sampling and windowing interrogation method in PIV

    Science.gov (United States)

    Theunissen, R.; Scarano, F.; Riethmuller, M. L.

    2007-01-01

    This study proposes a cross-correlation based PIV image interrogation algorithm that adapts the number of interrogation windows and their size to the image properties and to the flow conditions. The proposed methodology releases the constraint of uniform sampling rate (Cartesian mesh) and spatial resolution (uniform window size) commonly adopted in PIV interrogation. Especially in non-optimal experimental conditions where the flow seeding is inhomogeneous, this leads either to loss of robustness (too few particles per window) or measurement precision (too large or coarsely spaced interrogation windows). Two criteria are investigated, namely adaptation to the local signal content in the image and adaptation to local flow conditions. The implementation of the adaptive criteria within a recursive interrogation method is described. The location and size of the interrogation windows are locally adapted to the image signal (i.e., seeding density). Also the local window spacing (commonly set by the overlap factor) is put in relation with the spatial variation of the velocity field. The viability of the method is illustrated over two experimental cases where the limitation of a uniform interrogation approach appears clearly: a shock-wave-boundary layer interaction and an aircraft vortex wake. The examples show that the spatial sampling rate can be adapted to the actual flow features and that the interrogation window size can be arranged so as to follow the spatial distribution of seeding particle images and flow velocity fluctuations. In comparison with the uniform interrogation technique, the spatial resolution is locally enhanced while in poorly seeded regions the level of robustness of the analysis (signal-to-noise ratio) is kept almost constant.
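
    A minimal sketch of one of the adaptive criteria described above: choosing the local interrogation-window size from the local seeding density so that each window holds a roughly constant number of particle images. The target count, size bounds and synthetic particle positions are illustrative assumptions, not values from the paper.

```python
import numpy as np

def adaptive_window_sizes(particle_xy, img_shape, coarse=64, n_target=10):
    """Choose a local interrogation-window size from the seeding density.

    Estimates the particle-image density on a coarse grid, then picks the
    window edge length so each window holds roughly n_target particles,
    in the spirit of adapting window size to local image signal content.
    """
    h, w = img_shape
    ny, nx = h // coarse, w // coarse
    counts, _, _ = np.histogram2d(
        particle_xy[:, 1], particle_xy[:, 0],
        bins=[ny, nx], range=[[0, h], [0, w]])
    density = counts / coarse**2              # particles per pixel
    density = np.maximum(density, 1e-6)       # avoid divide-by-zero
    sizes = np.sqrt(n_target / density)       # window edge length (pixels)
    return np.clip(sizes, 16, 128)            # practical bounds

rng = np.random.default_rng(2)
xy = rng.uniform(0, 512, size=(3000, 2))      # synthetic seeding positions
print(adaptive_window_sizes(xy, (512, 512)).round(1))
```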

  17. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian an military activities. Nevertheless, the methods to control work exposition have not evolved in the same way, remaining as one of the major challengers for the radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to Plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated. Those which gave the highest yield and feasibility were selected. The method involves: 1-) Sample concentration (coprecipitation); 2-) Plutonium purification; and 3-) Source preparation by electrodeposition. On the coprecipitation phase, changes on temperature and concentration of the carrier were evaluated. On the ion-exchange separation, changes on the type of the resin, elution solution for hydroxylamine (concentration and volume), length and column recycle were evaluated. Finally, on the electrodeposition phase, we modified the following: electrolytic solution, pH and time. Measures were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 C degree with 2 ml of CaHPO 4 ), 71% for ion-exchange (resins AG 1x8 Cl - 100-200 mesh, hydroxylamine 0.1N in HCl 0.2N as eluent, column between 4.5 and 8 cm), and 93% for electrodeposition (H 2 SO 4 -NH 4 OH, 100 minutes and pH from 2 to 2.8). The expand uncertainty was 30% (NC 95%), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to Plutonium. (author)

  18. Comparison of four sampling methods for the detection of Salmonella in broiler litter.

    Science.gov (United States)

    Buhr, R J; Richardson, L J; Cason, J A; Cox, N A; Fairchild, B D

    2007-01-01

    Experiments were conducted to compare litter sampling methods for the detection of Salmonella. In experiment 1, chicks were challenged orally with a suspension of nalidixic acid-resistant Salmonella and wing banded, and additional nonchallenged chicks were placed into each of 2 challenge pens. Nonchallenged chicks were placed into each nonchallenge pen located adjacent to the challenge pens. At 7, 8, 10, and 11 wk of age the litter was sampled using 4 methods: fecal droppings, litter grab, drag swab, and sock. For the challenge pens, Salmonella-positive samples were detected in 3 of 16 fecal samples, 6 of 16 litter grab samples, 7 of 16 drag swab samples, and 7 of 16 sock samples. Samples from the nonchallenge pens were Salmonella positive in 2 of 16 litter grab samples, 9 of 16 drag swab samples, and 9 of 16 sock samples. In experiment 2, chicks were challenged with Salmonella, and the litter in the challenge and adjacent nonchallenge pens was sampled at 4, 6, and 8 wk of age with broilers remaining in all pens. For the challenge pens, Salmonella was detected in 10 of 36 fecal samples, 20 of 36 litter grab samples, 14 of 36 drag swab samples, and 26 of 36 sock samples. Samples from the adjacent nonchallenge pens were positive for Salmonella in 6 of 36 fecal droppings samples, 4 of 36 litter grab samples, 7 of 36 drag swab samples, and 19 of 36 sock samples. Sock samples had the highest rates of Salmonella detection. In experiment 3, the litter from a Salmonella-challenged flock was sampled at 7, 8, and 9 wk by socks and drag swabs. In addition, comparisons with drag swabs that were stepped on during sampling were made. Both socks (24 of 36, 67%) and drag swabs that were stepped on (25 of 36, 69%) showed significantly more Salmonella-positive samples than the traditional drag swab method (16 of 36, 44%). Drag swabs that were stepped on had a Salmonella detection level comparable to that of socks. Litter sampling methods that incorporate stepping on the sample
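
    The abstract reports significance for the stepped-on drag swab and sock comparisons but does not state which test was used; Fisher's exact test is one standard choice for such 2x2 detection-rate comparisons, sketched below on the counts reported for experiment 3.

```python
from scipy.stats import fisher_exact

# Positives out of 36 litter samples, as reported for experiment 3.
socks, stepped, drag, n = 24, 25, 16, 36
for label, pos in [("socks", socks), ("stepped-on drag swabs", stepped)]:
    table = [[pos, n - pos], [drag, n - drag]]   # 2x2 contingency table
    odds, p = fisher_exact(table, alternative="greater")
    print(label, "vs drag swabs: p =", round(p, 3))
```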

  19. Some advances in importance sampling of reliability models based on zero variance approximation

    NARCIS (Netherlands)

    Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Scheinhardt, Willem R.W.; Juneja, Sandeep

    We are interested in estimating, through simulation, the probability of entering a rare failure state before a regeneration state. Since this probability is typically small, we apply importance sampling. The method that we use is based on finding the most likely paths to failure. We present an

  20. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Directory of Open Access Journals (Sweden)

    Daniel Vasiliu

    Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED. Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
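
    The paper's penalized Euclidean distance (PED) classifier is not available in common libraries; the sketch below uses an L1-penalized logistic regression as a stand-in to show the overall workflow of ranking genes by the magnitude of their coefficients in a classifier fitted to an ultra-low-n expression matrix. All data and settings are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical expression matrix: 6 samples (3 control, 3 perturbed)
# by 5000 genes -- the ultra-low-n, high-dimensionality regime.
rng = np.random.default_rng(3)
X = rng.normal(size=(6, 5000))
X[3:, :25] += 2.0                      # 25 truly perturbed genes
y = np.array([0, 0, 0, 1, 1, 1])

# Penalized classifier as a stand-in for PED: rank genes by the
# magnitude of their coefficients in the fitted model.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
ranking = np.argsort(-np.abs(clf.coef_[0]))
print(ranking[:10])                    # top-ranked candidate genes
```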

  1. Establishing a public health analytical service based on chemical methods for detecting and quantifying Pacific ciguatoxin in fish samples.

    Science.gov (United States)

    Stewart, Ian; Eaglesham, Geoffrey K; Poole, Sue; Graham, Glenn; Paulo, Carl; Wickramasinghe, Wasantha; Sadler, Ross; Shaw, Glen R

    2010-10-01

    A referee analysis method for the detection and quantification of Pacific ciguatoxins in fish flesh has recently been established by the public health analytical laboratory for the State of Queensland, Australia. Fifty-six fish samples were analysed, including 10 fillets purchased as negative controls. P-CTX-1 was identified in 27 samples, and P-CTX-2 and P-CTX-3 were found in 26 of those samples. The range of P-CTX-1 concentrations was 0.04-11.4 microg/kg fish flesh; the coefficient of variation from 90 replicate analyses was 7.4%. The high-performance liquid chromatography/tandem mass spectrometry (HPLC-MS/MS) method, utilising a rapid methanol extraction and clean-up, is reliable and reproducible, with a detection limit of 0.03 microg/kg fish flesh. Some matrix effects are evident, with fish oil content a likely signal suppression factor. Species identification of samples by DNA sequence analysis revealed some evidence of fish substitution or inadvertent misidentification, which may have implications for the management and prevention of ciguatera poisoning. Blinded inspection of case notes from suspect ciguatera poisoning cases showed that reporting of ciguatera-related paraesthesias was highly predictive for the presence of ciguatoxins in analysed fish, with 13 of 14 expected cases having consumed fish that contained P-CTX-1 (p<0.001, Fisher's exact test).

  2. A New Method of Stress Measurement Based upon Elastic Deformation of Core Sample with Stress Relief by Drilling

    Science.gov (United States)

    Ito, T.; Funato, A.; Tamagawa, T.; Tezuka, K.; Yabe, Y.; Abe, S.; Ishida, A.; Ogasawara, H.

    2017-12-01

    When rock is cored at depth by drilling, anisotropic expansion occurs with the relief of anisotropic rock stresses, resulting in a sinusoidal variation of core diameter with a period of 180 deg. in the core roll angle. The circumferential variation of core diameter is given theoretically as a function of the rock stresses. These findings suggest various approaches for estimating the rock stress from the circumferential variation of core diameter measured after core retrieval; a fitting sketch is given below. In the simplest case, when only a single core sample is available, the difference between the maximum and minimum components of rock stress in a plane perpendicular to the drilled hole can be estimated from the maximum and minimum core diameters (see the details in Funato and Ito, IJRMMS, 2017). The advantages of this method include (i) a much easier measurement operation than in other in-situ or in-lab estimation methods, and (ii) applicability in high-stress environments where hydro-fracturing stress measurements would require packer or pumping-system pressures beyond their tolerance levels. We have successfully tested the method at deep seismogenic zones in South African gold mines, and we are going to apply it to boreholes collared at 3 km depth and intersecting a M5.5 rupture plane several hundred meters below the mine workings in the ICDP project "Drilling into Seismogenic zones of M2.0 - M5.5 earthquakes in deep South African gold mines" (DSeis) (e.g., http://www.icdp-online.org/projects/world/africa/orkney-s-africa/details/). If several core samples with different orientations are available, all three principal components of the 3D rock stress can be estimated. To realize this, several boreholes should be drilled in different directions in a rock mass where the stress field is considered to be uniform. Boreholes are commonly drilled in different directions from a mine gallery. Even in a deep borehole drilled vertically from the ground surface, the
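
    The measurement reduces to fitting a 180-degree-period sinusoid to core diameter versus roll angle, as in the least-squares sketch below. Converting the fitted peak-to-peak amplitude into a stress difference requires the elastic relation derived in Funato and Ito (2017), which is not reproduced here; the diameters and angles are synthetic.

```python
import numpy as np

def fit_diameter_variation(theta_deg, diameter):
    """Fit d(theta) = d0 + a*cos(2*theta) + b*sin(2*theta) by least squares.

    The peak-to-peak amplitude of the fitted 180-degree-period sinusoid
    reflects the in-plane stress difference; converting it to stress
    requires the elastic relation of Funato and Ito (2017), not shown.
    """
    t = np.radians(theta_deg)
    A = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
    d0, a, b = np.linalg.lstsq(A, diameter, rcond=None)[0]
    amplitude = np.hypot(a, b)
    azimuth = 0.5 * np.degrees(np.arctan2(b, a))   # max-diameter direction
    return d0, 2 * amplitude, azimuth

theta = np.arange(0, 360, 10.0)                    # core roll angle (deg)
rng = np.random.default_rng(4)
d = (60.0 + 0.02 * np.cos(2 * np.radians(theta - 30))
     + rng.normal(0, 0.002, theta.size))           # synthetic diameters (mm)
print(fit_diameter_variation(theta, d))            # ~ (60.0, 0.04, 30.0)
```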

  3. A Rapid Identification Method for Calamine Using Near-Infrared Spectroscopy Based on Multi-Reference Correlation Coefficient Method and Back Propagation Artificial Neural Network.

    Science.gov (United States)

    Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli

    2017-07-01

    As a mineral, the traditional Chinese medicine calamine has a similar shape to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; therefore, given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples, including crude products, counterfeits and processed products, were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; in addition, the model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients against multiple references as the spectral feature data of the samples into the BP-ANN, a qualitative identification BP-ANN model was established, whose accuracy rate increased to 95%. The MRCC method can be used as an NIR-based method in the process of BP-ANN modeling.
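
    A sketch of the overall MRCC-plus-BP-ANN pipeline as described: correlation coefficients of each sample spectrum against several reference spectra serve as the input features of a small back-propagation neural network. The spectra, the number of references and the network architecture are placeholder assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def mrcc_features(spectra, references):
    """Correlation coefficient of each sample spectrum against each of
    several reference spectra (the multi-reference correlation
    coefficient idea); one feature per reference."""
    feats = np.empty((len(spectra), len(references)))
    for i, s in enumerate(spectra):
        for j, r in enumerate(references):
            feats[i, j] = np.corrcoef(s, r)[0, 1]
    return feats

# Hypothetical NIR data: random stand-ins for measured spectra.
rng = np.random.default_rng(5)
refs = rng.normal(size=(4, 1557))        # 4 reference spectra
spectra = rng.normal(size=(60, 1557))    # 60 calamine samples
labels = rng.integers(0, 3, size=60)     # crude / counterfeit / processed

X = mrcc_features(spectra, refs)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, labels)                       # BP-ANN on MRCC features
print(clf.predict(X[:5]))
```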

  4. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    Science.gov (United States)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
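
    The variance reduction idea can be sketched in a few lines: when a cheap derivative of the response is available, the first-order Taylor term (which has known zero mean under the input distribution) can be subtracted as a control variate, leaving the estimate unbiased while shrinking its spread. A minimal one-dimensional illustration follows, not the paper's structural test cases.

```python
import numpy as np

def cv_estimate(f, grad_f_mu, mu, sigma, n, rng):
    """Monte Carlo estimate of E[f(X)], X ~ N(mu, sigma^2), using the
    first-order Taylor term as a control variate: the linear term has
    known zero mean, so subtracting it leaves the mean unchanged but
    removes most of the variance when f is smooth."""
    x = rng.normal(mu, sigma, n)
    plain = f(x)
    controlled = f(x) - grad_f_mu * (x - mu)
    return (plain.mean(), plain.std(ddof=1),
            controlled.mean(), controlled.std(ddof=1))

mu, sigma = 1.0, 0.1
rng = np.random.default_rng(6)
m0, s0, m1, s1 = cv_estimate(np.sin, np.cos(mu), mu, sigma, 10_000, rng)
print(m0, s0)   # plain Monte Carlo
print(m1, s1)   # derivative-enhanced: same mean, much smaller spread
```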

  5. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    Science.gov (United States)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Statistical sampling therefore plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
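
    A minimal sketch of the LULHS construction as described: Latin Hypercube Sampling of independent standard normal deviates, followed by multiplication with a triangular factor of the spatial covariance matrix (for a symmetric covariance, the Cholesky factor plays the role of the LU decomposition). The grid, covariance model and correlation length are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc

# Build a spatial covariance for a 1-D grid (exponential model).
x = np.linspace(0.0, 100.0, 50)                   # grid coordinates
corr_len, var = 20.0, 1.0
cov = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(cov)   # triangular factor; for a symmetric
                              # covariance this is the "LU" step

# Latin Hypercube Sampling of independent standard normals, then
# imposing spatial correlation with the triangular factor: LULHS.
n_real = 200
lhs = qmc.LatinHypercube(d=len(x), seed=7).random(n=n_real)
z = norm.ppf(lhs)                                 # stratified N(0,1) deviates
fields = z @ L.T                                  # correlated random fields

print(fields.shape)                               # (200, 50)
print(np.corrcoef(fields[:, 0], fields[:, 1])[0, 1])  # ~exp(-dx/20)
```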

  6. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then – besides the frequency – the estimated values of amplitude, phase and dc component are obtained by minimizing the least square (LS) fitting error of the three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, an accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated against the Cramér-Rao lower bound (CRLB) for unbiased estimators. The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
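
    A sketch of the DGSSA idea as described in the abstract: the FFT peak and its two neighbors bound the frequency search interval, and a golden-section search then minimizes the least-squares residual of a three-parameter sine fit (amplitude, phase and dc enter linearly at a fixed trial frequency). The signal parameters below are illustrative, chosen to mirror the SNR = 20 dB, N = 512 case.

```python
import numpy as np

def sine_fit_error(f, t, x):
    """Least-squares residual of a three-parameter sine fit at a fixed
    trial frequency f; amplitude, phase and dc enter linearly via
    A*cos + B*sin + C, so they are solved by linear least squares."""
    M = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    r = x - M @ np.linalg.lstsq(M, x, rcond=None)[0]
    return r @ r

def dgssa(t, x, fs):
    """Golden-section search for the frequency between the FFT bins
    adjacent to the spectral peak (the three-sample scope)."""
    X = np.abs(np.fft.rfft(x))
    k = np.argmax(X[1:]) + 1                 # skip the dc bin
    a, b = (k - 1) * fs / len(x), (k + 1) * fs / len(x)
    g = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(60):
        if sine_fit_error(c, t, x) < sine_fit_error(d, t, x):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

fs, n = 1000.0, 512
t = np.arange(n) / fs
rng = np.random.default_rng(8)
x = np.sin(2 * np.pi * 123.37 * t + 0.5) + 0.2 + rng.normal(0, 0.1, n)
print(dgssa(t, x, fs))   # close to 123.37 Hz
```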

  7. [Examination of analytical method for triphenyltin (TPT) and tributyltin (TBT) to revise the official methods based on "Act on the Control of Household Products Containing Harmful Substances"].

    Science.gov (United States)

    Kawakami, Tsuyoshi; Isama, Kazuo; Nakashima, Harunobu; Yoshida, Jin; Ooshima, Tomoko; Ohno, Hiroyuki; Uemura, Hitoshi; Shioda, Hiroko; Kikuchi, Yoko; Matsuoka, Atsuko; Nishimura, Tetsuji

    2012-01-01

    The use of triphenyltin (TPT) and tributyltin (TBT) in some household products is banned by the "Act on the Control of Household Products Containing Harmful Substances" in Japan. To revise the official analytical method, the method for detecting these organotin compounds was examined in six laboratories using a textile product, a water-based adhesive and an oil-based paint containing known amounts of TPT and TBT (0.1, 1.0, 10 μg/g). TPT and TBT were measured by GC-MS after ethyl derivatization with sodium tetraethylborate. The TBT recoveries in the samples were 70-120%. The TPT recoveries in the water-based adhesive samples were 80-110%, while its concentrations in the textile product and oil-based paint samples decreased because of dephenylation during storage. However, the precision of the examined method was satisfactory, because most coefficients of variation for TPT and TBT in the samples were less than 10%. Furthermore, the revised method was able to detect concentrations lower than the officially regulated value. However, the sample matrix and the condition of the analytical instrument might affect the estimated TPT and TBT concentrations. Therefore, the revised method may not be suitable for quantitative tests; rather, it can be employed to judge the acceptable levels of these organotin compounds by comparing the values of a control sample containing regulated amounts of TPT and TBT with those for an unknown sample, with deuterated TPT and TBT as surrogate substances. It is desirable that TPT in textile and oil-based paint samples be analyzed immediately after the samples are obtained, because of the decomposition of TPT.

  8. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Science.gov (United States)

    Corsaro, Enrico; De Ridder, Joris

    2015-09-01

    The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars' power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role for studying stellar oscillations of different flavor with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights in stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, hence of free parameters that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what may be needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for frequencies and lifetimes for single oscillation modes, together with acoustic glitches, are therefore presented.

  9. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Directory of Open Access Journals (Sweden)

    Corsaro Enrico

    2015-01-01

    Full Text Available The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars’ power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role for studying stellar oscillations of different flavor with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights in stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, hence of free parameters that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what may be needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for frequencies and lifetimes for single oscillation modes, together with acoustic glitches, are therefore presented.
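
    A sketch of the evidence computation that underlies the detection criterion: fit a single Lorentzian mode profile to a synthetic power spectrum with the standard chi-squared (2 d.o.f.) spectrum likelihood and report the Bayesian evidence from nested sampling. The generic dynesty package is used here as a stand-in for the authors' own NSMC code, and all spectrum parameters are invented for illustration.

```python
import numpy as np
from dynesty import NestedSampler

# Synthetic power spectrum chunk: flat background plus one Lorentzian mode.
rng = np.random.default_rng(9)
nu = np.linspace(95.0, 105.0, 400)                  # frequency (muHz)
model_true = 1.0 + 5.0 / (1.0 + 4.0 * ((nu - 100.0) / 0.3) ** 2)
power = model_true * rng.exponential(1.0, nu.size)  # chi^2 2-d.o.f. noise

def loglike(p):
    h, nu0, gamma, bkg = p
    m = bkg + h / (1.0 + 4.0 * ((nu - nu0) / gamma) ** 2)
    return -np.sum(np.log(m) + power / m)           # spectrum likelihood

def prior_transform(u):
    # Map the unit cube to uniform priors on (height, centroid, width, bkg).
    return np.array([10.0 * u[0], 95.0 + 10.0 * u[1],
                     0.05 + 1.0 * u[2], 0.1 + 2.0 * u[3]])

sampler = NestedSampler(loglike, prior_transform, ndim=4, nlive=400)
sampler.run_nested(print_progress=False)
print(sampler.results.logz[-1])   # evidence; compare against a no-mode fit
```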

  10. 222Rn in water: A comparison of two sample collection methods and two sample transport methods, and the determination of temporal variation in North Carolina ground water

    International Nuclear Information System (INIS)

    Hightower, J.H. III

    1994-01-01

    Objectives of this field experiment were: (1) determine whether there was a statistically significant difference between the radon concentrations of samples collected by EPA's standard method, using a syringe, and an alternative, slow-flow method; (2) determine whether there was a statistically significant difference between the measured radon concentrations of samples mailed vs samples not mailed; and (3) determine whether there was a temporal variation of water radon concentration over a 7-month period. The field experiment was conducted at 9 sites, 5 private wells, and 4 public wells, at various locations in North Carolina. Results showed that a syringe is not necessary for sample collection, there was generally no significant radon loss due to mailing samples, and there was statistically significant evidence of temporal variations in water radon concentrations

  11. DOE method for evaluating environmental and waste management samples: Revision 1, Addendum 1

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S.C.

    1995-04-01

    The US Department of Energy's (DOE's) environmental and waste management (EM) sampling and analysis activities require that large numbers of samples be analyzed for materials characterization, environmental surveillance, and site-remediation programs. The present document, DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods), is a supplemental resource for analyzing many of these samples.

  12. DOE method for evaluating environmental and waste management samples: Revision 1, Addendum 1

    International Nuclear Information System (INIS)

    Goheen, S.C.

    1995-04-01

    The US Department of Energy's (DOE's) environmental and waste management (EM) sampling and analysis activities require that large numbers of samples be analyzed for materials characterization, environmental surveillance, and site-remediation programs. The present document, DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods), is a supplemental resource for analyzing many of these samples.

  13. Application of the neutron activation analysis method to the multielemental determination of food samples

    International Nuclear Information System (INIS)

    Maihara, V.A.

    1985-01-01

    The thermal neutron activation analysis method was applied to the determination of elements present at low concentrations and trace levels in samples of bread and milk powder. The non-destructive analyses were based on gamma-ray spectrometric measurements of samples and standards irradiated for periods varying from a few minutes to eight hours in a thermal neutron flux of about 10^12 n cm^-2 s^-1. The concentrations obtained for milk powder were compared with data obtained by other authors from different countries. For bread, that comparison was not possible because data on trace analysis in bread samples were not found. In addition, the results obtained for the various brands of bread and milk by non-destructive and destructive analyses were compared using Student's t criterion. Some basic considerations about the 'detection limit' were made, mainly in relation to its application in the technique used in the present work. The detection and determination limits of the trace elements analysed by destructive and non-destructive techniques in bread and milk powder samples were determined using the Currie and Girardi methods. The precision of the analyses and the results obtained for the detection limits of the analysed trace elements are discussed. (Author) [pt]
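
    For reference, the widely used Currie expressions behind the detection-limit discussion, stated here for the common case of a measurement paired with a blank of equal counting time and 5% false-positive and false-negative risks (in counts):

```latex
L_C = 2.33\,\sqrt{\mu_B}, \qquad L_D = 2.71 + 4.65\,\sqrt{\mu_B}
```

    where μ_B is the expected blank (background) count; the decision threshold L_C controls false positives, while the detection limit L_D controls false negatives.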

  14. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    International Nuclear Information System (INIS)

    Zhu, T.

    2015-01-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation- and safety-related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently, in 2011, was the capability of calculating continuous-energy k_eff sensitivity to nuclear data demonstrated in certain M-C codes, by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX's ACE format of nuclear data, which also gives NUSS compatibility with the MCNP and SERPENT M-C codes. Multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner, due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same

  15. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, T.

    2015-07-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation- and safety-related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently, in 2011, was the capability of calculating continuous-energy k_eff sensitivity to nuclear data demonstrated in certain M-C codes, by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX's ACE format of nuclear data, which also gives NUSS compatibility with the MCNP and SERPENT M-C codes. Multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner, due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same
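
    The sampling approach itself is compact enough to sketch: draw correlated multigroup cross-section sets from their covariance, run the model once per sample, and read the output spread as the propagated nuclear data uncertainty. The toy model and covariance below are stand-ins for MCNPX and real covariance data.

```python
import numpy as np

def toy_model(xs):
    """Stand-in for a transport calculation: maps a vector of multigroup
    cross sections to a scalar output (think k_eff)."""
    w = np.linspace(1.0, 2.0, xs.size)
    return (w * xs).sum() / (1.0 + xs.sum())

rng = np.random.default_rng(10)
n_groups, n_samples = 8, 500
mean_xs = np.full(n_groups, 0.5)
# Hypothetical relative covariance with inter-group correlation.
rel_cov = 0.05**2 * (0.5 * np.eye(n_groups) + 0.5)
cov = rel_cov * np.outer(mean_xs, mean_xs)

# One model run per sampled cross-section set; the output variance is
# attributed to the collective nuclear data uncertainties.
samples = rng.multivariate_normal(mean_xs, cov, size=n_samples)
outputs = np.array([toy_model(s) for s in samples])
print(outputs.mean(), outputs.std(ddof=1))   # propagated mean and uncertainty
```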

  16. Multicommuted flow injection method for fast photometric determination of phenolic compounds in commercial virgin olive oil samples.

    Science.gov (United States)

    Lara-Ortega, Felipe J; Sainz-Gonzalo, Francisco J; Gilbert-López, Bienvenida; García-Reyes, Juan F; Molina-Díaz, Antonio

    2016-01-15

    A multicommuted flow injection method has been developed for the determination of phenolic species in virgin olive oil samples. The method is based on the inhibitory effect of antioxidants on the formation of a stable and colored radical cation (DMPD•+) from the colorless compound N,N-dimethyl-p-phenylenediamine (DMPD) in acidic medium, in the presence of Fe(III) as oxidant. The signal inhibition by phenolic species and other antioxidants is proportional to their concentration in the olive oil sample. Absorbance was recorded at 515 nm by means of a modular fiber optic spectrometer. Oleuropein was used as the standard for phenols determination, and 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (trolox) was the reference standard used for total antioxidant content calculation. Linear response was observed within the range of 250-1000 mg/kg oleuropein, which was in accordance with the phenolic contents observed in commercial extra virgin olive oil in the present study. A fast, low-volume liquid-liquid extraction of the samples using 60% MeOH was performed prior to their insertion into the multicommuted flow system. The five three-way solenoid valves used for multicommuted liquid handling were controlled by a homemade electronic interface and Java-written software. The proposed approach was applied to different commercial extra virgin olive oil samples and the results were consistent with those obtained by the Folin-Ciocalteu (FC) method. The total time for sample preparation and analysis is drastically reduced: the throughput of the present approach is 8 samples/h, in contrast to 1 sample/h for the conventional FC method. The present method is easy to implement in routine analysis and can be regarded as a feasible alternative to the FC method.

  17. Nanoparticle-assisted laser desorption/ionization mass spectrometry: Novel sample preparation methods and nanoparticle screening for plant metabolite imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yagnik, Gargey B. [Iowa State Univ., Ames, IA (United States)

    2016-02-19

    The main goal of the presented research is the development of nanoparticle-based matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). This dissertation includes the application of previously developed data acquisition methods, the development of novel sample preparation methods, the application and comparison of novel nanoparticle matrices, and the comparison of two nanoparticle matrix application methods for MALDI-MS and MALDI-MS imaging.

  18. A sampling and metagenomic sequencing-based methodology for monitoring antimicrobial resistance in swine herds

    DEFF Research Database (Denmark)

    Munk, Patrick; Dalhoff Andersen, Vibe; de Knegt, Leonardo

    2016-01-01

    Objectives: Reliable methods for monitoring antimicrobial resistance (AMR) in livestock and other reservoirs are essential to understand the trends, transmission and importance of agricultural resistance. Quantification of AMR is mostly done using culture-based techniques, but metagenomic read mapping shows promise for quantitative resistance monitoring. Methods: We evaluated the ability of (i) MIC determination for Escherichia coli, (ii) cfu counting of E. coli, (iii) cfu counting of aerobic bacteria and (iv) metagenomic shotgun sequencing to predict expected tetracycline resistance based on antimicrobial consumption. Results: Metagenomic sequencing outperformed the cultivation-based techniques in terms of predicting expected tetracycline resistance based on antimicrobial consumption. Our metagenomic approach had sufficient resolution to detect antimicrobial-induced changes to individual resistance gene abundances. Pen floor manure samples were found to represent rectal...

  19. Development of sample preparation method for honey analysis using PIXE

    International Nuclear Information System (INIS)

    Saitoh, Katsumi; Chiba, Keiko; Sera, Koichiro

    2008-01-01

    We developed an original preparation method for honey samples (samples in a paste-like state) specifically designed for PIXE analysis. The results of PIXE analysis of thin targets, prepared by adding a standard containing nine elements to honey samples, demonstrated that the preparation method provides sufficient accuracy for quantitative analysis. PIXE analysis of 13 kinds of honey was performed, and eight mineral components (Si, P, S, K, Ca, Mn, Cu and Zn) were detected in all honey samples. The principal mineral components were K and Ca, and the quantitative value for K accounted for the majority of the total value for mineral components. K content in honey varies greatly depending on the plant source: chestnut honey had the highest K content, 2-3 times that of Manuka, which is known as a high-quality honey, while the K content of false-acacia honey, which is produced in the greatest abundance, was 1/20 that of chestnut. (author)

  20. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Directory of Open Access Journals (Sweden)

    Q. Zhang

    2018-02-01

    Full Text Available River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling – in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) – are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2), and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb–Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among
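
    A sketch of the Lomb-Scargle slope estimation examined in the paper, using the astropy implementation: build a Brown-noise (β = 2) series, subsample it irregularly, compute the periodogram and fit a log-log slope. The grid sizes and frequency range are illustrative; consistent with the abstract's findings, the recovered β tends to come out low.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Brown-noise series (beta = 2) on a fine grid, then irregular subsampling.
rng = np.random.default_rng(11)
t_fine = np.arange(20_000, dtype=float)
y_fine = np.cumsum(rng.normal(size=t_fine.size))
idx = np.sort(rng.choice(t_fine.size, size=600, replace=False))
t, y = t_fine[idx], y_fine[idx]

# Lomb-Scargle periodogram on the irregular samples, then a log-log
# slope fit; beta is estimated as the negative of the fitted slope.
freq = np.logspace(-3.5, -1.5, 200)
power = LombScargle(t, y - y.mean()).power(freq)
slope = np.polyfit(np.log10(freq), np.log10(power), 1)[0]
print("estimated beta:", -slope)   # biased low, as the paper reports
```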