WorldWideScience

Sample records for bias sampling method

  1. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline for accounting for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling bias corresponding to potential types of empirical bias. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity, and species. However, simple systematic sampling of records consistently ranked among the best performing methods across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline for accounting for sampling bias. Nevertheless, systematic sampling of records appears to be the most efficient way of correcting sampling bias and can be recommended in most cases.
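
    The "systematic sampling of records" that performed best here is typically implemented as spatial thinning: keep at most one occurrence per cell of a regular grid. The Python sketch below is a minimal illustration of that idea under assumed coordinates and grid size, not the authors' exact implementation.

```python
import random
from collections import defaultdict

def systematic_filter(records, cell_size, seed=0):
    """Keep at most one occurrence record per grid cell.

    records   -- iterable of (longitude, latitude) tuples
    cell_size -- grid resolution in degrees (an assumed tuning choice)
    """
    random.seed(seed)
    cells = defaultdict(list)
    for lon, lat in records:
        key = (int(lon // cell_size), int(lat // cell_size))
        cells[key].append((lon, lat))
    # One record per occupied cell removes local clusters of effort.
    return [random.choice(pts) for pts in cells.values()]

# Example: a dense cluster of sampling effort near (0, 0) collapses to a
# single record, while isolated records elsewhere are retained.
records = [(0.01 * i, 0.01 * j) for i in range(10) for j in range(10)]
records += [(5.0, 5.0), (8.0, 2.0)]
print(len(records), "->", len(systematic_filter(records, cell_size=1.0)))
```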

  2. Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions.

    Science.gov (United States)

    Marinelli, Fabrizio; Faraldo-Gómez, José D

    2015-06-16

    We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can therefore be readily used with multiple MD engines.
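
    The sketch below is not the EBMetaD update rule itself (EBMetaD deposits its bias adaptively during a single MD run); it is only a toy illustration of the maximum-entropy idea the abstract describes: a bias of the form kT ln[p(s)/pi(s)] is the minimal potential that reshapes sampling from the unbiased distribution p toward a target distribution pi. The double-well potential, Gaussian target, and kT = 1 are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
bins = np.linspace(-2.5, 2.5, 51)
centers = 0.5 * (bins[:-1] + bins[1:])

def U(s):                                      # toy double-well potential
    return (s * s - 1.0) ** 2

target = np.exp(-0.5 * (centers / 0.5) ** 2)   # target distribution pi(s)
target /= target.sum()
V = np.zeros_like(centers)                     # adaptive bias on a grid

def metropolis(n, V):
    """Metropolis walk on U + V, with V linearly interpolated."""
    s, out = 0.0, np.empty(n)
    for i in range(n):
        t = s + rng.normal(0.0, 0.3)
        if -2.5 < t < 2.5:
            dE = (U(t) + np.interp(t, centers, V)
                  - U(s) - np.interp(s, centers, V))
            if rng.random() < np.exp(-dE):
                s = t
        out[i] = s
    return out

for _ in range(15):
    traj = metropolis(20000, V)
    p_hat, _ = np.histogram(traj, bins=bins)
    p_hat = (p_hat + 1e-9) / p_hat.sum()
    V += 0.5 * np.log(p_hat / target)          # damped step toward kT*ln(p/pi)

# Histograms of fresh walks on U + V should now track `target`.
```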

  3. Sampling international migrants with origin-based snowballing method: New evidence on biases and limitations.

    Directory of Open Access Journals (Sweden)

    Cris Beauchemin

    2011-07-01

    This paper provides a methodological assessment of the advantages and drawbacks of the origin-based snowballing technique as a reliable method to construct representative samples of international migrants in destination areas. Using data from the MAFE-Senegal Project, our results indicate that this is a very risky method in terms of quantitative success. Besides, it implies some clear selection biases: it over-represents migrants more strongly connected to their home country, and it tends to overestimate both poverty in households at origin and the influence of previous migration experiences of social networks on individuals' out-migration.

  4. Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty.

    Science.gov (United States)

    Anderson, Samantha F; Kelley, Ken; Maxwell, Scott E

    2017-09-01

    The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.
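
    BUCSS itself is an R package; the Python sketch below is only a rough normal-approximation analogue of the general idea (adjust a published test statistic for the fact that it survived a significance filter before basing a power analysis on it), not the package's algorithm. All numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def adjusted_z(z_obs, z_crit=1.96):
    """Truncated-normal MLE of the true z, given publication (z_obs > z_crit)."""
    def neg_loglik(mu):
        # density of the observed z conditional on passing the filter
        return -(norm.logpdf(z_obs, mu, 1) - norm.logsf(z_crit - mu))
    return minimize_scalar(neg_loglik, bounds=(-5.0, z_obs),
                           method="bounded").x

def n_required(d, power=0.80, alpha=0.05):
    """One-sample normal-approximation sample size for effect d (z = d*sqrt(n))."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ((za + zb) / d) ** 2

z_obs, n_prior = 3.2, 50                 # a significant prior study
d_naive = z_obs / np.sqrt(n_prior)       # face-value standardized effect
d_adj = adjusted_z(z_obs) / np.sqrt(n_prior)
print(f"naive d = {d_naive:.3f} -> n = {n_required(d_naive):.0f}")
print(f"adjusted d = {d_adj:.3f} -> n = {n_required(d_adj):.0f}  (larger)")
```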

  5. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    Science.gov (United States)

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods: sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean are walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications when efficiency is not paramount.

  6. Finite-Sample Bias Propagation in Autoregressive Estimation With the Yule–Walker Method

    NARCIS (Netherlands)

    Broersen, P.M.T.

    2009-01-01

    The Yule-Walker (YW) method for autoregressive (AR) estimation uses lagged-product (LP) autocorrelation estimates to compute an AR parametric spectral model. The LP estimates only have a small triangular bias in the estimated autocorrelation function and are asymptotically unbiased. However, using t
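
    A minimal Yule-Walker fit makes the lagged-product bias concrete: the fixed 1/N normalization scales the lag-k autocorrelation by (N - k)/N, which for short samples pulls the estimated AR coefficient toward zero. The AR(1) coefficient and sample size below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def yule_walker(x, p):
    """AR(p) fit from lagged-product (1/N-normalized) autocorrelations."""
    x = np.asarray(x, float) - np.mean(x)
    N = len(x)
    # Fixed 1/N normalizer => 'triangular' bias: lag k is shrunk by (N - k)/N.
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(p + 1)])
    return solve(toeplitz(r[:p]), r[1:p + 1])

rng = np.random.default_rng(1)
x = np.zeros(100)
for t in range(1, len(x)):            # AR(1) with a pole near the unit circle
    x[t] = 0.95 * x[t - 1] + rng.normal()
print(yule_walker(x, 1))              # typically noticeably below 0.95
```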

  7. Assessing total nitrogen in surface-water samples--precision and bias of analytical and computational methods

    Science.gov (United States)

    Rus, David L.; Patton, Charles J.; Mueller, David K.; Crawford, Charles G.

    2013-01-01

    The characterization of total-nitrogen (TN) concentrations is an important component of many surface-water-quality programs. However, three widely used methods for the determination of total nitrogen—(1) derived from the alkaline-persulfate digestion of whole-water samples (TN-A); (2) calculated as the sum of total Kjeldahl nitrogen and dissolved nitrate plus nitrite (TN-K); and (3) calculated as the sum of dissolved nitrogen and particulate nitrogen (TN-C)—all include inherent limitations. A digestion process is intended to convert multiple species of nitrogen that are present in the sample into one measurable species, but this process may introduce bias. TN-A results can be negatively biased in the presence of suspended sediment, and TN-K data can be positively biased in the presence of elevated nitrate because some nitrate is reduced to ammonia and is therefore counted twice in the computation of total nitrogen. Furthermore, TN-C may not be subject to bias but is comparatively imprecise. In this study, the effects of suspended-sediment and nitrate concentrations on the performance of these TN methods were assessed using synthetic samples developed in a laboratory as well as a series of stream samples. A 2007 laboratory experiment measured TN-A and TN-K in nutrient-fortified solutions that had been mixed with varying amounts of sediment-reference materials. This experiment identified a connection between suspended sediment and negative bias in TN-A and detected positive bias in TN-K in the presence of elevated nitrate. A 2009–10 synoptic-field study used samples from 77 stream-sampling sites to confirm that these biases were present in the field samples and evaluated the precision and bias of TN methods. The precision of TN-C and TN-K depended on the precision and relative amounts of the TN-component species used in their respective TN computations. Particulate nitrogen had an average variability (as determined by the relative standard deviation) of 13
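
    As an illustration of how the precision of a computed TN value depends on the precision and relative amounts of its components, the snippet below propagates independent component errors through a sum. The concentrations and relative standard deviations are invented, not values from the study.

```python
import math

def rsd_of_sum(parts):
    """Relative SD of a sum of independent components.

    parts -- list of (concentration, relative SD) pairs
    """
    total = sum(c for c, _ in parts)
    sd = math.sqrt(sum((c * rsd) ** 2 for c, rsd in parts))
    return sd / total

# e.g. TN-C = dissolved N + particulate N, with particulate N the
# imprecise term (made-up numbers, concentrations in mg/L):
print(rsd_of_sum([(0.8, 0.05), (0.4, 0.13)]))   # ~0.055, dominated by PN
```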

  8. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological, and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), which may affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  9. Sampling Bias on Cup Anemometer Mean Winds

    Science.gov (United States)

    Kristensen, L.; Hansen, O. F.; Højstrup, J.

    2003-10-01

    The cup anemometer signal can be sampled in several ways to obtain the mean wind speed. Here we discuss the sampling of series of mean wind speeds from consecutive rotor rotations, followed by unweighted and weighted averaging. It is shown that the unweighted averaging creates a positive bias on the long-term mean wind speed that is at least one order of magnitude larger than the positive bias from the weighted averaging, also known as the sample-and-hold method. For a homogeneous, neutrally stratified flow the former bias is 1%-2%. For comparison, the biases due to fluctuations of the three wind velocity components and due to calibration non-linearity are determined under the same conditions. The largest of these is the v-bias from direction fluctuations. The calculations pertain to the Risø P2546A model cup anemometer.
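
    The sign of the rotation-sampling bias is easy to reproduce with a toy calculation: per-revolution mean speeds arrive at a rate proportional to the wind speed itself, so their unweighted average is biased high, while weighting each revolution by its duration (the sample-and-hold idea) recovers the time-mean speed. The wind-speed model below is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
# Per-revolution mean wind speeds; each revolution sweeps a fixed angle,
# so its duration is proportional to 1/u.
u = np.maximum(0.5, 8.0 + 1.5 * rng.standard_normal(200000))
dt = 1.0 / u                              # revolution durations (arb. units)

unweighted = u.mean()                     # plain average over revolutions
weighted = np.sum(u * dt) / np.sum(dt)    # duration-weighted (time mean)

print(f"duration-weighted mean: {weighted:.3f}")
print(f"unweighted mean:        {unweighted:.3f}  (biased high)")
```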

  10. The gas chromatographic determination of volatile fatty acids in wastewater samples: evaluation of experimental biases in direct injection method against thermal desorption method.

    Science.gov (United States)

    Ullah, Md Ahsan; Kim, Ki-Hyun; Szulejko, Jan E; Cho, Jinwoo

    2014-04-11

    The production of short-chained volatile fatty acids (VFAs) by the anaerobic bacterial digestion of sewage (wastewater) affords an excellent opportunity to produce viable, greener bio-energy fuels (e.g., for microbial fuel cells). VFAs in wastewater (sewage) samples are commonly quantified through direct injection (DI) into a gas chromatograph with a flame ionization detector (GC-FID). In this study, the reliability of VFA analysis by the DI-GC method has been examined against a thermal desorption (TD-GC) method. The results indicate that the VFA concentrations determined from an aliquot of each wastewater sample by the DI-GC method were generally underestimated, e.g., reductions of 7% (acetic acid) to 93.4% (hexanoic acid) relative to the TD-GC method. The observed differences between the two methods suggest a possibly important role of the matrix effect in giving rise to the negative biases in DI-GC analysis. To further explore this possibility, an ancillary experiment was performed to examine the bias patterns of three DI-GC approaches. For instance, the results of the standard addition (SA) method confirm the definite role of the matrix effect when analyzing wastewater samples by DI-GC. More importantly, the biases tend to increase systematically with increasing molecular weight and decreasing VFA concentration. As such, the DI-GC method, if applied to the analysis of samples with a complicated matrix, needs thorough validation to improve the reliability of data acquisition.
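
    The standard addition (SA) method referred to above quantifies an analyte in its own matrix: aliquots of the same sample are spiked with known amounts, the response is regressed on the spike level, and the line is extrapolated to zero response. A sketch with invented numbers:

```python
import numpy as np

spike = np.array([0.0, 5.0, 10.0, 20.0])      # added VFA (ug), assumed levels
signal = np.array([12.1, 21.8, 32.2, 52.0])   # GC-FID peak areas (invented)

slope, intercept = np.polyfit(spike, signal, 1)
# The sample's own content is the magnitude of the x-intercept; because
# the calibration slope is measured in-matrix, matrix effects cancel.
print(f"estimated analyte in sample: {intercept / slope:.2f} ug")
```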

  11. Linking questions to practices in the study of microbial pathogens: sampling bias and typing methods.

    Science.gov (United States)

    Gómez-Díaz, Elena

    2009-12-01

    The importance of understanding the population genetics and evolution of microbial pathogens is increasing as a result of the spread and re-emergence of many infectious diseases and their impact for public health. In the last few years, the development of high throughput multi-gene sequence methodologies has opened new opportunities for studying pathogen populations, providing reliable and robust means for both epidemiological and evolutionary investigations. For instance, for many pathogens, multilocus sequence typing has become the "gold standard" in molecular epidemiology, allowing strain identification and discovery. However, there is a huge gap between typing a clinical collection of isolates and making inferences about their evolutionary history and population genetics. Critical issues for studying microbial pathogens such as an adequate sampling design and the appropriate selection of the genetic technique are also required, and will rely on the scale of study and the characteristics of the biological system (e.g., multi- vs. single-host pathogens and vector vs. food or air-borne pathogens). My aim here is to discuss some of these issues in more detail and illustrate how these aspects are often overlooked and easily neglected in the field. Finally, given the rapid accumulation of complete genome sequences and the increasing effort on microbiology research, it is clear that now more than ever integrative approaches bringing together epidemiology and evolutionary biology are needed for understanding the diversity of microbial pathogens.

  12. Positive bias and vacuum chamber wall effect on total electron yield measurement: A re-consideration of the sample current method

    Science.gov (United States)

    Ye, Ming; Wang, Dan; Li, Yun; He, Yong-ning; Cui, Wan-zhao; Daneshmand, Mojgan

    2017-02-01

    The measurement of the total secondary electron yield (TEY, δ) is of fundamental importance in areas such as accelerators, spacecraft, detectors, and plasma systems. Most running TEY facilities in the world are based on some kind of bias strategy, where an applied bias assists in the collection of the secondary/primary electrons. In the prevailing sample current method, the TEY is obtained by measuring the current from the sample to ground with a negative/positive bias applied to the sample. One of the basic assumptions in this method is that the positive bias retains most of the electrons emitted by the sample. This assumption is generally accepted based on the seeming fact that low-energy secondary electrons dominate the emitted electrons. In this work, by considering the full electron energy spectrum, including both the true secondary and backscattered electrons, we give new insight into this TEY measurement method. Through analytical derivation as well as Particle-in-Cell numerical simulation, we show that it is the following two factors, rather than the assumption mentioned above, that make the sample current method work satisfactorily: (a) the TEY relative error is related to the TEY itself in the form of |1 - δ|/δ, which implies the smallest error when measuring samples with TEY closest to 1; and (b) the compensation effect of the vacuum chamber wall. Analytical results agree well with numerical simulations and, furthermore, we present a correction method for reducing the TEY relative error when measuring samples with TEY below 1. By sweeping the positive bias from 50 to 500 V, a flat silver sample in the as-received state with maximum TEY larger than 2 and a laser-etched sample with maximum TEY close to 1 were measured for further verification. The obtained experimental results agree well with the theoretical analysis.
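
    The quoted error term can be tabulated directly; a few lines make the minimum at δ = 1 explicit:

```python
# Relative TEY error |1 - delta| / delta from the abstract: it vanishes
# for delta = 1 and grows as the yield departs from unity either way.
for delta in (0.5, 0.8, 1.0, 1.5, 2.0):
    print(f"delta = {delta:.1f} -> |1 - delta|/delta = {abs(1 - delta) / delta:.2f}")
```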

  13. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    Science.gov (United States)

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscapes. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called the adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust the bias adaptively at different steps of the sampling process, with the bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method, recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess the sampling results objectively.
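
    The sketch below shows plain, non-adaptive importance sampling for a rare event in a birth-death process, i.e., the weighted-SSA idea that ABSIS builds on; the look-ahead, state-dependent bias selection that defines ABSIS is not reproduced. Rate constants, the bias factor, and the event definition are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def weighted_trajectory(k_birth, k_death, x0, threshold, t_max, bias):
    """Return the likelihood-ratio weight if X reaches threshold, else 0."""
    x, t, logw = x0, 0.0, 0.0
    while t < t_max:
        a = np.array([k_birth, k_death * x])          # true propensities
        b = np.array([bias * k_birth, k_death * x])   # biased propensities
        B = b.sum()
        tau = rng.exponential(1.0 / B)                # biased waiting time
        j = rng.choice(2, p=b / B)                    # biased reaction choice
        # likelihood ratio of (tau, j): [a_j e^{-A tau}] / [b_j e^{-B tau}]
        logw += np.log(a[j] / b[j]) + (B - a.sum()) * tau
        t += tau
        x += 1 if j == 0 else -1
        if x >= threshold:
            return np.exp(logw)
    return 0.0

# P(X reaches 30 before t = 10 | X0 = 10), with birth 1.0 and death 0.1*x.
w = [weighted_trajectory(1.0, 0.1, 10, 30, 10.0, bias=3.0)
     for _ in range(5000)]
print("estimate:", np.mean(w))   # noisy; ABSIS picks its biases adaptively
```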

  14. Evaluating of bone healing around porous coated titanium implant and potential systematic bias on the traditional sampling method

    DEFF Research Database (Denmark)

    Babiker, Hassan; Ding, Ming; Overgaard, Søren

    2013-01-01

    could be affected by the various quality and quantity of bone in the local environment. Thus, implant fixation in one part might differ from the other part of the implant. This study aimed to investigate the influence of the sampling method on data evaluation. Material and methods: Titanium alloy implants...

  15. Bias Assessment of General Chemistry Analytes using Commutable Samples.

    Science.gov (United States)

    Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter

    2014-11-01

    Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years, but analytical bias may prevent this harmonisation. To determine whether analytical bias is present when comparing methods, commutable samples, i.e., samples that have the same properties as the clinical samples routinely analysed, should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that, of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small as not to prevent harmonisation of reference intervals. Evidence-based approaches, including the determination of analytical bias using commutable material, are necessary when seeking to harmonise reference intervals.

  16. Accelerated failure time model under general biased sampling scheme.

    Science.gov (United States)

    Kim, Jane Paik; Sit, Tony; Ying, Zhiliang

    2016-07-01

    Right-censored time-to-event data are sometimes observed from a (sub)cohort of patients whose survival times can be subject to outcome-dependent sampling schemes. In this paper, we propose a unified estimation method for semiparametric accelerated failure time models under general biased sampling schemes. The proposed estimator of the regression covariates is developed upon a bias-offsetting weighting scheme and is proved to be consistent and asymptotically normally distributed. Large sample properties for the estimator are also derived. Using rank-based monotone estimating functions for the regression parameters, we find that the estimating equations can be easily solved via convex optimization. The methods are confirmed through simulations and illustrated by application to real datasets on various sampling schemes, including length-biased sampling, the case-cohort design, and its variants.

  17. The estimation method of GPS instrumental biases

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A model for estimating global positioning system (GPS) instrumental biases and methods to calculate the relative instrumental biases of satellites and receivers are presented. The calculated results for GPS instrumental biases, the relative instrumental biases of satellites and receivers, and total electron content (TEC) are also shown. Finally, the stability of the GPS instrumental biases, as well as that of the satellite and receiver instrumental biases, is evaluated, indicating that they are very stable over a period of two and a half months.

  18. SURVIVAL ANALYSIS AND LENGTH-BIASED SAMPLING

    Directory of Open Access Journals (Sweden)

    Masoud Asgharian

    2010-12-01

    When survival data are collected as part of a prevalent cohort study, the recruited cases have already experienced their initiating event. These prevalent cases are then followed for a fixed period of time, at the end of which the subjects will either have failed or have been censored. When interest lies in estimating the survival distribution, from onset, of subjects with the disease, one must take into account that the survival times of the cases in a prevalent cohort study are left truncated. When it is possible to assume that there has not been any epidemic of the disease over the period of time that covers the onset times of the subjects, one may assume that the underlying incidence process that generates the initiating event times is a stationary Poisson process. Under such an assumption, the survival times of the recruited subjects are called “length-biased”. I discuss the challenges one faces in analyzing this type of data. To address the theoretical aspects of the work, I present asymptotic results for the NPMLE of the length-biased as well as the unbiased survival distribution. I also discuss estimating the unbiased survival function using only the follow-up time. This addresses the case where the onset times are either unknown or known with uncertainty. Some of our most recent work and open questions will be presented. These include some aspects of the analysis of covariates, strong approximation, functional LIL, and density estimation under length-biased sampling with right censoring. The results are illustrated with survival data from patients with dementia, collected as part of the Canadian Study of Health and Aging (CSHA).
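
    For uncensored data the length-bias correction reduces to weighting each observed time by 1/t: if the sampled density is g(t) = t f(t)/E[T], then 1/t-weighted averages recover expectations under the unbiased f. The simulation below assumes an exponential survival distribution; handling right censoring as in the abstract requires the NPMLE machinery discussed there.

```python
import numpy as np

rng = np.random.default_rng(4)
pop = rng.exponential(5.0, 500_000)            # true survival times, mean 5

# Length-biased recruitment: accept each case with probability ~ t.
keep = rng.random(pop.size) < pop / pop.max()
sample = pop[keep]

naive = sample.mean()                          # biased upward (about 10 here)
w = 1.0 / sample
corrected = np.sum(w * sample) / np.sum(w)     # harmonic-mean correction
print(f"naive {naive:.2f}, corrected {corrected:.2f}, true mean 5.00")
```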

  19. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    Science.gov (United States)

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlap between "reference founding core" distributions and the distributions obtained by sampling the present-day communities with probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% under random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043

  20. Sampling Realistic Protein Conformations Using Local Structural Bias

    DEFF Research Database (Denmark)

    Hamelryck, Thomas Wim; Kent, John T.; Krogh, A.

    2006-01-01

    The prediction of protein structure from sequence remains a major unsolved problem in biology. The most successful protein structure prediction methods make use of a divide-and-conquer strategy to attack the problem: a conformational sampling method generates plausible candidate structures, which are subsequently accepted or rejected using an energy function. Conceptually, this often corresponds to separating local structural bias from the long-range interactions that stabilize the compact, native state. However, sampling protein conformations that are compatible with the local structural bias encoded in a given protein sequence is a long-standing open problem, especially in continuous space. We describe an elegant and mathematically rigorous method to do this, and show that it readily generates native-like protein conformations simply by enforcing compactness. Our results have far-reaching implications...

  2. Maximum Likelihood Under Response Biased Sampling

    OpenAIRE

    Chambers, Raymond; Dorfman, Alan; Wang, Suojin

    2003-01-01

    Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...

  3. A downhole passive sampling system to avoid bias and error from groundwater sample handling.

    Science.gov (United States)

    Britt, Sanford L; Parker, Beth L; Cherry, John A

    2010-07-01

    A new downhole groundwater sampler reduces bias and error due to sample handling and exposure while introducing minimal disturbance to natural flow conditions in the formation and well. This "In Situ Sealed", "ISS", or "Snap" sampling device includes removable/lab-ready sample bottles, a sampler device to hold double end-opening sample bottles in an open position, and a line for lowering the sampler system and triggering closure of the bottles downhole. Before deployment, each bottle is set open at both ends to allow flow-through during installation and equilibration downhole. Bottles are triggered to close downhole without well purging; the method is therefore "passive" or "nonpurge". The sample is retrieved in a sealed condition and remains unexposed until analysis. Data from six field studies comparing ISS sampling with traditional methods indicate ISS samples typically yield higher volatile organic compound (VOC) concentrations; in one case, significant chemical-specific differentials between sampling methods were discernible. For arsenic, filtered and unfiltered purge results were negatively and positively biased, respectively, compared to ISS results. Inorganic constituents showed parity with traditional methods. Overall, the ISS is versatile, avoids low VOC recovery bias, and enhances reproducibility while avoiding sampling complexity and purge water disposal.

  4. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples created with this method will optimally reflect the diversity of the languages of the world. On the basis of the internal structure of each genetic language tree, a measure is computed that reflects the linguistic diversity of the language families represented by these trees. This measure is used to determine how many languages from each phylum should be selected, given any required sample size.
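
    The allocation step can be sketched in a few lines: given a diversity value per phylum, apportion the required sample size proportionally. The diversity values below are invented placeholders, not the tree-based measure computed in the article, and simple rounding may leave the total off by one or two.

```python
def allocate(diversity, sample_size):
    """Apportion sample_size across phyla proportionally to diversity."""
    total = sum(diversity.values())
    return {phylum: round(sample_size * d / total)
            for phylum, d in diversity.items()}

diversity = {"Niger-Congo": 9.0, "Austronesian": 7.5,
             "Indo-European": 4.0, "Basque (isolate)": 1.0}
print(allocate(diversity, 40))
# {'Niger-Congo': 17, 'Austronesian': 14, 'Indo-European': 7, 'Basque (isolate)': 2}
```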

  5. Forecast Bias Correction: A Second Order Method

    CERN Document Server

    Crowell, Sean

    2010-01-01

    The difference between a model forecast and actual observations is called forecast bias. This bias is due to incomplete model assumptions and/or poorly known parameter values and initial/boundary conditions. In this paper we discuss a method for estimating corrections to parameters and initial conditions that would account for the forecast bias. A set of simple experiments with the logistic ordinary differential equation is performed, using an iterative first-order version of our method for comparison with the second-order version.

  6. Peptide Backbone Sampling Convergence with the Adaptive Biasing Force Algorithm

    Science.gov (United States)

    Faller, Christina E.; Reilly, Kyle A.; Hills, Ronald D.; Guvench, Olgun

    2013-01-01

    Complete Boltzmann sampling of reaction coordinates in biomolecular systems continues to be a challenge for unbiased molecular dynamics simulations. A growing number of methods have been developed for applying biases to biomolecular systems to enhance sampling while enabling recovery of the unbiased (Boltzmann) distribution of states. The Adaptive Biasing Force (ABF) algorithm is one such method, and works by canceling out the average force along the desired reaction coordinate(s) using an estimate of this force progressively accumulated during the simulation. Upon completion of the simulation, the potential of mean force, and therefore Boltzmann distribution of states, is obtained by integrating this average force. In an effort to characterize the expected performance in applications such as protein loop sampling, ABF was applied to the full ranges of the Ramachandran ϕ/ψ backbone dihedral reaction coordinates for dipeptides of the 20 amino acids using all-atom explicit-water molecular dynamics simulations. Approximately half of the dipeptides exhibited robust and rapid convergence of the potential of mean force as a function of ϕ/ψ in triplicate 50-ns simulations, while the remainder exhibited varying degrees of less complete convergence. The greatest difficulties in achieving converged ABF sampling were seen in the branched-sidechain amino acids threonine and valine, as well as the special case of proline. Proline dipeptide sampling was further complicated by trans-to-cis peptide bond isomerization not observed in unbiased control molecular dynamics simulations. Overall, the ABF method was found to be a robust means of sampling the entire ϕ/ψ reaction coordinate for the 20 amino acids, including high free-energy regions typically inaccessible in standard molecular dynamics simulations. PMID:23215032
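
    A toy one-dimensional version of the ABF bookkeeping conveys the idea: accumulate the mean instantaneous force in bins along the coordinate and apply its negative as the bias, flattening barriers as sampling proceeds. The overdamped Langevin walker and periodic potential below (kT = 1) merely stand in for the paper's all-atom ϕ/ψ simulations; production ABF implementations also ramp the bias in at low bin counts.

```python
import numpy as np

rng = np.random.default_rng(5)
nbins, dt = 60, 1e-3
minus_dU = lambda x: 4.0 * np.sin(2.0 * x)     # -dU/dx for U(x) = 2*cos(2x)
F_sum, count = np.zeros(nbins), np.zeros(nbins)

x = 0.0
for step in range(200_000):
    b = int((x + np.pi) / (2 * np.pi) * nbins) % nbins
    f = minus_dU(x)
    F_sum[b] += f                              # accumulate instantaneous force
    count[b] += 1
    bias = -F_sum[b] / count[b]                # cancel the running mean force
    x += (f + bias) * dt + np.sqrt(2 * dt) * rng.normal()
    x = (x + np.pi) % (2 * np.pi) - np.pi      # periodic wrap to [-pi, pi)

# The PMF gradient is minus the mean force; integrate the bin estimates.
dx = 2 * np.pi / nbins
A = -np.cumsum(F_sum / np.maximum(count, 1)) * dx
print(A.max() - A.min())                       # ~4, the barrier height of U
```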

  7. Approach-Induced Biases in Human Information Sampling

    Science.gov (United States)

    Hunt, Laurence T.; Rutledge, Robb B.; Malalasekera, W. M. Nishantha; Kennerley, Steven W.; Dolan, Raymond J.

    2016-01-01

    Information sampling is often biased towards seeking evidence that confirms one’s prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans will choose to sample. We collected a large novel dataset from 32,445 human subjects, making over 3 million decisions, who played a gambling task designed to measure the latent causes and extent of information-sampling biases. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled (“positive evidence approach”), the selection of which information to sample (“sampling the favorite”), and the interaction between information sampling and subsequent choices (“rejecting unsampled options”). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action. PMID:27832071

  8. Estimation of accuracy and bias in genetic evaluations with genetic groups using sampling

    NARCIS (Netherlands)

    Hickey, J.M.; Keane, M.G.; Kenny, D.A.; Cromie, A.R.; Mulder, H.A.; Veerkamp, R.F.

    2008-01-01

    Accuracy and bias of estimated breeding values are important measures of the quality of genetic evaluations. A sampling method that accounts for the uncertainty in the estimation of genetic group effects was used to calculate accuracy and bias of estimated effects. The method works by repeatedly sim

  9. Heterogeneous Causal Effects and Sample Selection Bias

    DEFF Research Database (Denmark)

    Breen, Richard; Choi, Seongsoo; Holm, Anders

    2015-01-01

    The role of education in the process of socioeconomic attainment is a topic of long standing interest to sociologists and economists. Recently there has been growing interest not only in estimating the average causal effect of education on outcomes such as earnings, but also in estimating how cau......, and we illustrate our arguments and our method using National Longitudinal Survey of Youth 1979 (NLSY79) data....

  10. Biased sampling, over-identified parameter problems and beyond

    CERN Document Server

    Qin, Jing

    2017-01-01

    This book is devoted to biased sampling problems (also called choice-based sampling in econometrics parlance) and over-identified parameter estimation problems. Biased sampling problems appear in many areas of research, including Medicine, Epidemiology and Public Health, the Social Sciences and Economics. The book addresses a range of important topics, including case and control studies, causal inference, missing data problems, meta-analysis, renewal process and length biased sampling problems, capture and recapture problems, case cohort studies, and exponential tilting genetic mixture models. The goal of this book is to make it easier for Ph.D. students and new researchers to get started in this research area. It will be of interest to all those who work in the health, biological, social and physical sciences, as well as those who are interested in survey methodology and other areas of statistical science, among others.

  11. Comparison of sampling methods for animal manure

    NARCIS (Netherlands)

    Derikx, P.J.L.; Ogink, N.W.M.; Hoeksma, P.

    1997-01-01

    Currently available and recently developed sampling methods for slurry and solid manure were tested for bias and reproducibility in the determination of total phosphorus and nitrogen content of samples. Sampling methods were based on techniques in which samples were taken either during loading from

  13. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Directory of Open Access Journals (Sweden)

    Ashton M Verdery

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments, that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  14. Reduction of noise and bias in randomly sampled power spectra

    DEFF Research Database (Denmark)

    Buchhave, Preben; Velte, Clara Marika

    2015-01-01

    We consider the origin of noise and distortion in power spectral estimates of randomly sampled data, specifically velocity data measured with a burst-mode laser Doppler anemometer. The analysis guides us to new ways of reducing noise and removing spectral bias, e.g., distortions caused by modific...

  16. Assessing the Bias in Communication Networks Sampled from Twitter

    CERN Document Server

    González-Bailón, Sandra; Rivero, Alejandro; Borge-Holthoefer, Javier; Moreno, Yamir

    2012-01-01

    We collect and analyse messages exchanged in Twitter using two of the platform's publicly available APIs (the search and stream specifications). We assess the differences between the two samples, and compare the networks of communication reconstructed from them. The empirical context is given by political protests taking place in May 2012: we track online communication around these protests for the period of one month, and reconstruct the network of mentions and re-tweets according to the two samples. We find that the search API over-represents the more central users and does not offer an accurate picture of peripheral activity; we also find that the bias is greater for the network of mentions. We discuss the implications of this bias for the study of diffusion dynamics and collective action in the digital era, and advocate the need for more uniform sampling procedures in the study of online communication.

  17. CEO emotional bias and dividend policy: Bayesian network method

    Directory of Open Access Journals (Sweden)

    Azouzi Mohamed Ali

    2012-10-01

    This paper assumes that managers, investors, or both behave irrationally. Although scholars have investigated behavioral irrationality from three angles (investor sentiment, investor biases, and managerial biases), we focus on the relationship between one of the managerial biases, overconfidence, and dividend policy. Previous research investigating the relationship between overconfidence and financial decisions has studied investment, financing decisions, and firm values. However, only a few exceptions examine how managerial emotional biases (optimism, loss aversion, and overconfidence) affect dividend policies. This stream of research contends that whether or not to distribute dividends depends on how managers perceive the company's future. I use the Bayesian network method to examine this relation. Emotional bias has been measured by means of a questionnaire comprising several items. The selected sample comprises some 100 Tunisian executives. Our results reveal that a leader affected by behavioral biases (optimism, loss aversion, and overconfidence) adjusts dividend policy choices based on the ability to assess alternatives (optimism and overconfidence) and risk perception (loss aversion), so as to create shareholder value and secure his or her place at the head of the management team.

  18. Sampling system and method

    Science.gov (United States)

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2013-04-16

    The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

  19. CEO emotional bias and investment decision, Bayesian network method

    Directory of Open Access Journals (Sweden)

    Jarboui Anis

    2012-08-01

    This research examines the determinants of firms' investment, introducing a behavioral perspective that has received little attention in the corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: investment decisions are influenced not only by fundamentals but also by other factors. One such factor is the CEO's bias toward the investment, which depends on cognition and emotions, because some leaders use these as heuristics for the investment decision instead of fundamentals. This paper shows how CEO emotional biases (optimism, loss aversion, and overconfidence) affect investment decisions. The proposed model uses the Bayesian network method to examine this relationship. Emotional bias has been measured by means of a questionnaire comprising several items. The selected sample comprises some 100 Tunisian executives. Our results reveal that the behavioral analysis of investment decisions implies that a leader affected by behavioral biases (optimism, loss aversion, and overconfidence) adjusts investment choices based on the ability to assess alternatives (optimism and overconfidence) and risk perception (loss aversion), so as to create shareholder value and secure his or her place at the head of the management team.

  20. Health indicators: eliminating bias from convenience sampling estimators.

    Science.gov (United States)

    Hedt, Bethany L; Pagano, Marcello

    2011-02-28

    Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information are data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing about those who do not visit the clinic beyond the fact that they do not visit the clinic. An alternative is to take a random sample of the population. However, we show that this solution would be wasteful if it excluded the use of available information. Hence, we present a simple annealing methodology that combines a relatively small, and presumably far less expensive, random sample with the convenience sample. This allows us to not only take advantage of powerful inferential tools, but also provides more accurate information than that available from just using data from the random sample alone.
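
    A simplified stand-in for the combination (not the authors' annealing methodology): estimate the convenience sample's bias against the small random sample and blend the two estimates with an MSE-motivated weight. All population numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
p_true, bias = 0.30, 0.08           # true prevalence; clinic over-representation
n_conv, n_rand = 2000, 150

def estimate():
    conv = rng.random(n_conv) < p_true + bias    # biased convenience data
    rand = rng.random(n_rand) < p_true           # small random sample
    pc, pr = conv.mean(), rand.mean()
    var_r = pr * (1 - pr) / n_rand
    b2 = max((pc - pr) ** 2 - var_r, 0.0)        # de-noised squared bias
    lam = var_r / (var_r + b2)                   # weight on the convenience mean
    return lam * pc + (1 - lam) * pr, pr

combined, rand_only = map(np.array, zip(*(estimate() for _ in range(2000))))
print("RMSE combined:   ", np.sqrt(np.mean((combined - p_true) ** 2)))
print("RMSE random only:", np.sqrt(np.mean((rand_only - p_true) ** 2)))
```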

  1. Sampling system and method

    Energy Technology Data Exchange (ETDEWEB)

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2017-03-07

    In one embodiment, the present disclosure provides an apparatus and method for supporting a tubing bundle during installation or removal. The apparatus includes a clamp for securing the tubing bundle to an external wireline. In various examples, the clamp is external to the tubing bundle or integral with the tubing bundle. According to one method, a tubing bundle and wireline are deployed together and the tubing bundle periodically secured to the wireline using a clamp. In another embodiment, the present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit. In a specific example, one or more clamps are used to connect the first and/or second conduits to an external wireline.

  2. Smoothed Biasing Forces Yield Unbiased Free Energies with the Extended-System Adaptive Biasing Force Method.

    Science.gov (United States)

    Lesage, Adrien; Lelièvre, Tony; Stoltz, Gabriel; Hénin, Jérôme

    2016-12-27

    We report a theoretical description and numerical tests of the extended-system adaptive biasing force method (eABF), together with an unbiased estimator of the free energy surface from eABF dynamics. Whereas the original ABF approach uses its running estimate of the free energy gradient as the adaptive biasing force, eABF is built on the idea that the exact free energy gradient is not necessary for efficient exploration, and that it is still possible to recover the exact free energy separately with an appropriate estimator. eABF does not directly bias the collective coordinates of interest, but rather fictitious variables that are harmonically coupled to them; it therefore does not require second derivative estimates, making it easily applicable to a wider range of problems than ABF. Furthermore, the extended variables present a smoother, coarse-grain-like sampling problem on a mollified free energy surface, leading to faster exploration and convergence. We also introduce CZAR, a simple, unbiased free energy estimator from eABF trajectories. eABF/CZAR converges to the physical free energy surface faster than standard ABF for a wide range of parameters.

  3. Radioactive air sampling methods

    CERN Document Server

    Maiello, Mark L

    2010-01-01

    Although the field of radioactive air sampling has matured and evolved over decades, it has lacked a single resource that assimilates technical and background information on its many facets. Edited by experts and with contributions from top practitioners and researchers, Radioactive Air Sampling Methods provides authoritative guidance on measuring airborne radioactivity from industrial, research, and nuclear power operations, as well as naturally occurring radioactivity in the environment. Designed for industrial hygienists, air quality experts, and health physicists, the book delves into the applied research advancing and transforming practice with improvements to measurement equipment, human dose modeling of inhaled radioactivity, and radiation safety regulations. To present a wide picture of the field, it covers the international and national standards that guide the quality of air sampling measurements and equipment. It discusses emergency response issues, including radioactive fallout and the assets used ...

  4. Cognition and Instruction: Reasoning about bias in sampling

    Science.gov (United States)

    Watson, Jane; Kelly, Ben

    2005-02-01

    Although sampling has been mentioned as part of the chance and data component of the mathematics curriculum since about 1990, little research attention has been aimed specifically at school students' understanding of this descriptive area. This study considers the initial understanding of bias in sampling by 639 students in grades 3, 5, 7, and 9. Three hundred and forty-one of these students then undertook a series of lessons on chance and data with an emphasis on chance, data handling, sampling, and variation. A post-test was administered to 285 of these students, and two years later all available students from the original group (328) were again tested. This study considers the initial level of understanding of students, the nature of the lessons undertaken at each grade level, the post-instruction performance of those who undertook lessons, and the longitudinal performance after two years of all available students. Overall, instruction was associated with improved performance, which was retained over two years, but there was little difference between those who had or had not experienced instruction. Results for specific grades, some of which went against the overall trend, are discussed, as well as educational implications for the teaching of sampling across the years of schooling based on the classroom observations and the changes observed.

  5. Information bias in health research: definition, pitfalls, and adjustment methods.

    Science.gov (United States)

    Althubaiti, Alaa

    2016-01-01

    As with other fields, medical sciences are subject to different sources of bias. While understanding sources of bias is a key element for drawing valid conclusions, bias in health research continues to be a very sensitive issue that can affect the focus and outcome of investigations. Information bias, otherwise known as misclassification, is one of the most common sources of bias that affect the validity of health research. It originates from the approach that is utilized to obtain or confirm study measurements. This paper seeks to raise awareness of information bias in observational and experimental research study designs as well as to enrich discussions concerning bias problems. Specifying the types of bias can be essential to limiting their effects, and the use of adjustment methods may serve to improve clinical evaluation and health care practice.

  6. [Several common biases and control measures during sampling survey of eye diseases in China].

    Science.gov (United States)

    Guan, Huai-jin

    2008-06-01

    Bias is a common systematic error in sampling surveys of eye diseases, and a major factor affecting the validity and reliability of a survey. The causes and the control measures of several biases encountered in current sampling surveys of eye diseases in China were analyzed and discussed, including sampling bias, non-respondent bias, and diagnostic bias. This review emphasizes that controlling bias is the key to ensuring the quality of a sampling survey. Random sampling, a sufficient sample size, careful examination and history taking, improving the examination rate, accurate diagnosis, strict training and preliminary studies, as well as quality control, can eliminate or minimize biases and improve the quality of sampling surveys of eye diseases in China.

  7. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357
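
    As a back-of-the-envelope illustration of the paper's diagnostic (a sketch with invented numbers, not the authors' data or extraction pipeline), one can correlate extracted effect sizes with sample sizes; a strongly negative correlation is the warning sign described above. A rank correlation is used here, though a Pearson correlation as reported in the paper would serve equally well:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical extracted records: effect size (correlation r) and sample size.
effect_sizes = np.array([0.62, 0.48, 0.35, 0.30, 0.21, 0.18, 0.15, 0.12])
sample_sizes = np.array([22, 35, 60, 80, 150, 240, 400, 900])

# Under no publication bias, effect size should be roughly independent of n;
# a strong negative correlation suggests small studies reach print only when
# they happen to show large effects.
rho, p = spearmanr(effect_sizes, sample_sizes)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```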

  8. The importance of measuring and accounting for potential biases in respondent-driven samples.

    Science.gov (United States)

    Rudolph, Abby E; Fuller, Crystal M; Latkin, Carl

    2013-07-01

    Respondent-driven sampling (RDS) is often viewed as a superior method for recruiting hard-to-reach populations disproportionately burdened with poor health outcomes. As an analytic approach, it has been praised for its ability to generate unbiased population estimates via post-stratified weights which account for non-random recruitment. However, population estimates generated with RDSAT (RDS Analysis Tool) are sensitive to variations in degree weights. Several assumptions are implicit in the degree weight and are not routinely assessed. Failure to meet these assumptions could result in inaccurate degree measures and consequently result in biased population estimates. We highlight potential biases associated with violating the assumptions implicit in degree weights for the RDSAT estimator and propose strategies to measure and possibly correct for biases in the analysis.
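
    For orientation, the degree weight at issue enters estimators such as the RDS-II (Volz-Heckathorn) estimator. A minimal sketch with hypothetical respondents shows how errors in self-reported degree propagate directly into the weighted estimate:

```python
import numpy as np

def rds_ii_estimate(values, degrees):
    """Degree-weighted prevalence estimate in the style of RDS-II
    (Volz-Heckathorn): each respondent is weighted by 1/degree, since
    high-degree individuals are more likely to be recruited. Errors in the
    self-reported degree feed straight into the weights, and hence into
    the estimate -- the kind of bias the paper discusses."""
    values = np.asarray(values, dtype=float)
    degrees = np.asarray(degrees, dtype=float)
    w = 1.0 / degrees
    return np.sum(w * values) / np.sum(w)

# Hypothetical respondents: outcome indicator and self-reported network degree.
outcome = [1, 0, 1, 1, 0, 0, 1, 0]
degree = [3, 10, 4, 2, 25, 8, 5, 12]
print("naive mean:     ", np.mean(outcome))
print("degree-weighted:", rds_ii_estimate(outcome, degree))
```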

  9. A Venue-Based Method for Sampling Hard-to-Reach Populations

    National Research Council Canada - National Science Library

    Farzana B. Muhib; Lillian S. Lin; Ann Stueve; Robin L. Miller; Wesley L. Ford; Wayne D. Johnson; Philip J. Smith

    2001-01-01

    .... Traditional sample survey methods, such as random sampling from telephone or mailing lists, can yield low numbers of eligible respondents while non-probability sampling introduces unknown biases...

  10. Bias and imprecision in posture percentile variables estimated from short exposure samples

    Directory of Open Access Journals (Sweden)

    Mathiassen Svend Erik

    2012-03-01

    Background Upper arm postures are believed to be an important risk determinant for musculoskeletal disorder development in the neck and shoulders. The 10th and 90th percentiles of the angular elevation distribution have been reported in many studies as measures of neutral and extreme postural exposures, and variation has been quantified by the 10th-90th percentile range. Further, the 50th percentile is commonly reported as a measure of "average" exposure. These four variables have been estimated using samples of observed or directly measured postures, typically using sampling durations between 5 and 120 min. Methods The present study examined the statistical properties of estimated full-shift values of the 10th, 50th and 90th percentile and the 10th-90th percentile range of right upper arm elevation obtained from samples of seven different durations, ranging from 5 to 240 min. The sampling strategies were realized by simulation, using a parent data set of 73 full-shift, continuous inclinometer recordings among hairdressers. For each shift, sampling duration and exposure variable, the mean, standard deviation and sample dispersion limits (2.5% and 97.5%) of all possible sample estimates obtained at one minute intervals were calculated and compared to the true full-shift exposure value. Results Estimates of the 10th percentile proved to be upward biased with limited sampling, and those of the 90th percentile and the percentile range, downward biased. The 50th percentile was also slightly upwards biased. For all variables, bias was more severe with shorter sampling durations, and it correlated significantly with the true full-shift value for the 10th and 90th percentiles and the percentile range. As expected, shorter samples led to decreased precision of the estimate; sample standard deviations correlated strongly with true full-shift exposure values. Conclusions The documented risk of pronounced bias and low precision of percentile
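
    The simulation design can be mimicked in a few lines. The sketch below substitutes a synthetic autocorrelated series for the inclinometer recordings (all parameters are invented) and reproduces the qualitative pattern reported: upward-biased 10th percentiles and downward-biased 90th percentiles for short sampling windows:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Synthetic "full shift": a slowly varying AR(1) process stands in for 8 h of
# upper-arm elevation (degrees) sampled once per second; parameters invented.
phi = 0.995
noise = rng.normal(size=28_800)
arm = 40 + 12 * np.sqrt(1 - phi**2) * lfilter([1.0], [1.0, -phi], noise)
true_vals = np.percentile(arm, [10, 50, 90])

def mean_estimate(duration_min, n_rep=500):
    """Average the percentile estimates over many contiguous short windows."""
    n = duration_min * 60
    starts = rng.integers(0, arm.size - n, size=n_rep)
    return np.mean([np.percentile(arm[s:s + n], [10, 50, 90]) for s in starts],
                   axis=0)

for minutes in (5, 30, 120, 240):
    bias = mean_estimate(minutes) - true_vals
    print(f"{minutes:4d} min: bias p10 {bias[0]:+6.2f}  "
          f"p50 {bias[1]:+6.2f}  p90 {bias[2]:+6.2f}")
```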

  11. Anticipating species distributions: Handling sampling effort bias under a Bayesian framework.

    Science.gov (United States)

    Rocchini, Duccio; Garzon-Lopez, Carol X; Marcantonio, Matteo; Amici, Valerio; Bacaro, Giovanni; Bastin, Lucy; Brummitt, Neil; Chiarucci, Alessandro; Foody, Giles M; Hauffe, Heidi C; He, Kate S; Ricotta, Carlo; Rizzoli, Annapaola; Rosà, Roberto

    2017-04-15

    Anticipating species distributions in space and time is necessary for effective biodiversity conservation and for prioritising management interventions. This is especially true when considering invasive species. In such a case, anticipating their spread is important to effectively plan management actions. However, considering uncertainty in the output of species distribution models is critical for correctly interpreting results and avoiding inappropriate decision-making. In particular, when dealing with species inventories, the bias resulting from sampling effort may lead to an over- or under-estimation of the local density of occurrences of a species. In this paper we propose an innovative method to i) map sampling effort bias using cartogram models and ii) explicitly consider such uncertainty in the modeling procedure under a Bayesian framework, which allows the integration of multilevel input data with prior information to improve the anticipation species distributions.

  12. Spanish exit polls. Sampling error or nonresponse bias?

    Directory of Open Access Journals (Sweden)

    Pavía, Jose M.

    2016-09-01

    Countless examples of misleading forecasts on behalf of both pre-election and exit polls can be found all over the world. Non-representative samples due to differential nonresponse have been claimed as being the main reason for inaccurate exit-poll projections. In real inference problems, it is seldom possible to compare estimates and true values. Electoral forecasts are an exception. Comparisons between estimates and final outcomes can be carried out once votes have been tallied. In this paper, we examine the raw data collected in seven exit polls conducted in Spain and test the likelihood that the data collected in each sampled voting location can be considered as a random sample of actual results. Knowing the answer to this is relevant for both electoral analysts and forecasters as, if the hypothesis is rejected, the shortcomings of the collected data would need amending. Analysts could improve the quality of their computations by implementing local correction strategies. We find strong evidence of nonsampling error in Spanish exit polls and evidence that the political context matters. Nonresponse bias is larger in polarized elections and in a climate of fear.

  13. Respondent-driven sampling bias induced by clustering and community structure in social networks

    CERN Document Server

    Rocha, Luis Enrique Correa; Lambiotte, Renaud; Liljeros, Fredrik

    2015-01-01

    Sampling hidden populations is particularly challenging using standard sampling methods mainly because of the lack of a sampling frame. Respondent-driven sampling (RDS) is an alternative methodology that exploits the social contacts between peers to reach and weight individuals in these hard-to-reach populations. It is a snowball sampling procedure where the weight of the respondents is adjusted for the likelihood of being sampled due to differences in the number of contacts. In RDS, the structure of the social contacts thus defines the sampling process and affects its coverage, for instance by constraining the sampling within a sub-region of the network. In this paper we study the bias induced by network structures such as social triangles, community structure, and heterogeneities in the number of contacts, in the recruitment trees and in the RDS estimator. We simulate different scenarios of network structures and response-rates to study the potential biases one may expect in real settings. We find that the ...

  14. Periodontal research: Basics and beyond - Part II (ethical issues, sampling, outcome measures and bias)

    Directory of Open Access Journals (Sweden)

    Haritha Avula

    2013-01-01

    A good research beginning refers to formulating a well-defined research question, developing a hypothesis and choosing an appropriate study design. The first part of the review series has discussed these issues in depth and this paper intends to throw light on other issues pertaining to the implementation of research. These include the various ethical norms and standards in human experimentation, the eligibility criteria for the participants, sampling methods and sample size calculation, various outcome measures that need to be defined and the biases that can be introduced in research.

  15. Displaying bias in sampling effort of data accessed from biodiversity databases using ignorance maps.

    Science.gov (United States)

    Ruete, Alejandro

    2015-01-01

    Open-access biodiversity databases including mainly citizen science data make temporally and spatially extensive species' observation data available to a wide range of users. Such data have limitations however, which include: sampling bias in favour of recorder distribution, lack of survey effort assessment, and lack of coverage of the distribution of all organisms. These limitations are not always recorded, while any technical assessment or scientific research based on such data should include an evaluation of the uncertainty of its source data, and researchers should acknowledge this information in their analysis. The maps of ignorance proposed here are an easily implemented tool, not only for visually exploring the quality of the data, but also for filtering out unreliable results. I present simple algorithms to display ignorance maps as a tool to report the spatial distribution of the bias and lack of sampling effort across a study region. Ignorance scores are expressed solely based on raw data in order to rely on the fewest assumptions possible. Therefore there is no prediction or estimation involved. The rationale is based on the assumption that it is appropriate to use species groups as a surrogate for sampling effort, because it is likely that an entire group of species observed by similar methods will share similar bias. Simple algorithms are then used to transform raw data into ignorance scores scaled 0-1 that are easily comparable and scalable. Because of the need to perform calculations over big datasets, simplicity is crucial for web-based implementations on infrastructures for biodiversity information. With these algorithms, any infrastructure for biodiversity information can offer a quality report of the observations accessed through them. Users can specify a reference taxonomic group and a time frame according to the research question. The potential of this tool lies in the simplicity of its algorithms and in the lack of assumptions made.
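
    The paper's exact algorithms are not reproduced here, but the idea of turning raw record counts into ignorance scores scaled 0-1 can be sketched with a saturating transform; the functional form and the half_ignorance parameter below are invented for illustration:

```python
import numpy as np

def ignorance_score(n_obs, half_ignorance=1.0):
    """Map raw record counts per grid cell to an ignorance score in (0, 1].

    A saturating transform (hypothetical parameterization): a cell with no
    records of the reference taxonomic group scores 1 (complete ignorance),
    and the score halves when the count reaches half_ignorance.
    """
    return half_ignorance / (half_ignorance + np.asarray(n_obs, dtype=float))

# Hypothetical 3x3 grid of record counts for a reference group (e.g. birds).
counts = np.array([[0, 2, 14],
                   [1, 0,  5],
                   [0, 7, 30]])
print(np.round(ignorance_score(counts, half_ignorance=2.0), 2))
```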

  16. Jackknife Estimation of Sampling Variance of Ratio Estimators in Complex Samples: Bias and the Coefficient of Variation. Research Report. ETS RR-06-19

    Science.gov (United States)

    Oranje, Andreas

    2006-01-01

    A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…
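
    For readers unfamiliar with the setup, a delete-one jackknife for a ratio estimator looks roughly as follows under simple random sampling (replication schemes for complex samples differ); the data and the coefficient-of-variation computation are purely illustrative:

```python
import numpy as np

def jackknife_ratio_se(y, x):
    """Delete-one jackknife standard error of the ratio R = sum(y) / sum(x)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = y.size
    r_full = y.sum() / x.sum()
    r_loo = (y.sum() - y) / (x.sum() - x)        # leave-one-out replicates
    var = (n - 1) / n * np.sum((r_loo - r_loo.mean()) ** 2)
    return r_full, np.sqrt(var)

rng = np.random.default_rng(1)
x = rng.gamma(2.0, 5.0, size=50)                 # denominator variable
y = 0.6 * x + rng.normal(0.0, 1.0, size=50)      # numerator variable
r, se = jackknife_ratio_se(y, x)
# CV of the estimated denominator mean -- the quantity flagged as a warning
# sign for biased standard-error estimates in the report.
cv_denom = x.std(ddof=1) / (np.sqrt(x.size) * x.mean())
print(f"R = {r:.3f}, jackknife SE = {se:.3f}, CV(denominator) = {cv_denom:.3f}")
```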

  17. A new method to measure galaxy bias by combining the density and weak lensing fields

    Science.gov (United States)

    Pujol, Arnau; Chang, Chihway; Gaztañaga, Enrique; Amara, Adam; Refregier, Alexandre; Bacon, David J.; Carretero, Jorge; Castander, Francisco J.; Crocce, Martin; Fosalba, Pablo; Manera, Marc; Vikram, Vinu

    2016-10-01

    We present a new method to measure redshift-dependent galaxy bias by combining information from the galaxy density field and the weak lensing field. This method is based on the work of Amara et al., who use the galaxy density field to construct a bias-weighted convergence field κg. The main difference between Amara et al.'s work and our new implementation is that here we present another way to measure galaxy bias, using tomography instead of bias parametrizations. The correlation between κg and the true lensing field κ allows us to measure galaxy bias using different zero-lag correlations, such as ⟨κgκ⟩/⟨κgκg⟩ or ⟨κgκ⟩/⟨κκ⟩. Our method measures the linear bias factor on linear scales, under the assumption of no stochasticity between galaxies and matter. We use the Marenostrum Institut de Ciències de l'Espai (MICE) simulation to measure the linear galaxy bias for a flux-limited sample (i < 22.5) in tomographic redshift bins using this method. This article is the first that studies the accuracy and systematic uncertainties associated with the implementation of the method and the regime in which it is consistent with the linear galaxy bias defined by projected two-point correlation functions (2PCF). We find that our method is consistent with a linear bias at the per cent level for scales larger than 30 arcmin, while non-linearities appear at smaller scales. This measurement is a good complement to other measurements of bias, since it does not depend strongly on σ8 as do the 2PCF measurements. We will apply this method to the Dark Energy Survey Science Verification data in a follow-up article.
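
    A toy numerical check of the zero-lag estimator, assuming idealized Gaussian fields with a known input bias and no stochasticity (this sketches the estimator only, not the MICE analysis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy fields: a true convergence field kappa and a bias-weighted version
# kappa_g built from the galaxy density field; here kappa_g = b * kappa
# plus noise, with an invented input bias b = 1.4.
npix = 100_000
kappa = rng.normal(0.0, 0.01, size=npix)
kappa_g = 1.4 * kappa + rng.normal(0.0, 0.002, size=npix)

# Zero-lag correlation estimators of the linear bias (no stochasticity):
b1 = np.mean(kappa_g * kappa) / np.mean(kappa * kappa)      # <kg k>/<k k>
b2 = np.mean(kappa_g * kappa_g) / np.mean(kappa_g * kappa)  # <kg kg>/<kg k>
print(f"b from <kg k>/<k k>   = {b1:.3f}")
print(f"b from <kg kg>/<kg k> = {b2:.3f}")  # shot noise pushes this one high
```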

  18. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  19. Correcting Type Ia Supernova Distances for Selection Biases and Contamination in Photometrically Identified Samples

    CERN Document Server

    Kessler, Richard

    2016-01-01

    We present a new technique to create a bin-averaged Hubble Diagram (HD) from photometrically identified SN~Ia data. The resulting HD is corrected for selection biases and contamination from core collapse (CC) SNe, and can be used to infer cosmological parameters. This method, called "Bias Corrected Distances" (BCD), includes two fitting stages. The first BCD fitting stage combines a Bayesian likelihood with a Monte Carlo simulation to bias-correct the fitted SALT-II parameters, and also incorporates CC probabilities determined from a machine learning technique. The BCD fit determines 1) a bin-averaged HD (average distance vs. redshift), and 2) the nuisance parameters alpha and beta, which multiply the stretch and color (respectively) to standardize the SN brightness. In the second stage, the bin-averaged HD is fit to a cosmological model where priors can be imposed. We perform high precision tests of the BCD method by simulating large (150,000 event) data samples corresponding to the Dark Energy Survey Supern...

  20. Is Knowledge Random? Introducing Sampling and Bias through Outdoor Inquiry

    Science.gov (United States)

    Stier, Sam

    2010-01-01

    Sampling, very generally, is the process of learning about something by selecting and assessing representative parts of that population or object. In the inquiry activity described here, students learned about sampling techniques as they estimated the number of trees greater than 12 cm dbh (diameter at breast height) in a wooded, discrete area…

  1. Correcting Type Ia Supernova Distances for Selection Biases and Contamination in Photometrically Identified Samples

    Science.gov (United States)

    Kessler, R.; Scolnic, D.

    2017-02-01

    We present a new technique to create a bin-averaged Hubble diagram (HD) from photometrically identified SN Ia data. The resulting HD is corrected for selection biases and contamination from core-collapse (CC) SNe, and can be used to infer cosmological parameters. This method, called “BEAMS with Bias Corrections” (BBC), includes two fitting stages. The first BBC fitting stage uses a posterior distribution that includes multiple SN likelihoods, a Monte Carlo simulation to bias-correct the fitted SALT-II parameters, and CC probabilities determined from a machine-learning technique. The BBC fit determines (1) a bin-averaged HD (average distance versus redshift), and (2) the nuisance parameters α and β, which multiply the stretch and color (respectively) to standardize the SN brightness. In the second stage, the bin-averaged HD is fit to a cosmological model where priors can be imposed. We perform high-precision tests of the BBC method by simulating large (150,000 event) data samples corresponding to the Dark Energy Survey Supernova Program. Our tests include three models of intrinsic scatter, each with two different CC rates. In the BBC fit, the SALT-II nuisance parameters α and β are recovered to within 1% of their true values. In the cosmology fit, we determine the dark energy equation of state parameter w using a fixed value of Ω_M as a prior: averaging over all six tests based on 6 × 150,000 = 900,000 SNe, there is a small w-bias of 0.006 ± 0.002. Finally, the BBC fitting code is publicly available in the SNANA package.

  2. A Study of Assimilation Bias in Name-Based Sampling of Migrants

    Directory of Open Access Journals (Sweden)

    Schnell Rainer

    2014-06-01

    The use of personal names for screening is an increasingly popular sampling technique for migrant populations. Although this is often an effective sampling procedure, very little is known about the properties of this method. Based on a large German survey, this article compares characteristics of respondents whose names have been correctly classified as belonging to a migrant population with respondents who are migrants and whose names have not been classified as belonging to a migrant population. Although significant differences were found for some variables, even with some large effect sizes, the overall bias introduced by name-based sampling (NBS) is small as long as procedures with small false-negative rates are employed.

  3. Bias correction methods for decadal sea-surface temperature forecasts

    Directory of Open Access Journals (Sweden)

    Balachandrudu Narapusetty

    2014-04-01

    Two traditional bias correction techniques, (1) systematic mean correction (SMC) and (2) systematic least-squares correction (SLC), are extended and applied to sea-surface temperature (SST) decadal forecasts in the North Pacific produced by Climate Forecast System version 2 (CFSv2) to reduce large systematic biases. The bias-corrected forecast anomalies exhibit reduced root-mean-square errors and also significantly improve the anomaly correlations with observations. The spatial pattern of the SST anomalies associated with the Pacific area average (PAA) index (spatial average of SST anomalies over 20°–60°N and 120°E–100°W) is improved after employing the bias correction methods, particularly SMC. Reliability diagrams show that the bias-corrected forecasts better reproduce the cold and warm events well beyond the 5-yr lead-times over the 10 forecasted years. The comparison between both correction methods indicates that: (1) prediction skill of SST anomalies associated with the PAA index is improved by SMC with respect to SLC and (2) SMC-derived forecasts have a slightly higher reliability than those corrected by SLC.
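
    A minimal sketch of the systematic-mean-correction idea, assuming a hindcast-derived model climatology for each lead time; the published SMC and SLC schemes are extensions of this baseline:

```python
import numpy as np

def systematic_mean_correction(fcst, model_clim_by_lead, obs_clim):
    """Subtract the lead-time dependent mean drift from a forecast.

    fcst:               raw forecast SSTs as a function of lead time
    model_clim_by_lead: hindcast-estimated model climatology per lead time
    obs_clim:           observed climatology (scalar or per-lead array)
    All inputs are hypothetical; the published SMC/SLC schemes extend this.
    """
    return fcst - (model_clim_by_lead - obs_clim)

lead_years = np.arange(1, 11)
fcst = 18.0 + 0.08 * lead_years        # raw forecast drifts warm with lead time
model_clim = 18.0 + 0.07 * lead_years  # drift estimated from a set of hindcasts
print(systematic_mean_correction(fcst, model_clim, obs_clim=18.0))
```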

  4. When POS datasets don’t add up: Combatting sample bias

    DEFF Research Database (Denmark)

    Hovy, Dirk; Plank, Barbara; Søgaard, Anders

    2014-01-01

    Several works in Natural Language Processing have recently looked into part-of-speech (POS) annotation of Twitter data and typically used their own data sets. Since conventions on Twitter change rapidly, models often show sample bias. Training on a combination of the existing data sets should help overcome this bias and produce more robust models than any trained on the individual corpora. Unfortunately, combining the existing corpora proves difficult: many of the corpora use proprietary tag sets that have little or no overlap. Even when mapped to a common tag set, the different corpora systematically differ in their treatment of various tags and tokens. This includes both preprocessing decisions, as well as default labels for frequent tokens, thus exhibiting data bias and label bias, respectively. Only if we address these biases can we combine the existing data sets to also overcome sample bias.

  5. An Analytical Method of Identifying Biased Test Items.

    Science.gov (United States)

    Plake, Barbara S.; Hoover, H. D.

    1979-01-01

    A follow-up technique is needed to identify items contributing to items-by-groups interaction when using an ANOVA procedure to examine a test for biased items. The method described includes distribution theory for assessing level of significance and is sensitive to items at all difficulty levels. (Author/GSK)

  6. Sampling Biases in MODIS and SeaWiFS Ocean Chlorophyll Data

    Science.gov (United States)

    Gregg, Watson W.; Casey, Nancy W.

    2007-01-01

    Although modern ocean color sensors, such as MODIS and SeaWiFS, are often considered global missions, in reality it takes many days, even months, to sample the ocean surface enough to provide complete global coverage. The irregular temporal sampling of ocean color sensors can produce biases in monthly and annual mean chlorophyll estimates. We quantified the biases due to sampling using data assimilation to create a "truth field", which we then sub-sampled using the observational patterns of MODIS and SeaWiFS. Monthly and annual mean chlorophyll estimates from these sub-sampled, incomplete daily fields were constructed and compared to monthly and annual means from the complete daily fields of the assimilation model, at a spatial resolution of 1.25deg longitude by 0.67deg latitude. The results showed that global annual mean biases were positive, reaching nearly 8% (MODIS) and >5% (SeaWiFS). For perspective, the maximum interannual variability in the SeaWiFS chlorophyll record was about 3%. Annual mean sampling biases were low (chlorophyll concentrations occurring here are missed by the data sets. Ocean color sensors selectively sample in locations and times of favorable phytoplankton growth, producing overestimates of chlorophyll. The biases derived from lack of sampling in the high latitudes varied monthly, leading to artifacts in the apparent seasonal cycle from ocean color sensors. A false secondary peak in chlorophyll occurred in May-August, which resulted from lack of sampling in the Antarctic.

  7. SWOT ANALYSIS ON SAMPLING METHOD

    National Research Council Canada - National Science Library

    CHIS ANCA OANA; BELENESI MARIOARA;

    2014-01-01

    .... Our article aims to study audit sampling in audit of financial statements. As an audit technique largely used, in both its statistical and nonstatistical form, the method is very important for auditors...

  8. Biomass and abundance biases in European standard gillnet sampling.

    Directory of Open Access Journals (Sweden)

    Marek Šmejkal

    The European Standard EN 14757 recommends gillnet mesh sizes that range from 5 to 55mm (knot-to-knot) for the standard monitoring of fish assemblages and suggests adding gillnets with larger mesh sizes if necessary. Our research showed that the recommended range of mesh sizes did not provide a representative picture of fish sizes for larger species that commonly occur in continental Europe. We developed a novel, large mesh gillnet which consists of mesh sizes 70, 90, 110 and 135mm (knot to knot, 10m panels) and assessed its added value for monitoring purposes. From selectivity curves obtained by sampling with single mesh size gillnets (11 mesh sizes 6 - 55mm) and large mesh gillnets, we identified the threshold length of bream (Abramis brama) above which this widespread large species was underestimated by European standard gillnet catches. We tested the European Standard gillnet by comparing its size composition with that obtained during concurrent pelagic trawling and purse seining in a cyprinid-dominated reservoir and found that the European Standard underestimated fish larger than 292mm by 26 times. The inclusion of large mesh gillnets in the sampling design removed this underestimation. We analysed the length-age relationship of bream in the Římov Reservoir, and concluded that catches of bream larger than 292mm and older than five years were seriously underrepresented in European Standard gillnet catches. The Římov Reservoir is a typical cyprinid-dominated water body where the biomass of bream > 292mm formed 70% of the pelagic trawl and purse seine catch. The species-specific relationships between the large mesh gillnet catch and European Standard catch suggested that the presence of carp (Cyprinus carpio), European catfish (Silurus glanis), tench (Tinca tinca) or bream warrants the use of both gillnet types. We suggest extending the gillnet series in the European Standard to avoid misinterpretation of fish community biomass estimates.

  9. Biomass and abundance biases in European standard gillnet sampling.

    Science.gov (United States)

    Šmejkal, Marek; Ricard, Daniel; Prchalová, Marie; Říha, Milan; Muška, Milan; Blabolil, Petr; Čech, Martin; Vašek, Mojmír; Jůza, Tomáš; Monteoliva Herreras, Agustín; Encina, Lourdes; Peterka, Jiří; Kubečka, Jan

    2015-01-01

    The European Standard EN 14757 recommends gillnet mesh sizes that range from 5 to 55mm (knot-to-knot) for the standard monitoring of fish assemblages and suggests adding gillnets with larger mesh sizes if necessary. Our research showed that the recommended range of mesh sizes did not provide a representative picture of fish sizes for larger species that commonly occur in continental Europe. We developed a novel, large mesh gillnet which consists of mesh sizes 70, 90, 110 and 135mm (knot to knot, 10m panels) and assessed its added value for monitoring purposes. From selectivity curves obtained by sampling with single mesh size gillnets (11 mesh sizes 6 - 55mm) and large mesh gillnets, we identified the threshold length of bream (Abramis brama) above which this widespread large species was underestimated by European standard gillnet catches. We tested the European Standard gillnet by comparing its size composition with that obtained during concurrent pelagic trawling and purse seining in a cyprinid-dominated reservoir and found that the European Standard underestimated fish larger than 292mm by 26 times. The inclusion of large mesh gillnets in the sampling design removed this underestimation. We analysed the length-age relationship of bream in the Římov Reservoir, and concluded that catches of bream larger than 292mm and older than five years were seriously underrepresented in European Standard gillnet catches. The Římov Reservoir is a typical cyprinid-dominated water body where the biomass of bream > 292mm formed 70% of the pelagic trawl and purse seine catch. The species-specific relationships between the large mesh gillnet catch and European Standard catch suggested that the presence of carp (Cyprinus carpio), European catfish (Silurus glanis), tench (Tinca tinca) or bream warrants the use of both gillnet types. We suggest extending the gillnet series in the European Standard to avoid misinterpretation of fish community biomass estimates.

  10. Empirical aspects about Heckman Procedure Application: Is there sample selection bias in the Brazilian Industry

    Directory of Open Access Journals (Sweden)

    Flávio Kaue Fiuza-Moura

    2015-12-01

    Many labor market studies aim to analyze the probability of employment and the structure of wage determination and, for empirical purposes, most of them deploy Heckman's procedure for detecting and correcting sample selection bias. However, few Brazilian studies have focused on the applicability of this procedure, especially with regard to specific industries. This paper addresses these issues by testing for the existence of sample selection bias in the Brazilian manufacturing industry and by analyzing the impact of the bias correction procedure on the estimated coefficients of OLS Mincer equations. We found a risk of sample selection bias only in manufacturing segments whose average wages are lower than the market average, and only in groups of workers whose average wage level is below the market average (women, especially blacks). Comparison of Mincer equations with and without Heckman's sample selection bias correction procedure showed that the estimated coefficients for the wage differential of male over female workers and of urban over non-urban workers tend to be overestimated when the sample selection bias is not corrected.
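
    For reference, the two-step Heckman procedure discussed here can be sketched on synthetic data (variable names and coefficients are invented; statsmodels is assumed to be available):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 5_000

# Synthetic labor-market data; all names and coefficients are hypothetical.
educ = rng.normal(11.0, 3.0, n)
urban = rng.integers(0, 2, n).astype(float)
u, v = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], n).T

# Selection equation (who has an observed wage) and outcome equation.
selected = (0.3 * educ + 0.5 * urban - 3.0 + u) > 0
log_wage = 1.0 + 0.10 * educ + 0.15 * urban + v

# Step 1: probit for selection, then the inverse Mills ratio for everyone.
Z = sm.add_constant(np.column_stack([educ, urban]))
probit = sm.Probit(selected.astype(float), Z).fit(disp=0)
zb = Z @ probit.params
imr = norm.pdf(zb) / norm.cdf(zb)

# Step 2: OLS Mincer equation on the selected sample, adding the IMR term.
X = sm.add_constant(np.column_stack([educ, urban, imr]))[selected]
ols = sm.OLS(log_wage[selected], X).fit()
print(ols.params)  # a nonzero IMR coefficient signals selection bias
```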

  11. Improved evaluation of measurement uncertainty from sampling by inclusion of between-sampler bias using sampling proficiency testing.

    Science.gov (United States)

    Ramsey, Michael H; Geelhoed, Bastiaan; Wood, Roger; Damant, Andrew P

    2011-04-01

    A realistic estimate of the uncertainty of a measurement result is essential for its reliable interpretation. Recent methods for such estimation include the contribution to uncertainty from the sampling process, but they only include the random and not the systematic effects. Sampling Proficiency Tests (SPTs) have been used previously to assess the performance of samplers, but the results can also be used to evaluate measurement uncertainty, including the systematic effects. A new SPT conducted on the determination of moisture in fresh butter is used to exemplify how SPT results can be used not only to score samplers but also to estimate uncertainty. The comparison between uncertainty evaluated within- and between-samplers is used to demonstrate that sampling bias is causing the estimates of expanded relative uncertainty to rise by over a factor of two (from 0.39% to 0.87%) in this case. General criteria are given for the experimental design and the sampling target that are required to apply this approach to measurements on any material.
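
    A stripped-down version of the within- versus between-sampler comparison, using invented duplicate measurements and a coverage factor of 2 for the expanded uncertainty:

```python
import numpy as np

# Hypothetical SPT: moisture (%) measured by 9 samplers, duplicate samples
# taken by each from the same batch of butter.
results = np.array([
    [16.1, 16.2], [16.4, 16.3], [15.9, 16.0],
    [16.6, 16.5], [16.2, 16.4], [15.8, 15.9],
    [16.3, 16.3], [16.7, 16.8], [16.0, 16.1],
])
grand_mean = results.mean()
n_rep = results.shape[1]

# Within-sampler variance (random sampling + analysis effects) ...
s2_within = results.var(axis=1, ddof=1).mean()
# ... and between-sampler variance, which also captures sampler bias.
sampler_means = results.mean(axis=1)
s2_between = max(sampler_means.var(ddof=1) - s2_within / n_rep, 0.0)

u_within = np.sqrt(s2_within)
u_total = np.sqrt(s2_within + s2_between)
print(f"expanded relative U, within-sampler only:  {200 * u_within / grand_mean:.2f}%")
print(f"expanded relative U, with between-sampler: {200 * u_total / grand_mean:.2f}%")
```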

  12. Variation in cell signaling protein expression may introduce sampling bias in primary epithelial ovarian cancer.

    Science.gov (United States)

    Mittermeyer, Gabriele; Malinowsky, Katharina; Beese, Christian; Höfler, Heinz; Schmalfeldt, Barbara; Becker, Karl-Friedrich; Avril, Stefanie

    2013-01-01

    Although the expression of cell signaling proteins is used as prognostic and predictive biomarker, variability of protein levels within tumors is not well studied. We assessed intratumoral heterogeneity of protein expression within primary ovarian cancer. Full-length proteins were extracted from 88 formalin-fixed and paraffin-embedded tissue samples of 13 primary high-grade serous ovarian carcinomas with 5-9 samples each. In addition, 14 samples of normal fallopian tube epithelium served as reference. Quantitative reverse phase protein arrays were used to analyze the expression of 36 cell signaling proteins including HER2, EGFR, PI3K/Akt, and angiogenic pathways as well as 15 activated (phosphorylated) proteins. We found considerable intratumoral heterogeneity in the expression of proteins with a mean coefficient of variation of 25% (range 17-53%). The extent of intratumoral heterogeneity differed between proteins (p<0.005). Interestingly, there were no significant differences in the extent of heterogeneity between phosphorylated and non-phosphorylated proteins. In comparison, we assessed the variation of protein levels amongst tumors from different patients, which revealed a similar mean coefficient of variation of 21% (range 12-48%). Based on hierarchical clustering, samples from the same patient clustered more closely together compared to samples from different patients. However, a clear separation of tumor versus normal tissue by clustering was only achieved when mean expression values of all individual samples per tumor were analyzed. While differential expression of some proteins was detected independently of the sampling method used, the majority of proteins only demonstrated differential expression when mean expression values of multiple samples per tumor were analyzed. Our data indicate that assessment of established and novel cell signaling proteins as diagnostic or prognostic markers may require sampling of serous ovarian cancers at several distinct

  13. Variation in cell signaling protein expression may introduce sampling bias in primary epithelial ovarian cancer.

    Directory of Open Access Journals (Sweden)

    Gabriele Mittermeyer

    Although the expression of cell signaling proteins is used as prognostic and predictive biomarker, variability of protein levels within tumors is not well studied. We assessed intratumoral heterogeneity of protein expression within primary ovarian cancer. Full-length proteins were extracted from 88 formalin-fixed and paraffin-embedded tissue samples of 13 primary high-grade serous ovarian carcinomas with 5-9 samples each. In addition, 14 samples of normal fallopian tube epithelium served as reference. Quantitative reverse phase protein arrays were used to analyze the expression of 36 cell signaling proteins including HER2, EGFR, PI3K/Akt, and angiogenic pathways as well as 15 activated (phosphorylated) proteins. We found considerable intratumoral heterogeneity in the expression of proteins with a mean coefficient of variation of 25% (range 17-53%). The extent of intratumoral heterogeneity differed between proteins (p<0.005). Interestingly, there were no significant differences in the extent of heterogeneity between phosphorylated and non-phosphorylated proteins. In comparison, we assessed the variation of protein levels amongst tumors from different patients, which revealed a similar mean coefficient of variation of 21% (range 12-48%). Based on hierarchical clustering, samples from the same patient clustered more closely together compared to samples from different patients. However, a clear separation of tumor versus normal tissue by clustering was only achieved when mean expression values of all individual samples per tumor were analyzed. While differential expression of some proteins was detected independently of the sampling method used, the majority of proteins only demonstrated differential expression when mean expression values of multiple samples per tumor were analyzed. Our data indicate that assessment of established and novel cell signaling proteins as diagnostic or prognostic markers may require sampling of serous ovarian cancers at

  14. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

    Audit sampling involves the application of audit procedures to less than 100% of items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As an audit technique in wide use, in both its statistical and nonstatistical forms, the method is very important for auditors. It should be applied correctly for a fair view of financial statements, to satisfy the needs of all financial users. In order to be applied correctly the method must be understood by all its users and mainly by auditors. Otherwise the risk of applying it incorrectly would cause loss of reputation and discredit, litigation and even prison. Since there is no unified practice and methodology for applying the technique, the risk of applying it incorrectly is quite high. SWOT analysis is a technique that shows advantages, disadvantages, threats and opportunities. We applied SWOT analysis in studying the sampling method from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying it and understanding it in depth. Being widely used as an audit method and a factor in a correct audit opinion, the sampling method's advantages, disadvantages, threats and opportunities must be understood by auditors.

  15. Distance sampling methods and applications

    CERN Document Server

    Buckland, S T; Marques, T A; Oedekoven, C S

    2015-01-01

    In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website.  Some of the case studies use the software Distance, while others use R code. The book is in three parts.  The first part addresses basic methods, the ...

  16. Enhanced conformational sampling using replica exchange with concurrent solute scaling and hamiltonian biasing realized in one dimension.

    Science.gov (United States)

    Yang, Mingjun; Huang, Jing; MacKerell, Alexander D

    2015-06-09

    Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem and its interactions with the environment to enhance overall conformational transitions, and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently, allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for the conformational sampling in a wide range of macromolecular systems.

  17. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions

    DEFF Research Database (Denmark)

    Fog, Agner

    2008-01-01

    Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
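
    Of the methods listed, the urn-experiment simulation for Wallenius' distribution is the most direct; a minimal sketch (not the paper's optimized generators):

```python
import numpy as np

def wallenius_urn(m1, m2, n, omega, rng):
    """One variate from Wallenius' noncentral hypergeometric distribution,
    drawn by simulating the biased urn directly: n balls are taken one at a
    time without replacement, and at each step a red ball is chosen with
    odds omega relative to a white ball."""
    red_left, white_left, x = m1, m2, 0
    for _ in range(n):
        p_red = omega * red_left / (omega * red_left + white_left)
        if rng.random() < p_red:
            x += 1
            red_left -= 1
        else:
            white_left -= 1
    return x

rng = np.random.default_rng(10)
draws = [wallenius_urn(m1=20, m2=30, n=15, omega=2.0, rng=rng)
         for _ in range(5_000)]
print("mean number of red balls drawn:", np.mean(draws))
```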

  18. RNA preservation agents and nucleic acid extraction method bias perceived bacterial community composition.

    Directory of Open Access Journals (Sweden)

    Ann McCarthy

    Bias is a pervasive problem when characterizing microbial communities. An important source is the difference in lysis efficiencies of different populations, which vary depending on the extraction protocol used. To avoid such biases impacting comparisons between gene and transcript abundances in the environment, the use of one protocol that simultaneously extracts both types of nucleic acids from microbial community samples has gained popularity. However, knowledge regarding tradeoffs to combined nucleic acid extraction protocols is limited, particularly regarding yield and biases in the observed community composition. Here, we evaluated a commercially available protocol for simultaneous extraction of DNA and RNA, which we adapted for freshwater microbial community samples that were collected on filters. DNA and RNA yields were comparable to other commonly used, but independent DNA and RNA extraction protocols. RNA protection agents benefited RNA quality, but decreased DNA yields significantly. Choice of extraction protocol influenced the perceived bacterial community composition, with strong method-dependent biases observed for specific phyla such as the Verrucomicrobia. The combined DNA/RNA extraction protocol detected significantly higher levels of Verrucomicrobia than the other protocols, and those higher numbers were confirmed by microscopic analysis. Use of RNA protection agents as well as independent sequencing runs caused a significant shift in community composition as well, albeit smaller than the shift caused by using different extraction protocols. Despite methodological biases, sample origin was the strongest determinant of community composition. However, when the abundance of specific phylogenetic groups is of interest, researchers need to be aware of the biases their methods introduce. This is particularly relevant if different methods are used for DNA and RNA extraction, in addition to using RNA protection agents only for RNA

  19. Bias correction methods for regional climate model simulations considering the distributional parametric uncertainty underlying the observations

    Science.gov (United States)

    Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei

    2015-11-01

    In this paper, we present a comparative study of bias correction methods for regional climate model simulations considering the distributional parametric uncertainty underlying the observations/models. In traditional bias correction schemes, the statistics of the simulated model outputs are adjusted to those of the observation data. However, the model output and the observation data are only one case (i.e., realization) out of many possibilities, rather than being sampled from the entire population of a certain distribution due to internal climate variability. This issue has not been considered in the bias correction schemes of the existing climate change studies. Here, three approaches are employed to explore this issue, with the intention of providing a practical tool for bias correction of daily rainfall for use in hydrologic models: (1) a conventional method, (2) a non-informative Bayesian method, and (3) an informative Bayesian method using Weather Generator (WG) data. The results show some plausible uncertainty ranges of precipitation after correcting for the bias of RCM precipitation. The informative Bayesian approach shows a narrower uncertainty range by approximately 25-45% than the non-informative Bayesian method after bias correction for the baseline period. This indicates that the prior distribution derived from WG may assist in reducing the uncertainty associated with parameters. The implications of our results are of great importance in hydrological impact assessments of climate change because they are related to actions for mitigation and adaptation to climate change. Since this is a proof of concept study that mainly illustrates the logic of the analysis for uncertainty-based bias correction, future research exploring the impacts of uncertainty on climate impact assessments and how to utilize uncertainty while planning mitigation and adaptation strategies is still needed.
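
    The flavor of bias correction under distributional parametric uncertainty can be sketched by bootstrapping the fitted observation distribution inside a parametric quantile mapping. This stand-in uses a gamma fit and invented wet-day rainfall rather than the paper's Bayesian machinery:

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(4)

# Hypothetical wet-day rainfall (mm): observations and raw RCM output.
obs = rng.gamma(0.9, 8.0, size=800)
rcm = rng.gamma(1.4, 4.0, size=800)

def fit_gamma(x):
    a, _, scale = gamma.fit(x, floc=0)   # two-parameter gamma fit
    return a, scale

a_rcm, s_rcm = fit_gamma(rcm)

# Rather than a single quantile-mapping transfer function, propagate the
# parametric uncertainty of the observed distribution by bootstrap (a crude
# stand-in for the paper's Bayesian posterior over the parameters).
corrected = []
for _ in range(200):
    a_obs, s_obs = fit_gamma(rng.choice(obs, size=obs.size, replace=True))
    corrected.append(gamma.ppf(gamma.cdf(rcm, a_rcm, scale=s_rcm),
                               a_obs, scale=s_obs))
band = np.percentile(corrected, [2.5, 97.5], axis=0)
print("mean width of the 95% correction band:", np.mean(band[1] - band[0]))
```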

  20. A likelihood perspective on tree-ring standardization: eliminating modern sample bias

    Science.gov (United States)

    Cecile, J.; Pagnutti, C.; Anand, M.

    2013-08-01

    It has recently been suggested that non-random sampling and differences in mortality between trees of different growth rates are responsible for a widespread, systematic bias in dendrochronological reconstructions of tree growth known as modern sample bias. This poses a serious challenge for climate reconstruction and the detection of long-term changes in growth. Explicit use of growth models based on regional curve standardization allows us to investigate the effects on growth due to age (the regional curve), year (the standardized chronology or forcing) and a new effect, the productivity of each tree. Including a term for the productivity of each tree accounts for the underlying cause of modern sample bias, allowing for more reliable reconstruction of low-frequency variability in tree growth. This class of models describes a new standardization technique, fixed effects standardization, that contains both classical regional curve standardization and flat detrending. Signal-free standardization accounts for unbalanced experimental design and fits the same growth model as classical least-squares or maximum likelihood regression techniques. As a result, we can use powerful and transparent tools such as R2 and Akaike's Information Criterion to assess the quality of tree ring standardization, allowing for objective decisions between competing techniques. Analyzing 1200 randomly selected published chronologies, we find that regional curve standardization is improved by adding an effect for individual tree productivity in 99% of cases, reflecting widespread differing-contemporaneous-growth rate bias. Furthermore, modern sample bias produced a significant negative bias in estimated tree growth over time in 70.5% of chronologies and a significant positive bias in 29.5% of chronologies. This effect is largely concentrated in the last 300 yr of growth data, posing serious questions about the homogeneity of modern and ancient chronologies using traditional standardization.

  1. Personality Traits and Susceptibility to Behavioral Biases among a Sample of Polish Stock Market Investors

    Directory of Open Access Journals (Sweden)

    Rzeszutek Marcin

    2015-09-01

    The aim of this paper is to investigate whether susceptibility to selected behavioral biases (overconfidence, mental accounting and sunk-cost fallacy) is correlated with Eysenck's [1978] personality traits (impulsivity, venturesomeness, and empathy). This study was conducted on a sample of 90 retail investors frequently investing on the Warsaw Stock Exchange. Participants filled out a survey made up of two parts: (1) three situational exercises, which assessed susceptibility to behavioral biases, and (2) an Impulsiveness Questionnaire, which measures impulsivity, venturesomeness, and empathy. The results demonstrated a relationship between venturesomeness and susceptibility to all behavioral biases explored in this study. We find that a higher level of venturesomeness was linked with a lower probability of all behavioral biases included in this study.

  2. When POS datasets don’t add up: Combatting sample bias

    DEFF Research Database (Denmark)

    Hovy, Dirk; Plank, Barbara; Søgaard, Anders

    2014-01-01

    Several works in Natural Language Processing have recently looked into part-of-speech (POS) annotation of Twitter data and typically used their own data sets. Since conventions on Twitter change rapidly, models often show sample bias. Training on a combination of the existing data sets should help overcome this bias and produce more robust models than any trained on the individual corpora. Unfortunately, combining the existing corpora proves difficult: many of the corpora use proprietary tag sets that have little or no overlap. Even when mapped to a common tag set, the different corpora systematically differ in their treatment of various tags and tokens. This includes both preprocessing decisions, as well as default labels for frequent tokens, thus exhibiting data bias and label bias, respectively. Only if we address these biases can we combine the existing data sets to also overcome sample bias. We present a systematic study of several Twitter POS data sets, the problems of label and data bias, discuss their effects on model performance, and show how to overcome them to learn models that perform well on various test sets, achieving relative error reduction of up to 21%.

  3. Evaluation of bias correction methods for wave modeling output

    Science.gov (United States)

    Parker, K.; Hill, D. F.

    2017-02-01

    Models that seek to predict environmental variables invariably demonstrate bias when compared to observations. Bias correction (BC) techniques are common in the climate and hydrological modeling communities, but have seen fewer applications to the field of wave modeling. In particular there has been no investigation as to which BC methodology performs best for wave modeling. This paper introduces and compares a subset of BC methods with the goal of clarifying a "best practice" methodology for application of BC in studies of wave-related processes. Specific focus is placed on comparing parametric vs. empirical methods as well as univariate vs. bivariate methods. The techniques are tested on global WAVEWATCH III historic and future period datasets with comparison to buoy observations at multiple locations. Both wave height and period are considered in order to investigate BC effects on inter-variable correlation. Results show that all methods perform uniformly in terms of correcting statistical moments for individual variables, with the exception of a copula-based method underperforming for wave period. When comparing parametric and empirical methods, no difference is found. Between bivariate and univariate methods, results show that bivariate methods greatly improve inter-variable correlations. Of the bivariate methods tested, the copula-based method is found to be less effective at correcting correlation, while a "shuffling" method is unable to handle changes in correlation from historic to future periods. In summary, this study demonstrates that BC methods are effective when applied to wave model data and that it is essential to employ methods that consider dependence between variables.
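
    A compact illustration of the univariate-versus-bivariate distinction on toy wave height and period data: empirical quantile mapping corrects each margin separately, and a simple rank-reordering "shuffle" (one of several bivariate strategies of this general kind) restores inter-variable dependence:

```python
import numpy as np

def quantile_map(model, obs_ref, model_ref):
    """Univariate empirical quantile mapping, one variable at a time."""
    ecdf = np.searchsorted(np.sort(model_ref), model, side="right") / model_ref.size
    return np.quantile(obs_ref, np.clip(ecdf, 1e-6, 1 - 1e-6))

rng = np.random.default_rng(5)
n = 2_000
# Toy correlated wave height Hs and period Tp for "model" and "observations".
zm = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], n)
hs_mod, tp_mod = np.exp(0.4 * zm[:, 0]), 8.0 + 2.0 * zm[:, 1]
zo = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], n)
hs_obs, tp_obs = 1.2 * np.exp(0.5 * zo[:, 0]), 7.0 + 2.5 * zo[:, 1]

# Univariate step: correct each margin separately (dependence unchanged).
hs_bc = quantile_map(hs_mod, hs_obs, hs_mod)
tp_bc = quantile_map(tp_mod, tp_obs, tp_mod)

# Bivariate "shuffle" step: reorder the corrected values so their ranks match
# the observed pairs, imposing the observed dependence structure.
hs_sh = np.sort(hs_bc)[np.argsort(np.argsort(hs_obs))]
tp_sh = np.sort(tp_bc)[np.argsort(np.argsort(tp_obs))]
print("Hs-Tp correlation, observed:      ", np.corrcoef(hs_obs, tp_obs)[0, 1])
print("Hs-Tp correlation, after QM only: ", np.corrcoef(hs_bc, tp_bc)[0, 1])
print("Hs-Tp correlation, after shuffle: ", np.corrcoef(hs_sh, tp_sh)[0, 1])
```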

  4. [Base-rate estimates for negative response bias in a workers' compensation claim sample].

    Science.gov (United States)

    Merten, T; Krahl, G; Krahl, C; Freytag, H W

    2010-09-01

    Against the background of a growing interest in symptom validity assessment in European countries, new data on base rates of negative response bias are presented. A retrospective analysis of forensic psychological evaluations was performed on data from 398 patients with workers' compensation claims. 48 percent of all patients scored below cut-off in at least one symptom validity test (SVT), indicating possible negative response bias. However, different SVTs appear to have differing potential to identify negative response bias. The data point to the necessity of using modern methods to check data validity in civil forensic contexts.

  5. An Iterative Rejection Sampling Method

    CERN Document Server

    Sherstnev, A

    2008-01-01

    In this note we consider an iterative generalisation of the rejection sampling method. In high energy physics, this sampling is frequently used for event generation, i.e. preparation of phase space points distributed according to a matrix element squared $|M|^2$ for a scattering process. In many realistic cases $|M|^2$ is a complicated multi-dimensional function, so the standard von Neumann procedure has quite low efficiency, even if an error-reducing technique, like VEGAS, is applied. As a result, many of the $|M|^2$ calculations go to "waste". The considered iterative modification of the procedure can extract more "unweighted" events, i.e. events distributed according to $|M|^2$. In several simple examples we show the practical benefits of the technique, obtaining more events than the standard von Neumann method without any extra calculations of $|M|^2$.
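
    For orientation, the baseline von Neumann rejection step that the note generalizes looks as follows, with a toy stand-in for |M|^2; the iterative recycling of rejected points is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)

def me_squared(x):
    """Toy stand-in for an expensive matrix element squared |M|^2."""
    return np.exp(-0.5 * (x - 1.0) ** 2) + 0.4 * np.exp(-8.0 * (x + 2.0) ** 2)

# Von Neumann rejection: sample phase-space points uniformly and keep each
# with probability f / f_max; every rejected point wastes one |M|^2 call.
x = rng.uniform(-4.0, 4.0, size=100_000)
f = me_squared(x)                       # one costly evaluation per point
keep = rng.random(x.size) * f.max() < f
events = x[keep]                        # unweighted events, distributed ~ f

eff = keep.mean()
print(f"{events.size} events kept; efficiency = {eff:.1%} "
      f"({1 - eff:.1%} of |M|^2 evaluations discarded)")
```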

  6. Adaptive Biasing Combined with Hamiltonian Replica Exchange to Improve Umbrella Sampling Free Energy Simulations.

    Science.gov (United States)

    Zeller, Fabian; Zacharias, Martin

    2014-02-11

    The accurate calculation of potentials of mean force (PMFs) for ligand-receptor binding is one of the most important applications of molecular simulation techniques. Typically, the separation distance between ligand and receptor is chosen as a reaction coordinate along which a PMF can be calculated with the aid of umbrella sampling (US) techniques. In addition, restraints can be applied to the relative position and orientation of the partner molecules to reduce the accessible phase space. An approach combining such phase space reduction with flattening of the free energy landscape and configurational exchanges has been developed, which significantly improves the convergence of PMF calculations in comparison with standard umbrella sampling. The free energy surface along the reaction coordinate is smoothed by iteratively adapting biasing potentials corresponding to previously calculated PMFs. Configurations are allowed to exchange between the umbrella simulation windows via the Hamiltonian replica exchange method. The application to a DNA molecule in complex with a minor-groove-binding ligand indicates significantly improved convergence and complete reversibility of the sampling along the pathway. The calculated binding free energy is in excellent agreement with experimental results. In contrast, the application of standard US resulted in large differences between PMFs calculated for association and dissociation pathways. The approach could be a useful alternative to standard US for computational studies of biomolecular recognition processes.
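
    The iterative flattening idea can be sketched in one dimension: sample with the current bias, estimate the PMF from the unbiased histogram, and set the next bias to cancel it. The Metropolis toy below illustrates that loop under these simplifying assumptions; it is not the authors' MD/replica-exchange protocol.

    ```python
    import numpy as np

    def metropolis(u_total, n_steps, rng, x0=0.0, step=0.3):
        """Sample from exp(-U(x)) with a simple Metropolis walk (kT = 1)."""
        xs, x = [], x0
        for _ in range(n_steps):
            x_new = x + rng.normal(0.0, step)
            if rng.random() < np.exp(min(0.0, u_total(x) - u_total(x_new))):
                x = x_new
            xs.append(x)
        return np.array(xs)

    pmf = lambda x: 4.0 * (x ** 2 - 1.0) ** 2     # double-well free energy (kT units)
    bins = np.linspace(-2.0, 2.0, 41)
    centers = 0.5 * (bins[:-1] + bins[1:])
    bias = np.zeros(len(centers))                 # biasing potential on a grid

    for it in range(5):                           # iterative flattening loop
        b = lambda x, bias=bias: np.interp(x, centers, bias)
        xs = metropolis(lambda x: pmf(x) + b(x), 20000, np.random.default_rng(it))
        hist, _ = np.histogram(xs, bins=bins)
        est_pmf = -np.log(np.maximum(hist, 1)) - bias   # unbias the histogram
        bias = -(est_pmf - est_pmf.min())               # next bias cancels the PMF estimate
    ```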

  7. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    CERN Document Server

    Richards, Joseph W; Brink, Henrik; Miller, Adam A; Bloom, Joshua S; Butler, Nathaniel R; James, J Berian; Long, James P; Rice, John

    2011-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, nearer objects than those in the more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (a) standard assumptions for machine-learned model selection procedures break down and (b) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting (IW), co-training (CT), and active learning (AL). We argue that AL---where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up---i...
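
    Of the remedies listed, importance weighting is the easiest to sketch: training examples are reweighted by an estimate of the test-to-training density ratio, obtained here from a classifier that discriminates the two samples. The scikit-learn usage and the synthetic "surveys" are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def importance_weights(X_train, X_test):
        """Estimate p_test(x) / p_train(x) by training a classifier to
        distinguish test points (label 1) from training points (label 0)."""
        X = np.vstack([X_train, X_test])
        y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        p = clf.predict_proba(X_train)[:, 1]
        # The odds ratio approximates the density ratio (up to a constant)
        return p / (1.0 - p)

    # Brighter/nearer "historical survey" vs. a deeper "testing survey"
    rng = np.random.default_rng(2)
    X_train = rng.normal(0.0, 1.0, size=(500, 2))
    X_test = rng.normal(0.7, 1.2, size=(500, 2))
    w = importance_weights(X_train, X_test)
    # w can then be passed as sample_weight to most classifiers' fit() methods
    ```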

  8. Quantifying sample biases of inland lake sampling programs in relation to lake surface area and land use/cover.

    Science.gov (United States)

    Wagner, Tyler; Soranno, Patricia A; Cheruvelil, Kendra Spence; Renwick, William H; Webster, Katherine E; Vaux, Peter; Abbitt, Robbyn J F

    2008-06-01

    We quantified potential biases associated with lakes monitored using non-probability-based sampling by six state agencies in the USA (Michigan, Wisconsin, Iowa, Ohio, Maine, and New Hampshire). To identify biases, we compared state-monitored lakes to a census population of lakes derived from the National Hydrography Dataset. We then estimated the probability of lakes being sampled using generalized linear mixed models. Our two research questions were: (1) are there systematic differences in lake area and land use/land cover (LULC) surrounding lakes monitored by state agencies when compared to the entire population of lakes? and (2) after controlling for the effects of lake size, does the probability of sampling vary depending on the surrounding LULC features? We examined the biases associated with surrounding LULC because of the established links between LULC and lake water quality. For all states, we found that larger lakes had a higher probability of being sampled compared to smaller lakes. Significant interactions between lake size and LULC prohibit us from drawing conclusions about the main effects of LULC; however, in general lakes that are most likely to be sampled have either high urban use, high agricultural use, high forest cover, or low wetland cover. Our analyses support the assertion that data derived from non-probability-based surveys must be used with caution when attempting to make generalizations to the entire population of interest, and that probability-based surveys are needed to ensure unbiased, accurate estimates of lake status and trends at regional to national scales.
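
    As a sketch of the kind of model involved (omitting the state-level random effects of a true GLMM), the probability that a lake is monitored can be regressed on lake size and land-use covariates; the synthetic data and scikit-learn usage below are illustrative assumptions, not the authors' analysis.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic census of lakes: log area and fraction of urban land in a buffer
    rng = np.random.default_rng(3)
    log_area = rng.normal(1.5, 1.0, 2000)
    urban = rng.uniform(0.0, 1.0, 2000)

    # Larger and more urban lakes are more likely to be in the monitored subset
    p_sampled = 1.0 / (1.0 + np.exp(-(-2.0 + 1.2 * log_area + 1.5 * urban)))
    sampled = rng.random(2000) < p_sampled

    # Logistic model of the probability of being sampled
    X = np.column_stack([log_area, urban])
    model = LogisticRegression(max_iter=1000).fit(X, sampled)
    print(model.coef_)   # recovers the positive effects of size and urban cover
    ```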

  9. Capture-recapture method for assessing publication bias

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2010-01-01

    Background: Publication bias is an important factor that may result in selection bias and lead to overestimation of the intervention effect. The capture-recapture method is considered a potentially useful procedure for investigating and estimating publication bias.

    Methods: We conducted a systematic review to estimate the duration of protection provided by hepatitis B vaccine by measuring the anamnestic immune response to booster doses of vaccine, and retrieved studies from three separate sources: (a) electronic databases, (b) reference lists of the studies, and (c) conference databases, as well as contact with experts and manufacturers. Capture-recapture and conventional methods such as the funnel plot, Begg test, Egger test, and trim-and-fill method were employed to assess publication bias.

    Results: Based on the capture-recapture method, the completeness of the overall search was 87.2% [95% CI: 84.6% to 89.0%], and a log-linear model suggested 5 [95% CI: 4.2 to 6.2] missing studies. The funnel plot was asymmetric, but the Begg and Egger test results were ...
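
    For two overlapping search sources, the capture-recapture logic reduces to the Lincoln-Petersen estimator sketched below (in Chapman's bias-corrected form); the log-linear model used in the study generalises this to three sources. The counts are illustrative, not the study's data.

    ```python
    def lincoln_petersen(n1, n2, m):
        """Chapman's bias-corrected Lincoln-Petersen estimate of the total
        number of studies, given two overlapping search sources.
        n1, n2: studies found by source 1 and source 2; m: found by both."""
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    # Illustrative counts: database search, reference lists, and their overlap
    n1, n2, m = 40, 25, 20
    total = lincoln_petersen(n1, n2, m)
    found = n1 + n2 - m
    print(f"estimated total: {total:.1f}, missing: {total - found:.1f}")
    print(f"completeness of the search: {found / total:.1%}")
    ```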

  10. Spatial recruitment bias in respondent-driven sampling: Implications for HIV prevalence estimation in urban heterosexuals.

    Science.gov (United States)

    Jenness, Samuel M; Neaigus, Alan; Wendel, Travis; Gelpi-Acosta, Camila; Hagan, Holly

    2014-12-01

    Respondent-driven sampling (RDS) is a study design used to investigate populations for which a probabilistic sampling frame cannot be efficiently generated. Biases in parameter estimates may result from systematic non-random recruitment within social networks by geography. We investigate the spatial distribution of RDS recruits relative to an inferred social network among heterosexual adults in New York City in 2010. Mean distances between recruitment dyads are compared to those of network dyads to quantify bias. Spatial regression models are then used to assess the impact of spatial structure on risk and prevalence outcomes. In our primary distance metric, network dyads were on average 1.34 (95% CI 0.82–1.86) miles more widely dispersed than recruitment dyads, suggesting spatial bias. However, there was no evidence that demographic associations with HIV risk or prevalence were spatially confounded. Therefore, while the spatial structure of recruitment may be biased in heterogeneous urban settings, the impact of this bias on estimates of outcome measures appears minimal.

  11. The second Southern African Bird Atlas Project: Causes and consequences of geographical sampling bias.

    Science.gov (United States)

    Hugo, Sanet; Altwegg, Res

    2017-09-01

    Using the Southern African Bird Atlas Project (SABAP2) as a case study, we examine the possible determinants of spatial bias in volunteer sampling effort and how well such biased data represent environmental gradients across the area covered by the atlas. For each province in South Africa, we used generalized linear mixed models to determine the combination of variables that explain spatial variation in sampling effort (number of visits per 5' × 5' grid cell, or "pentad"). The explanatory variables were distance to major road and exceptional birding locations or "sampling hubs," percentage cover of protected, urban, and cultivated area, and the climate variables mean annual precipitation, winter temperatures, and summer temperatures. Further, we used the climate variables and plant biomes to define subsets of pentads representing environmental zones across South Africa, Lesotho, and Swaziland. For each environmental zone, we quantified sampling intensity, and we assessed sampling completeness with species accumulation curves fitted to the asymptotic Lomolino model. Sampling effort was highest close to sampling hubs, major roads, urban areas, and protected areas. Cultivated area and the climate variables were less important. Further, environmental zones were not evenly represented by current data, and the zones varied in the amount of sampling required to represent the species that are present. SABAP2 volunteers' preferences in birding locations cause spatial bias in the dataset that should be taken into account when analyzing these data. Large parts of South Africa remain underrepresented, which may restrict the kind of ecological questions that can be addressed. However, sampling bias may be reduced by directing volunteers toward undersampled regions while taking into account volunteer preferences.

  12. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    Directory of Open Access Journals (Sweden)

    Anton Kühberger

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.

  13. A Critical Assessment of Bias in Survey Studies Using Location-Based Sampling to Recruit Patrons in Bars.

    Science.gov (United States)

    Morrison, Christopher; Lee, Juliet P; Gruenewald, Paul J; Marzell, Miesha

    2015-01-01

    Location-based sampling is a method to obtain samples of people within ecological contexts relevant to specific public health outcomes. Random selection increases generalizability; however, in some circumstances (such as surveying bar patrons), recruitment conditions increase risks of sample bias. We attempted to recruit representative samples of bars and patrons in six California cities, but low response rates precluded meaningful analysis. A systematic review of 24 similar studies revealed that none addressed the key shortcomings of our study. We recommend steps to improve studies that use location-based sampling: (i) purposively sample places of interest, (ii) use recruitment strategies appropriate to the environment, and (iii) provide full information on response rates at all levels of sampling.

  14. Exchange bias effect in CoCr2O4/NiO system prepared by two-step method

    Science.gov (United States)

    Wang, L. G.; Zhu, C. M.; Chen, L.; Yuan, S. L.

    2017-02-01

    CoCr2O4/NiO has been successfully synthesized through a two-step method. X-ray diffraction results show the coexistence of CoCr2O4 and NiO with pure phase formation. Micrographs from scanning and transmission electron microscopy display a homogeneous and dense morphology with two kinds of nanoparticles. An exchange bias effect is observed in the sample, with an exchange bias field of about 872 Oe at 10 K. As the measurement temperature increases, the exchange bias effect weakens, along with the coercive field. In addition, the exchange bias field and the shift of magnetization depend linearly on the cooling field. The exchange bias behavior can be attributed to exchange coupling at the disordered interfaces in the sample.

  15. Monitoring the aftermath of Flint drinking water contamination crisis: Another case of sampling bias?

    Science.gov (United States)

    Goovaerts, Pierre

    2017-07-15

    The delay in reporting high levels of lead in Flint drinking water, following the city's switch to the Flint River as its water supply, was partially caused by the biased selection of sampling sites away from the lead pipe network. Since Flint returned to its pre-crisis source of drinking water, the State has been monitoring water lead levels (WLL) at selected "sentinel" sites. In a first phase that lasted two months, 739 residences were sampled, most of them bi-weekly, to determine the general health of the distribution system and to track temporal changes in lead levels. During the same period, water samples were also collected through a voluntary program whereby concerned citizens received free testing kits and conducted sampling on their own. State officials relied on the former data to demonstrate the steady improvement in water quality. A recent analysis of data collected by voluntary sampling revealed, however, an opposite trend with lead levels increasing over time. This paper looks at potential sampling bias to explain such differences. Although houses with higher WLL were more likely to be sampled repeatedly, voluntary sampling turned out to reproduce fairly well the main characteristics (i.e. presence of lead service lines (LSL), construction year) of Flint housing stock. State-controlled sampling was less representative; e.g., sentinel sites with LSL were mostly built between 1935 and 1950 in lower poverty areas, which might hamper our ability to disentangle the effects of LSL and premise plumbing (lead fixtures and pipes present within old houses) on WLL. Also, there was no sentinel site with LSL in two of the most impoverished wards, including where the percentage of children with elevated blood lead levels tripled following the switch in water supply. Correcting for sampling bias narrowed the gap between sampling programs, yet overall temporal trends are still opposite.

  16. Personality, Attentional Biases towards Emotional Faces and Symptoms of Mental Disorders in an Adolescent Sample.

    Science.gov (United States)

    O'Leary-Barrett, Maeve; Pihl, Robert O; Artiges, Eric; Banaschewski, Tobias; Bokde, Arun L W; Büchel, Christian; Flor, Herta; Frouin, Vincent; Garavan, Hugh; Heinz, Andreas; Ittermann, Bernd; Mann, Karl; Paillère-Martinot, Marie-Laure; Nees, Frauke; Paus, Tomas; Pausova, Zdenka; Poustka, Luise; Rietschel, Marcella; Robbins, Trevor W; Smolka, Michael N; Ströhle, Andreas; Schumann, Gunter; Conrod, Patricia J

    2015-01-01

    To investigate the role of personality factors and attentional biases towards emotional faces in establishing concurrent and prospective risk for mental disorder diagnosis in adolescence. Data were obtained as part of the IMAGEN study, conducted across 8 European sites with a community sample of 2257 adolescents. At 14 years, participants completed an emotional variant of the dot-probe task, as well as two personality measures, namely the Substance Use Risk Profile Scale and the revised NEO Personality Inventory. At 14 and 16 years, participants and their parents were interviewed to determine symptoms of mental disorders. Personality traits were general and specific risk indicators for mental disorders at 14 years. Increased specificity was obtained when investigating the likelihood of mental disorders over a 2-year period, with the Substance Use Risk Profile Scale showing incremental validity over the NEO Personality Inventory. Attentional biases to emotional faces did not characterise or predict the mental disorders examined in the current sample. Personality traits can indicate concurrent and prospective risk for mental disorders in a community youth sample, and identify at-risk youth beyond the impact of baseline symptoms. This study does not support the hypothesis that attentional biases mediate the relationship between personality and psychopathology in a community sample. Task and sample characteristics that contribute to differing results among studies are discussed.

  17. Coupling methods for multistage sampling

    OpenAIRE

    Chauvet, Guillaume

    2015-01-01

    Multistage sampling is commonly used for household surveys when there exists no sampling frame, or when the population is scattered over a wide area. Multistage sampling usually introduces a complex dependence in the selection of the final units, which makes asymptotic results quite difficult to prove. In this work, we consider multistage sampling with simple random sampling without replacement at the first stage, and with an arbitrary sampling design for further stages. We consider coupling ...

  18. Anticipation or ascertainment bias in schizophrenia? Penrose's familial mental illness sample

    Energy Technology Data Exchange (ETDEWEB)

    Bassett, A.S. [Univ. of Toronto (Canada)]|[Queen Street Mental Health Centre, Toronto (Canada)]; Husted, J. [Univ. of Waterloo, Ontario (Canada)]

    1997-03-01

    Several studies have observed anticipation (earlier age at onset [AAO] in successive generations) in familial schizophrenia. However, whether true anticipation or ascertainment bias is the principal originating mechanism remains unclear. In 1944 L.S. Penrose collected AAO data on a large, representative sample of familial mental illness, using a broad ascertainment strategy. These data allowed examination of anticipation and ascertainment biases in five two-generation samples of affected relative pairs. The median intergenerational difference (MID) in AAO was used to assess anticipation. Results showed significant anticipation in parent-offspring pairs with schizophrenia (n = 137 pairs; MID 15 years; P = .0001) and in a positive control sample with Huntington disease (n = 11; P = .01). Broadening the diagnosis of the schizophrenia sample suggested anticipation of severity of illness. However, other analyses provided evidence for ascertainment bias, especially in later-AAO parents, in parent-offspring pairs. Aunt/uncle-niece/nephew schizophrenia pairs showed anticipation (n = 111; P = .0001), but the MID was 8 years and aunts/uncles had earlier median AAO than parents. Anticipation effects were greatest in pairs with late-AAO parents but remained significant in a subgroup of schizophrenia pairs with early parental AAO (n = 31; P = .03). A small control sample of other diseases had a MID of 5 years but no significant anticipation (n = 9; P = .38). These results suggest that, although ascertainment-bias effects were observed in parent-offspring pairs, true anticipation appears to be inherent in the transmission of familial schizophrenia. The findings support investigations of unstable mutations and other mechanisms that may contribute to true anticipation in schizophrenia. 37 refs., 2 tabs.

  19. A large sample of binary quasars: Does quasar bias track from Mpc to kpc scales?

    Science.gov (United States)

    Eftekharzadeh, Sarah; Myers, Adam D.; Djorgovski, Stanislav G.; Graham, Matthew J.

    2017-01-01

    We present the most precise estimate to date of the bias of quasars on very small scales, based on a measurement of the clustering of 47 spectroscopically confirmed binary quasars with proper transverse separations of $\sim 25\,h^{-1}$ kpc. The quasars in our sample, which is an order of magnitude larger than previous samples, are targeted using a Kernel Density Estimation (KDE) technique applied to Sloan Digital Sky Survey (SDSS) imaging over most of the SDSS area. Our sample is "complete," in that all possible pairs of binary quasars across our area of interest have been spectroscopically confirmed from a combination of previous surveys and our own long-slit observational campaign. We determine the projected correlation function of quasars ($\bar{W}_p$) in four bins of proper transverse scale over the range $17.0 \lesssim R_{\mathrm{prop}} \lesssim 36.2\,h^{-1}$ kpc. Due to our large sample size, our measured projected correlation function in each of these four bins of scale is more than twice as precise as any previous measurement made over our full range of scales. We also measure the bias of our quasar sample in four slices of redshift across the range $0.43 \le z \le 2.26$ and compare our results to similar measurements of how quasar bias evolves on Mpc scales. This measurement addresses the question of whether it is reasonable to assume that quasar bias evolves with redshift in a similar fashion on both Mpc and kpc scales. Our results can meaningfully constrain the one-halo term of the Halo Occupation Distribution (HOD) of quasars and how it evolves with redshift. This work was partially supported by NSF grant 1515404.

  20. Sources of method bias in social science research and recommendations on how to control it.

    Science.gov (United States)

    Podsakoff, Philip M; MacKenzie, Scott B; Podsakoff, Nathan P

    2012-01-01

    Despite the concern that has been expressed about potential method biases, and the pervasiveness of research settings with the potential to produce them, there is disagreement about whether they really are a problem for researchers in the behavioral sciences. Therefore, the purpose of this review is to explore the current state of knowledge about method biases. First, we explore the meaning of the terms "method" and "method bias" and then we examine whether method biases influence all measures equally. Next, we review the evidence of the effects that method biases have on individual measures and on the covariation between different constructs. Following this, we evaluate the procedural and statistical remedies that have been used to control method biases and provide recommendations for minimizing method bias.

  1. Towards Cost-efficient Sampling Methods

    CERN Document Server

    Peng, Luo; Chong, Wu

    2014-01-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the observation that a small fraction of high-degree vertices can carry most of the structural information of a network. The two proposed sampling methods are efficient in sampling high-degree nodes. The first new sampling method improves on stratified random sampling and selects high-degree nodes with higher probability by classifying the nodes according to their degree distribution. The second sampling method improves the existing snowball sampling method so that it can sample targeted nodes selectively in every sampling step. Besides, the two proposed sampling methods not only sample the nodes but also pick the edges directly connected to these nodes. In order to demonstrate the two methods' validity and accuracy, we compare them with the existing sampling methods in...
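
    The shared core of both methods, selecting nodes with probability proportional to degree and keeping their incident edges, can be sketched as follows; the adjacency-dict representation and all names are illustrative assumptions, not the paper's code.

    ```python
    import random

    def degree_biased_sample(adj, k, rng):
        """Sample k distinct nodes with probability proportional to degree,
        returning the nodes plus all edges incident to them."""
        nodes = list(adj)
        weights = [len(adj[v]) for v in nodes]   # node degrees
        chosen = set()
        while len(chosen) < k:
            chosen.add(rng.choices(nodes, weights=weights)[0])
        edges = {(u, v) for u in chosen for v in adj[u]}
        return chosen, edges

    # Toy undirected graph as an adjacency dict: node 0 is the hub
    adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
    nodes, edges = degree_biased_sample(adj, 2, random.Random(0))
    ```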

  2. Rovno Amber Ant Assemblage: Bias toward Arboreal Strata or Sampling Effect?

    Directory of Open Access Journals (Sweden)

    Perkovsky E. E.

    2016-06-01

    In 2015, B. Guenard and co-authors indicated that the Rovno amber ant assemblage, as described by G. Dlussky and A. Rasnitsyn (2009), showed modest support for a bias towards arboreal strata compared with the Baltic and Bitterfeld assemblages, although it was not clear whether this reflected a sampling error or a real deviation. Since 2009, the Rovno ant collection has more than doubled in size, which makes it possible to check whether the above inference about the essentially arboreal character of the assemblage is real or due to a sampling error. The comparison provided here argues in favour of the latter explanation for the bias revealed by B. Guenard and co-authors. The new and larger data on the Rovno assemblage show that the share of non-arboreal ants is now well comparable with those of the Baltic and Bitterfeld assemblages. This holds true both for the total assemblages and for the subassemblages of worker ants only.

  3. Sample processing device and method

    DEFF Research Database (Denmark)

    2011-01-01

    A sample processing device is disclosed, which sample processing device comprises a first substrate and a second substrate, where the first substrate has a first surface comprising two area types, a first area type with a first contact angle with water and a second area type with a second contact ... a sample liquid comprising the sample and the first preparation system is adapted to receive a receiving liquid. In a particular embodiment, a magnetic sample transport component, such as a permanent magnet or an electromagnet, is arranged to move magnetic beads in between the first and second substrates ...

  4. Towards Cost-efficient Sampling Methods

    OpenAIRE

    Peng, Luo; Yongli, Li; Chong, Wu

    2014-01-01

    The sampling method has been paid much attention in the field of complex network in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small part of vertices with high node degree can possess the most structure information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...

  5. Multilinear Biased Discriminant Analysis: A Novel Method for Facial Action Unit Representation

    CERN Document Server

    Khademi, Mahmoud; Manzuri-Shalmani, Mohammad T

    2010-01-01

    In this paper a novel efficient method for representing facial action units by encoding an image sequence as a fourth-order tensor is presented. The multilinear tensor-based extension of the biased discriminant analysis (BDA) algorithm, called multilinear biased discriminant analysis (MBDA), is first proposed. Then, we apply the MBDA and two-dimensional BDA (2DBDA) algorithms, as dimensionality reduction techniques, to the Gabor representations and the geometric features of the input image sequence, respectively. The proposed scheme can deal with the asymmetry between positive and negative samples as well as the curse-of-dimensionality dilemma. Extensive experiments on the Cohn-Kanade database show the superiority of the proposed method for representing the subtle changes and the temporal information involved in the formation of facial expressions. As an accurate tool, this representation can be applied to many areas such as recognition of spontaneous and deliberate facial expressions, multi modal/media huma...

  6. Pathogen prevalence, group bias, and collectivism in the standard cross-cultural sample.

    Science.gov (United States)

    Cashdan, Elizabeth; Steele, Matthew

    2013-03-01

    It has been argued that people in areas with high pathogen loads will be more likely to avoid outsiders, to be biased in favor of in-groups, and to hold collectivist and conformist values. Cross-national studies have supported these predictions. In this paper we provide new pathogen codes for the 186 cultures of the Standard Cross-Cultural Sample and use them, together with existing pathogen and ethnographic data, to try to replicate these cross-national findings. In support of the theory, we found that cultures in high pathogen areas were more likely to socialize children toward collectivist values (obedience rather than self-reliance). There was some evidence that pathogens were associated with reduced adult dispersal. However, we found no evidence of an association between pathogens and our measures of group bias (in-group loyalty and xenophobia) or intergroup contact.

  7. Impact of Three Illumina Library Construction Methods on GC Bias and HLA Genotype Calling

    Science.gov (United States)

    Lan, James H; Yin, Yuxin; Reed, Elaine F; Moua, Kevin; Thomas, Kimberly; Zhang, Qiuheng

    2016-01-01

    Next-generation sequencing (NGS) is increasingly recognized for its ability to overcome allele ambiguity and deliver high-resolution typing in the human leukocyte antigen (HLA) system. Using this technology, non-uniform read distribution can impede the reliability of variant detection, which renders high-confidence genotype calling particularly difficult to achieve in the polymorphic HLA complex. Recently, library construction has been implicated as the dominant factor in instigating coverage bias. To study the impact of this phenomenon on HLA genotyping, we performed long-range PCR on 12 samples to amplify HLA-A, -B, -C, -DRB1, and -DQB1, and compared the relative contribution of three Illumina library construction methods (TruSeq Nano, Nextera, Nextera XT) in generating downstream bias. Here, we show high GC% to be a good predictor of low sequencing depth. Compared to standard TruSeq Nano, GC bias was more prominent in transposase-based protocols, particularly Nextera XT, likely through a combination of transposase insertion bias being coupled with a high number of PCR enrichment cycles. Importantly, our findings demonstrate non-uniform read depth can have a direct and negative impact on the robustness of HLA genotyping, which has clinical implications for users when choosing a library construction strategy that aims to balance cost and throughput with data quality. PMID:25543015

  8. CEO emotional bias and investment decision, Bayesian network method

    OpenAIRE

    Jarboui Anis; Mohamed Ali Azouzi

    2012-01-01

    This research examines the determinants of firms' investment, introducing a behavioral perspective that has received little attention in the corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: investment decisions are influenced not only by their fundamentals but also depend on some other factors. One such factor is the CEO's bias toward the investment; this bias depends on cognition and emotions, because some leaders use them as h...

  9. Neonatal blood gas sampling methods

    African Journals Online (AJOL)

    S. A. Journal of Child Health. Blood gas sampling is part of everyday practice in the care of ...

  10. Using key informant methods in organizational survey research: assessing for informant bias.

    Science.gov (United States)

    Hughes, L C; Preski, S

    1997-02-01

    Specification of variables that reflect organizational processes can add an important dimension to the investigation of outcomes. However, many contextual variables are conceptualized at a macro unit of analysis and may not be amenable to direct measurement. In these situations, proxy measurement is obtained by treating organizational members as key informants who report about properties of the work group or organization. Potential sources of bias when using key informant methods in organizational survey research are discussed. Statistical procedures for assessment of rater-trait interaction as a type of informant bias are illustrated using data from a study in which multiple key informants were sampled to obtain proxy measurement of the organizational climate for caring among baccalaureate schools of nursing.

  11. Soil sampling kit and a method of sampling therewith

    Science.gov (United States)

    Thompson, Cyril V.

    1991-01-01

    A soil sampling device and a sample containment device for containing a soil sample are disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close-fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of the volatile organic compounds prior to analysis of the soil sample.

  12. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Science.gov (United States)

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences, using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and shows that sampling bias correction within MaxEnt is both effective and essential.
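
    The bias-grid correction can be sketched as a smoothed two-dimensional density of all collection records (the "target group"), which MaxEnt can then accept as a bias file; the NumPy/SciPy implementation and the synthetic coordinates below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sampling_bias_grid(lons, lats, extent, shape=(200, 200), sigma=2.0):
        """Estimate relative sampling effort from the coordinates of all
        georeferenced records (not just the focal species)."""
        lon_min, lon_max, lat_min, lat_max = extent
        grid, _, _ = np.histogram2d(
            lats, lons, bins=shape, range=[[lat_min, lat_max], [lon_min, lon_max]]
        )
        grid = gaussian_filter(grid, sigma=sigma)   # smooth sparse counts
        grid += grid[grid > 0].min()                # avoid zero weights
        return grid / grid.sum()                    # relative effort surface

    # All vascular-plant records stand in for total collecting effort
    rng = np.random.default_rng(4)
    lons = np.concatenate([rng.normal(174.8, 0.5, 800), rng.uniform(166, 179, 200)])
    lats = np.concatenate([rng.normal(-41.3, 0.5, 800), rng.uniform(-47, -34, 200)])
    bias = sampling_bias_grid(lons, lats, extent=(166, 179, -47, -34))
    ```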

  13. A clinical trial alert tool to recruit large patient samples and assess selection bias in general practice research

    Directory of Open Access Journals (Sweden)

    Scheidt-Nave Christa

    2011-02-01

    Background: Many research projects in general practice face problems when recruiting patients, often resulting in low recruitment rates and an unknown selection bias, thus limiting their value for health services research. The objective of the study is to evaluate the recruitment performance of the practice staff in 25 participating general practices when using a clinical trial alert (CTA) tool.

    Methods: The CTA tool was developed for an osteoporosis survey of patients at risk for osteoporosis and fractures. The tool used data from electronic patient records (EPRs) to automatically identify the population at risk (net sample), to apply eligibility criteria, to contact eligible patients, and to enrol and survey at least 200 patients per practice. The effects of the CTA intervention were evaluated on the basis of recruitment efficiency and selection bias.

    Results: The CTA tool identified a net sample of 16,067 patients (range 162 to 1,316 per practice), of which the practice staff reviewed 5,161 (32%) cases for eligibility. They excluded 3,248 patients and contacted 1,913 patients. Of these, 1,526 patients (range 4 to 202 per practice) were successfully enrolled and surveyed. This made up 9% of the net sample and 80% of the patients contacted. Men and older patients were underrepresented in the study population.

    Conclusion: Although the recruitment target was unreachable for most practices, the practice staff in the participating practices used the CTA tool successfully to identify, document and survey a large patient sample. The tool also helped the research team to precisely determine a slight selection bias.

  14. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In this paper we explain the successful bias field correction properties of N3 by showing that it implicitly uses the same generative models and computational strategies as expectation maximization (EM) based bias field correction methods. We demonstrate experimentally that purely EM-based methods are capable of producing bias field correction results comparable to those of N3 in less computation time.

  15. Method for introducing bias magnetization in ungapped cores

    DEFF Research Database (Denmark)

    Aguilar, Andres Revilla; Munk-Nielsen, Stig

    2014-01-01

    The use of permanent magnets for bias magnetization is a known technique to increase the energy storage capability of DC inductors, resulting in a size reduction or an increased current rating. This paper presents a brief introduction to the different permanent-magnet inductor configurations found...

  16. An interaction energy driven biased sampling technique: A faster route to ionization spectra in condensed phase.

    Science.gov (United States)

    Bose, Samik; Ghosh, Debashree

    2017-10-05

    We introduce a computationally efficient approach for calculating spectroscopic properties, such as ionization energies (IEs), in the condensed phase. Discrete quantum mechanical/molecular mechanical (QM/MM) approaches for spectroscopic properties in a dynamic system, such as aqueous solution, need a large sample space to obtain converged estimates, especially for cases where particle (electron) number is not conserved, such as IEs or electron affinities (EAs). We devise a biased sampling technique based on an approximate estimate of the interaction energy between the solute and solvent, which accelerates the convergence and therefore reduces the computational cost significantly. The approximate interaction energy also provides a good measure of the spectral width of the chromophores in the condensed phase. This technique has been tested and benchmarked for (i) phenol, (ii) HBDI anion (hydroxybenzylidene dimethyl imidazolinone), and (iii) thymine in water. © 2017 Wiley Periodicals, Inc.

  17. Estimates of the average strength of natural selection are not inflated by sampling error or publication bias.

    Science.gov (United States)

    Knapczyk, Frances N; Conner, Jeffrey K

    2007-10-01

    Kingsolver et al.'s review of phenotypic selection gradients from natural populations provided a glimpse of the form and strength of selection in nature and how selection on different organisms and traits varies. Because this review's underlying database could be a key tool for answering fundamental questions concerning natural selection, it has spawned discussion of potential biases inherent in the review process. Here, we explicitly test for two commonly discussed sources of bias: sampling error and publication bias. We model the relationship between variance among selection gradients and sample size that sampling error produces by subsampling large empirical data sets containing measurements of traits and fitness. We find that this relationship was not mimicked by the review data set and therefore conclude that sampling error does not bias estimations of the average strength of selection. Using graphical tests, we find evidence for bias against publishing weak estimates of selection only among very small studies (N<38). However, this evidence is counteracted by excess weak estimates in larger studies. Thus, estimates of average strength of selection from the review are less biased than is often assumed. Devising and conducting straightforward tests for different biases allows concern to be focused on the most troublesome factors.

  18. Collecting a better water-quality sample: Reducing vertical stratification bias in open and closed channels

    Science.gov (United States)

    Selbig, William R.

    2017-01-01

    Collection of water-quality samples that accurately characterize average particle concentrations and distributions in channels can be complicated by large sources of variability. The U.S. Geological Survey (USGS) developed a fully automated Depth-Integrated Sample Arm (DISA) as a way to reduce bias and improve accuracy in water-quality concentration data. The DISA was designed to integrate with existing autosampler configurations commonly used for the collection of water-quality samples in vertical profile, thereby providing a better representation of average suspended sediment and sediment-associated pollutant concentrations and distributions than traditional fixed-point samplers. In controlled laboratory experiments, known concentrations of suspended sediment ranging from 596 to 1,189 mg/L were injected into a 3-foot-diameter closed channel (circular pipe) with regulated flows ranging from 1.4 to 27.8 ft³/s. Median suspended sediment concentrations in water-quality samples collected using the DISA were within 7 percent of the known, injected value, compared to 96 percent for traditional fixed-point samplers. Field evaluation of this technology in open channel fluvial systems showed median differences between paired DISA and fixed-point samples to be within 3 percent. The range of particle size measured in the open channel was generally that of clay and silt. Differences between the concentration and distribution measured between the two sampler configurations could potentially be much larger in open channels that transport larger particles, such as sand.

  19. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    Science.gov (United States)

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic.
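
    The core of TRiPS can be sketched as a zero-truncated Poisson fit: assume every species is sampled at a common rate, fit that rate to the per-species occurrence counts by maximum likelihood (counts are truncated at zero because unobserved species leave no record), and divide observed richness by the implied detection probability. The grid-search toy below illustrates the idea and is not the authors' implementation; all names and counts are illustrative.

    ```python
    import numpy as np
    from scipy.special import gammaln

    def trips_estimate(counts):
        """Zero-truncated Poisson fit to per-species occurrence counts,
        then correction of observed richness for undetected species."""
        counts = np.asarray(counts, dtype=float)
        lambdas = np.linspace(1e-3, counts.mean() + 5.0, 5000)
        # Zero-truncated Poisson log-likelihood for each candidate rate
        ll = (counts.sum() * np.log(lambdas) - len(counts) * lambdas
              - gammaln(counts + 1).sum()
              - len(counts) * np.log(1.0 - np.exp(-lambdas)))
        lam = lambdas[np.argmax(ll)]
        p_detect = 1.0 - np.exp(-lam)     # chance a species is seen at all
        return len(counts) / p_detect, lam, p_detect

    # Occurrence counts for each *observed* species (illustrative)
    counts = [1, 1, 1, 2, 1, 3, 2, 1, 1, 4, 2, 1]
    richness, lam, p = trips_estimate(counts)
    print(f"observed {len(counts)}, estimated true richness {richness:.1f}")
    ```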

  20. Method for removing atomic-model bias in macromolecular crystallography

    Science.gov (United States)

    Terwilliger, Thomas C.

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  1. Bias in C IV-based quasar black hole mass scaling relationships from reverberation mapped samples

    CERN Document Server

    Brotherton, Michael S; Shang, Zhaohui; DiPompeo, M A

    2015-01-01

    The masses of the black holes powering quasars represent a fundamental parameter of active galaxies. Estimates of quasar black hole masses using single-epoch spectra are quite uncertain, and require quantitative improvement. We recently identified a correction for C IV $\lambda$1549-based scaling relationships used to estimate quasar black hole masses that relies on the continuum-subtracted peak flux ratio of the ultraviolet emission-line blend Si IV + O IV] (the $\lambda$1400 feature) to that of C IV. This parameter correlates with the suite of associated quasar spectral properties collectively known as "Eigenvector 1" (EV1). Here we use a sample of 85 quasars with quasi-simultaneous optical-ultraviolet spectrophotometry to demonstrate how biases in the average EV1 properties can create systematic biases in C IV-based black hole mass scaling relationships. This effect spans nearly an order of magnitude moving from objects with small $\lambda$1400/C IV peak flux ratios, which have overestimated black hole masses, to objects with large ...

  2. Comparison of DNA preservation methods for environmental bacterial community samples

    Science.gov (United States)

    Gray, Michael A.; Pratte, Zoe A.; Kellogg, Christina A.

    2013-01-01

    Field collections of environmental samples, for example corals, for molecular microbial analyses present distinct challenges. The lack of laboratory facilities in remote locations is common, and preservation of microbial community DNA for later study is critical. A particular challenge is keeping samples frozen in transit. Five nucleic acid preservation methods that do not require cold storage were compared for effectiveness over time and ease of use. Mixed microbial communities of known composition were created and preserved by DNAgard™, RNAlater®, DMSO–EDTA–salt (DESS), FTA® cards, and FTA Elute® cards. Automated ribosomal intergenic spacer analysis and clone libraries were used to detect specific changes in the faux communities over weeks and months of storage. A previously known bias in FTA® cards that results in lower recovery of pure cultures of Gram-positive bacteria was also detected in mixed community samples. There appears to be a uniform bias across all five preservation methods against microorganisms with high G + C DNA. Overall, the liquid-based preservatives (DNAgard™, RNAlater®, and DESS) outperformed the card-based methods. No single liquid method clearly outperformed the others, leaving method choice to be based on experimental design, field facilities, shipping constraints, and allowable cost.

  3. Biomedical journals lack a consistent method to detect outcome reporting bias: a cross-sectional analysis.

    Science.gov (United States)

    Huan, L N; Tejani, A M; Egan, G

    2014-10-01

    An increasing amount of recently published literature has implicated outcome reporting bias (ORB) as a major contributor to skewing data in both randomized controlled trials and systematic reviews; however, little is known about the current methods in place to detect ORB. This study aims to gain insight into the detection and management of ORB by biomedical journals. This was a cross-sectional analysis involving standardized questions via email or telephone with the top 30 biomedical journals (2012) ranked by impact factor. The Cochrane Database of Systematic Reviews was excluded leaving 29 journals in the sample. Of 29 journals, 24 (83%) responded to our initial inquiry of which 14 (58%) answered our questions and 10 (42%) declined participation. Five (36%) of the responding journals indicated they had a specific method to detect ORB, whereas 9 (64%) did not have a specific method in place. The prevalence of ORB in the review process seemed to differ with 4 (29%) journals indicating ORB was found commonly, whereas 7 (50%) indicated ORB was uncommon or never detected by their journal previously. The majority (n = 10/14, 72%) of journals were unwilling to report or make discrepancies found in manuscripts available to the public. Although the minority, there were some journals (n = 4/14, 29%) which described thorough methods to detect ORB. Many journals seemed to lack a method with which to detect ORB and its estimated prevalence was much lower than that reported in literature suggesting inadequate detection. There exists a potential for overestimation of treatment effects of interventions and unclear risks. Fortunately, there are journals within this sample which appear to utilize comprehensive methods for detection of ORB, but overall, the data suggest improvements at the biomedical journal level for detecting and minimizing the effect of this bias are needed. © 2014 John Wiley & Sons Ltd.

  4. Subrandom methods for multidimensional nonuniform sampling.

    Science.gov (United States)

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics.
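
    A subrandom (low-discrepancy) sequence needs no seed at all; the sketch below uses the additive golden-ratio sequence to draw points from a weighted one-dimensional Nyquist grid. This is one plausible instance of the class of methods discussed, with illustrative names and parameters throughout.

    ```python
    import numpy as np

    def golden_ratio_schedule(grid_size, n_samples, weights=None):
        """Select n_samples distinct grid indices using the additive
        golden-ratio sequence, optionally warped by a weight profile."""
        phi_frac = (np.sqrt(5.0) - 1.0) / 2.0            # 1/phi, irrational step
        u = (np.arange(1, 100 * n_samples) * phi_frac) % 1.0
        if weights is not None:
            # Warp the uniform subrandom points through the weighted CDF
            cdf = np.cumsum(weights) / np.sum(weights)
            idx = np.searchsorted(cdf, u)
        else:
            idx = (u * grid_size).astype(int)
        schedule = []
        for i in idx:                                    # keep first occurrences
            if i not in schedule:
                schedule.append(int(i))
            if len(schedule) == n_samples:
                break
        return np.sort(schedule)

    # Exponentially weighted 1-D grid, as common for decaying NMR signals
    grid = 256
    w = np.exp(-np.arange(grid) / 80.0)
    sched = golden_ratio_schedule(grid, 64, weights=w)
    ```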

  6. RCP: a novel probe design bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Niu, Liang; Xu, Zongli; Taylor, Jack A

    2016-09-01

    The Illumina HumanMethylation450 BeadChip has been extensively utilized in epigenome-wide association studies. This array and its successor, the MethylationEPIC array, use two types of probes, Infinium I (type I) and Infinium II (type II), in order to increase genome coverage, but differences in probe chemistry result in different distributions of methylation values for the two probe types. Ignoring the difference in distributions between the two probe types may bias downstream analysis. Here, we developed a novel method, called Regression on Correlated Probes (RCP), which uses the existing correlation between pairs of nearby type I and type II probes to adjust the beta values of all type II probes. We evaluate the effect of this adjustment on reducing probe-design-type bias, reducing technical variation in duplicate samples, improving the accuracy of measurements against known standards, and retention of biological signal. We find that RCP is statistically significantly better than unadjusted data or adjustment with alternative methods, including SWAN and BMIQ. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website (https://www.bioconductor.org/packages/release/bioc/html/ENmix.html). Contact: niulg@ucmail.uc.edu. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
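
    The idea behind RCP can be sketched as a regression between methylation values of neighbouring type I/type II probe pairs, with the fitted relationship then applied to recalibrate all type II probes. The toy below, a linear fit on logit-transformed betas, is an assumption for illustration, not the ENmix implementation.

    ```python
    import numpy as np

    def logit(b):
        b = np.clip(b, 1e-6, 1 - 1e-6)
        return np.log(b / (1.0 - b))

    def rcp_adjust(type1_nbr, type2_nbr, type2_all):
        """Recalibrate type II betas using nearby type I/type II probe pairs.
        type1_nbr, type2_nbr: betas of correlated neighbouring pairs;
        type2_all: all type II betas to adjust."""
        # Fit type I ~ type II on the logit scale over the paired probes
        slope, intercept = np.polyfit(logit(type2_nbr), logit(type1_nbr), 1)
        m_adj = intercept + slope * logit(type2_all)
        return 1.0 / (1.0 + np.exp(-m_adj))     # back to the beta scale

    # Illustrative pairs: type II betas compressed relative to type I
    rng = np.random.default_rng(5)
    true = rng.uniform(0.05, 0.95, 300)
    t1 = np.clip(true + rng.normal(0, 0.02, 300), 0, 1)
    t2 = np.clip(0.5 + 0.8 * (true - 0.5) + rng.normal(0, 0.02, 300), 0, 1)
    adjusted = rcp_adjust(t1, t2, t2)
    ```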

  7. A random spatial sampling method in a rural developing nation.

    Science.gov (United States)

    Kondo, Michelle C; Bream, Kent D W; Barg, Frances K; Branas, Charles C

    2014-04-10

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available.

  8. Forms of Attrition in a Longitudinal Study of Religion and Health in Older Adults and Implications for Sample Bias.

    Science.gov (United States)

    Hayward, R David; Krause, Neal

    2016-02-01

    The use of longitudinal designs in the field of religion and health makes it important to understand how attrition bias may affect findings in this area. This study examines attrition in a 4-wave, 8-year study of older adults. Attrition resulted in a sample biased toward more educated and more religiously involved individuals. Conditional linear growth curve models found that trajectories of change for some variables differed among attrition categories. Ineligibles had worsening depression, declining control, and declining attendance. Mortality was associated with worsening religious coping styles. Refusers experienced worsening depression. Nevertheless, there was no evidence of bias in the key religion and health results.

  9. DriftLess™, an innovative method to estimate and compensate for the biases of inertial sensors

    NARCIS (Netherlands)

    Ruizenaar, M.G.H.; Kemp, R.A.W.

    2014-01-01

    In this paper a method is presented that allows for bias compensation of low-cost MEMS inertial sensors. It is based on the use of two sets of inertial sensors and a rotation mechanism that physically rotates the sensors in an alternating fashion. After signal processing, the biases of both sets of sensors can be estimated and compensated for.
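
    The principle can be illustrated with a simple two-position sketch: rotating a sensor by 180 degrees flips the sign of the true input but not of the bias, so averaging the readings from the two orientations isolates the bias. This is a simplified stand-in for the DriftLess scheme (which uses two sensor sets and alternating rotation with further signal processing); the signal and noise values below are invented for the demonstration.

```python
import numpy as np

def estimate_bias(m_upright, m_flipped):
    """Two-position bias estimate: the true input cancels, the bias adds."""
    return 0.5 * (np.mean(m_upright) + np.mean(m_flipped))

rng = np.random.default_rng(0)
bias, signal = 0.05, 0.2                            # invented true values
up = signal + bias + 0.01 * rng.standard_normal(500)
down = -signal + bias + 0.01 * rng.standard_normal(500)
print(estimate_bias(up, down))                      # close to 0.05
```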

  10. Analytical recovery of protozoan enumeration methods: have drinking water QMRA models corrected or created bias?

    Science.gov (United States)

    Schmidt, P J; Emelko, M B; Thompson, M E

    2013-05-01

    Quantitative microbial risk assessment (QMRA) is a tool to evaluate the potential implications of pathogens in a water supply or other media and is of increasing interest to regulators. In the case of potentially pathogenic protozoa (e.g. Cryptosporidium oocysts and Giardia cysts), it is well known that the methods used to enumerate (oo)cysts in samples of water and other media can have low and highly variable analytical recovery. In these applications, QMRA has evolved from ignoring analytical recovery to addressing it in point-estimates of risk, and then to addressing variation of analytical recovery in Monte Carlo risk assessments. Often, variation of analytical recovery is addressed in exposure assessment by dividing concentration values that were obtained without consideration of analytical recovery by random beta-distributed recovery values. A simple mathematical proof is provided to demonstrate that this conventional approach to address non-constant analytical recovery in drinking water QMRA will lead to overestimation of mean pathogen concentrations. The bias, which can exceed an order of magnitude, is greatest when low analytical recovery values are common. A simulated dataset is analyzed using a diverse set of approaches to obtain distributions representing temporal variation in the oocyst concentration, and mean annual risk is then computed from each concentration distribution using a simple risk model. This illustrative example demonstrates that the bias associated with mishandling non-constant analytical recovery and non-detect samples can cause drinking water systems to be erroneously classified as surpassing risk thresholds.
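
    The intuition behind the proof is Jensen's inequality: for a non-degenerate positive recovery R, E[1/R] > 1/E[R], so dividing observed concentrations by an independent random recovery draw inflates the mean. A small Monte Carlo sketch (with invented gamma and beta parameters) makes the effect visible:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_conc = rng.gamma(shape=2.0, scale=5.0, size=n)  # illustrative only
recovery = rng.beta(2.0, 5.0, size=n)                # low, variable recovery
observed = true_conc * recovery                      # what the assay reports

# Conventional correction: divide by an independent random recovery draw.
corrected = observed / rng.beta(2.0, 5.0, size=n)
print(true_conc.mean(), corrected.mean())            # corrected mean is inflated
```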

  11. Mixed Methods Sampling: A Typology with Examples

    Science.gov (United States)

    Teddlie, Charles; Yu, Fen

    2007-01-01

    This article presents a discussion of mixed methods (MM) sampling techniques. MM sampling involves combining well-established qualitative and quantitative techniques in creative ways to answer research questions posed by MM research designs. Several issues germane to MM sampling are presented including the differences between probability and…

  13. Evaluating vegetation effects on animal demographics: the role of plant phenology and sampling bias.

    Science.gov (United States)

    Gibson, Daniel; Blomberg, Erik J; Sedinger, James S

    2016-04-24

    Plant phenological processes produce temporal variation in the height and cover of vegetation. Key aspects of animal life cycles, such as reproduction, often coincide with the growing season and therefore may inherently covary with plant growth. When evaluating the influence of vegetation variables on demographic rates, the decision about when to measure vegetation relative to the timing of demographic events is important to avoid confounding between the demographic rate of interest and vegetation covariates. Such confounding could bias estimated effect sizes or produce results that are entirely spurious. We investigated how the timing of vegetation sampling affected the modeled relationship between vegetation structure and nest survival of greater sage-grouse (Centrocercus urophasianus), using both simulated and observational data. We used the height of live grasses surrounding nests as an explanatory covariate, and analyzed its effect on daily nest survival. We compared results between models that included grass height measured at the time of nest fate (hatch or failure) with models where grass height was measured on a standardized date, the predicted hatch date. Parameters linking grass height to nest survival based on measurements at nest fate produced more competitive models, but slope coefficients of grass height effects were biased high relative to truth in simulated scenarios. In contrast, measurements taken at the predicted hatch date accurately predicted the influence of grass height on nest survival. Observational data produced similar results. Our results demonstrate the importance of properly considering confounding between demographic traits and plant phenology. Not doing so can produce results that are plausible, but ultimately inaccurate.

  14. The effects of social desirability response bias on STAXI-2 profiles in a clinical forensic sample.

    Science.gov (United States)

    McEwan, Troy E; Davis, Michael R; MacKenzie, Rachel; Mullen, Paul E

    2009-11-01

    This study investigated the proposition that the 'State-trait anger expression inventory' (2nd ed.; STAXI-2) is susceptible to impression management (IM) and Self-Deceptive Enhancement (SDE) in clinical forensic populations. It was hypothesized that individuals engaging in IM would report significantly lower levels of trait anger, external expression of anger, and internal expression of anger on the STAXI-2. Those reporting above average SDE were predicted to claim higher levels of anger control. A between-groups design was used, comparing STAXI-2 scores of individuals who reported high levels of IM and SDE to those who did not. One-hundred and fifty-nine male patients of a community forensic mental health service, referred for assessment of stalking behaviours, completed the STAXI-2 and Paulhus Deception Scales (PDS). Individuals engaging in high levels of IM and SDE were compared to low scorers in regard to STAXI-2 scales using Mann-Whitney U tests. Individuals engaging in IM had significantly lower levels of reported trait anger, outward expression of anger, and inward expression of anger, and higher levels of anger control. Similar results were found with the SDE scale, although the magnitude of the effect was smaller and not apparent on all subscales. The STAXI-2 was vulnerable to social desirability response bias in this sample of forensic clients. Where the STAXI-2 is used as a basis for treatment recommendations and decisions, it should be administered and interpreted in conjunction with a recognized measure of such bias to improve validity.

  15. The persistent sampling bias in developmental psychology: A call to action.

    Science.gov (United States)

    Nielsen, Mark; Haun, Daniel; Kärtner, Joscha; Legare, Cristine H

    2017-10-01

    Psychology must confront the bias in its broad literature toward the study of participants developing in environments unrepresentative of the vast majority of the world's population. Here, we focus on the implications of addressing this challenge, highlight the need to address overreliance on a narrow participant pool, and emphasize the value and necessity of conducting research with diverse populations. We show that high-impact-factor developmental journals are heavily skewed toward publishing articles with data from WEIRD (Western, educated, industrialized, rich, and democratic) populations. Most critically, despite calls for change and supposed widespread awareness of this problem, there is a habitual dependence on convenience sampling and little evidence that the discipline is making any meaningful movement toward drawing from diverse samples. Failure to confront the possibility that culturally specific findings are being misattributed as universal traits has broad implications for the construction of scientifically defensible theories and for the reliable public dissemination of study findings.

  16. Uniform sampling table method and its applications: establishment of a uniform sampling method.

    Science.gov (United States)

    Chen, Yibin; Chen, Jiaxi; Wang, Wei

    2013-01-01

    A novel uniform sampling method is proposed in this paper. By analyzing the distribution of samples, we derive the properties that a sampling scheme must satisfy to meet the requirements of uniform sampling. On this basis, the proposed method is demonstrated and evaluated rigorously by mathematical inference. Uniform sampling tables with respect to Cn(t2) and Cn(t3) are established, and both a one-dimensional and a multidimensional uniform sampling method are proposed. The method, which is guided by uniform design theory, is simple to use and represents the whole sample well.

  17. Self-calibration method of the bias of a space electrostatic accelerometer

    Science.gov (United States)

    Qu, Shao-Bo; Xia, Xiao-Mei; Bai, Yan-Zheng; Wu, Shu-Chao; Zhou, Ze-Bing

    2016-11-01

    The high precision space electrostatic accelerometer is an instrument to measure the non-gravitational forces acting on a spacecraft. It is one of the key payloads for satellite gravity measurements and space fundamental physics experiments. The measurement error of the accelerometer directly affects the precision of recovery of the Earth's gravity field. This paper analyzes the sources of the bias according to the operating principle and structural constitution of the space electrostatic accelerometer. Models of bias due to the asymmetry of the displacement sensing system, including the mechanical sensor head and the capacitance sensing circuit, and the asymmetry of the feedback control actuator circuit are described separately. According to the two models, a method of bias self-calibration using only the accelerometer data is proposed, based on the feedback voltage data of the accelerometer before and after modulating the DC biasing voltage (Vb) applied on its test mass. Two types of accelerometer biases are evaluated separately using in-orbit measurement data of a space electrostatic accelerometer. Based on the preliminary analysis, the bias of the accelerometer on board an experimental satellite is evaluated to be around 10^-4 m/s^2, about 4 orders of magnitude greater than the noise limit. Finally, considering the two asymmetries, a comprehensive bias model is analyzed, and a modified method to directly calibrate the accelerometer's comprehensive bias is proposed.

  18. A Method for Estimating BeiDou Inter-frequency Satellite Clock Bias

    Directory of Open Access Journals (Sweden)

    LI Haojun

    2016-02-01

    A new method for estimating the BeiDou inter-frequency satellite clock bias is proposed to address the shortcomings of current methods. The new method accounts for both the constant and the variable parts of the inter-frequency satellite clock bias. Data from 10 observation stations are processed to validate the method, and the characteristics of the BeiDou inter-frequency satellite clock bias are analyzed from the computed results, which indicate that the bias is stable in the short term. The estimated bias is then modeled; a 10-parameter model for each satellite expresses the BeiDou inter-frequency satellite clock bias well, with centimeter-level accuracy. When the model parameters from the first day are used to compute the bias for the second day, the accuracy also reaches the centimeter level. Based on this stability and modeling, a strategy for the BeiDou satellite clock service is presented to serve as a reference for BeiDou operations.

  19. Evaluation of bias-correction methods for ensemble streamflow volume forecasts

    Directory of Open Access Journals (Sweden)

    T. Hashino

    2007-01-01

    Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived from simulated and observed flows in a historical simulation. The quality of probabilistic forecasts issued with the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria for choosing among bias-correction methods with comparable skill.
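
    All three methods are transformations built from historical simulated and observed flows. Empirical quantile mapping is one widely used transformation of this kind and is sketched below for orientation; the paper's three specific variants are not reproduced here.

```python
import numpy as np

def quantile_map(forecast, sim_hist, obs_hist):
    """Replace each ensemble-trace value by the observed flow holding the
    same nonexceedance probability in the historical simulation."""
    probs = np.interp(forecast, np.sort(sim_hist),
                      np.linspace(0.0, 1.0, len(sim_hist)))
    return np.interp(probs, np.linspace(0.0, 1.0, len(obs_hist)),
                     np.sort(obs_hist))
```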

  20. Sample bias from different recruitment strategies in a randomised controlled trial for alcohol dependence.

    Science.gov (United States)

    Morley, Kirsten C; Teesson, Maree; Sannibale, Claudia; Haber, Paul S

    2009-05-01

    Participants may be recruited from diverse sources for randomised controlled trials (RCT) of treatments for alcohol dependence. A mixed recruitment strategy might facilitate recruitment and increase generalisability at the expense of introducing systematic selection bias. The current study aims to compare the effects of recruitment method on socio-demographics, baseline illness characteristics, treatment retention and treatment outcome measures. A secondary analysis from a previous 12 week RCT of naltrexone, acamprosate and placebo for alcohol dependence was conducted. Participants (n = 169) were obtained via four channels of recruitment, including in-patient and outpatient referral, live media and print media solicitation. Baseline parameters, retention in treatment and treatment outcomes were compared in these groups. Relative to in-patient subjects, those recruited via live and print media had significantly lower scores on taking steps, fewer in-patient rehabilitation admissions and less previous abstinence before entering the trial. Subjects recruited via print media had significantly lower scores of alcohol dependence relative to all other modes of recruitment. There were no differences between recruitment strategies in treatment retention or compliance. At outcome, no significant effect of recruitment method was detected. These results suggest that different recruitment methods may be sourcing subjects with different baseline characteristics of illness. Nonetheless, these differences did not significantly affect treatment retention or outcome, suggesting that in this population it was appropriate to recruit subjects from mixed sources.

  1. Verification of a depth-integrated sample arm as a means to reduce solids stratification bias in urban stormwater sampling.

    Science.gov (United States)

    Selbig, William R; Cox, Amanda; Bannerman, Roger T

    2012-04-01

    A new water sample collection system was developed to improve representation of solids entrained in urban stormwater by integrating water-quality samples from the entire water column, rather than a single, fixed point. The depth-integrated sample arm (DISA) was better able to characterize suspended-sediment concentration and particle size distribution compared to fixed-point methods when tested in a controlled laboratory environment. Median suspended-sediment concentrations overestimated the actual concentration by 49 and 7% when sampling at 3 and 4 points spaced vertically throughout the water column, respectively. By comparison, the fixed-point sampler, drawing only from the bottom of the pipe, overestimated the actual concentration by 96%. The fixed-point sampler also showed a coarser particle size distribution compared to the DISA, which was better able to reproduce the average distribution of particles in the water column over a range of hydraulic conditions. These results emphasize the need for a water sample collection system that integrates the entire water column, rather than a single, fixed point, to properly characterize the concentration and distribution of particles entrained in stormwater pipe flow.

  2. Application of Sampling Methods to Geological Survey

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    There are two kinds of research methods in geological observation. One is remote-sensing observation. The other is the partial sampling method used extensively at every stage of geological work, for example, in laying out the lines and points of a geologic survey and in arranging exploration engineering. Three problems may occur in the practical application of the sampling method: (1) Although we use the partial sampling method in geological work, accomplishing the geological task still consumes considerable labor, materials and money; is the method we use appropriate to the particular geological task? (2) How many samples or observation points are appropriate for the geological research?

  3. Study on Transformer Magnetic Biasing Control Method for AC Power Supplies

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a novel transformer magnetic biasing control method for high-power, high-performance AC power supplies. The serious consequences of magnetic biasing and several methods to overcome it are first discussed, and the causes of transformer magnetic biasing are then analyzed in detail. The proposed method is based on a high-pass filter inserted in the forward path combined with feedforward control. Without measuring the transformer's magnetic biasing, the method eliminates it completely in real-time waveform feedback control systems, even though the zero error of the Hall effect sensors varies with time and temperature. Because no sensors are used in this method, the zero error of the Hall effect sensors has no influence on the system. The method is simple to design and implement, has already been employed in a 90 kVA AC power supply, where it offers improved performance over existing approaches, and is suitable for various power applications.
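
    A first-order digital "DC blocker" illustrates what the high-pass element in the forward path accomplishes: it strips the DC component of the command waveform that would otherwise drive the transformer core toward magnetic biasing. The filter form and coefficient are illustrative assumptions, and the feedforward part of the proposed method is omitted.

```python
import numpy as np

def dc_block(x, alpha=0.999):
    """First-order IIR high-pass: y[n] = alpha * (y[n-1] + x[n] - x[n-1])."""
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

t = np.linspace(0.0, 0.1, 5000)
cmd = np.sin(2 * np.pi * 50 * t) + 0.05       # 50 Hz reference with DC offset
print(np.mean(cmd), np.mean(dc_block(cmd)))   # DC component is removed
```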

  4. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
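
    For orientation, a generic constrained alternating least squares loop with nonnegativity (a common physical constraint) is sketched below; the patented biased-ALS algorithm modifies such a loop to offset the mathematical bias the constraints introduce, which this plain sketch does not attempt.

```python
import numpy as np
from scipy.optimize import nnls

def constrained_als(D, k, n_iter=50, seed=0):
    """Factor D (m x n) into nonnegative W (m x k) @ H (k x n) by
    alternating nonnegative least squares solves."""
    rng = np.random.default_rng(seed)
    W = rng.random((D.shape[0], k))
    H = np.zeros((k, D.shape[1]))
    for _ in range(n_iter):
        for j in range(D.shape[1]):      # update H one column at a time
            H[:, j], _ = nnls(W, D[:, j])
        for i in range(D.shape[0]):      # update W one row at a time
            W[i, :], _ = nnls(H.T, D[i, :])
    return W, H
```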

  5. Dynamic Method for Identifying Collected Sample Mass

    Science.gov (United States)

    Carson, John

    2008-01-01

    G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.

  6. Comparison of Parametric and Nonparametric Methods for Analyzing the Bias of a Numerical Model

    Directory of Open Access Journals (Sweden)

    Isaac Mugume

    2016-01-01

    Numerical models are presently applied in many fields for simulation and prediction, operation, or research. The output from these models normally has both systematic and random errors. The study compared January 2015 temperature data for Uganda as simulated using the Weather Research and Forecast model with actual observed station temperature data to analyze the bias using parametric methods (the root mean square error (RMSE), the mean absolute error (MAE), the mean error (ME), skewness, and the bias easy estimate (BES)) and a nonparametric method (the sign test, STM). The RMSE normally overestimates the error compared to the MAE. The RMSE and MAE are not sensitive to the direction of bias. The ME gives both direction and magnitude of bias but can be distorted by extreme values, while the BES is insensitive to extreme values. The STM is robust for giving the direction of bias; it is not sensitive to extreme values but does not give the magnitude of bias. Graphical tools (such as time series and cumulative curves) show the performance of the model over time. It is recommended to integrate parametric and nonparametric methods along with graphical methods for a comprehensive analysis of the bias of a numerical model.
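
    A minimal sketch of the parametric measures together with the nonparametric sign test, assuming paired forecast and observation arrays; the paper's BES and its graphical diagnostics are omitted.

```python
import numpy as np
from scipy.stats import binomtest

def bias_metrics(forecast, observed):
    """RMSE, MAE, signed mean error, and a sign-test p-value for the
    hypothesis that positive and negative errors are equally likely."""
    e = np.asarray(forecast) - np.asarray(observed)
    rmse = np.sqrt(np.mean(e ** 2))
    mae = np.mean(np.abs(e))
    me = np.mean(e)                          # gives direction and magnitude
    n_pos, n = int(np.sum(e > 0)), int(np.sum(e != 0))
    p = binomtest(n_pos, n, 0.5).pvalue      # direction of bias, robustly
    return rmse, mae, me, p
```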

  7. DOE methods for evaluating environmental and waste management samples

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S.C.; McCulloch, M.; Thomas, B.L.; Riley, R.G.; Sklarew, D.S.; Mong, G.M.; Fadeff, S.K. [eds.

    1994-10-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

  8. Systems and methods for sample analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cooks, Robert Graham; Li, Guangtao; Li, Xin; Ouyang, Zheng

    2015-10-20

    The invention generally relates to systems and methods for sample analysis. In certain embodiments, the invention provides a system for analyzing a sample that includes a probe including a material connected to a high voltage source, a device for generating a heated gas, and a mass analyzer.

  9. New methods for sampling sparse populations

    Science.gov (United States)

    Anna Ringvall

    2007-01-01

    To improve surveys of sparse objects, methods that use auxiliary information have been suggested. Guided transect sampling uses prior information, e.g., from aerial photographs, for the layout of survey strips. Instead of being laid out straight, the strips will wind between potentially more interesting areas. 3P sampling (probability proportional to prediction) uses...

  10. Sample preparation method for scanning force microscopy

    CERN Document Server

    Jankov, I R; Szente, R N; Carreno, M N P; Swart, J W; Landers, R

    2001-01-01

    We present a method of sample preparation for studies of ion implantation on metal surfaces. The method, employing a mechanical mask, is specially adapted for samples analysed by Scanning Force Microscopy. It was successfully tested on polycrystalline copper substrates implanted with phosphorus ions at an acceleration voltage of 39 keV. The changes of the electrical properties of the surface were measured by Kelvin Probe Force Microscopy and the surface composition was analysed by Auger Electron Spectroscopy.

  11. The rank product method with two samples.

    Science.gov (United States)

    Koziol, James A

    2010-11-05

    Breitling et al. (2004) introduced a statistical technique, the rank product method, for detecting differentially regulated genes in replicated microarray experiments. The technique has achieved widespread acceptance and is now used more broadly, in such diverse fields as RNAi analysis, proteomics, and machine learning. In this note, we extend the rank product method to the two sample setting, provide distribution theory attending the rank product method in this setting, and give numerical details for implementing the method.
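
    A sketch of the basic two-sample rank product computation, assuming log-scale expression matrices with genes as rows; the distribution theory developed in the note is not reproduced here.

```python
import numpy as np

def rank_product(group_a, group_b):
    """Rank genes by fold change in every pairwise A-vs-B comparison and
    combine the ranks per gene by geometric mean; small values flag
    consistently up-regulated genes. Inputs: genes x replicates arrays
    of log-scale expression."""
    g = group_a.shape[0]
    ranks = []
    for i in range(group_a.shape[1]):
        for j in range(group_b.shape[1]):
            fc = group_a[:, i] - group_b[:, j]         # log fold change
            r = np.empty(g)
            r[np.argsort(-fc)] = np.arange(1, g + 1)   # rank 1 = largest
            ranks.append(r)
    return np.exp(np.mean(np.log(ranks), axis=0))      # geometric mean
```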

  12. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    CERN Document Server

    Albers, DJ

    2011-01-01

    A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
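
    The practical recipe this equivalence suggests can be sketched as follows: estimate the time-delayed mutual information with a plug-in histogram estimator, then approximate its bias by the apparent mutual information at a delay long enough that any real correlation has decayed. The AR(1) test signal and bin count below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def mi_hist(x, y, bins=16):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def delayed_mi(x, tau):
    return mi_hist(x[:-tau], x[tau:])

rng = np.random.default_rng(0)
x = np.empty(20000)                      # AR(1): correlation decays quickly
x[0] = 0.0
for t in range(1, x.size):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

# Bias estimate: apparent MI far beyond the correlation time.
print(delayed_mi(x, 5) - delayed_mi(x, 10000))
```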

  13. MLE's bias pathology, Model Updated Maximum Likelihood Estimates and Wallace's Minimum Message Length method

    OpenAIRE

    Yatracos, Yannis G.

    2013-01-01

    The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\theta$ and $\psi$ when MLE $\hat\psi$ is a function of MLE $\hat\theta$. To reduce $\hat\psi$'s bias, the likelihood equation to be solved for $\psi$ is updated using the model for the data $Y$ in it. Model updated (MU) MLE, $\hat\psi_{MU}$, often reduces either totally or partially $\hat\psi$'s bias when estimating shape parameter $\psi$. For the Pareto model $\hat...

  14. A method for evaluating bias in global measurements of CO2 total columns from space

    Directory of Open Access Journals (Sweden)

    R. J. Salawitch

    2011-12-01

    We describe a method of evaluating systematic errors in measurements of total column dry-air mole fractions of CO2 (XCO2) from space, and we illustrate the method by applying it to the v2.8 Atmospheric CO2 Observations from Space retrievals of the Greenhouse Gases Observing Satellite (ACOS-GOSAT) measurements over land. The approach exploits the lack of large gradients in XCO2 south of 25° S to identify large-scale offsets and other biases in the ACOS-GOSAT data associated with several retrieval parameters and errors in instrument calibration. We demonstrate the effectiveness of the method by comparing the ACOS-GOSAT data in the Northern Hemisphere with ground truth provided by the Total Carbon Column Observing Network (TCCON). We use the observed correlation between free-tropospheric potential temperature and XCO2 in the Northern Hemisphere to define a dynamically informed coincidence criterion between the ground-based TCCON measurements and the ACOS-GOSAT measurements. We illustrate that this approach provides larger sample sizes, hence giving a more robust comparison than one that simply uses time, latitude and longitude criteria. Our results show that the agreement with the TCCON data improves after accounting for the systematic errors, but that extrapolation to conditions found outside the region south of 25° S may be problematic (e.g., high airmasses, large surface pressure biases, M-gain, measurements made over ocean). A preliminary evaluation of the improved v2.9 ACOS-GOSAT data is also discussed.

  15. Characterization of (100)-orientated diamond film grown by HFCVD method with a positive DC bias voltage

    Institute of Scientific and Technical Information of China (English)

    MA Ying; WANG Lin-jun; LIU Jian-min; SU Qing-feng; XU Run; PENG Hong-yan; SHI Wei-min; XIA Yi-ben

    2006-01-01

    The (100)-orientated diamond film was deposited by hot-filament chemical vapor deposition (HFCVD) technology with a positive DC bias voltage. The morphology, X-ray diffraction (XRD), Raman spectrum, and dark-current-versus-applied-voltage characteristics show that the positive DC bias can increase the nucleation density and promote (100)-orientated growth, making the growth of high-quality diamond film easier and cheaper than with other methods.

  16. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias across the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website (https://www.bioconductor.org/packages/release/bioc/html/ENmix.html). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.

  17. Correcting for bias of molecular confinement parameters induced by small-time-series sample sizes in single-molecule trajectories containing measurement noise

    Science.gov (United States)

    Calderon, Christopher P.

    2013-07-01

    Several single-molecule studies aim to reliably extract parameters characterizing molecular confinement or transient kinetic trapping from experimental observations. Pioneering works from single-particle tracking (SPT) in membrane diffusion studies [Kusumi et al., Biophys. J. 65, 2021 (1993)] appealed to mean square displacement (MSD) tools for extracting diffusivity and other parameters quantifying the degree of confinement. More recently, the practical utility of systematically treating multiple noise sources (including noise induced by random photon counts) through likelihood techniques has been more broadly realized in the SPT community. However, bias induced by finite-time-series sample sizes (unavoidable in practice) has not received great attention. Mitigating parameter bias induced by finite sampling is important to any scientific endeavor aiming for high accuracy, but correcting for bias is also often an important step in the construction of optimal parameter estimates. In this article, it is demonstrated how a popular model of confinement can be corrected for finite-sample bias in situations where the underlying data exhibit Brownian diffusion and observations are measured with non-negligible experimental noise (e.g., noise induced by finite photon counts). The work of Tang and Chen [J. Econometrics 149, 65 (2009)] is extended to correct for bias in the estimated “corral radius” (a parameter commonly used to quantify confinement in SPT studies) in the presence of measurement noise. It is shown that the approach presented is capable of reliably extracting the corral radius using only hundreds of discretely sampled observations in situations where other methods (including MSD and Bayesian techniques) would encounter serious difficulties. The ability to accurately statistically characterize transient confinement suggests additional techniques for quantifying confined and/or hop

  18. Sampling Soil CO2 for Isotopic Flux Partitioning: Non Steady State Effects and Methodological Biases

    Science.gov (United States)

    Snell, H. S. K.; Robinson, D.; Midwood, A. J.

    2014-12-01

    Measurements of δ13C of soil CO2 are used to partition the surface flux into autotrophic and heterotrophic components. Models predict that the δ13CO2 of the soil efflux is perturbed by non-steady state (NSS) diffusive conditions. These could be large enough to render δ13CO2 unsuitable for accurate flux partitioning. Field studies sometimes find correlations between efflux δ13CO2 and flux or temperature, or that efflux δ13CO2 is not correlated as expected with biological drivers. We tested whether NSS effects in semi-natural soil were comparable with those predicted. We compared chamber designs and their sensitivity to changes in efflux δ13CO2. In a natural soil mesocosm, we controlled temperature to generate NSS conditions of CO2 production. We measured the δ13C of soil CO2 using in situ probes to sample the subsurface, and dynamic and forced-diffusion chambers to sample the surface efflux. Over eight hours we raised soil temperature by 4.5 °C to increase microbial respiration. Subsurface CO2 concentration doubled, surface efflux became 13C-depleted by 1 ‰ and subsurface CO2 became 13C-enriched by around 2 ‰. Opposite changes occurred when temperature was lowered and CO2 production was decreasing. Different chamber designs had inherent biases but all detected similar changes in efflux δ13CO2, which were comparable to those predicted. Measurements using dynamic chambers were more 13C-enriched than expected, probably due to advection of CO2 into the chamber. In the mesocosm soil, δ13CO2 of both efflux and subsurface was determined by physical processes of CO2 production and diffusion. Steady state conditions are unlikely to prevail in the field, so spot measurements of δ13CO2 and assumptions based on the theoretical 4.4 ‰ diffusive fractionation will not be accurate for estimating source δ13CO2. Continuous measurements could be integrated over a period suitable to reduce the influence of transient NSS conditions. It will be difficult to disentangle

  19. Evaluation of ACCMIP ozone simulations and ozonesonde sampling biases using a satellite-based multi-constituent chemical reanalysis

    Science.gov (United States)

    Miyazaki, Kazuyuki; Bowman, Kevin

    2017-07-01

    The Atmospheric Chemistry Climate Model Intercomparison Project (ACCMIP) ensemble ozone simulations for the present day from the 2000 decade simulation results are evaluated by a state-of-the-art multi-constituent atmospheric chemical reanalysis that ingests multiple satellite data including the Tropospheric Emission Spectrometer (TES), the Microwave Limb Sounder (MLS), the Ozone Monitoring Instrument (OMI), and the Measurement of Pollution in the Troposphere (MOPITT) for 2005-2009. Validation of the chemical reanalysis against global ozonesondes shows good agreement throughout the free troposphere and lower stratosphere for both seasonal and year-to-year variations, with an annual mean bias of less than 0.9 ppb in the middle and upper troposphere at the tropics and mid-latitudes. The reanalysis provides comprehensive spatiotemporal evaluation of chemistry-model performance that complements direct ozonesonde comparisons, which are shown to suffer from significant sampling bias. The reanalysis reveals that the ACCMIP ensemble mean overestimates ozone in the northern extratropics by 6-11 ppb while underestimating by up to 18 ppb in the southern tropics over the Atlantic in the lower troposphere. Most models underestimate the spatial variability of the annual mean lower tropospheric concentrations in the extratropics of both hemispheres by up to 70 %. The ensemble mean also overestimates the seasonal amplitude by 25-70 % in the northern extratropics and overestimates the inter-hemispheric gradient by about 30 % in the lower and middle troposphere. A part of the discrepancies can be attributed to the 5-year reanalysis data for the decadal model simulations. However, these differences are less evident with the current sonde network. To estimate ozonesonde sampling biases, we computed model bias separately for global coverage and the ozonesonde network. The ozonesonde sampling bias in the evaluated model bias for the seasonal mean concentration relative to global

  20. Comparison of spatial interpolation methods for gridded bias removal in surface temperature forecasts

    Science.gov (United States)

    Mohammadi, Seyedeh Atefeh; Azadi, Majid; Rahmani, Morteza

    2017-08-01

    All numerical weather prediction (NWP) models inherently have substantial biases, especially in forecasts of near-surface weather variables. Statistical methods can be used to remove the systematic error based on historical bias data at observation stations. However, many end users of weather forecasts need bias-corrected forecasts at locations that scarcely have any historical bias data. To circumvent this limitation, the bias of surface temperature forecasts on a regular grid covering Iran is removed by using the information available at observation stations in the vicinity of any given grid point. To this end, the running mean error method is first used to correct the forecasts at observation stations; then four interpolation methods, including inverse distance squared weighting with constant lapse rate (IDSW-CLR), Kriging with constant lapse rate (Kriging-CLR), gradient inverse distance squared with linear lapse rate (GIDS-LR), and gradient inverse distance squared with lapse rate determined by classification and regression tree (GIDS-CART), are employed to interpolate the bias-corrected forecasts at neighboring observation stations to any given location. The results show that all four interpolation methods do reduce the model error significantly, but Kriging-CLR performs better than the other methods. For Kriging-CLR, the root mean square error (RMSE) and mean absolute error (MAE) were decreased by 26% and 29%, respectively, compared to the raw forecasts. It is also found that, after applying any of the proposed methods, the bias-corrected forecasts, unlike the raw forecasts, do not show spatial or temporal dependency.
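
    As an illustration of the simplest of the four interpolators, inverse distance squared weighting spreads the station bias corrections onto grid points as sketched below; the constant-lapse-rate elevation adjustment and the Kriging and GIDS variants are omitted.

```python
import numpy as np

def idsw(grid_xy, stn_xy, stn_bias, eps=1e-9):
    """Inverse distance squared weighting of station biases onto grid
    points. grid_xy: (G, 2); stn_xy: (S, 2); stn_bias: (S,)."""
    d2 = ((grid_xy[:, None, :] - stn_xy[None, :, :]) ** 2).sum(-1) + eps
    w = 1.0 / d2                                   # weight = 1 / distance^2
    return (w * stn_bias).sum(axis=1) / w.sum(axis=1)
```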

  1. MOMENT-METHOD ESTIMATION BASED ON CENSORED SAMPLE

    Institute of Scientific and Technical Information of China (English)

    NI Zhongxin; FEI Heliang

    2005-01-01

    In reliability theory and survival analysis, the problem of point estimation based on censored samples has been discussed in many papers. However, most of them focus on the MLE, BLUE, etc.; little work has been done on moment-method estimation in the censoring case. To make moment-method estimation systematic and unified, this paper puts forward moment-method estimators (MEs) and modified moment-method estimators (MMEs) of the parameters based on type I and type II censored samples, involving the mean residual lifetime. Strong consistency and other properties are proved. Notably, for the exponential distribution, the proposed moment-method estimators are exactly the MLEs. A simulation study shows that, in terms of bias and mean squared error, the MEs and MMEs are better than the MLEs and the "pseudo complete sample" technique introduced in Whitten et al. (1988). The superiority of the MEs is especially conspicuous when the sample is heavily censored.

  2. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Science.gov (United States)

    Fischer, Jesse R.; Quist, Michael

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated that appreciable benefits over relatively few gears (e.g., to four) used in optimal seasons were not present. Specifically, over 90 % of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

  3. Development of a low bias method for characterizing viral populations using next generation sequencing technology.

    Directory of Open Access Journals (Sweden)

    Stephanie M Willerth

    BACKGROUND: With an estimated 38 million people worldwide currently infected with human immunodeficiency virus (HIV), and an additional 4.1 million people becoming infected each year, it is important to understand how this virus mutates and develops resistance in order to design successful therapies. METHODOLOGY/PRINCIPAL FINDINGS: We report a novel experimental method for amplifying full-length HIV genomes without the use of sequence-specific primers for high throughput DNA sequencing, followed by assembly of full length viral genome sequences from the resulting large dataset. Illumina was chosen for sequencing due to its ability to provide greater coverage of the HIV genome compared to prior methods, allowing for more comprehensive characterization of the heterogeneity present in the HIV samples analyzed. Our novel amplification method in combination with Illumina sequencing was used to analyze two HIV populations: a homogenous HIV population based on the canonical NL4-3 strain and a heterogeneous viral population obtained from a HIV patient's infected T cells. In addition, the resulting sequence was analyzed using a new computational approach to obtain a consensus sequence and several metrics of diversity. SIGNIFICANCE: This study demonstrates how a lower bias amplification method in combination with next generation DNA sequencing provides in-depth, complete coverage of the HIV genome, enabling a stronger characterization of the quasispecies present in a clinically relevant HIV population as well as future study of how HIV mutates in response to a selective pressure.

  4. Using stochastic sampling of parametric uncertainties to quantify relationships between CAM3.1 bias and climate sensitivity

    Science.gov (United States)

    Jackson, C. S.; Tobis, M.

    2011-12-01

    It is an untested assumption in climate model evaluation that climate model biases affect credibility: models with smaller biases are often regarded as more plausible than models with larger biases. However, not all biases affect predictions; only those biases that are involved in feedback mechanisms can lead to scatter in predictions of change. To date, no metric of model skill has been defined that can predict a model's sensitivity to greenhouse gas forcing. Being able to do so would be an important step toward using observations to define a model's credibility. We present results of a calculation in which we attempt to isolate the contribution of errors in particular regions and fields to uncertainties in the CAM3.1 equilibrium sensitivity to a doubling of CO2 forcing. In this calculation, observations, Bayesian inference, and stochastic sampling are used to identify a large ensemble of CAM3.1 configurations that represent uncertainties in 15 model parameters important to clouds, convection, and radiation. A slab-ocean configuration of CAM3.1 is then used to estimate the effects of these parametric uncertainties on projections of global warming through its equilibrium response to 2 x CO2 forcing. We then correlate the scatter in the control climate at each grid point and field with the scatter in climate sensitivities. The presentation will focus on the analysis of these results.

  5. Effect of Sample Configuration on Droplet-Particles of TiN Films Deposited by Pulse Biased Arc Ion Plating

    Institute of Scientific and Technical Information of China (English)

    Yanhui Zhao; Guoqiang Lin; Jinquan Xiao; Chuang Dong; Lishi Wen

    2009-01-01

    Orthogonal experiments are used to design the pulsed-bias parameters, including bias magnitude, duty cycle and pulse frequency, during arc ion deposition of TiN films on stainless steel substrates with the samples placed normal to the plasma flux. The effect of these parameters on the amount and size distribution of droplet particles is investigated, and the results provide strong evidence for the physical model in which particle reduction occurs because the particles are negatively charged and repelled by the negative pulsed electric field. The effect of sample configuration on the amount and size distribution of the particles is also analyzed, and the results are compared with those for samples placed parallel to the plasma flux.

  6. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occur when working with regression models concerns sample size: since the statistical methods used in inferential analyses are asymptotic, a small sample may compromise the analysis because the estimates will be biased. An alternative is the bootstrap methodology, which in its nonparametric version does not require knowledge of the probability distribution that generated the original sample. In this work we used a small set of soybean yield data together with physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, construct the confidence intervals of the parameters, and identify the points that had great influence on the estimated parameters.
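
    A minimal sketch of the case-resampling bootstrap for regression coefficients, the kind of procedure used here to build confidence intervals from a small sample; variable selection and influence diagnostics are omitted, and the interface is a hypothetical simplification.

```python
import numpy as np

def bootstrap_coef_ci(X, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence intervals for the coefficients of
    a multiple linear regression fit by least squares."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xd = np.column_stack([np.ones(n), np.asarray(X)])   # add intercept
    coefs = np.empty((n_boot, Xd.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                     # resample cases
        coefs[b] = np.linalg.lstsq(Xd[idx], np.asarray(y)[idx], rcond=None)[0]
    lo, hi = np.percentile(coefs, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return lo, hi
```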

  7. A new method to measure galaxy bias by combining the density and weak lensing fields

    CERN Document Server

    Pujol, Arnau; Gaztañaga, Enrique; Amara, Adam; Refregier, Alexandre; Bacon, David J; Carretero, Jorge; Castander, Francisco J; Crocce, Martin; Fosalba, Pablo; Manera, Marc; Vikram, Vinu

    2016-01-01

    We present a new method to measure the redshift-dependent galaxy bias by combining information from the galaxy density field and the weak lensing field. This method is based on Amara et al. (2012), where the galaxy density field is used to construct a bias-weighted convergence field κg. The main difference between Amara et al. (2012) and our new implementation is that here we present another way to measure galaxy bias, using tomography instead of bias parameterizations. The correlation between κg and the true lensing field κ allows us to measure galaxy bias using different zero-lag correlations, such as ⟨κgκ⟩/⟨κκ⟩ or ⟨κgκg⟩/⟨κgκ⟩. This paper is the first to study and systematically test the robustness of this method in simulations. We use the MICE simulation suite, which includes a set of self-consistent N-body simulations, lensing maps, and mock galaxy catalogues. We study the accuracy and systematic uncertainties associated with the implementation of the method, and the regime where it is consistent with the linear galaxy...

  8. Turbidity threshold sampling: Methods and instrumentation

    Science.gov (United States)

    Rand Eads; Jack Lewis

    2001-01-01

    Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...

  9. Eating disorder symptoms and autobiographical memory bias in an analogue sample

    NARCIS (Netherlands)

    Wessel, Ineke; Huntjens, Rafaële

    2016-01-01

    Cognitive theories hold that dysfunctional cognitive schemas and associated information-processing biases are involved in the maintenance of psychopathology. In eating disorders (ED), these schemas would consist of self-evaluative representations, in which the importance of controlling eating, shape

  10. The Sensitivity of Respondent-driven Sampling Method

    CERN Document Server

    Lu, Xin; Britton, Tom; Camitz, Martin; Kim, Beom Jun; Thorson, Anna; Liljeros, Fredrik

    2012-01-01

    Researchers in many scientific fields make inferences from individuals to larger groups. For many groups however, there is no list of members from which to take a random sample. Respondent-driven sampling (RDS) is a relatively new sampling methodology that circumvents this difficulty by using the social networks of the groups under study. The RDS method has been shown to provide unbiased estimates of population proportions given certain conditions. The method is now widely used in the study of HIV-related high-risk populations globally. In this paper, we test the RDS methodology by simulating RDS studies on the social networks of a large LGBT web community. The robustness of the RDS method is tested by violating, one by one, the conditions under which the method provides unbiased estimates. Results reveal that the risk of bias is large if networks are directed, or respondents choose to invite persons based on characteristics that are correlated with the study outcomes. If these two problems are absent, the RD...

  11. Plant Disease Severity Assessment-How Rater Bias, Assessment Method, and Experimental Design Affect Hypothesis Testing and Resource Use Efficiency.

    Science.gov (United States)

    Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe

    2016-12-01

    The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. These results suggest that choice of assessment method, optimizing sample number and number of replicate

  12. Topological Bias in Distance-Based Phylogenetic Methods: Problems with Over- and Underestimated Genetic Distances

    Directory of Open Access Journals (Sweden)

    Xuhua Xia

    2006-01-01

    I show several types of topological biases in distance-based methods that use the least-squares method to evaluate branch lengths and the minimum evolution (ME) or the Fitch-Margoliash (FM) criterion to choose the best tree. For a 6-species tree, there are two tree shapes, one with three cherries (a cherry is a pair of adjacent leaves descending from the most recent common ancestor) and the other with two. When genetic distances are underestimated, the 3-cherry tree shape is favored with either the ME or FM criterion. When the genetic distances are overestimated, the ME criterion favors the 2-cherry tree, but the direction of bias with the FM criterion depends on whether negative branches are allowed, i.e., allowing negative branches favors the 3-cherry tree shape but disallowing negative branches favors the 2-cherry tree shape. The extent of the bias is explored by computer simulation of sequence evolution.

  13. Screening for psychotic experiences: social desirability biases in a non-clinical sample.

    Science.gov (United States)

    DeVylder, Jordan E; Hilimire, Matthew R

    2015-08-01

    Subthreshold psychotic experiences are common in the population and may be clinically significant. Reporting of psychotic experiences through self-report screens may be subject to threats to validity, including social desirability biases. This study examines the influence of social desirability on the reporting of psychotic experiences. College students (n = 686) completed a psychosis screen and the Marlowe-Crowne social desirability scale as part of a self-report survey battery. Associations between psychosis and social desirability were tested using logistic regression models. With the exception of auditory hallucinations, all other measures of psychotic experiences were subject to social desirability biases. Respondents who gave more socially desirable answers were less likely to report psychotic experiences. Respondents' tendency to underreport psychotic experiences should be accounted for when screening for these symptoms clinically. Findings also suggest that population figures based on self-report may underestimate the prevalence of subthreshold delusions but not hallucinations. © 2014 Wiley Publishing Asia Pty Ltd.

  14. Non-Boltzmann sampling and Bennett's acceptance ratio method: how to profit from bending the rules.

    Science.gov (United States)

    König, Gerhard; Boresch, Stefan

    2011-04-30

    The exact computation of free energy differences requires adequate sampling of all relevant low energy conformations. Especially in systems with rugged energy surfaces, adequate sampling can only be achieved by biasing the exploration process, thus yielding non-Boltzmann probability distributions. To obtain correct free energy differences from such simulations, it is necessary to account for the effects of the bias in the postproduction analysis. We demonstrate that this can be accomplished quite simply with a slight modification of Bennett's Acceptance Ratio method, referring to this technique as Non-Boltzmann Bennett. We illustrate the method by several examples and show how a creative choice of the biased state(s) used during sampling can also improve the efficiency of free energy simulations.
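
    The core identity behind such postproduction corrections, stated here in generic notation rather than the authors', is the standard non-Boltzmann reweighting formula: for configurations sampled with a biasing potential V_b added to the physical potential U,

        \langle O \rangle_{U} \;=\; \frac{\big\langle O\, e^{+\beta V_b} \big\rangle_{U+V_b}}{\big\langle e^{+\beta V_b} \big\rangle_{U+V_b}},

    and the Non-Boltzmann Bennett idea, as the abstract describes it, is to apply this reweighting inside Bennett's acceptance-ratio estimator for the free energy difference between the unbiased end states.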

  15. Constrained sampling method for analytic continuation

    Science.gov (United States)

    Sandvik, Anders W.

    2016-12-01

    A method for analytic continuation of imaginary-time correlation functions (here obtained in quantum Monte Carlo simulations) to real-frequency spectral functions is proposed. Stochastically sampling a spectrum parametrized by a large number of δ functions, treated as a statistical-mechanics problem, it avoids distortions caused by (as demonstrated here) configurational entropy in previous sampling methods. The key development is the suppression of entropy by constraining the spectral weight to within identifiable optimal bounds and imposing a set number of peaks. As a test case, the dynamic structure factor of the S =1 /2 Heisenberg chain is computed. Very good agreement is found with Bethe ansatz results in the ground state (including a sharp edge) and with exact diagonalization of small systems at elevated temperatures.
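
    For context, the ill-posed inversion at stake is commonly written (up to normalization conventions, which vary; this form is our gloss rather than a quotation from the paper) as

        G(\tau) \;=\; \int_{0}^{\infty} d\omega \, \big( e^{-\tau\omega} + e^{-(\beta-\tau)\omega} \big) \, S(\omega),

    with the spectrum parametrized for sampling as S(\omega) = \sum_i a_i\, \delta(\omega - \omega_i). The constrained-sampling step then restricts the support of the \omega_i to optimal bounds and fixes the number of distinct peaks, suppressing the configurational entropy that would otherwise distort the sampled spectrum.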

  16. Soil and sediment sampling methods

    Science.gov (United States)

    The EPA Office of Solid Waste and Emergency Response's (OSWER) Office of Superfund Remediation and Technology Innovation (OSRTI) needs innovative methods and techniques to solve new and difficult sampling and analytical problems found at the numerous Superfund sites throughout the United States. Inadequate site characterization and a lack of knowledge of surface and subsurface contaminant distributions hinder EPA's ability to make the best decisions on remediation options and to conduct the most effective cleanup efforts. To assist OSWER, NERL conducts research to improve its capability to characterize Superfund, RCRA, LUST, oil spill, and brownfield sites more accurately, precisely, and efficiently, and to improve its risk-based decision-making capabilities; this includes research on improving soil and sediment sampling techniques and on the sampling and handling of volatile organic compound (VOC) contaminated soils, among the many research programs and tasks being performed at ESD-LV. Under this task, improved sampling approaches and devices will be developed for characterizing the concentration of VOCs in soils. Current approaches and devices can lose up to 99% of the VOCs present in the sample due to inherent weaknesses in the devices and improper or inadequate collection techniques. This error generally causes decision makers to markedly underestimate soil VOC concentrations and, therefore, to greatly underestimate the ecological

  17. Flow cytometric detection method for DNA samples

    Energy Technology Data Exchange (ETDEWEB)

    Nasarabadi, Shanavaz (Livermore, CA); Langlois, Richard G. (Livermore, CA); Venkateswaran, Kodumudi S. (Round Rock, TX)

    2011-07-05

    Disclosed herein are two methods for rapid multiplex analysis to determine the presence and identity of target DNA sequences within a DNA sample. Both methods use reporting DNA sequences, e.g., modified conventional TaqMan® probes, to combine multiplex PCR amplification with microsphere-based hybridization, using flow cytometry as the means of detection. Real-time PCR detection can also be incorporated. The first method uses a cyanine dye, such as Cy3™, as the reporter, linked to the 5' end of a reporting DNA sequence. The second method positions a reporter dye, e.g., FAM™, on the 3' end of the reporting DNA sequence and a quencher dye, e.g., TAMRA™, on the 5' end.

  18. Internal correction of hafnium oxide spectral interferences and mass bias in the determination of platinum in environmental samples using isotope dilution analysis.

    Science.gov (United States)

    Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio

    2009-05-01

    A method has been developed for the accurate determination of platinum by isotope dilution analysis, using enriched 194Pt, in environmental samples containing comparatively high levels of hafnium without any chemical separation. The method is based on the computation of the contribution of hafnium oxide as an independent factor in the observed isotope pattern of platinum in the spiked sample. Under these conditions, the ratio of molar fractions between natural abundance and isotopically enriched platinum was independent of the amount of hafnium present in the sample. Additionally, mass bias was corrected by an internal procedure in which the regression variance was minimised. This was possible as the mass bias factor for hafnium oxide was very close to that of platinum. The final procedure required the measurement of three platinum isotope ratios (192/194, 195/194 and 196/194) to calculate the concentration of platinum in the sample. The methodology has been validated using the reference material "BCR-723 road dust" and has been applied to different environmental matrices (road dust, air particles, bulk wet deposition and epiphytic lichens) collected in the Aspe Valley (Pyrenees Mountains). A full uncertainty budget, using Kragten's spreadsheet method, showed that the total uncertainty was limited only by the uncertainty in the measured isotope ratios and not by the uncertainties of the isotopic composition of platinum and hafnium.
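
    The internal correction rests on a linear isotope-pattern deconvolution; in schematic notation (ours, not the paper's), the measured abundance at each monitored mass i is decomposed as

        A^{i}_{\mathrm{mix}} \;=\; x_{\mathrm{nat}}\, A^{i}_{\mathrm{nat}} \;+\; x_{194}\, A^{i}_{194} \;+\; x_{\mathrm{HfO}}\, A^{i}_{\mathrm{HfO}} \;+\; e^{i},

    solved by least squares over the measured isotope ratios. The platinum concentration follows from the molar-fraction ratio x_nat/x_194 alone, which is why the result is insensitive to the amount of hafnium, and why a mass-bias factor shared by Pt and HfO can be fitted internally by minimising the regression variance.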

  19. Implementing a generic method for bias correction in statistical models using random effects, with spatial and population dynamics examples

    DEFF Research Database (Denmark)

    Thorson, James T.; Kristensen, Kasper

    2016-01-01

    Statistical models play an important role in fisheries science when reconciling ecological theory with available data for wild populations or experimental studies. Ecological models increasingly include both fixed and random effects, and are often estimated using maximum likelihood techniques... Quantities of biological or management interest ("derived quantities") are then often calculated as nonlinear functions of fixed and random effect estimates. However, the conventional "plug-in" estimator for a derived quantity in a maximum likelihood mixed-effects model will be biased whenever the estimator... abundance relative to the conventional plug-in estimator, and also gives essentially identical estimates to a sample-based bias-correction estimator. The epsilon-method has been implemented by us as a generic option in the open-source Template Model Builder software, and could be adapted within other...
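
    The epsilon-method can be summarized compactly (our paraphrase of the construction, in generic notation): the derived quantity g(u) of the random effects u is appended to the joint log-likelihood f(u, θ) with a multiplier ε, and the bias-corrected estimate is the derivative of the resulting Laplace-approximated marginal likelihood at ε = 0,

        \widehat{g} \;=\; \left. \frac{\partial}{\partial \varepsilon}\, \log \int \exp\{ f(u, \hat\theta) + \varepsilon\, g(u) \}\, du \right|_{\varepsilon = 0},

    which targets E[g(u)] rather than the plug-in value g(\hat{u}), the two differing whenever g is nonlinear in u.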

  20. Sampling trace organic compounds in water: a comparison of a continuous active sampler to continuous passive and discrete sampling methods

    Science.gov (United States)

    Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.

    2014-01-01

    A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.

  2. Are Teacher Course Evaluations Biased against Faculty That Teach Quantitative Methods Courses?

    Science.gov (United States)

    Royal, Kenneth D.; Stockdale, Myrah R.

    2015-01-01

    The present study investigated graduate students' responses to teacher/course evaluations (TCE) to determine if students' responses were inherently biased against faculty who teach quantitative methods courses. Item response theory (IRT) and Differential Item Functioning (DIF) techniques were utilized for data analysis. Results indicate students…

  3. Towards reducing the cloud-induced sampling biases in MODIS LST data: a case study from Greenland

    Science.gov (United States)

    Karami, M.; Hansen, B. U.

    2016-12-01

    Satellite-derived Land Surface Temperature (LST) datasets are essential for characterizing climate change impacts on terrestrial ecosystems, as well as for a wide range of surface-atmosphere studies. Over the past decade and a half, NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) has provided the scientific community with LST estimates on a global scale with reasonable spatial resolution and revisit time. However, the use of MODIS LST for climate studies is complicated by the simple fact that the observations can only be made under clear-sky conditions. In regions with frequently overcast skies, this can result in the calculated climatic variables deviating from the actual surface conditions. In the present study, we propose and validate a framework based on model-driven downwelling radiation data from ERA-Interim and instantaneous LST observations from both MODIS Terra and Aqua, in order to minimize the clear-sky sampling bias. The framework is validated on a cloud-affected MODIS scene covering parts of Greenland (h15v02) and by incorporating in-situ data from a number of monitoring stations in the area. The results indicate that the proposed method increases the number of daily LST estimates by a factor of 2.07 and reduces the skewness of the monthly distribution of successful estimates by a factor of 0.22. Considering that these improvements are achieved mainly by introducing data from partially overcast days, the estimated climatic variables show better agreement with the ground truth. The overall accuracy of the model in estimating in-situ mean daily LST remained satisfactory even after incorporating the daily downwelling radiation from ERA-Interim (RMSE = 0.41 K, R-squared = 0.992). Nonetheless, since technical constraints are expected to continue limiting the use of high-temporal-resolution satellites at high latitudes, more research is required to quantify and deal with various types of cloud-induced biases present in the data from

  4. Well purge and sample apparatus and method

    Science.gov (United States)

    Schalla, Ronald; Smith, Ronald M.; Hall, Stephen H.; Smart, John E.; Gustafson, Gregg S.

    1995-01-01

    The present invention permits purging and/or sampling of a well while removing at most about 25% of the fluid volume removed by conventional methods and, at a minimum, removing none of the fluid volume from the well. The invention is an isolation assembly, with a packer, pump, and exhaust, that is inserted into the well. The isolation assembly is designed so that only the volume of fluid between the outside diameter of the isolation assembly and the inside diameter of the well, over a fluid column height from the bottom of the well to the top of the active portion (lower annulus), is removed. The packer is positioned above the active portion, thereby sealing the well and preventing any mixing or contamination of inlet fluid with fluid above the packer. Ports in the wall of the isolation assembly permit purging and sampling of the lower annulus along the height of the active portion.

  5. Different methods for volatile sampling in mammals.

    Science.gov (United States)

    Kücklich, Marlen; Möller, Manfred; Marcillo, Andrea; Einspanier, Almuth; Weiß, Brigitte M; Birkemeyer, Claudia; Widdig, Anja

    2017-01-01

    Previous studies showed that olfactory cues are important for mammalian communication. However, many specific compounds that convey information between conspecifics are still unknown. To understand the mechanisms and functions of olfactory cues, olfactory signals such as volatile compounds emitted from individuals need to be assessed. Sampling of animals with and without scent glands has typically been conducted using cotton swabs rubbed over the skin or fur and analysed by gas chromatography-mass spectrometry (GC-MS). However, this method has various drawbacks, including a high level of contamination. Thus, we adapted two methods of volatile sampling from other research fields and compared them to sampling with cotton swabs. To do so we assessed the body odor of common marmosets (Callithrix jacchus) using cotton swabs, thermal desorption (TD) tubes and, alternatively, a mobile GC-MS device containing a thermal desorption trap. Overall, TD tubes captured the most compounds (N = 113), with half of those compounds being volatile (N = 52). The mobile GC-MS captured the fewest compounds (N = 35), all of which were volatile. Cotton swabs contained an intermediate number of compounds (N = 55), but very few volatiles (N = 10). Almost all compounds found with the mobile GC-MS were also captured with TD tubes (94%). Hence, we recommend TD tubes for state-of-the-art sampling of the body odor of mammals or other vertebrates, particularly for field studies, as they can be easily transported, stored and analysed with high-performance instruments in the lab. Nevertheless, cotton swabs capture compounds which may still contribute to the body odor, e.g. after bacterial fermentation, while profiles from the mobile GC-MS include only the most abundant volatiles of the body odor.

  7. Efficient and Minimal Method to Bias Molecular Simulations with Experimental Data.

    Science.gov (United States)

    White, Andrew D; Voth, Gregory A

    2014-08-12

    A primary goal in molecular simulations is to modify the potential energy of a system so that properties of the simulation match experimental data. This is traditionally done through iterative cycles of simulation and reparameterization. An alternative approach is to bias the potential energy so that the system matches experimental data. This can be done while minimally changing the underlying free energy of the molecular simulation. Current minimal biasing methods require replicas, which can lead to unphysical dynamics and introduces new complexity: the choice of replica number and their properties. Here, we describe a new method, called experiment directed simulation that does not require replicas, converges rapidly, can match many data simultaneously, and minimally modifies the potential. The experiment directed simulation method is demonstrated on model systems and a three-component electrolyte simulation. The theory used to derive the method also provides insight into how changing a molecular force-field impacts the expected value of observables in simulation.
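
    The maximum-entropy result underlying minimal-bias methods of this kind is that the least perturbing correction is linear in the targeted observables; schematically (generic notation, not a quotation from the paper),

        U'(\mathbf{x}) \;=\; U(\mathbf{x}) + \sum_i \lambda_i\, s_i(\mathbf{x}), \qquad \lambda_i \ \text{tuned on the fly so that}\ \langle s_i \rangle_{U'} = s_i^{\mathrm{exp}},

    so a single simulation is steered toward the experimental averages without replicas, and the underlying free energy surface is altered as little as possible.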

  8. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size

    National Research Council Canada - National Science Library

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    .... We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values...

  9. Comparisons of methods for generating conditional Poisson samples and Sampford samples

    OpenAIRE

    Grafström, Anton

    2005-01-01

    Methods for conditional Poisson sampling (CP-sampling) and Sampford sampling are compared and the focus is on the efficiency of the methods. The efficiency is investigated by simulation in different sampling situations. It was of interest to compare methods since new methods for both CP-sampling and Sampford sampling were introduced by Bondesson, Traat & Lundqvist in 2004. The new methods are acceptance rejection methods that use the efficient Pareto sampling method. They are found to be ...
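
    The baseline acceptance-rejection idea for CP-sampling is easy to state in code. The Python sketch below (illustrative names; the newer methods compared in this work use the more efficient Pareto design as the envelope rather than plain Poisson draws) keeps redrawing independent Bernoulli inclusions until the realised sample size hits the target n.

        import numpy as np

        def conditional_poisson_sample(p, n, rng, max_tries=100000):
            # Rejection sampler: draw independent Bernoulli(p_i) inclusion
            # indicators and accept the first draw of exactly size n.
            for _ in range(max_tries):
                keep = rng.random(p.size) < p
                if keep.sum() == n:
                    return np.flatnonzero(keep)
            raise RuntimeError("rejection rate too high; no sample accepted")

        rng = np.random.default_rng(1)
        p = np.array([0.2, 0.4, 0.5, 0.5, 0.4])  # inclusion probabilities
        sample = conditional_poisson_sample(p, 2, rng)

    The efficiency question studied here is precisely how many draws such schemes waste, and how a better envelope distribution reduces that waste.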

  10. On the efficiency of biased sampling of the multiple state path ensemble

    NARCIS (Netherlands)

    Rogal, J.; Bolhuis, P.G.

    2010-01-01

    Developed for complex systems undergoing rare events involving many (meta)stable states, the multiple state transition path sampling aims to sample from an extended path ensemble including all possible trajectories between any pair of (meta)stable states. The key issue for an efficient sampling of t

  11. An Exploration Based Cognitive Bias Test for Mice: Effects of Handling Method and Stereotypic Behaviour.

    Directory of Open Access Journals (Sweden)

    Janja Novak

    Full Text Available Behavioural tests to assess affective states are widely used in human research and have recently been extended to animals. These tests assume that affective state influences cognitive processing, and that animals in a negative affective state interpret ambiguous information as expecting a negative outcome (displaying a negative cognitive bias. Most of these tests however, require long discrimination training. The aim of the study was to validate an exploration based cognitive bias test, using two different handling methods, as previous studies have shown that standard tail handling of mice increases physiological and behavioural measures of anxiety compared to cupped handling. Therefore, we hypothesised that tail handled mice would display a negative cognitive bias. We handled 28 female CD-1 mice for 16 weeks using either tail handling or cupped handling. The mice were then trained in an eight arm radial maze, where two adjacent arms predicted a positive outcome (darkness and food, while the two opposite arms predicted a negative outcome (no food, white noise and light. After six days of training, the mice were also given access to the four previously unavailable intermediate ambiguous arms of the radial maze and tested for cognitive bias. We were unable to validate this test, as mice from both handling groups displayed a similar pattern of exploration. Furthermore, we examined whether maze exploration is affected by the expression of stereotypic behaviour in the home cage. Mice with higher levels of stereotypic behaviour spent more time in positive arms and avoided ambiguous arms, displaying a negative cognitive bias. While this test needs further validation, our results indicate that it may allow the assessment of affective state in mice with minimal training-a major confound in current cognitive bias paradigms.

  12. A Novel Bias Correction Method for Soil Moisture and Ocean Salinity (SMOS) Soil Moisture: Retrieval Ensembles

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2015-12-01

    Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present a probabilistic representation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to the limitations of relying on imperfect reference data. Validation at two semi-arid sites, Benin (a moderately wet and vegetated area) and Niger (dry and sandy bare soils), showed that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by the ensemble approaches. In Benin, the Root Mean Square Errors (RMSEs) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSEs decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.
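
    As a point of reference for the comparison above, classical CDF matching can be sketched in a few lines of Python (synthetic data and names are ours; the paper's contribution is the retrieval-ensemble alternative, not this baseline):

        import numpy as np

        def cdf_match(sat, ref):
            # Map each satellite retrieval onto the reference climatology
            # so that the two empirical CDFs coincide.
            ranks = np.searchsorted(np.sort(sat), sat, side="right") / sat.size
            return np.quantile(ref, np.clip(ranks, 0.0, 1.0))

        rng = np.random.default_rng(0)
        ref = rng.gamma(2.0, 0.08, 1000)               # reference soil moisture
        sat = 0.05 + 1.4 * rng.gamma(2.0, 0.08, 1000)  # biased retrievals
        corrected = cdf_match(sat, ref)

    The abstract's caveat is visible in this construction: the mapping inherits any imperfection in the reference series, which is exactly the failure mode the retrieval-ensemble method is designed to avoid.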

  13. Bias due to methods of parasite detection when estimating prevalence of infection of Triatoma infestans by Trypanosoma cruzi

    OpenAIRE

    Lardeux, Frédéric; Aliaga, C.; Depickère, Stéphanie

    2016-01-01

    The study aimed to quantify the bias from parasite detection methods in the estimation of the prevalence of infection of Triatoma infestans by Trypanosoma cruzi, the agent of Chagas disease. Three common protocols that detect T. cruzi in a sample of 640 wild-caught T. infestans were compared: (1) the microscopic observation of insect fecal droplets, (2) a PCR protocol targeting mini-exon genes of T. cruzi (MeM-PCR), and (3) a PCR protocol targeting a satellite repeated unit of the parasite. A...

  14. Quantifying Next Generation Sequencing Sample Pre-Processing Bias in HIV-1 Complete Genome Sequencing.

    Science.gov (United States)

    Vrancken, Bram; Trovão, Nídia Sequeira; Baele, Guy; van Wijngaerden, Eric; Vandamme, Anne-Mieke; van Laethem, Kristel; Lemey, Philippe

    2016-01-07

    Genetic analyses play a central role in infectious disease research. Massively parallelized "mechanical cloning" and sequencing technologies were quickly adopted by HIV researchers in order to broaden the understanding of the clinical importance of minor drug-resistant variants. These efforts have, however, remained largely limited to small genomic regions. The growing need to monitor multiple genome regions for drug resistance testing, as well as the obvious benefit for studying evolutionary and epidemic processes makes complete genome sequencing an important goal in viral research. In addition, a major drawback for NGS applications to RNA viruses is the need for large quantities of input DNA. Here, we use a generic overlapping amplicon-based near full-genome amplification protocol to compare low-input enzymatic fragmentation (Nextera™) with conventional mechanical shearing for Roche 454 sequencing. We find that the fragmentation method has only a modest impact on the characterization of the population composition and that for reliable results, the variation introduced at all steps of the procedure--from nucleic acid extraction to sequencing--should be taken into account, a finding that is also relevant for NGS technologies that are now more commonly used. Furthermore, by applying our protocol to deep sequence a number of pre-therapy plasma and PBMC samples, we illustrate the potential benefits of a near complete genome sequencing approach in routine genotyping.

  16. Methods of Reducing Bias in Combined Thermal/Epithermal Neutron (CTEN) Assays of Heterogeneous Waste

    Energy Technology Data Exchange (ETDEWEB)

    Estep, R.J.; Melton, S.; Miko, D.

    1998-11-17

    We examined the effectiveness of two different methods for correcting CTEN passive and active assays for bias due to variations in the source position in different drum types. Both use the same drum-averaged correction determined from a neural network trained to active flux monitor ratios as a starting point. One method then uses a neural network to obtain a spatial correction factor sensitive to the source location. The other method uses emission tomography. Both methods were found to give significantly improved assay accuracy over the drum-averaged correction, although more study is needed to determine which method works better.

  17. A combined statistical bias correction and stochastic downscaling method for precipitation

    Science.gov (United States)

    Volosciuk, Claudia; Maraun, Douglas; Vrac, Mathieu; Widmann, Martin

    2017-03-01

    Much of our knowledge about future changes in precipitation relies on global (GCMs) and/or regional climate models (RCMs) whose resolutions are much coarser than typical spatial scales of precipitation, particularly extremes. The major problems with these projections are both climate model biases and the gap between gridbox and point scale. Wong et al. (2014) developed a model to jointly bias correct and downscale precipitation at daily scales. This approach, however, relied on pairwise correspondence between predictor and predictand for calibration, and thus on nudged simulations, which are rarely available. Here we present an extension of this approach that separates the downscaling from the bias correction and in principle is applicable to free-running GCMs/RCMs. In a first step, we bias correct RCM-simulated precipitation against gridded observations at the same scale using a parametric quantile mapping (QMgrid) approach. In a second step, we bridge the scale gap: we predict local variance employing a regression-based model with coarse-scale precipitation as a predictor. The regression model is calibrated between gridded and point-scale (station) observations. For this concept we present one specific implementation, although the optimal model may differ for each studied location. To correct the whole distribution, including the extreme tails, we apply in the first step a mixture distribution: a gamma distribution for the precipitation mass and a generalized Pareto distribution for the extreme tail. For the second step, a vector generalized linear gamma model is employed. For evaluation we adopt the perfect-predictor experimental setup of VALUE. We also compare our method to classical QM as it is usually applied, i.e., between RCM and point scale (QMpoint). Precipitation is in most cases improved by (parts of) our method across different European climates. The method generally performs better in summer than in winter and in winter best in the
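
    The first step, parametric quantile mapping of the distribution bulk, can be sketched as follows (Python; synthetic data, and only the gamma part of the paper's gamma/generalized-Pareto mixture):

        import numpy as np
        from scipy import stats

        def gamma_quantile_map(x, model_cal, obs_cal):
            # Fit gamma distributions to calibration-period model output and
            # observations, then transfer x through the two fitted CDFs.
            a_m, _, s_m = stats.gamma.fit(model_cal, floc=0.0)
            a_o, _, s_o = stats.gamma.fit(obs_cal, floc=0.0)
            u = stats.gamma.cdf(x, a_m, scale=s_m)
            return stats.gamma.ppf(u, a_o, scale=s_o)

        rng = np.random.default_rng(3)
        obs_cal = rng.gamma(0.9, 6.0, 2000)    # wet-day observations (mm)
        model_cal = rng.gamma(1.3, 4.0, 2000)  # biased wet-day model output
        corrected = gamma_quantile_map(model_cal, model_cal, obs_cal)

    The second, downscaling step would then regress point-scale variability on the bias-corrected gridbox values, which is what separates this approach from applying QM directly between RCM and station scale.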

  18. Bias in diet determination: incorporating traditional methods in Bayesian mixing models.

    Science.gov (United States)

    Franco-Trecu, Valentina; Drago, Massimiliano; Riet-Sapriza, Federico G; Parnell, Andrew; Frau, Rosina; Inchausti, Pablo

    2013-01-01

    There are not "universal methods" to determine diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotope and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision in the estimated dietary composition. However few studies have assessed the performance of traditional methods and SIMMs with and without informative priors to study the predators' diets. Here we compare the diet compositions of the South American fur seal and sea lions obtained by scats analysis and by SIMMs-UP (uninformative priors) and assess whether informative priors (SIMMs-IP) from the scat analysis improved the estimated diet composition compared to SIMMs-UP. According to the SIMM-UP, while pelagic species dominated the fur seal's diet the sea lion's did not have a clear dominance of any prey. In contrast, SIMM-IP's diets compositions were dominated by the same preys as in scat analyses. When prior information influenced SIMMs' estimates, incorporating informative priors improved the precision in the estimated diet composition at the risk of inducing biases in the estimates. If preys isotopic data allow discriminating preys' contributions to diets, informative priors should lead to more precise but unbiased estimated diet composition. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet composition obtained by SIMMs-IP. The best approach to obtain a near-complete view of predators' diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of natural history of the predator species so as to reliably ascertain and

  19. Eigenvector method for umbrella sampling enables error analysis.

    Science.gov (United States)

    Thiede, Erik H; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R

    2016-08-28

    Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence.
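
    Our reading of the eigenproblem construction, in schematic Python (the array layout and names are assumptions, not the authors' code): with bias functions psi_j evaluated on the samples of every window, the window normalisation constants are the stationary vector of an overlap matrix F.

        import numpy as np

        def umbrella_weights(psi):
            # psi[i] is an (n_samples_i, L) array: bias function psi_j
            # evaluated at each sample drawn in window i.
            L = len(psi)
            F = np.empty((L, L))
            for i in range(L):
                denom = psi[i].sum(axis=1, keepdims=True)
                F[i] = (psi[i] / denom).mean(axis=0)   # row-stochastic overlaps
            vals, vecs = np.linalg.eig(F.T)
            z = np.real(vecs[:, np.argmax(np.real(vals))])  # solves z F = z
            return z / z.sum()

    Because each row of F is an ordinary sample average with a quantifiable variance, the error analysis emphasized in the abstract can proceed window by window.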

  20. The bias of the unbiased estimator: a study of the iterative application of the BLUE method

    CERN Document Server

    Lista, Luca

    2014-01-01

    The best linear unbiased estimator (BLUE) is a popular statistical method adopted to combine multiple measurements of the same observable, taking into account individual uncertainties and their correlation. The method is unbiased by construction if the true uncertainties and their correlation are known, but it may exhibit a bias if uncertainty estimates are used in place of the true ones, in particular if those uncertainties depend on the true value of the measured quantity. This is the case for instance when contributions to the total uncertainty are known as relative uncertainties. In those cases, an iterative application of the BLUE method may reduce the bias of the combined measurement. The impact of the iterative approach compared to the standard BLUE application is studied for a wide range of possible values of uncertainties and their correlation in the case of the combination of two measurements.
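
    A minimal Python sketch of the iterative scheme under the stated conditions (two measurements whose uncertainties are known only as relative uncertainties; all numbers are illustrative):

        import numpy as np

        def blue(y, cov):
            # BLUE weights for one observable: w = C^-1 1 / (1' C^-1 1).
            w = np.linalg.solve(cov, np.ones_like(y))
            w /= w.sum()
            return float(w @ y)

        def iterative_blue(y, rel_unc, corr, n_iter=20):
            # Re-evaluate the relative uncertainties at the current combined
            # value rather than at each measured value, and repeat.
            sigma = rel_unc * y  # standard first-pass choice
            for _ in range(n_iter):
                cov = corr * np.outer(sigma, sigma)
                x = blue(y, cov)
                sigma = rel_unc * x
            return x

        y = np.array([10.0, 11.0])
        corr = np.array([[1.0, 0.5], [0.5, 1.0]])
        print(iterative_blue(y, rel_unc=np.array([0.10, 0.05]), corr=corr))

    The bias discussed in the abstract arises in the first pass, where each sigma is anchored to its own measured value; the iteration anchors all uncertainties to a common combined value instead.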

  1. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at the 0.11 degree 12.5 km resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX is comprised of regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. Ten best sets of parameters are then achieved by calibrating using the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high performing parameter sets with each other. Once HBV is calibrated, we then perform a quantitative comparison of the influence of biases inherited from climate model simulations to the biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009 to characterize the simulation realism under the current climate and ii) 2070-2099 to identify the magnitude of the projected change of

  2. Determine the galaxy bias factors on large scales using bispectrum method

    CERN Document Server

    Guo, Hong

    2009-01-01

    We study whether the bias factors of galaxies can be recovered without bias from their power spectra and bispectra. We use a set of numerical N-body simulations and construct large mock galaxy catalogs based upon the semi-analytical model of Croton et al. (2006). We measure the reduced bispectra for galaxies of different luminosity, and determine the linear and first nonlinear bias factors from their bispectra. We find that on large scales, down to the wavenumber k=0.1h/Mpc, the bias factors b1 and b2 are nearly constant, and b1 obtained with the bispectrum method agrees very well with the expected value. The nonlinear bias factor b2 is negative, except for the most luminous galaxies with M<-23, which have a positive b2. The behavior of b2 of galaxies is consistent with the b2 mass dependence of their host halos. We show that it is essential to have an accurate estimation of the dark matter bispectrum in order to have an unbiased measurement of b1 and b2. We also test the analytical approach of incorpo...
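
    The measurement rests on the standard leading-order local-bias relation between the reduced bispectra of galaxies and matter (generic notation, consistent with the abstract's b1 and b2):

        Q_g(k_1, k_2, k_3) \;=\; \frac{1}{b_1}\, Q_m(k_1, k_2, k_3) \;+\; \frac{b_2}{b_1^{2}},

    so that, with Q_m known accurately from the dark matter simulations, b_1 and b_2 follow from a fit over triangle configurations. This relation also makes plain why an inaccurate dark matter bispectrum biases both recovered factors.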

  3. Methods for adjusting for bias due to crossover in oncology trials.

    Science.gov (United States)

    Ishak, K Jack; Proskorovsky, Irina; Korytowsky, Beata; Sandin, Rickard; Faivre, Sandrine; Valle, Juan

    2014-06-01

    Trials of new oncology treatments often involve a crossover element in their design that allows patients receiving the control treatment to crossover to receive the experimental treatment at disease progression or when sufficient evidence about the efficacy of the new treatment is achieved. Crossover leads to contamination of the initial randomized groups due to a mixing of the effects of the control and experimental treatments in the reference group. This is further complicated by the fact that crossover is often a very selective process whereby patients who switch treatment have a different prognosis than those who do not. Standard statistical techniques, including those that attempt to account for the treatment switch, cannot fully adjust for the bias introduced by crossover. Specialized methods such as rank-preserving structural failure time (RPSFT) models and inverse probability of censoring weighted (IPCW) analyses are designed to deal with selective treatment switching and have been increasingly applied to adjust for crossover. We provide an overview of the crossover problem and highlight circumstances under which it is likely to cause bias. We then describe the RPSFT and IPCW methods and explain how these methods adjust for the bias, highlighting the assumptions invoked in the process. Our aim is to facilitate understanding of these complex methods using a case study to support explanations. We also discuss the implications of crossover adjustment on cost-effectiveness results.
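
    For orientation, the RPSFT model relates a crossover patient's observed time to an untreated counterfactual time through a single acceleration factor (standard notation for this class of models, not specific to this article):

        U \;=\; T_{\mathrm{off}} \;+\; e^{\psi}\, T_{\mathrm{on}},

    where T_off and T_on are the times spent off and on the experimental treatment. The parameter \psi is chosen by g-estimation so that the counterfactual times U are balanced across the randomised arms, which is precisely the reliance on randomisation that the article highlights as the method's key assumption.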

  4. Can Memory Bias be Modified? The Effects of an Explicit Cued-Recall Training in Two Independent Samples

    NARCIS (Netherlands)

    Vrijsen, J.N.; Becker, E.S.; Rinck, M.; Oostrom, I.I.H. van; Speckens, A.E.M.; Whitmer, A.; Gotlib, I.H.

    2014-01-01

    Cognitive bias modification (CBM) has been found to be effective in modifying information-processing biases and in reducing emotional reactivity to stress. Although modification of attention and interpretation biases has frequently been studied, it is not clear whether memory bias can be manipulated

  5. Improved transition path sampling methods for simulation of rare events.

    Science.gov (United States)

    Chopra, Manan; Malshe, Rohit; Reddy, Allam S; de Pablo, J J

    2008-04-14

    The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.

  6. Species-genetic diversity correlations in habitat fragmentation can be biased by small sample sizes.

    Science.gov (United States)

    Nazareno, Alison G; Jump, Alistair S

    2012-06-01

    Predicted parallel impacts of habitat fragmentation on genes and species lie at the core of conservation biology, yet tests of this rule are rare. In a recent article in Ecology Letters, Struebig et al. (2011) report that declining genetic diversity accompanies declining species diversity in tropical forest fragments. However, this study estimates diversity in many populations through extrapolation from very small sample sizes. Using the data of this recent work, we show that results estimated from the smallest sample sizes drive the species-genetic diversity correlation (SGDC), owing to a false-positive association between habitat fragmentation and loss of genetic diversity. Small sample sizes are a persistent problem in habitat fragmentation studies, the results of which often do not fit simple theoretical models. It is essential, therefore, that data assessing the proposed SGDC are sufficient in order that conclusions be robust.

  7. Gate Bias Effects on Samples with Edge Gates in the Quantum Hall Regime

    OpenAIRE

    若林 淳一; 風間 重雄; 長嶋 登志夫

    2001-01-01

    We have fabricated GaAs/AlGaAs heterostructure Hall samples that have edge gates of several widths along both sides of the sample. The gate-width dependence of the effect of the gate voltage on the Hall resistance was measured at the middle of a transition region between adjacent quantum Hall plateaus. The results have been analyzed based on two model functions of the current distribution: an exponential type and the modified Beenakker type. The results of the former have shown qualitative agr...

  8. Log sampling methods and software for stand and landscape analyses.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  9. Bias in estimating animal travel distance : the effect of sampling frequency

    NARCIS (Netherlands)

    Rowcliffe, J. Marcus; Carbone, Chris; Kays, Roland; Kranstauber, Bart; Jansen, Patrick A.

    2012-01-01

    1. The distance travelled by animals is an important ecological variable that links behaviour, energetics and demography. It is usually measured by summing straight-line distances between intermittently sampled locations along continuous animal movement paths. The extent to which this approach under

  11. Influence of bias on properties of carbon films deposited by MCECR plasma sputtering method

    Institute of Scientific and Technical Information of China (English)

    CAI Chang-long; DIAO Dong-feng; S. Miyake; T. Matsumoto

    2004-01-01

    The mirror-confinement-type electron cyclotron resonance (MCECR) plasma source has high plasma density and high electron temperature. It is useful in many plasma processes and has been used for etching and thin-film deposition. Carbon films 40 nm thick were deposited on Si by the MCECR plasma sputtering method, and the influence of substrate bias on the properties of the films was studied. The bonding structure of the films was analyzed by X-ray photoelectron spectroscopy (XPS), the tribological properties were measured with a pin-on-disk (POD) tribometer, the nanohardness was measured with a nanoindenter, and the deposition rate and refractive index were measured with an ellipsometer. An optimal substrate bias was identified, at which the carbon films showed the best properties.

  12. Constrained Broyden Dimer Method with Bias Potential for Exploring Potential Energy Surface of Multistep Reaction Process.

    Science.gov (United States)

    Shang, Cheng; Liu, Zhi-Pan

    2012-07-10

    To predict the chemical activity of new matter is an ultimate goal in chemistry. The identification of reaction pathways using modern quantum mechanics calculations, however, often requires high computational power and good chemical intuition about the reaction. Here, a new reaction path searching method is developed by combining our recently developed transition state (TS) location method, namely the constrained Broyden dimer method, with a basin-filling method via bias potentials, which allows the system to walk out of energy traps along a given reaction direction. In the new method, the reaction path search starts from an initial state, without the need to pre-guess the TS-like or final-state structure, and can proceed iteratively to the final state by locating all related TSs and intermediates. In each elementary reaction step, a reaction direction, such as a bond breaking, needs to be specified; the information on this direction is refined and preserved as a normal mode through biased dimer rotation. The method is tested successfully on the Baker reaction system (50 elementary reactions) with good efficiency and stability and is also applied to the potential energy surface exploration of multistep reaction processes in the gas phase and on surfaces. The new method can be applied to the computational screening of new catalytic materials with a minimum requirement of chemical intuition.

  13. Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

    Science.gov (United States)

    Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till

    2016-11-24

    (1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
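
    The length bias of the "next patient exiting" rule is easy to reproduce by simulation. The stylised single-clinician Python sketch below (distributions and numbers are ours, not the paper's) shows the interviewed patients' mean consultation time running at roughly twice the true mean.

        import numpy as np

        rng = np.random.default_rng(7)
        durations = rng.exponential(10.0, 200000)  # consultation lengths (min)
        ends = np.cumsum(durations)                # back-to-back consultations

        # The interviewer frees up at a random time and takes whoever exits
        # the consultation room next, i.e. the patient in the room right now.
        free_times = rng.uniform(0.0, ends[-1], 5000)
        picked = np.searchsorted(ends, free_times)

        print(durations.mean())          # true mean, about 10 min
        print(durations[picked].mean())  # interviewed mean, about 20 min

    Selecting the next patient entering the consultation room instead picks indices independently of duration, which is why that strategy removes the bias while staying operationally cheap.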

  14. Volunteer Bias in Recruitment, Retention, and Blood Sample Donation in a Randomised Controlled Trial Involving Mothers and Their Children at Six Months and Two Years: A Longitudinal Analysis

    Science.gov (United States)

    Jordan, Sue; Watkins, Alan; Storey, Mel; Allen, Steven J.; Brooks, Caroline J.; Garaiova, Iveta; Heaven, Martin L.; Jones, Ruth; Plummer, Sue F.; Russell, Ian T.; Thornton, Catherine A.; Morgan, Gareth

    2013-01-01

    Background: The vulnerability of clinical trials to volunteer bias is under-reported. Volunteer bias is systematic error due to differences between those who choose to participate in studies and those who do not. Methods and Results: This paper extends the applications of the concept of volunteer bias by using data from a trial of probiotic supplementation for childhood atopy in healthy dyads to explore 1) differences between a) trial participants and aggregated data from publicly available databases and b) participants and non-participants as the trial progressed, and 2) the impact on trial findings of weighting data according to deprivation (Townsend) fifths in the sample and target populations. 1) a) Recruits (n = 454) were less deprived than the target population, matched for area of residence and delivery dates (n = 6,893) (mean [SD] deprivation scores 0.09 [4.21] and 0.79 [4.08], t = 3.44, df = 511, p < 0.001). b) Mothers interested in probiotics or research, or reporting infants' adverse events or rashes, were more likely to attend research clinics and consent to skin-prick testing. Mothers participating to help children were more likely to consent to infant blood sample donation. 2) For one trial outcome, atopic eczema, the intervention had a positive effect only in the over-represented, least deprived group. Here, data weighting attenuated the risk reduction from 6.9% (0.9–13.1%) to 4.6% (−1.4 to +10.5%), and the OR from 0.40 (0.18–0.91) to 0.56 (0.26–1.21). Other findings were unchanged. Conclusions: Potential for volunteer bias intensified during the trial, due to non-participation of the most deprived and smokers. However, these were not the only predictors of non-participation. Data weighting quantified volunteer bias and modified one important trial outcome. Trial Registration: This randomised, double-blind, parallel-group, placebo-controlled trial is registered with the International Standard Randomised Controlled Trials Register, Number (ISRCTN) 26287422. Registered title: Probiotics in the

  15. 40 CFR Appendix I to Part 261 - Representative Sampling Methods

    Science.gov (United States)

    2010-07-01

    40 CFR Part 261, Appendix I (Protection of Environment): Representative Sampling Methods. The methods and equipment used for sampling waste materials will vary with the form and consistency of the waste materials to be sampled. Samples collected using the...

  16. The random card sort method and respondent certainty in contingent valuation: an exploratory investigation of range bias.

    Science.gov (United States)

    Shackley, Phil; Dixon, Simon

    2014-10-01

    Willingness to pay (WTP) values derived from contingent valuation surveys are prone to a number of biases. Range bias occurs when the range of money values presented to respondents in a payment card affects their stated WTP values. This paper reports the results of an exploratory study whose aim was to investigate whether the effects of range bias can be reduced through the use of an alternative to the standard payment card method, namely, a random card sort method. The results suggest that the random card sort method is prone to range bias but that this bias may be mitigated by restricting the analysis to the WTP values of those respondents who indicate they are 'definitely sure' they would pay their stated WTP.

  17. Statistical sampling methods for soils monitoring

    Science.gov (United States)

    Ann M. Abbott

    2010-01-01

    Development of the best sampling design to answer a research question should be an interactive venture between the land manager or researcher and statisticians, and is the result of answering various questions. A series of questions that can be asked to guide the researcher in making decisions that will arrive at an effective sampling plan is described, and a case...

  18. Exponentially Biased Ground-State Sampling of Quantum Annealing Machines with Transverse-Field Driving Hamiltonians

    Science.gov (United States)

    Mandrà, Salvatore; Zhu, Zheng; Katzgraber, Helmut G.

    2017-02-01

    We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited to identifying all degenerate ground-state configurations associated with a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009), 10.1088/1367-2630/11/7/073021]. These results suggest that more complex driving Hamiltonians are needed in future quantum annealing machines to ensure a fair sampling of the ground-state manifold.
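
    Whether a sampler is fair in this sense can be checked directly on the counts it returns: under the null hypothesis of equal ground-state probabilities, the counts should pass a chi-square uniformity test. A generic sketch with hypothetical counts (not D-Wave output):

        from scipy.stats import chisquare

        # hypothetical counts of how often each of 8 degenerate ground states was returned
        counts = [412, 389, 35, 401, 8, 376, 398, 381]
        stat, p = chisquare(counts)     # null hypothesis: uniform sampling
        print(f"chi2 = {stat:.1f}, p = {p:.3g}")  # a tiny p-value exposes biased sampling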

  19. Adiabatic bias molecular dynamics: A method to navigate the conformational space of complex molecular systems

    Science.gov (United States)

    Marchi, Massimo; Ballone, Pietro

    1999-02-01

    This study deals with a novel molecular simulation technique, named adiabatic bias molecular dynamics (MD), which provides a simple and reasonably inexpensive route to generate MD trajectories joining points in conformational space separated by activation barriers. Because of the judicious way the biasing potential is updated during the MD runs, the technique allows, with some additional effort, the computation of the free energy change experienced during the trajectory. The adiabatic bias method has been applied to a nontrivial problem: the unfolding of an atomistic model of lysozyme. Here, the radius of gyration (Rg) was used as a convenient reaction coordinate. For changes in Rg between 19.7 and 28 Å, we observe a net loss of the native tertiary structure of lysozyme. At the same time, secondary structure elements such as α-helices are retained although some of the original order is diminished. The calculated free energy profile for the unfolding transition shows a monotonic increase with Rg and depends crucially on the nonbonded cutoff used in the potential model.
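
    The ratchet-like character of the bias can be sketched as follows, under the assumption of a half-harmonic restraint that acts only when the reaction coordinate falls back behind its best value so far (a schematic variant with a hypothetical force constant k; in the full method the reference is updated adiabatically rather than instantaneously):

        def abmd_bias(rho, state, k=0.5):
            """Schematic adiabatic-bias potential driving a coordinate upward.

            rho   : current reaction coordinate value (e.g. radius of gyration)
            state : dict carrying rho_max, the largest value reached so far
            k     : assumed force constant of the half-harmonic wall
            """
            state["rho_max"] = max(state.get("rho_max", rho), rho)
            d = rho - state["rho_max"]
            if d >= 0.0:                 # coordinate advancing: dynamics unperturbed
                return 0.0, 0.0
            energy = 0.5 * k * d * d     # half-harmonic wall behind the best value
            force = -k * d               # pushes rho back toward rho_max
            return energy, force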

  20. A Comparison of Methods for a Priori Bias Correction in Soil Moisture Data Assimilation

    Science.gov (United States)

    Kumar, Sujay V.; Reichle, Rolf H.; Harrison, Kenneth W.; Peters-Lidard, Christa D.; Yatheendradas, Soni; Santanello, Joseph A.

    2011-01-01

    Data assimilation is being increasingly used to merge remotely sensed land surface variables such as soil moisture, snow and skin temperature with estimates from land models. Its success, however, depends on unbiased model predictions and unbiased observations. Here, a suite of continental-scale, synthetic soil moisture assimilation experiments is used to compare two approaches that address typical biases in soil moisture prior to data assimilation: (i) parameter estimation to calibrate the land model to the climatology of the soil moisture observations, and (ii) scaling of the observations to the model's soil moisture climatology. To enable this research, an optimization infrastructure was added to the NASA Land Information System (LIS) that includes gradient-based optimization methods and global, heuristic search algorithms. The land model calibration eliminates the bias but does not necessarily result in more realistic model parameters. Nevertheless, the experiments confirm that model calibration yields assimilation estimates of surface and root zone soil moisture that are as skillful as those obtained through scaling of the observations to the model's climatology. Analysis of innovation diagnostics underlines the importance of addressing bias in soil moisture assimilation and confirms that both approaches adequately address the issue.
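
    Approach (ii), scaling the observations to the model's climatology, is commonly implemented as CDF matching: each observation is replaced by the model-climatology value at the same cumulative probability. A minimal empirical sketch with synthetic arrays (the operational implementation in LIS is more elaborate):

        import numpy as np

        def cdf_match(obs, obs_clim, model_clim):
            """Empirical quantile mapping of observations onto a model climatology."""
            obs_sorted = np.sort(obs_clim)
            mod_sorted = np.sort(model_clim)
            # cumulative probability of each observation within its own climatology
            p = np.searchsorted(obs_sorted, obs, side="right") / len(obs_sorted)
            # value with the same cumulative probability in the model climatology
            return np.quantile(mod_sorted, np.clip(p, 0.0, 1.0))

        rng = np.random.default_rng(1)
        obs_clim = rng.normal(0.30, 0.05, 10_000)    # synthetic, wetter satellite record
        model_clim = rng.normal(0.22, 0.03, 10_000)  # synthetic, drier model climatology
        scaled = cdf_match(obs_clim[:5], obs_clim, model_clim)  # now in model units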

  1. Bias properties of extragalactic distance indicators. 3: Analysis of Tully-Fisher distances for the Mathewson-Ford-Buchhorn sample of 1355 galaxies

    Science.gov (United States)

    Federspiel, Martin; Sandage, Allan; Tammann, G. A.

    1994-01-01

    The observational selection bias properties of the large Mathewson-Ford-Buchhorn (MFB) sample of galaxies are demonstrated by showing that the apparent Hubble constant incorrectly increases outward when determined using Tully-Fisher (TF) photometric distances that are uncorrected for bias. It is further shown that the value of H_0 so determined is also multivalued at a given redshift when it is calculated by the TF method using galaxies with different line widths. The method of removing this unphysical contradiction is developed following the model of the bias set out in Paper II. The model developed further here shows that the appropriate TF magnitude of a galaxy that is drawn from a flux-limited catalog not only is a function of line width but, even in the most idealistic cases, requires a triple-entry correction depending on line width, apparent magnitude, and catalog limit. Using the distance-limited subset of the data, it is shown that the mean intrinsic dispersion of a bias-free TF relation is high. The dispersion depends on line width, decreasing from sigma(M) = 0.7 mag for galaxies with rotational velocities less than 100 km s^-1 to sigma(M) = 0.4 mag for galaxies with rotational velocities greater than 250 km s^-1. These dispersions are so large that the random errors of the bias-free TF distances are too gross to detect any peculiar motions of individual galaxies, but taken together the data show again the offset of 500 km s^-1 found both by Dressler & Faber and by MFB for galaxies in the direction of the putative Great Attractor but described now in a different way. The maximum amplitude of the bulk streaming motion at the Local Group is approximately 500 km s^-1 but the perturbation dies out, approaching the Machian frame defined by the CMB at a distance of approximately 80 Mpc (v is approximately 4000 km s^-1). This decay to zero perturbation at v is approximately 4000 km s^-1 argues against existing models with a single

  2. Evaluation of Sampling Methods for Bacillus Spore ...

    Science.gov (United States)

    Following a wide area release of biological materials, mapping the extent of contamination is essential for orderly response and decontamination operations. HVAC filters process large volumes of air and therefore collect highly representative particulate samples in buildings. HVAC filter extraction may have great utility in rapidly estimating the extent of building contamination following a large-scale incident. However, until now, no studies have been conducted comparing the two most appropriate sampling approaches for HVAC filter materials: direct extraction and vacuum-based sampling.

  3. Novel temperature modeling and compensation method for bias of ring laser gyroscope based on least-squares support vector machine

    Institute of Scientific and Technical Information of China (English)

    Xudong Yu; Yu Wang; Guo Wei; Pengfei Zhang; Xingwu Long

    2011-01-01

    Bias of ring-laser-gyroscope (RLG) changes with temperature in a nonlinear way. This is an important restraining factor for improving the accuracy of RLG. Considering the limitations of least-squares regression and neural networks, we propose a new method of temperature compensation of RLG bias, building a function regression model using least-squares support vector machine (LS-SVM). Static and dynamic temperature experiments of RLG bias are carried out to validate the effectiveness of the proposed method. Moreover, the traditional least-squares regression method is compared with the LS-SVM-based method. The results show the maximum error of RLG bias drops by almost two orders of magnitude after static temperature compensation, while bias stability of RLG improves by one order of magnitude after dynamic temperature compensation. Thus, the proposed method reduces the influence of temperature variation on the bias of the RLG effectively and improves the accuracy of the gyroscope considerably.
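
    Least-squares SVMs are not part of the common Python toolkits, but the shape of the regression model can be sketched with a standard kernel SVR as a stand-in (synthetic temperature and bias data; not the authors' LS-SVM code):

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(2)
        temp = rng.uniform(-10, 50, 300).reshape(-1, 1)          # degrees Celsius
        drift = 0.002 * temp.ravel() ** 2 - 0.05 * temp.ravel()  # assumed nonlinear bias-temperature law
        bias = drift + rng.normal(0, 0.05, 300)                  # plus measurement noise

        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(temp, bias)
        compensated = bias - model.predict(temp)   # subtract the fitted temperature term
        print(bias.std(), compensated.std())       # residual scatter should shrink markedly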

  4. Experimental determination of isotope enrichment factors – bias from mass removal by repetitive sampling

    DEFF Research Database (Denmark)

    Buchner, Daniel; Jin, Biao; Ebert, Karin

    2017-01-01

    Application of compound-specific stable isotope approaches often involves comparisons of isotope enrichment factors (ε). Experimental determination of ε-values is based on the Rayleigh equation, which relates the change in measured isotope ratios to the decreasing substrate fractions and is valid...... to account for mass removal and for volatilization into the headspace. In this study we use both synthetic and experimental data to demonstrate that the determination of ε-values according to current correction methods is prone to considerable systematic errors even in well-designed experimental setups....... In response, we present novel, adequate methods to eliminate systematic errors in data evaluation. A model-based sensitivity analysis serves to reveal the most crucial experimental parameters and can be used for future experimental design to obtain correct ε-values allowing mechanistic interpretations....

  5. Adaptive Sample Bias for Rapidly-exploring Random Trees with Applications to Test Generation

    Science.gov (United States)

    2005-06-01

  6. A venue-based method for sampling hard-to-reach populations.

    Science.gov (United States)

    Muhib, F B; Lin, L S; Stueve, A; Miller, R L; Ford, W L; Johnson, W D; Smith, P J

    2001-01-01

    Constructing scientifically sound samples of hard-to-reach populations, also known as hidden populations, is a challenge for many research projects. Traditional sample survey methods, such as random sampling from telephone or mailing lists, can yield low numbers of eligible respondents while non-probability sampling introduces unknown biases. The authors describe a venue-based application of time-space sampling (TSS) that addresses the challenges of accessing hard-to-reach populations. The method entails identifying days and times when the target population gathers at specific venues, constructing a sampling frame of venue, day-time units (VDTs), randomly selecting and visiting VDTs (the primary sampling units), and systematically intercepting and collecting information from consenting members of the target population. This allows researchers to construct a sample with known properties, make statistical inference to the larger population of venue visitors, and theorize about the introduction of biases that may limit generalization of results to the target population. The authors describe their use of TSS in the ongoing Community Intervention Trial for Youth (CITY) project to generate a systematic sample of young men who have sex with men. The project is an ongoing community level HIV prevention intervention trial funded by the Centers for Disease Control and Prevention. The TSS method is reproducible and can be adapted to hard-to-reach populations in other situations, environments, and cultures.
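
    The frame construction at the core of the method is simple to sketch (hypothetical venues, days, and time windows):

        import random

        random.seed(42)
        venues = ["club_A", "bar_B", "park_C", "gym_D"]
        days = ["Fri", "Sat", "Sun"]
        times = ["18-20", "20-22", "22-24"]

        # sampling frame of venue-day-time (VDT) units where the population gathers
        vdt_frame = [(v, d, t) for v in venues for d in days for t in times]

        # randomly select VDTs as primary sampling units for field visits;
        # staff then systematically intercept consenting attendees at each unit
        selected = random.sample(vdt_frame, k=6)
        print(selected)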

  7. Active Search on Carcasses versus Pitfall Traps: a Comparison of Sampling Methods.

    Science.gov (United States)

    Zanetti, N I; Camina, R; Visciarelli, E C; Centeno, N D

    2016-04-01

    The study of insect succession in cadavers and the classification of arthropods have mostly been done by placing a carcass in a cage, protected from vertebrate scavengers, which is then visited periodically. An alternative is to use specific traps. Few studies on carrion ecology and forensic entomology involving the carcasses of large vertebrates have employed pitfall traps. The aims of this study were to compare both sampling methods (active search on a carcass and pitfall trapping) for each coleopteran family, and to establish whether there is a discrepancy (underestimation and/or overestimation) in the presence of each family by either method. A great discrepancy was found for almost all families, with some of them being more abundant in samples obtained through active search on carcasses and others in samples from traps, whereas two families did not show any bias towards a given sampling method. The fact that families may be underestimated or overestimated by the type of sampling technique highlights the importance of combining both methods, active search on carcasses and pitfall traps, in order to obtain more complete information on decomposition, carrion habitat and cadaveric families or species. Furthermore, a hypothesis is advanced on the reasons why either sampling method shows biases towards certain families. Information about the sampling techniques, indicating which would be more appropriate to detect a particular family, is provided.

  8. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey.
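
    The point estimate is N = M / P, and a delta-method interval makes the sample-size trade-off concrete. A sketch with hypothetical numbers (the Harare inputs are not reproduced here):

        import math

        M = 5000        # unique objects distributed (assumed)
        p_hat = 0.25    # proportion reporting receipt in the RDS survey (assumed)
        n = 400         # RDS sample size (assumed)
        deff = 2.0      # assumed RDS design effect

        N_hat = M / p_hat
        var_p = deff * p_hat * (1 - p_hat) / n       # design-adjusted variance of P
        se_N = math.sqrt(M**2 / p_hat**4 * var_p)    # delta method: Var(M/P) ~ (M^2/P^4) Var(P)
        lo, hi = N_hat - 1.96 * se_N, N_hat + 1.96 * se_N
        print(f"N = {N_hat:.0f}, 95% CI ({lo:.0f}, {hi:.0f})")  # interval widens as p_hat falls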

  9. Systematic errors in detecting biased agonism: Analysis of current methods and development of a new model-free approach

    Science.gov (United States)

    Onaran, H. Ongun; Ambrosio, Caterina; Uğur, Özlem; Madaras Koncz, Erzsebet; Grò, Maria Cristina; Vezzi, Vanessa; Rajagopal, Sudarshan; Costa, Tommaso

    2017-01-01

    Discovering biased agonists requires a method that can reliably distinguish the bias in signalling due to unbalanced activation of diverse transduction proteins from that of differential amplification inherent to the system being studied, which invariably results from the non-linear nature of biological signalling networks and their measurement. We have systematically compared the performance of seven methods of bias diagnostics, all of which are based on the analysis of concentration-response curves of ligands according to classical receptor theory. We computed bias factors for a number of β-adrenergic agonists by comparing BRET assays of receptor-transducer interactions with Gs, Gi and arrestin. Using the same ligands, we also compared responses at signalling steps originated from the same receptor-transducer interaction, among which no biased efficacy is theoretically possible. In either case, we found a high level of false positive results and a general lack of correlation among methods. Altogether this analysis shows that all tested methods, including some of the most widely used in the literature, fail to distinguish true ligand bias from “system bias” with confidence. We also propose two novel semi-quantitative methods of bias diagnostics that appear to be more robust and reliable than currently available strategies. PMID:28290478

  10. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    Science.gov (United States)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
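
    The stabilizing effect that motivates these biased estimators is easy to demonstrate on a near-singular design. A minimal sketch comparing ordinary least squares with a ridge estimator (synthetic data; the penalty lambda = 1 is arbitrary):

        import numpy as np

        rng = np.random.default_rng(8)
        n, p = 50, 4
        X = rng.normal(size=(n, p))
        X[:, 3] = X[:, 2] + rng.normal(0, 1e-3, n)   # near-singular normal equations
        beta = np.array([1.0, -2.0, 0.5, 0.5])
        y = X @ beta + rng.normal(0, 0.1, n)

        ols = np.linalg.lstsq(X, y, rcond=None)[0]
        lam = 1.0
        ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
        print(ols)    # wild, offsetting coefficients on the collinear pair
        print(ridge)  # shrunken, stabler estimates at the cost of some bias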

  11. Estimation of bias errors in measured airplane responses using maximum likelihood method

    Science.gov (United States)

    Klein, Vladiaslav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.
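
    For a single channel with Gaussian noise, the maximum likelihood estimate of an additive bias reduces to ordinary least squares against the kinematic reconstruction. A toy sketch under these assumptions (one state variable standing in for the six-degrees-of-freedom model; hypothetical data):

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(7)
        reconstructed = np.sin(np.linspace(0, 10, 500))            # model-predicted response
        measured = reconstructed + 0.3 + rng.normal(0, 0.05, 500)  # constant bias + noise

        # Gaussian ML for an additive bias b: minimize the residual sum of squares
        nll = lambda b: np.sum((measured - reconstructed - b) ** 2)
        b_hat = minimize_scalar(nll).x
        print(b_hat)   # close to the true bias of 0.3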

  12. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses generating three dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing methods are proposed and investigated in detail....

  13. Worry or craving? A selective review of evidence for food-related attention biases in obese individuals, eating-disorder patients, restrained eaters and healthy samples.

    Science.gov (United States)

    Werthmann, Jessica; Jansen, Anita; Roefs, Anne

    2015-05-01

    Living in an 'obesogenic' environment poses a serious challenge for weight maintenance. However, many people are able to maintain a healthy weight indicating that not everybody is equally susceptible to the temptations of this food environment. The way in which someone perceives and reacts to food cues, that is, cognitive processes, could underlie differences in susceptibility. An attention bias for food could be such a cognitive factor that contributes to overeating. However, an attention bias for food has also been implicated with restrained eating and eating-disorder symptomatology. The primary aim of the present review was to determine whether an attention bias for food is specifically related to obesity while also reviewing evidence for attention biases in eating-disorder patients, restrained eaters and healthy-weight individuals. Another aim was to systematically examine how selective attention for food relates (causally) to eating behaviour. Current empirical evidence on attention bias for food within obese samples, eating-disorder patients, and, even though to a lesser extent, in restrained eaters is contradictory. However, present experimental studies provide relatively consistent evidence that an attention bias for food contributes to subsequent food intake. This review highlights the need to distinguish not only between different (temporal) attention bias components, but also to take different motivations (craving v. worry) and their impact on attentional processing into account. Overall, the current state of research suggests that biased attention could be one important cognitive mechanism by which the food environment tempts us into overeating.

  14. Cluster Sampling Bias in Government-Sponsored Evaluations: A Correlational Study of Employment and Welfare Pilots in England.

    Science.gov (United States)

    Vaganay, Arnaud

    2016-01-01

    For pilot or experimental employment programme results to apply beyond their test bed, researchers must select 'clusters' (i.e. the job centres delivering the new intervention) that are reasonably representative of the whole territory. More specifically, this requirement must account for conditions that could artificially inflate the effect of a programme, such as the fluidity of the local labour market or the performance of the local job centre. Failure to achieve representativeness results in Cluster Sampling Bias (CSB). This paper makes three contributions to the literature. Theoretically, it approaches the notion of CSB as a human behaviour. It offers a comprehensive theory, whereby researchers with limited resources and conflicting priorities tend to oversample 'effect-enhancing' clusters when piloting a new intervention. Methodologically, it advocates for a 'narrow and deep' scope, as opposed to the 'wide and shallow' scope, which has prevailed so far. The PILOT-2 dataset was developed to test this idea. Empirically, it provides evidence on the prevalence of CSB. In conditions similar to the PILOT-2 case study, investigators (1) do not sample clusters with a view to maximise generalisability; (2) do not oversample 'effect-enhancing' clusters; (3) consistently oversample some clusters, including those with higher-than-average client caseloads; and (4) report their sampling decisions in an inconsistent and generally poor manner. In conclusion, although CSB is prevalent, it is still unclear whether it is intentional and meant to mislead stakeholders about the expected effect of the intervention or due to higher-level constraints or other considerations.

  15. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    Science.gov (United States)

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association between antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period where the actual data collection was carried out. In the cross-sectional study, samples were taken from 681 Danish pig farms during five weeks from February to March 2015. The evaluation showed that the sampling

  16. Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris

    Science.gov (United States)

    Michael S. Williams; Jeffrey H. Gove

    2003-01-01

    Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...

  17. [The importance of memory bias in obtaining age of menarche by recall method in Brazilian adolescents].

    Science.gov (United States)

    Castilho, Silvia Diez; Nucci, Luciana Bertoldi; Assuino, Samanta Ramos; Hansen, Lucca Ortolan

    2014-06-01

    To compare the age at menarche obtained by the recall method according to the time elapsed since the event, in order to verify the importance of recall bias. A total of 1,671 girls (7-18 years) at schools in Campinas-SP were evaluated regarding the occurrence of menarche by the status quo method (menarche: yes or no) and the recall method (date of menarche, for those who mentioned it). The age at menarche obtained by the status quo method was calculated by logit, which considers the whole group, and the age obtained by the recall method was calculated as the average of the mentioned ages at menarche. In this group, the age at menarche was obtained as the difference between the date of the event and the date of birth. Girls who reported menarche (883, 52.8%) were divided into four groups according to the time elapsed since the event. To analyze the results, we used ANOVA and logistic regression, with a significance level of 0.05. The age at menarche calculated by logit was 12.14 y/o (95% CI 12.08 to 12.20). Mean ages obtained by recall were: for those who experienced menarche within the previous year, 12.26 y/o (±1.14); between >1-2 years before, 12.29 y/o (±1.22); between >2-3 years before, 12.23 y/o (±1.27); and more than 3 years before, 11.55 y/o (±1.24); this last difference was statistically significant. The age obtained by the recall method was similar for girls who menstruated within the previous 3 years (and approaches the age calculated by logit); when more than 3 years have passed, the recall bias was significant.

  18. Assessing the potential for racial bias in hair analysis for cocaine: examining the relative risk of positive outcomes when comparing urine samples to hair samples.

    Science.gov (United States)

    Mieczkowski, Tom

    2011-03-20

    This article examines the conjecture that hair analysis, performed to detect cocaine use or exposure, is biased against African Americans. It does so by comparing the outcomes of 33,928 hair and 105,792 urine samples collected from both African American and white subjects. In making this comparison the analysis seeks to determine if there is a departure in rates of positive and negative outcomes when comparing the results of hair analysis for cocaine to the results from urinalysis for cocaine by racial group. It treats urine as an unbiased test. It compares both the relative ratios of positive outcomes when comparing the two groups and it calculates the relative risk of outcomes for each group for having positive or negative outcomes. The findings show that the ratios of each racial group are effectively same for hair and urine assays, and they also show that the relative risk and risk estimates for positive and negative outcomes are the same for both racial groups. Considering all samples, the cocaine positive risk estimate for the hair samples comparing the two racial groups is 3.28 and for urinalysis the risk estimate is 3.10 (Breslow-Day χ(2) .250, 1 df, p = 0.617) a non-significant difference in risk. For pre-employment samples, the cocaine positive risk estimate for the hair samples comparing the two racial groups is 3.10 and for urinalysis the risk estimate is 2.90 (Breslow-Day χ(2) .281, df = 1, p = 0.595), also a non-significant difference in risk.
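
    The risk-ratio comparison and the Breslow-Day homogeneity test can be reproduced on hypothetical 2x2 counts with statsmodels (illustrative numbers only, chosen to mimic risk ratios near 3; not the study data):

        import numpy as np
        from statsmodels.stats.contingency_tables import StratifiedTable, Table2x2

        # rows: African American / white; columns: positive / negative (hypothetical)
        hair = np.array([[320, 1680], [500, 9500]])
        urine = np.array([[480, 5520], [900, 34100]])

        for name, tbl in (("hair", hair), ("urine", urine)):
            print(name, "risk ratio:", round(Table2x2(tbl).riskratio, 2))

        # Breslow-Day: are the odds ratios homogeneous across the two assay strata?
        result = StratifiedTable([hair, urine]).test_equal_odds(adjust=False)
        print("Breslow-Day chi2:", result.statistic, "p:", result.pvalue)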

  19. A Comparison of EPI Sampling, Probability Sampling, and Compact Segment Sampling Methods for Micro and Small Enterprises.

    Science.gov (United States)

    Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B; Pereira, Nuno Sousa; Behrman, Jere

    2012-05-01

    Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization's Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested, the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples' statistical properties.

  20. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling.

    Science.gov (United States)

    Gardi, J E; Nyengaard, J R; Gundersen, H J G

    2008-03-01

    The proportionator is a novel and radically different approach to sampling with microscopes based on the well-known statistical theory of probability proportional to size (PPS) sampling. It uses automatic image analysis, with a large range of options, to assign to every field of view in the section a weight proportional to some characteristic of the structure under study. A typical and very simple example, examined here, is the amount of color characteristic for the structure, marked with a stain with known properties. The color may be specific or not. In the recorded list of weights in all fields, the desired number of fields is sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation being positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noises in possibly realistic ranges. In all cases examined, the proportionator is 2-15-fold more efficient than the common systematic, uniformly random sampling. The simulations also indicate that the lack of a simple predictor of the coefficient of error (CE) due to field-to-field variation is a more severe problem for uniform sampling strategies than anticipated. Because of its entirely different sampling strategy, based on known but non-uniform sampling probabilities, the proportionator for the first time allows the real CE at the section level to
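
    The PPS draw and the resulting unbiased estimate are simple to sketch; shown below is the with-replacement (Hansen-Hurwitz) form, with hypothetical image-analysis weights and counts:

        import numpy as np

        rng = np.random.default_rng(3)
        weights = rng.gamma(2.0, 1.0, 500)        # image-analysis weight per field of view
        true_counts = rng.poisson(weights * 4)    # structure roughly tracks the weight
        p = weights / weights.sum()               # selection probability per draw

        n = 20
        fields = rng.choice(500, size=n, replace=True, p=p)
        # Hansen-Hurwitz estimator: each count is inflated by its selection probability
        estimate = np.mean(true_counts[fields] / p[fields])
        print(estimate, true_counts.sum())        # unbiased for the total over all fields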

  1. A method for sampling waste corn

    Science.gov (United States)

    Frederick, R.B.; Klaas, E.E.; Baldassarre, G.A.; Reinecke, K.J.

    1984-01-01

    Corn has become one of the most important wildlife foods in the United States. It is eaten by a wide variety of animals, including white-tailed deer (Odocoileus virginianus), raccoon (Procyon lotor), ring-necked pheasant (Phasianus colchicus), wild turkey (Meleagris gallopavo), and many species of aquatic birds. Damage to unharvested crops has been documented, but many birds and mammals eat waste grain after harvest and do not conflict with agriculture. A good method for measuring waste-corn availability can be essential to studies concerning food density and food and feeding habits of field-feeding wildlife. Previous methods were developed primarily for approximating losses due to harvest machinery. In this paper, a method is described for estimating the amount of waste corn potentially available to wildlife. Detection of temporal changes in food availability and differences caused by agricultural operations (e.g., recently harvested stubble fields vs. plowed fields) are discussed.

  2. Gene sampling can bias multi-gene phylogenetic inferences: the relationship between red algae and green plants as a case study.

    Science.gov (United States)

    Inagaki, Yuji; Nakajima, Yoshihiro; Sato, Mitsuhisa; Sakaguchi, Miako; Hashimoto, Tetsuo

    2009-05-01

    The monophyly of Plantae including glaucophytes, red algae, and green plants (green algae plus land plants) has been recovered in recent phylogenetic analyses of large multi-gene data sets (e.g., those including >30,000 amino acid [aa] positions). On the other hand, Plantae monophyly has not been stably reconstructed in inferences from multi-gene data sets with fewer than 10,000 aa positions. An analysis of 5,216 aa positions in Nozaki et al. (Nozaki H, Iseki M, Hasegawa M, Misawa K, Nakada T, Sasaki N, Watanabe M. 2007. Phylogeny of primary photosynthetic eukaryotes as deduced from slowly evolving nuclear genes. Mol Biol Evol. 24:1592-1595.) strongly rejected the monophyly of Plantae, whereas Hackett et al. (Hackett JD, Yoon HS, Li S, Reyes-Prieto A, Rummele SE, Bhattacharya D. 2007. Phylogenomic analysis supports the monophyly of cryptophytes and haptophytes and the association of rhizaria with chromalveolates. Mol Biol Evol. 24:1702-1713.) robustly recovered the Plantae clade in an analysis of 6,735 aa positions. We suspected that the significant incongruity observed between the two studies was attributable to a bias generally overlooked in multi-gene phylogenetic estimation, rather than data size, taxon sampling, or methods for tree reconstruction. Although glaucophytes were excluded from our analyses due to a shortage of sequence data, we found that the recovery of a sister-group relationship between red algae and green plants primarily depends on gene sampling in phylogenetic inferences from <10,000 aa positions. Phylogenetic analyses of data sets with fewer than 10,000 aa positions, which can be prepared without large-scale sequencing (e.g., expressed sequence tag analyses), are practical in challenging various unresolved issues in eukaryotic evolution. However, our results indicate that severe biases can arise from gene sampling in multi-gene inferences from <10,000 aa positions. We also address the validity of fast-evolving gene exclusion in multi

  3. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using M"plus"

    Science.gov (United States)

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  4. 19 CFR 151.70 - Method of sampling by Customs.

    Science.gov (United States)

    2010-04-01

    19 CFR 151.70 (Customs Duties; U.S. Customs and Border Protection, Department of Homeland Security): Method of sampling by Customs. A general sample shall be taken from each sampling unit, unless it is...

  5. A NEW METHOD FOR INCREASING PRECISION IN SURVEY SAMPLING

    Institute of Scientific and Technical Information of China (English)

    冯士雍; 邹国华

    2001-01-01

    This paper proposes a new method for increasing the precision in survey sampling, i.e., a method combining sampling with prediction. The two cases where auxiliary information is or is not available are considered. A numerical example is given.

  6. A Comparison of EPI Sampling, Probability Sampling, and Compact Segment Sampling Methods for Micro and Small Enterprises

    Science.gov (United States)

    Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere

    2011-01-01

    Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested, the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004

  7. Long-Time Convergence of an Adaptive Biasing Force Method: The Bi-Channel Case

    Science.gov (United States)

    Lelièvre, T.; Minoukadeh, K.

    2011-10-01

    We present convergence results for an adaptive algorithm to compute free energies, namely the adaptive biasing force (ABF) method (Darve and Pohorille in J Chem Phys 115(20):9169-9183, 2001; Hénin and Chipot in J Chem Phys 121:2904, 2004). The free energy is the effective potential associated with a so-called reaction coordinate ξ(q), where q = (q_1, …, q_{3N}) is the position vector of an N-particle system. Computing free energy differences remains an important challenge in molecular dynamics due to the presence of metastable regions in the potential energy surface. The ABF method uses an on-the-fly estimate of the free energy to bias dynamics and overcome metastability. Using entropy arguments and logarithmic Sobolev inequalities, previous results have shown that the rate of convergence of the ABF method is limited by the metastable features of the canonical measures conditioned to being at fixed values of ξ (Lelièvre et al. in Nonlinearity 21(6):1155-1181, 2008). In this paper, we present an improvement on the existing results in the presence of such metastabilities, which is a generic case encountered in practice. More precisely, we study the so-called bi-channel case, where two channels along the reaction coordinate direction exist between an initial and final state, the channels being separated from each other by a region of very low probability. With hypotheses made on 'channel-dependent' conditional measures, we show on a bi-channel model, which we introduce, that the convergence of the ABF method is, in fact, not limited by metastabilities in directions orthogonal to ξ under two crucial assumptions: (i) exchange between the two channels is possible for some values of ξ and (ii) the free energy is a good bias in each channel. This theoretical result supports recent numerical experiments (Minoukadeh et al. in J Chem Theory Comput 6:1008-1017, 2010), where the efficiency of the ABF approach is demonstrated for such a multiple-channel situation.
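
    In the standard notation of these papers, the free energy along the reaction coordinate and the mean force that ABF estimates on the fly can be written as (a textbook formulation, not specific to the bi-channel analysis):

        A(z) = -\beta^{-1} \ln \int \delta(\xi(q) - z)\, e^{-\beta V(q)}\, \mathrm{d}q,
        \qquad
        A'(z) = \mathbb{E}\left[ f_\xi(q) \mid \xi(q) = z \right],

    where f_\xi is the local mean force along ξ. The ABF dynamics adds the biasing force -F_t(ξ(q)), with F_t the running estimate of A'; once F_t has converged, the biased dynamics sees an essentially flat free energy profile along ξ and metastability in that direction is removed.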

  8. Sampling Males Who Inject Drugs in Haiphong, Vietnam: Comparison of Time-Location and Respondent-Driven Sampling Methods.

    Science.gov (United States)

    Tran, Hoang Vu; Le, Linh-Vi N; Johnston, Lisa Grazina; Nadol, Patrick; Van Do, Anh; Tran, Ha Thi Thanh; Nguyen, Tuan Anh

    2015-08-01

    Accurate measurements of HIV prevalence and associated risk factors among hidden and high-risk groups are vital for program planning and implementation. However, only two sampling methods are purported to provide representative estimates for populations without sampling frames: time-location sampling (TLS) and respondent-driven sampling (RDS). Each method is subject to potential biases and questionable reliability. In this paper, we evaluate surveys designed to estimate HIV prevalence and associated risk factors among people who inject drugs (PWID) sampled through TLS versus RDS. In 2012, males aged ≥16 years who reported injecting drugs in the previous month and living in Haiphong, Vietnam, were sampled using TLS or RDS. Data from each survey were analyzed to compare HIV prevalence, related risk factors, socio-demographic characteristics, refusal estimates, and time and expenditures for field implementation. TLS (n = 432) and RDS (n = 415) produced similarly high estimates for HIV prevalence. Significantly lower proportions of PWID sampled through RDS received methadone treatment or met an outreach worker. Refusal estimates were lower for TLS than for RDS. Total expenditures per sample collected and number of person-days of staff effort were higher for TLS than for RDS. Both survey methods were successful in recruiting a diverse sample of PWID in Haiphong. In Vietnam, surveys of PWID are conducted throughout the country; although the refusal estimate was calculated to be much higher for RDS than TLS, RDS in Haiphong appeared to sample PWID with less exposure to services and required fewer financial and staff resources compared with TLS.

  9. Comparison of two bias correction methods for precipitation simulated with a regional climate model

    Science.gov (United States)

    Tschöke, Gabriele Vanessa; Kruk, Nadiane Smaha; de Queiroz, Paulo Ivo Braga; Chou, Sin Chan; de Sousa Junior, Wilson Cabral

    2017-02-01

    This study evaluates the performance of two bias correction techniques—power transformation and gamma distribution adjustment—for Eta regional climate model (RCM) precipitation simulations. For the gamma distribution adjustment, the number of dry days is not taken as a fixed parameter; rather, we propose a new methodology for handling dry days. We consider two cases: the first case is defined as having a greater number of simulated dry days than the observed number, and the second case is defined as the opposite. The present climate period was divided into calibration and validation sets. We evaluate the results of the two bias correction techniques using the Kolmogorov-Smirnov nonparametric test and the sum of the differences between the cumulative distribution curves. These tests show that both correction techniques were effective in reducing errors and consequently improving the reliability of the simulations. However, the gamma distribution correction method proved to be more efficient, particularly in reducing the error in the number of dry days.
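
    The gamma-distribution adjustment amounts to quantile mapping between two fitted gamma CDFs, with dry days handled separately as the paper describes. A minimal sketch of the wet-day correction on synthetic data (the dry-day bookkeeping is omitted):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        obs = rng.gamma(2.0, 4.0, 3000)   # observed wet-day precipitation (mm), synthetic
        sim = rng.gamma(1.5, 7.0, 3000)   # RCM wet-day precipitation, synthetically biased

        # fit gamma distributions over the calibration period (location fixed at zero)
        a_obs, _, s_obs = stats.gamma.fit(obs, floc=0)
        a_sim, _, s_sim = stats.gamma.fit(sim, floc=0)

        def correct(x):
            """Quantile mapping: simulated value -> same quantile of the observed fit."""
            u = stats.gamma.cdf(x, a_sim, scale=s_sim)
            return stats.gamma.ppf(u, a_obs, scale=s_obs)

        print(sim.mean(), correct(sim).mean(), obs.mean())  # corrected mean ~ observed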

  10. Evaluation of Sampling Methods and Development of Sample Plans for Estimating Predator Densities in Cotton

    Science.gov (United States)

    The cost-reliability of five sampling methods (visual search, drop cloth, beat bucket, shake bucket and sweep net) was determined for predatory arthropods on cotton plants. The beat bucket sample method was the most cost-reliable while the visual sample method was the least cost-reliable. The beat ...

  11. RCM skill assessment applying precipitation, temperature and hydrological performance measures: comparing different RCM resolutions and bias correction methods

    Science.gov (United States)

    Pasten-Zapata, Ernesto; Jones, Julie; Moggridge, Helen; Widmann, Martin

    2017-04-01

    Global Climate Models (GCMs) are the main tool to assess future changes in climate and their impacts. Due to their coarse resolution, GCMs fail to accurately simulate observed climate variables at the catchment scale. Therefore, climate researchers have focused on increasing model resolution by nesting Regional Climate Models (RCMs) into the GCMs for regional areas, a process known as dynamical downscaling. Commonly, RCMs also have simulation biases at the catchment scale and therefore statistical techniques, known as bias correction methods, are used to reduce such biases. In this project the skill to simulate precipitation and temperature from five reanalysis-driven Euro-CORDEX RCMs is evaluated. Furthermore, RCM precipitation and temperature outputs are coupled with a hydrological model (the HEC-HMS model) to simulate river flow at the catchment scale. Precipitation, temperature and hydrological biases are assessed using a range of metrics combining mean, extremes, time series and distribution measures. In order to evaluate the dynamical downscaling effect, the RCMs are analyzed at two resolutions: 0.44° and 0.11°. Additionally, both resolutions are bias-corrected employing the parametric quantile-mapping method: a) temperature is bias-corrected using the normal distribution, and b) precipitation is bias-corrected using the gamma and double-gamma distributions. Four catchments across England and Wales covering different climate conditions and topographical characteristics are used as study sites. The results from this study provide an overview of the skill of current state-of-the-art RCMs and their suitability for hydrological impact analysis at the catchment scale. Furthermore, for precipitation the study analyses the performance of the commonly-used gamma distribution quantile-mapping bias-correction method, comparing it to the double-gamma distribution method and considering their implications for the simulation of hydrological impacts.

  12. Volunteer bias in recruitment, retention, and blood sample donation in a randomised controlled trial involving mothers and their children at six months and two years: a longitudinal analysis.

    Directory of Open Access Journals (Sweden)

    Sue Jordan

    BACKGROUND: The vulnerability of clinical trials to volunteer bias is under-reported. Volunteer bias is systematic error due to differences between those who choose to participate in studies and those who do not. METHODS AND RESULTS: This paper extends the applications of the concept of volunteer bias by using data from a trial of probiotic supplementation for childhood atopy in healthy dyads to explore 1) differences between a) trial participants and aggregated data from publicly available databases, b) participants and non-participants as the trial progressed, and 2) the impact on trial findings of weighting data according to deprivation (Townsend) fifths in the sample and target populations. 1) a) Recruits (n = 454) were less deprived than the target population, matched for area of residence and delivery dates (n = 6,893) (mean [SD] deprivation scores 0.09[4.21] and 0.79[4.08], t = 3.44, df = 511, p<0.001). b) i) As the trial progressed, representation of the most deprived decreased. These participants and smokers were less likely to be retained at 6 months (n = 430 [95%]; OR 0.29, 0.13-0.67 and 0.20, 0.09-0.46) and 2 years (n = 380 [84%]; aOR 0.68, 0.50-0.93 and 0.55, 0.28-1.09), and to consent to infant blood sample donation (n = 220 [48%]; aOR 0.72, 0.57-0.92 and 0.43, 0.22-0.83). ii) Mothers interested in probiotics or research or reporting infants' adverse events or rashes were more likely to attend research clinics and consent to skin-prick testing. Mothers participating to help children were more likely to consent to infant blood sample donation. 2) In one trial outcome, atopic eczema, the intervention had a positive effect only in the over-represented, least deprived group. Here, data weighting attenuated risk reduction from 6.9% (0.9-13.1%) to 4.6% (-1.4-+10.5%), and OR from 0.40 (0.18-0.91) to 0.56 (0.26-1.21). Other findings were unchanged. CONCLUSIONS: Potential for volunteer bias intensified during the trial, due to non-participation of the most

  13. Comparison of blood chemistry values for samples collected from juvenile chinook salmon by three methods

    Science.gov (United States)

    Congleton, J.L.; LaVoie, W.J.

    2001-01-01

    Thirteen blood chemistry indices were compared for samples collected by three commonly used methods: caudal transection, heart puncture, and caudal vessel puncture. Apparent biases in blood chemistry values for samples obtained by caudal transection were consistent with dilution with tissue fluids: alanine aminotransferase (ALT), aspartate aminotransferase (AST), lactate dehydrogenase (LDH), creatine kinase (CK), triglyceride, and K+ were increased and Na+ and Cl- were decreased relative to values for samples obtained by caudal vessel puncture. Some enzyme activities (ALT, AST, LDH) and K+ concentrations were also greater in samples taken by heart puncture than in samples taken by caudal vessel puncture. Of the methods tested, caudal vessel puncture had the least effect on blood chemistry values and should be preferred for blood chemistry studies on juvenile salmonids.

  14. Selection biases in empirical p(z) methods for weak lensing

    CERN Document Server

    Gruen, Daniel

    2016-01-01

    To measure the mass of foreground objects with weak gravitational lensing, one needs to estimate the redshift distribution of lensed background sources. This is commonly done in an empirical fashion, i.e. with a reference sample of galaxies of known spectroscopic redshift, matched to the source population. In this work, we develop a simple decision tree framework that, under the ideal conditions of a large, purely magnitude-limited reference sample, allows an unbiased recovery of the source redshift probability density function p(z), as a function of magnitude and color. We use this framework to quantify biases in empirically estimated p(z) caused by selection effects present in realistic reference and weak lensing source catalogs, namely (1) complex selection of reference objects by the targeting strategy and success rate of existing spectroscopic surveys and (2) selection of background sources by the success of object detection and shape measurement at low signal-to-noise. For intermediate-to-high redshift ...
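
    In the ideal magnitude-limited case the estimator reduces to stacking per-cell redshift histograms of the reference sample, weighted by where the lensing sources fall in color-magnitude space. A simplified sketch with fixed bins (the paper's decision tree splits cells adaptively; all columns here are synthetic):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(5)
        ref = pd.DataFrame({"mag": rng.uniform(20, 24, 50_000),     # has spectroscopic z
                            "color": rng.uniform(-0.5, 2.0, 50_000),
                            "z": rng.gamma(3.0, 0.25, 50_000)})
        src = pd.DataFrame({"mag": rng.uniform(21, 24, 20_000),     # lensing sources, no z
                            "color": rng.uniform(0.0, 2.0, 20_000)})

        bins_m = np.linspace(20, 24, 9)
        bins_c = np.linspace(-0.5, 2.0, 11)
        cell = lambda df: (pd.cut(df.mag, bins_m).astype(str) + "|" +
                           pd.cut(df.color, bins_c).astype(str))

        src_weight = cell(src).value_counts(normalize=True)   # source density per cell
        ref_cells = cell(ref)
        zgrid = np.linspace(0, 3, 61)
        pz = np.zeros(len(zgrid) - 1)
        for c, w in src_weight.items():
            z_in_cell = ref.z[ref_cells == c]
            if len(z_in_cell):
                h, _ = np.histogram(z_in_cell, bins=zgrid, density=True)
                pz += w * h                                   # stack weighted histograms
        pz /= pz.sum() * (zgrid[1] - zgrid[0])                # renormalized p(z) estimate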

  15. Galaxy Zoo: comparing the demographics of spiral arm number and a new method for correcting redshift bias

    Science.gov (United States)

    Hart, Ross E.; Bamford, Steven P.; Willett, Kyle W.; Masters, Karen L.; Cardamone, Carolin; Lintott, Chris J.; Mackay, Robert J.; Nichol, Robert C.; Rosslowe, Christopher K.; Simmons, Brooke D.; Smethurst, Rebecca J.

    2016-10-01

    The majority of galaxies in the local Universe exhibit spiral structure with a variety of forms. Many galaxies possess two prominent spiral arms, some have more, while others display a many-armed flocculent appearance. Spiral arms are associated with enhanced gas content and star formation in the discs of low-redshift galaxies, so are important in the understanding of star formation in the local universe. As both the visual appearance of spiral structure, and the mechanisms responsible for it vary from galaxy to galaxy, a reliable method for defining spiral samples with different visual morphologies is required. In this paper, we develop a new debiasing method to reliably correct for redshift-dependent bias in Galaxy Zoo 2, and release the new set of debiased classifications. Using these, a luminosity-limited sample of ~18 000 Sloan Digital Sky Survey spiral galaxies is defined, which are then further sub-categorized by spiral arm number. In order to explore how different spiral galaxies form, the demographics of spiral galaxies with different spiral arm numbers are compared. It is found that whilst all spiral galaxies occupy similar ranges of stellar mass and environment, many-armed galaxies display much bluer colours than their two-armed counterparts. We conclude that two-armed structure is ubiquitous in star-forming discs, whereas many-armed spiral structure appears to be a short-lived phase, associated with more recent, stochastic star-formation activity.

  16. Galaxy Zoo: comparing the demographics of spiral arm number and a new method for correcting redshift bias

    CERN Document Server

    Hart, Ross E; Willett, Kyle W; Masters, Karen L; Cardamone, Carolin; Lintott, Chris J; Mackay, Robert J; Nichol, Robert C; Rosslowe, Christopher K; Simmons, Brooke D; Smethurst, Rebecca J

    2016-01-01

    The majority of galaxies in the local Universe exhibit spiral structure with a variety of forms. Many galaxies possess two prominent spiral arms, some have more, while others display a many-armed flocculent appearance. Spiral arms are associated with enhanced gas content and star-formation in the disks of low-redshift galaxies, so are important in the understanding of star-formation in the local universe. As both the visual appearance of spiral structure, and the mechanisms responsible for it vary from galaxy to galaxy, a reliable method for defining spiral samples with different visual morphologies is required. In this paper, we develop a new debiasing method to reliably correct for redshift-dependent bias in Galaxy Zoo 2, and release the new set of debiased classifications. Using these, a luminosity-limited sample of ~18,000 Sloan Digital Sky Survey spiral galaxies is defined, which are then further sub-categorised by spiral arm number. In order to explore how different spiral galaxies form, the demographic...

  17. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    Science.gov (United States)

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in the effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
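
    As a rough illustration of three of the probability designs listed above, the following Python sketch draws simple random, systematic, and cluster samples from a hypothetical frame of 1,000 patients; the frame, cluster structure, and sample sizes are invented for the example.

        import random

        random.seed(42)
        population = list(range(1, 1001))        # hypothetical frame of 1,000 patients

        # Simple random sampling: each element has an equal, independent chance.
        srs = random.sample(population, 50)

        # Systematic sampling: every k-th element after a random start.
        k = len(population) // 50
        systematic = population[random.randrange(k)::k]

        # Cluster sampling: randomly choose whole clusters (here, 5 of 20
        # hypothetical wards) and include every element of the chosen clusters.
        wards = [population[i:i + 50] for i in range(0, 1000, 50)]
        cluster = [p for ward in random.sample(wards, 5) for p in ward]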

  18. A Comparison of EPI Sampling, Probability Sampling, and Compact Segment Sampling Methods for Micro and Small Enterprises

    OpenAIRE

    Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere

    2011-01-01

    Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling me...

  19. 19 CFR 151.83 - Method of sampling.

    Science.gov (United States)

    2010-04-01

    19 CFR 151.83, Method of sampling. Customs Duties; U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (continued). Examination, Sampling, and Testing of Merchandise: Cotton. Section 151.83, Method of sampling....

  20. 7 CFR 29.110 - Method of sampling.

    Science.gov (United States)

    2010-01-01

    7 CFR 29.110, Method of sampling. Agriculture; Regulations of the Department of Agriculture; Agricultural Marketing Service (Standards, Inspections, Marketing...); Tobacco Inspection; Inspectors, Samplers, and Weighers. Section 29.110, Method of sampling. In sampling...

  1. A method for additive bias correction in cross-cultural surveys

    DEFF Research Database (Denmark)

    Scholderer, Joachim; Grunert, Klaus G.; Brunsø, Karen

    2001-01-01

    Measurement bias in cross-cultural surveys can seriously threaten the validity of hypothesis tests. Direct comparisons of means depend on the assumption that differences in observed variables reflect differences in the underlying constructs, and not an additive bias that may be caused by cultural...

  2. Awareness Reduces Racial Bias

    OpenAIRE

    2014-01-01

    Can raising awareness of racial bias subsequently reduce that bias? We address this question by exploiting the widespread media attention highlighting racial bias among professional basketball referees that occurred in May 2007 following the release of an academic study. Using new data, we confirm that racial bias persisted in the years after the study's original sample, but prior to the media coverage. Subsequent to the media coverage though, the bias completely disappeared. We examine poten...

  3. The risk of bias and sample size of trials of spinal manipulative therapy for low back and neck pain: analysis and recommendations.

    Science.gov (United States)

    Rubinstein, Sidney M; van Eekelen, Rik; Oosterhuis, Teddy; de Boer, Michiel R; Ostelo, Raymond W J G; van Tulder, Maurits W

    2014-10-01

    The purpose of this study was to evaluate changes in methodological quality and sample size in randomized controlled trials (RCTs) of spinal manipulative therapy (SMT) for neck and low back pain over a specified period. A secondary purpose was to make recommendations for improvement of future SMT trials based upon our findings. Randomized controlled trials that examined the effect of SMT in adults with neck and/or low back pain and reported at least 1 patient-reported outcome measure were included. Studies were identified from recent Cochrane reviews of SMT, and an update of the literature was conducted (March 2013). Risk of bias was assessed using the 12-item criteria recommended by the Cochrane Back Review Group. In addition, sample size was examined. The relationship between the overall risk of bias and sample size over time was evaluated using regression analyses, and RCTs were grouped into periods (epochs) of approximately 5 years. In total, 105 RCTs were included, of which 41 (39%) were considered to have a low risk of bias. There was significant improvement in the mean risk of bias over time, and the odds of a trial being assessed as having a low risk of bias increased significantly (odds ratio, 2.1; confidence interval, 1.5-3.0). Sensitivity analyses suggest no appreciable difference between studies for neck or low back pain in risk of bias or sample size. The methodological quality of RCTs of SMT for neck and low back pain is improving, whereas overall sample size has shown only small and nonsignificant increases. There is an increasing trend among studies to conduct sample size calculations, which relate to statistical power. Based upon these findings, 7 areas of improvement for future SMT trials are suggested.

  4. A method for evaluating bias in global measurements of CO2 total columns from space

    Energy Technology Data Exchange (ETDEWEB)

    Wunch, D.; Wennberg, P. O.; Toon, G. C.; Connor, B. J.; Fisher, B.; Osterman, G. B.; Frankenberg, C.; Mandrake, L.; O'Dell, C.; Ahonen, P.; Biraud, S. C.; Castano, R.; Cressie, N.; Crisp, D.; Deutscher, N. M.; Eldering, A.; Fisher, M. L.; Griffith, D. W.T.; Gunson, M.; Heikkinen, P.; Keppel-Aleks, G.; Kyro, E.; Lindenmaier, R.; Macatangay, R.; Mendonca, J.; Messerschmidt, J.; Miller, C. E.; Morino, I.; Notholt, J.; Oyafuso, F. A.; Rettinger, M.; Robinson, J.; Roehl, C. M.; Salawitch, R. J.; Sherlock, V.; Strong, K.; Sussmann, R.; Tanaka, T.; Thompson, D. R.; Uchino, O.; Warneke, T.; Wofsy, S. C.

    2011-08-01

    We describe a method of evaluating systematic errors in measurements of total column dry-air mole fractions of CO2 (XCO2) from space, and we illustrate the method by applying it to the Atmospheric CO2 Observations from Space retrievals of the Greenhouse Gases Observing Satellite (ACOS-GOSAT) v2.8. The approach exploits the lack of large gradients in XCO2 south of 25° S to identify large-scale offsets and other biases in the ACOS-GOSAT data associated with several retrieval parameters and with errors in instrument calibration. We demonstrate the effectiveness of the method by comparing the ACOS-GOSAT data in the Northern Hemisphere with ground truth provided by the Total Carbon Column Observing Network (TCCON). We use the correlation between free-tropospheric temperature and XCO2 in the Northern Hemisphere to define a dynamically informed coincidence criterion between the ground-based TCCON measurements and the ACOS-GOSAT measurements. We illustrate that this approach provides larger sample sizes, hence giving a more robust comparison than one that simply uses time, latitude and longitude criteria. Our results show that the agreement with the TCCON data improves after accounting for the systematic errors. A preliminary evaluation of the improved v2.9 ACOS-GOSAT data is also discussed.
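
    The record does not give the ACOS-GOSAT algorithm itself, but its central move, fitting and removing XCO2 offsets associated with retrieval parameters over a region where the true field is nearly flat, can be sketched as an ordinary least-squares problem. All parameters, coefficients, and noise levels below are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000

        # Hypothetical retrieval parameters for southern-hemisphere soundings
        # (e.g., surface pressure error, aerosol optical depth, albedo).
        params = rng.normal(size=(n, 3))
        truth = 390.0                                  # assume a flat XCO2 field south of 25 deg S
        xco2 = truth + params @ np.array([0.8, -0.5, 0.3]) + rng.normal(0, 0.5, n)

        # Fit a linear bias model: XCO2 = offset + params . beta
        A = np.column_stack([np.ones(n), params])
        coef, *_ = np.linalg.lstsq(A, xco2, rcond=None)
        offset, beta = coef[0], coef[1:]

        # Remove the parameter-dependent part of the bias everywhere.
        xco2_corrected = xco2 - params @ beta
        print(offset, beta, xco2_corrected.std())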

  5. Photoacoustic sample vessel and method of elevated pressure operation

    Science.gov (United States)

    Autrey, Tom; Yonker, Clement R.

    2004-05-04

    An improved photoacoustic vessel and method of photoacoustic analysis. The photoacoustic sample vessel comprises an acoustic detector, an acoustic couplant, and an acoustic coupler having a chamber for holding the acoustic couplant and a sample. The acoustic couplant is selected from the group consisting of liquid, solid, and combinations thereof. Passing electromagnetic energy through the sample generates an acoustic signal within the sample, whereby the acoustic signal propagates through the sample to and through the acoustic couplant to the acoustic detector.

  6. Unpredictable bias when using the missing indicator method or complete case analysis for missing confounder values: an empirical example.

    NARCIS (Netherlands)

    Knol, M.J.; Janssen, K.J.; Donders, A.R.T.; Egberts, A.C.G.; Heerdink, E.R.; Grobbee, D.E.; Moons, K.G.; Geerlings, M.I.

    2010-01-01

    OBJECTIVE: Missing indicator method (MIM) and complete case analysis (CC) are frequently used to handle missing confounder data. Using empirical data, we demonstrated the degree and direction of bias in the effect estimate when using these methods compared with multiple imputation (MI). STUDY DESIGN
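
    A minimal sketch of the two approaches being compared, on synthetic data: complete case analysis drops records with a missing confounder, while the missing indicator method keeps them by adding an indicator column and filling the gap. Here the missingness is generated completely at random, so complete case analysis stays close to the true effect while the missing indicator method leaves residual confounding in the filled rows; with realistic missingness mechanisms, as the record notes, the bias of either method is unpredictable. All data are invented.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(2)
        n = 10_000
        c = rng.normal(size=n)                           # confounder
        x = (c + rng.normal(size=n) > 0).astype(float)   # exposure, depends on c
        y = 2.0 * x + c + rng.normal(size=n)             # outcome; true effect of x is 2

        df = pd.DataFrame({"y": y, "x": x, "c": c})
        df.loc[rng.random(n) < 0.3, "c"] = np.nan        # 30% of confounder values missing

        def exposure_effect(d, covariates):
            # OLS coefficient on the exposure, adjusted for the given covariates.
            A = np.column_stack([np.ones(len(d)), d["x"]] + [d[v] for v in covariates])
            return np.linalg.lstsq(A, d["y"], rcond=None)[0][1]

        cc = df.dropna()                                 # complete case analysis
        mim = df.assign(c_miss=df["c"].isna().astype(float), c=df["c"].fillna(0.0))

        print(exposure_effect(cc, ["c"]))                # ~2.0 under MCAR
        print(exposure_effect(mim, ["c", "c_miss"]))     # biased away from 2.0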

  7. Bats from Fazenda Intervales, Southeastern Brazil: species account and comparison between different sampling methods

    Directory of Open Access Journals (Sweden)

    Christine V. Portfors

    2000-06-01

    Full Text Available Assessing the composition of an area's bat fauna is typically accomplished by using captures or by monitoring echolocation calls with bat detectors. The two methods may not provide the same data regarding species composition. Mist nets and harp traps may be biased towards sampling low flying species, and bat detectors biased towards detecting high intensity echolocators. A comparison of the bat fauna of Fazenda Intervales, southeastern Brazil, as revealed by mist nets and harp trap captures, checking roosts and by monitoring echolocation calls of flying bats illustrates this point. A total of 17 species of bats was sampled. Fourteen bat species were captured and the echolocation calls of 12 species were recorded, three of them not revealed by mist nets or harp traps. The different sampling methods provided different pictures of the bat fauna. Phyllostomid bats dominated the catches in mist nets, but in the field their echolocation calls were never detected. No single sampling approach provided a complete assessment of the bat fauna in the study area. In general, bats producing low intensity echolocation calls, such as phyllostomids, are more easily assessed by netting, and bats producing high intensity echolocation calls are better surveyed by bat detectors. The results demonstrate that a combined and varied approach to sampling is required for a complete assessment of the bat fauna of an area.

  8. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    Science.gov (United States)

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
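
    For readers unfamiliar with BAR, the estimator solves a one-dimensional self-consistency equation in the forward and reverse work values. The sketch below is a generic textbook implementation in units of kT, not the CHARMM/SGLD machinery of the paper; the toy work distributions are Gaussians chosen to satisfy Crooks' theorem, so the expected answer is mu - sigma^2/2 = 2.5 kT.

        import numpy as np
        from scipy.special import expit
        from scipy.optimize import brentq

        def bar(w_f, w_r):
            """Bennett acceptance ratio estimate of Delta F (all quantities in kT).

            w_f: forward work U1 - U0 evaluated on samples from state 0
            w_r: reverse work U0 - U1 evaluated on samples from state 1
            """
            m = np.log(len(w_f) / len(w_r))

            def imbalance(df):
                # Self-consistency condition: forward and reverse Fermi sums match.
                return (expit(-(m + w_f - df)).sum()
                        - expit(-(-m + w_r + df)).sum())

            return brentq(imbalance, -100.0, 100.0)

        # Gaussian work distributions consistent with Crooks' theorem
        # (mean 3, variance 1 forward => Delta F = 2.5; reverse mean is -2).
        rng = np.random.default_rng(3)
        print(bar(rng.normal(3.0, 1.0, 50_000), rng.normal(-2.0, 1.0, 50_000)))

    The reweighting needed to remove the SGLD guiding-force bias is a separate step described in the paper and is not reproduced here.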

  9. γ-ray spectrometry results versus sample preparation methods

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    According to recommended conditions two bio-samples, tea leave and flour, are prepared with different methods: grounding into powder and reducing to ash, then they were analyzed by γ ray spectrometry. Remarkable difference was shown between the measured values of tea samples prepared with these different methods. One of the reasons may be that the method of reducing to ash makes some nuclides lost. Compared with the "non-destructive"method of grounding into powder, the method of reducing to ash can be much more sensible to the loss of some nuclides. The probable reasons are discussed for the varied influences of different preparation methods of tea leave and flour samples.

  10. Robust numerical methods for conservation laws using a biased averaging procedure

    Science.gov (United States)

    Choi, Hwajeong

    In this thesis, we introduce a new biased averaging procedure (BAP) and use it in developing high resolution schemes for conservation laws. Systems of conservation laws arise in a variety of physical problems, such as the Euler equations of compressible flow, magnetohydrodynamics, multicomponent flows, blast waves and the flow of glaciers. Many modern shock-capturing schemes are based on solution reconstruction by high order polynomial interpolation, and time evolution by the solution of Riemann problems. Due to the existence of discontinuities in the solution, the interpolating polynomial has to be carefully constructed to avoid possible oscillations near discontinuities. The BAP is a more general and simpler way to approximate higher order derivatives of given data without introducing oscillations, compared to limiters and essentially non-oscillatory interpolations. For the solution of a system of conservation laws, we present a finite volume method which employs flux splitting and uses componentwise reconstruction of the upwind fluxes. A high order piecewise polynomial constructed using the BAP is used to approximate the component of upwind fluxes. This scheme does not require characteristic decomposition nor a Riemann solver, offering easy implementation and a relatively small computational cost. More importantly, the BAP extends naturally to unstructured grids, which is demonstrated through a cell-centered finite volume method, along with adaptive mesh refinement. A number of numerical experiments from various applications demonstrate the robustness and the accuracy of this approach, and show its potential for other practical applications.
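
    The BAP itself is not spelled out in the abstract, so the sketch below substitutes a standard minmod flux limiter to show the kind of non-oscillatory finite-volume update being described, applied to linear advection of a square wave on a periodic grid.

        import numpy as np

        def advect_tvd(u, nu, nsteps):
            """High-resolution TVD scheme for u_t + a u_x = 0 (a > 0), periodic domain.

            nu is the CFL number a*dt/dx; the minmod limiter phi(r) = max(0, min(1, r))
            stands in here for the thesis' biased averaging procedure.
            """
            for _ in range(nsteps):
                du = np.roll(u, -1) - u                      # u_{i+1} - u_i
                du_up = u - np.roll(u, 1)                    # u_i - u_{i-1}
                with np.errstate(divide="ignore", invalid="ignore"):
                    r = np.where(du != 0, du_up / du, 0.0)
                phi = np.maximum(0.0, np.minimum(1.0, r))    # minmod limiter
                flux = u + 0.5 * (1.0 - nu) * phi * du       # flux/a at interface i+1/2
                u = u - nu * (flux - np.roll(flux, 1))
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)       # square wave
        u = advect_tvd(u0, nu=0.5, nsteps=200)               # advects without oscillations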

  11. A method for selecting training samples based on camera response

    Science.gov (United States)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
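
    A schematic version of the selection-plus-reconstruction pipeline described above: training samples are chosen inside a sphere around the test camera response, and a Wiener-style linear estimator is built from their covariances. The data, sphere radius, and band count are placeholders, not the paper's values, and the estimator omits the noise term of a full Wiener filter.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical training data: camera responses (RGB) and reflectances (31 bands).
        train_rgb = rng.random((1000, 3))
        train_spec = rng.random((1000, 31))
        test_rgb = rng.random(3)

        # Select training samples inside a sphere around the test camera response.
        radius = 0.2                                  # assumes enough samples fall inside
        sel = np.linalg.norm(train_rgb - test_rgb, axis=1) < radius
        rgb_s, spec_s = train_rgb[sel], train_spec[sel]

        # Wiener-style estimation: R = mean_R + K (c - mean_c), K from covariances.
        rgb_c = rgb_s - rgb_s.mean(axis=0)
        spec_c = spec_s - spec_s.mean(axis=0)
        K = (spec_c.T @ rgb_c) @ np.linalg.inv(rgb_c.T @ rgb_c)
        reflectance = spec_s.mean(axis=0) + K @ (test_rgb - rgb_s.mean(axis=0))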

  12. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster.

    Science.gov (United States)

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness of the samples between the two methods was assessed. The method presented here was superior to the traditional method. Only 14% of the samples had a standard deviation higher than expected, as compared with 58% in the traditional method. To reduce bias in the estimation of the variance and the mean of a trait and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.

  13. Systems and methods for self-synchronized digital sampling

    Science.gov (United States)

    Samson, Jr., John R. (Inventor)

    2008-01-01

    Systems and methods for self-synchronized data sampling are provided. In one embodiment, a system for capturing synchronous data samples is provided. The system includes an analog to digital converter adapted to capture signals from one or more sensors and convert the signals into a stream of digital data samples at a sampling frequency determined by a sampling control signal; and a synchronizer coupled to the analog to digital converter and adapted to receive a rotational frequency signal from a rotating machine, wherein the synchronizer is further adapted to generate the sampling control signal, and wherein the sampling control signal is based on the rotational frequency signal.

  14. Testing for bias between the Kjeldahl and Dumas methods for the determination of nitrogen in meat mixtures, by using data from a designed interlaboratory experiment.

    Science.gov (United States)

    Thompson, Michael; Owen, Linda; Wilkinson, Kate; Wood, Roger; Damant, Andrew

    2004-12-01

    Bias between the Dumas and the Kjeldahl methods for the determination of protein nitrogen in food was studied by conducting an interlaboratory study involving 40 laboratories and 20 different test materials. Biases were found to be small and statistically significant only for the chicken test materials, where a bias of 0.020±0.004% m/m was detected.

  15. DOE methods for evaluating environmental and waste management samples.

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S C; McCulloch, M; Thomas, B L; Riley, R G; Sklarew, D S; Mong, G M; Fadeff, S K [eds.; Pacific Northwest Lab., Richland, WA (United States)

    1994-04-01

    DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides the applicable methods in use by US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

  16. Sampling bee communities using pan traps: alternative methods increase sample size

    Science.gov (United States)

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  17. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  18. Sampling methods for amphibians in streams in the Pacific Northwest.

    Science.gov (United States)

    R. Bruce Bury; Paul Stephen. Corn

    1991-01-01

    Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

  19. Comparative Study of Two Vitreous Humor Sampling Methods in Rabbits

    Institute of Scientific and Technical Information of China (English)

    WANG Lan; ZHOU Weiguang; REN Liang; LIU Qian; LIU Liang

    2006-01-01

    To compare and evaluate two methodologies for harvesting vitreous humor, entire-sampling and micro-sampling, the vitreous humor of rabbits was sampled with each method, and the concentrations of calcium, chlorine, potassium, sodium and phosphorus in the specimens were measured. The results showed that the coefficients of variation and the between-eye concentration differences of the micro-sampled specimens were smaller than those of the entire-sampled specimens. In the micro-sampling group, the concentrations obtained by repeated micro-sampling showed no differences among groups (P>0.05), and intra-ocular fluid dynamics did not have a significant influence on post-mortem sampling. The sampling technique may affect the concentrations of the specimens collected. Our study suggests that micro-sampling is less influenced by human factors and is reliable, reproducible, and more suitable for forensic investigation.

  20. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    Science.gov (United States)

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make the comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, and can be put into use in drug analysis.

  1. THE USE OF RANKING SAMPLING METHOD WITHIN MARKETING RESEARCH

    Directory of Open Access Journals (Sweden)

    CODRUŢA DURA

    2011-01-01

    Marketing and statistical literature available to practitioners provides a wide range of sampling methods that can be implemented in the context of marketing research. The ranking sampling method is based on dividing the general population into several strata, namely into several subdivisions which are relatively homogeneous with regard to a certain characteristic. In fact, the sample will be composed by selecting, from each stratum, a certain number of components (which can be proportional or non-proportional to the size of the stratum) until the pre-established volume of the sample is reached. Using ranking sampling within marketing research requires the determination of some relevant statistical indicators - average, dispersion, sampling error etc. To that end, the paper contains a case study which illustrates the actual approach used in order to apply the ranking sample method within a marketing research study made by a company which provides Internet connection services, for a particular category of customers - small and medium enterprises.
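
    The stratum-by-stratum selection described above reduces, in the proportional case, to a few lines of code. The strata and sizes below are hypothetical.

        import random

        random.seed(7)

        # Hypothetical frame of SME customers grouped into strata by company size.
        strata = {"micro": list(range(600)),
                  "small": list(range(600, 900)),
                  "medium": list(range(900, 1000))}
        n_total = 100
        frame_size = sum(len(units) for units in strata.values())

        sample = {}
        for name, units in strata.items():
            n_h = round(n_total * len(units) / frame_size)   # proportional allocation
            sample[name] = random.sample(units, n_h)

        print({k: len(v) for k, v in sample.items()})        # {'micro': 60, 'small': 30, 'medium': 10}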

  2. Effect of additional sample bias in Meshed Plasma Immersion Ion Deposition (MPIID) on microstructural, surface and mechanical properties of Si-DLC films

    Science.gov (United States)

    Wu, Mingzhong; Tian, Xiubo; Li, Muqin; Gong, Chunzhi; Wei, Ronghua

    2016-07-01

    Meshed Plasma Immersion Ion Deposition (MPIID) using cage-like hollow cathode discharge is a modified process of conventional PIID, but it allows the deposition of thick diamond-like carbon (DLC) films (up to 50 μm) at a high deposition rate (up to 6.5 μm/h). To further improve the DLC film properties, a new approach to the MPIID process is proposed, in which the energy of ions incident to the sample surface can be independently controlled by an additional voltage applied between the samples and the metal meshed cage. In this study, the meshed cage was biased with a pulsed DC power supply at -1350 V peak voltage for the plasma generation, while the samples inside the cage were biased with a DC voltage from 0 V to -500 V with respect to the cage to study its effect. Si-DLC films were synthesized with a mixture of Ar, C2H2 and tetramethylsilane (TMS). After the depositions, scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray photoelectrons spectroscopy (XPS), Raman spectroscopy and nanoindentation were used to study the morphology, surface roughness, chemical bonding and structure, and the surface hardness as well as the modulus of elasticity of the Si-DLC films. It was observed that the intense ion bombardment significantly densified the films, reduced the surface roughness, reduced the H and Si contents, and increased the nanohardness (H) and modulus of elasticity (E), whereas the deposition rate decreased slightly. Using the H and E data, high values of H3/E2 and H/E were obtained on the biased films, indicating the potential excellent mechanical and tribological properties of the films. In this paper, the effects of the sample bias voltage on the film properties are discussed in detail and the optimal bias voltage is presented.

  3. Configurations and calibration methods for passive sampling techniques.

    Science.gov (United States)

    Ouyang, Gangfeng; Pawliszyn, Janusz

    2007-10-19

    Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
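
    Of the calibration approaches mentioned, kinetic calibration is the easiest to sketch: assuming first-order kinetics, the rate constant estimated from the desorption of a preloaded calibrant is used to extrapolate the analyte uptake, n(t) = n_eq * (1 - exp(-k t)), to its equilibrium value. All numbers below are illustrative only.

        import numpy as np

        # Measured fraction of preloaded calibrant remaining on the sampler,
        # assumed to desorb first-order: q(t)/q0 = exp(-k t).
        t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])            # exposure times (h)
        q_frac = np.array([0.90, 0.80, 0.65, 0.42, 0.17])   # hypothetical measurements

        k = -np.polyfit(t, np.log(q_frac), 1)[0]            # slope of ln(q/q0) vs t gives k

        n_t = 48.0                                          # analyte on sampler after 16 h (ng)
        n_eq = n_t / (1.0 - np.exp(-k * t[-1]))             # extrapolated equilibrium amount
        print(k, n_eq)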

  4. An efficient method for sampling the essential subspace of proteins

    NARCIS (Netherlands)

    Amadei, A; Linssen, A.B M; de Groot, B.L.; van Aalten, D.M.F.; Berendsen, H.J.C.

    1996-01-01

    A method is presented for a more efficient sampling of the configurational space of proteins as compared to conventional sampling techniques such as molecular dynamics. The method is based on the large conformational changes in proteins revealed by the ''essential dynamics'' analysis. A form of cons

  5. Engineering Study of 500 ML Sample Bottle Transportation Methods

    Energy Technology Data Exchange (ETDEWEB)

    BOGER, R.M.

    1999-08-25

    This engineering study reviews and evaluates the available methods for transporting 500-mL grab sample bottles, reviews transportation requirements and schedules, and recommends the most cost-effective method for transporting 500-mL grab sample bottles.

  6. Rapid method for sampling metals for materials identification

    Science.gov (United States)

    Higgins, L. E.

    1971-01-01

    Nondamaging process similar to electrochemical machining is useful in obtaining metal samples from places inaccessible to conventional sampling methods or where methods would be hazardous or contaminating to specimens. Process applies to industries where metals or metal alloys play a vital role.

  7. A Mixed Methods Sampling Methodology for a Multisite Case Study

    Science.gov (United States)

    Sharp, Julia L.; Mobley, Catherine; Hammond, Cathy; Withington, Cairen; Drew, Sam; Stringfield, Sam; Stipanovic, Natalie

    2012-01-01

    The flexibility of mixed methods research strategies makes such approaches especially suitable for multisite case studies. Yet the utilization of mixed methods to select sites for these studies is rarely reported. The authors describe their pragmatic mixed methods approach to select a sample for their multisite mixed methods case study of a…

  8. A method to correct sampling ghosts in historic near-infrared Fourier Transform Spectrometer (FTS) measurements

    Directory of Open Access Journals (Sweden)

    S. Dohe

    2013-04-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-average dry air mole fractions of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. In this study we investigate a fundamental correction scheme for errors in the sampling of the interferogram. This is a two-step procedure in which the laser sampling error (LSE) is quantified using a subset of suitable interferograms and then used to resample all the interferograms in the time series. Time series of measurements acquired at the TCCON sites Izaña and Lauder are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions. The estimated LSEs are in good agreement with sampling errors inferred from lamp measurements of the ghost-to-parent ratio (Lauder). The LSEs introduce retrieval biases which are minimised when the interferograms are resampled. The original time series of Xair and XCO2 at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also contribute to the residual difference.

  9. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-performs pan traps.

    Science.gov (United States)

    Popic, Tony J; Davila, Yvonne C; Wardle, Glenda M

    2013-01-01

    Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km(2) area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

  10. Evaluating Composite Sampling Methods of Bacillus spores at Low Concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.; Hutchison, Janine R.

    2016-10-13

    Restoring facility operations after the 2001 Amerithrax attacks took over three months to complete, highlighting the need to reduce remediation time. The most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method for reducing response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite: a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes across each coupon; and 3) multi-medium post-sample composite: each cellulose sponge samples a single surface, and multiple sponges are then combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted wallboard) and three grime-coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p-value < 0.0001) and coupon material (p-value = 0.0008). Recovery efficiency (RE) was higher overall using the post-sample composite (PSC) method than with either single medium composite method, for both clean and grime-coated materials. RE with the PSC method was similar for ceramic tile, painted wallboard, and stainless steel across the concentrations tested (10 to 100 CFU/coupon). RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed that RE was significantly higher for vinyl and stainless steel materials, but significantly lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event on clean or dirty surfaces.

  11. Comparing bias correction methods in downscaling meteorological variables for hydrologic impact study in an arid area in China

    Science.gov (United States)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2014-11-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River Basin, Northwest China, and are expected to be vulnerable to climate change. Regional Climate Models (RCMs) have been shown to provide more reliable results for regional impact studies of climate change (e.g. on water resources) than GCMs. However, it is still necessary to apply bias correction before they are used for water resources research, due to often considerable biases. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods applied to the output of an RCM, with its application to the Kaidu River Basin, one of the headwaters of the Tarim River Basin. Precipitation correction methods include Linear Scaling (LS), LOCal Intensity scaling (LOCI), Power Transformation (PT), Distribution Mapping (DM) and Quantile Mapping (QM); temperature correction methods include LS, VARIance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, and their impacts on streamflow were also compared by driving a distributed hydrologic model. The results show: (1) precipitation, temperature and solar radiation are sensitive for streamflow while relative humidity and wind speed are not, (2) raw RCM simulations are heavily biased from observed meteorological data, which results in biases in the simulated streamflows, and all bias correction methods effectively improved these simulations, (3) for precipitation, the PT and QM methods performed best, and comparably, in correcting the frequency-based indices (e.g. SD, percentile values) while the LOCI method performed best in terms of the time series based indices (e.g. Nash-Sutcliffe coefficient, R2), (4) for temperature, all bias correction methods performed equally well in correcting raw temperature. (5) For simulated streamflow
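
    Two of the precipitation correction methods compared above are easy to sketch on synthetic data: linear scaling matches the long-term mean, while empirical quantile mapping transfers the whole distribution. This is a generic illustration, not the study's code or data.

        import numpy as np

        rng = np.random.default_rng(5)
        obs = rng.gamma(shape=0.8, scale=6.0, size=3000)     # observed daily precipitation (mm)
        rcm = rng.gamma(shape=1.2, scale=3.0, size=3000)     # biased RCM output, same period

        # Linear scaling: match the long-term mean with a multiplicative factor.
        ls = rcm * obs.mean() / rcm.mean()

        # Empirical quantile mapping: map each RCM value onto the observed
        # distribution through the quantile it occupies in the RCM distribution.
        def quantile_map(x, model, observed):
            q = np.searchsorted(np.sort(model), x) / len(model)
            return np.quantile(observed, np.clip(q, 0.0, 1.0))

        qm = quantile_map(rcm, rcm, obs)
        print(obs.mean(), ls.mean(), qm.mean())              # means now agree
        print(obs.std(), ls.std(), qm.std())                 # QM also matches the spread

    Linear scaling repairs only the mean, which is why the study finds the distribution-based methods better on frequency-based indices.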

  12. Comparing bias correction methods in downscaling meteorological variables for hydrologic impact study in an arid area in China

    Directory of Open Access Journals (Sweden)

    G. H. Fang

    2014-11-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River Basin, Northwest China, and are expected to be vulnerable to climate change. Regional Climate Models (RCMs) have been shown to provide more reliable results for regional impact studies of climate change (e.g. on water resources) than GCMs. However, it is still necessary to apply bias correction before they are used for water resources research, due to often considerable biases. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods applied to the output of an RCM, with its application to the Kaidu River Basin, one of the headwaters of the Tarim River Basin. Precipitation correction methods include Linear Scaling (LS), LOCal Intensity scaling (LOCI), Power Transformation (PT), Distribution Mapping (DM) and Quantile Mapping (QM); temperature correction methods include LS, VARIance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, and their impacts on streamflow were also compared by driving a distributed hydrologic model. The results show: (1) precipitation, temperature and solar radiation are sensitive for streamflow while relative humidity and wind speed are not, (2) raw RCM simulations are heavily biased from observed meteorological data, which results in biases in the simulated streamflows, and all bias correction methods effectively improved these simulations, (3) for precipitation, the PT and QM methods performed best, and comparably, in correcting the frequency-based indices (e.g. SD, percentile values) while the LOCI method performed best in terms of the time series based indices (e.g. Nash-Sutcliffe coefficient, R2), (4) for temperature, all bias correction methods performed equally well in correcting raw temperature. (5) For simulated streamflow

  13. Amplification methods bias metagenomic libraries of uncultured single-stranded and double-stranded DNA viruses.

    Science.gov (United States)

    Kim, Kyoung-Ho; Bae, Jin-Woo

    2011-11-01

    Investigation of viruses in the environment often requires the amplification of viral DNA before sequencing of viral metagenomes. In this study, two of the most widely used amplification methods, the linker amplified shotgun library (LASL) and multiple displacement amplification (MDA) methods, were applied to a sample from the seawater surface. Viral DNA was extracted from viruses concentrated by tangential flow filtration and amplified by these two methods. 454 pyrosequencing was used to read the metagenomic sequences from the different libraries. The resulting taxonomic classifications of the viruses, their functional assignments, and assembly patterns differed substantially depending on the amplification method. Only double-stranded DNA viruses were retrieved from the LASL, whereas most sequences in the MDA library were from single-stranded DNA viruses, and double-stranded DNA viral sequences were in the minority. Thus, the two amplification methods reveal different aspects of viral diversity.

  14. Phylogenetic representativeness: a new method for evaluating taxon sampling in evolutionary studies

    Directory of Open Access Journals (Sweden)

    Passamonti Marco

    2010-04-01

    Background: Taxon sampling is a major concern in phylogenetic studies. Incomplete, biased, or improper taxon sampling can lead to misleading results in reconstructing evolutionary relationships. Several theoretical methods are available to optimize taxon choice in phylogenetic analyses. However, most involve some knowledge about the genetic relationships of the group of interest (i.e., the ingroup), or even a well-established phylogeny itself; these data are not always available in general phylogenetic applications. Results: We propose a new method to assess taxon sampling, developed from Clarke and Warwick's statistics. This method aims to measure the "phylogenetic representativeness" of a given sample or set of samples, and it is based entirely on the pre-existing available taxonomy of the ingroup, which is commonly known to investigators. Moreover, our method also accounts for instability and discordance in taxonomies. A Python-based script suite, called PhyRe, has been developed to implement all the analyses we describe in this paper. Conclusions: We show that this method is sensitive and allows direct discrimination between representative and unrepresentative samples. It is also informative about the addition of taxa to improve taxonomic coverage of the ingroup. Provided that the investigators' expertise is applied, phylogenetic representativeness makes up an objective touchstone in planning phylogenetic studies.
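
    Clarke and Warwick-style average taxonomic distinctness, which this kind of taxonomy-based assessment builds on, can be illustrated with a toy taxonomy: the distance between two species is the number of steps up the Linnaean hierarchy to their first shared rank. The species table and equal step weights below are schematic assumptions, not PhyRe's actual data structures.

        from itertools import combinations

        # Hypothetical taxonomy of a sample: species -> (genus, family, order).
        taxonomy = {
            "sp1": ("g1", "f1", "o1"), "sp2": ("g1", "f1", "o1"),
            "sp3": ("g2", "f1", "o1"), "sp4": ("g3", "f2", "o1"),
            "sp5": ("g4", "f3", "o2"),
        }

        def path_length(a, b):
            # Steps up the hierarchy to the first shared rank (maximum if none shared).
            for level, (ra, rb) in enumerate(zip(taxonomy[a], taxonomy[b]), start=1):
                if ra == rb:
                    return level
            return len(taxonomy[a]) + 1

        def avg_taxonomic_distinctness(species):
            pairs = list(combinations(species, 2))
            return sum(path_length(a, b) for a, b in pairs) / len(pairs)

        print(avg_taxonomic_distinctness(list(taxonomy)))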

  15. Substrate bias effect on crystallinity of polycrystalline silicon thin films prepared by pulsed ion-beam evaporation method

    Energy Technology Data Exchange (ETDEWEB)

    Ali, Fazlat; Gunji, Michiharu; Yang, Sung-Chae; Suzuki, Tsuneo; Suematsu, Hisayuki; Jiang, Weihua; Yatsui, Kiyoshi [Nagaoka Univ. of Technology, Extreme Energy-Density Research Inst., Nagaoka, Niigata (Japan)

    2002-06-01

    The deposition of polycrystalline silicon thin films was attempted by a pulsed ion-beam evaporation method, in which high crystallinity and a high deposition rate were achieved without heating the substrate. The crystallinity and the deposition rate were improved by applying a bias voltage to the substrate, where instantaneous substrate heating might have occurred through ion bombardment. (author)

  16. Alternative sample preparation methods for MALDI-MS

    Energy Technology Data Exchange (ETDEWEB)

    Hurst, G.B.; Buchanan, M.V. [Oak Ridge National Lab., TN (United States); Czartoski, T.J. [Kenyon College, Gambier, OH (United States)

    1994-12-31

    Since the introduction of matrix-assisted laser desorption and ionization (MALDI), sample preparation has been a limiting step in the applicability of this important technique for mass spectrometric analysis of biomolecules. A number of variations on the original sample preparation method have been described. The "standard" method of MALDI sample preparation requires mixing a solution containing the analyte and a large excess of matrix, and allowing a small volume of this solution to dry on a probe tip before insertion into the mass spectrometer. The resulting sample can be fairly inhomogeneous. As a result, the process of aiming the desorption laser at a favorable spot on the dried sample can be tedious and time-consuming. The authors are evaluating several approaches to MALDI sample preparation, with the goal of developing a faster and more reproducible method.

  17. Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations

    Science.gov (United States)

    Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.; Hutchison, Janine R.

    2016-01-01

    Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite (SM-MPC): a single cellulose sponge samples multiple coupons with multiple passes across each coupon; and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p-value < 0.0001) and coupon material (p-value = 0.0008). Recovery efficiency (RE) was higher overall using the MM-MPC method than with the SM-SPC and SM-MPC methods. RE with the MM-MPC method was similar for ceramic tile, dry wall, and stainless steel across the concentrations tested (10 to 100 CFU/coupon). RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event on clean or dirty surfaces. PMID:27736999

  18. Orientation sampling for dictionary-based diffraction pattern indexing methods

    Science.gov (United States)

    Singh, S.; De Graef, M.

    2016-12-01

    A general framework for dictionary-based indexing of diffraction patterns is presented. A uniform sampling method of orientation space using the cubochoric representation is introduced and used to derive an empirical relation between the average disorientation between neighboring sampling points and the number of grid points sampled along the semi-edge of the cubochoric cube. A method to uniformly sample misorientation iso-surfaces is also presented. This method is used to show that the dot product serves as a proxy for misorientation. Furthermore, it is shown that misorientation iso-surfaces in Rodrigues space are quadratic surfaces. Finally, using the concept of Riesz energies, it is shown that the sampling method results in a near optimal covering of orientation space.

  19. Reducing the Impact of Sampling Bias in NASA MODIS and VIIRS Level 3 Satellite Derived IR SST Observations over the Arctic

    Science.gov (United States)

    Minnett, P. J.; Liu, Y.; Kilpatrick, K. A.

    2016-12-01

    Sea-surface temperature (SST) measurements by satellites in the northern hemisphere high latitudes confront several difficulties. Year-round prevalent clouds, effects near ice edges, and the relatively small difference between SST and low-level cloud temperatures lead to a significant loss of infrared observations despite the more frequent polar satellite overpasses. Recent research (Liu and Minnett, 2016) identified sampling issues in the Level 3 NASA MODIS SST products when 4 km observations are aggregated into global grids at different time and space scales, particularly in the Arctic, where a binary-decision cloud mask designed for global data is often overly conservative at high latitudes and results in many gaps and missing data. This undersampling of some Arctic regions results in a warm bias in Level 3 products, likely because warmer surface temperatures, more distant from the ice edge, are identified as cloud-free more frequently. Here we present an improved method for cloud detection in the Arctic using a majority vote from an ensemble of four classifiers trained with an Alternating Decision Tree (ADT) algorithm (Freund and Mason 1999; Pfahringer et al. 2001). This new cloud classifier increases the sampling of clear pixels by 50% in several regions and generally produces cooler monthly average SST fields in the ice-free Arctic, while still retaining the same error characteristics at 1 km resolution relative to in situ observations. SST time series of 12 years of MODIS (Aqua and Terra) and the more recent VIIRS sensors are compared, and the improvements in errors and uncertainties resulting from better cloud screening of Level 3 gridded products are assessed and summarized.
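
    The ensemble decision itself is straightforward; a minimal sketch of a majority vote over binary cloud flags from four classifiers (the trained ADT members themselves are, of course, not reproduced here):

        import numpy as np

        def majority_vote(classifier_outputs):
            """Combine binary cloud/clear decisions from several classifiers.

            classifier_outputs: array of shape (n_classifiers, n_pixels) of 0/1
            flags (1 = cloudy). A pixel is flagged cloudy when more than half of
            the members say cloudy; with four members, a 2-2 tie counts as clear.
            """
            arr = np.asarray(classifier_outputs)
            return arr.sum(axis=0) > arr.shape[0] / 2

        # Four hypothetical ensemble members disagreeing on a few pixels.
        members = np.array([
            [1, 0, 0, 1, 0],
            [1, 0, 1, 1, 0],
            [0, 0, 0, 1, 1],
            [1, 0, 0, 0, 0],
        ])
        print(majority_vote(members))   # [ True False False  True False]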

  20. Respondent driven sampling: determinants of recruitment and a method to improve point estimation.

    Directory of Open Access Journals (Sweden)

    Nicky McCreesh

    Full Text Available INTRODUCTION: Respondent-driven sampling (RDS is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore if biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview. METHODS: Using data from the total population, and the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon, and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods, and also of presentation for interview if offered a coupon by age and socioeconomic status group. RESULTS: Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared-errors by 19-29%, but had little effect for sexual activity or HIV status. CONCLUSIONS: Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of

  1. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    Indian Academy of Sciences (India)

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited by several error sources, of which the ionosphere is the largest. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits positional accuracy is instrumental bias. Calibration of these biases is particularly important for achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station over a one-month period, and the results confirm the validity of the algorithm. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias errors are of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31, respectively. The results are found to be consistent over the period.
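
    A heavily simplified, synthetic version of the estimation problem: slant TEC observations are modeled as a low-order polynomial (here in local time), scaled by an elevation-dependent mapping function, plus a constant satellite-plus-receiver bias. The bias column is separable from the polynomial only because the mapping function varies between observations. This is an assumption-laden sketch, not the paper's algorithm; all values are synthetic.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 500

        lt = rng.uniform(-6.0, 6.0, n)                 # local time offset of pierce point (h)
        elev = rng.uniform(np.deg2rad(15), np.deg2rad(85), n)
        mf = 1.0 / np.sin(elev)                        # simple slant mapping function

        poly = np.array([20.0, 1.5, -0.8, 0.05, -0.01])   # 4th-order vertical TEC model (TECU)
        true_bias = -3.6 * 2.85                        # bias in TECU (1 ns of bias ~ 2.85 TECU)
        vtec = sum(c * lt**k for k, c in enumerate(poly))
        stec = mf * vtec + true_bias + rng.normal(0, 0.5, n)

        # Least squares: polynomial terms scaled by the mapping function, plus a
        # constant column for the instrumental bias.
        A = np.column_stack([mf * lt**k for k in range(5)] + [np.ones(n)])
        est = np.linalg.lstsq(A, stec, rcond=None)[0]
        print(est[-1] / 2.85)                          # recovered bias, back in ns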

  2. Method for using polarization gating to measure a scattering sample

    Energy Technology Data Exchange (ETDEWEB)

    Baba, Justin S.

    2015-08-04

    Described herein are systems, devices, and methods facilitating optical characterization of scattering samples. A polarized optical beam can be directed to pass through a sample to be tested. The optical beam exiting the sample can then be analyzed to determine its degree of polarization, from which other properties of the sample can be determined. In some cases, an apparatus can include a source of an optical beam, an input polarizer, a sample, an output polarizer, and a photodetector. In some cases, a signal from a photodetector can be processed through attenuation, variable offset, and variable gain.

  3. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
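
    The three interval methods are easy to simulate directly, which also reproduces their textbook error directions: partial-interval recording overestimates the time a behavior occupies, whole-interval recording underestimates it, and momentary time sampling is approximately unbiased. The event counts and durations below are arbitrary choices, not the study's parameters.

        import numpy as np

        rng = np.random.default_rng(8)

        def simulate(obs_len=600.0, interval=10.0, n_events=30, event_dur=2.0):
            """Score randomly placed events with three interval sampling methods."""
            starts = np.sort(rng.uniform(0, obs_len - event_dur, n_events))
            edges = np.arange(0.0, obs_len + interval, interval)
            n_int = len(edges) - 1

            def occupied(t0, t1):
                # True if any event overlaps the window [t0, t1).
                return np.any((starts < t1) & (starts + event_dur > t0))

            # Partial interval: score if the event occurs at any point in the interval.
            partial = sum(occupied(edges[i], edges[i + 1]) for i in range(n_int)) / n_int
            # Momentary time sampling: score only the instant at each interval boundary.
            momentary = sum(occupied(e, e + 1e-9) for e in edges[:-1]) / n_int
            # Whole interval: score only if occupied throughout (checked on a fine grid).
            whole = sum(all(occupied(t, t + 1e-9)
                            for t in np.arange(edges[i], edges[i + 1], 0.5))
                        for i in range(n_int)) / n_int
            true_frac = min(n_events * event_dur, obs_len) / obs_len
            return true_frac, momentary, partial, whole

        print(simulate())   # partial overestimates, whole underestimates the true fraction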

  4. Amplified RNA degradation in T7-amplification methods results in biased microarray hybridizations

    Directory of Open Access Journals (Sweden)

    Ivell Richard

    2003-11-01

    Full Text Available Abstract Background The amplification of RNA with the T7 system is a widely used technique for obtaining increased amounts of RNA starting from limited material. The amplified RNA (aRNA) can subsequently be used for microarray hybridizations, ensuring sufficient signal for image analysis. We describe here an amplification-time-dependent degradation of aRNA in prolonged standard T7 amplification protocols, which results in lower-average-size aRNA and decreased yields. Results A time-dependent degradation of amplified RNA (aRNA) could be observed when using the classical "Eberwine" T7 amplification method. When the amplification was conducted for more than 4 hours, the resulting aRNA showed a significantly smaller size distribution on gel electrophoresis and a concomitant reduction of aRNA yield. The degradation of aRNA could be correlated to the presence of the T7 RNA polymerase in the amplification cocktail. The aRNA degradation resulted in a strong bias in microarray hybridizations, with a high coefficient of variation and a significant reduction of the signals of certain transcripts that seem to be susceptible to this RNA-degrading activity. The time-dependent degradation of these transcripts was verified by a real-time PCR approach. Conclusions It is important to perform amplifications for no longer than 4 hours, as there is a characteristic 'quality vs. yield' trade-off for longer amplification times. When conducting microarray hybridizations it is important not to compare results obtained with aRNA from different amplification times.

  5. Transuranic waste characterization sampling and analysis methods manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-01

    The Transuranic Waste Characterization Sampling and Analysis Methods Manual (Methods Manual) provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Transuranic Waste Characterization Quality Assurance Program Plan (QAPP) for the Waste Isolation Pilot Plant (WIPP) Transuranic (TRU) Waste Characterization Program (the Program). This Methods Manual includes all of the testing, sampling, and analytical methodologies accepted by DOE for use in implementing the Program requirements specified in the QAPP.

  6. Sequential biases in accumulating evidence

    Science.gov (United States)

    Huggins, Richard; Dogo, Samson Henry

    2015-01-01

    Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
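    An illustrative simulation (not the authors' code) of sequential decision bias under equal-weight fixed-effect pooling: running a second study only when the current estimate looks large makes the pooled estimate systematically miss the true effect, here downward.

```python
import numpy as np

rng = np.random.default_rng(2)
true_effect, se, reps = 0.2, 0.1, 100_000
pooled = []
for _ in range(reps):
    est1 = rng.normal(true_effect, se)
    if est1 > true_effect:         # run a second study only if the estimate looks large
        est2 = rng.normal(true_effect, se)
        pooled.append((est1 + est2) / 2)   # equal-weight fixed-effect pooling
    else:
        pooled.append(est1)
# The pooled mean comes out below the true effect (about 0.18 here).
print(f"mean pooled estimate: {np.mean(pooled):.3f} (true effect {true_effect})")
```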

  7. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Directory of Open Access Journals (Sweden)

    S. Dohe

    2013-08-01

    Full Text Available The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.

  8. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Science.gov (United States)

    Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.

    2013-08-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
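    Step one of the correction scheme lends itself to a simple grid search. The sketch below is our own simplification (linear interpolation stands in for proper interferogram resampling): it scans candidate sampling shifts and keeps the one that minimizes the residual signal in a spectral window that should be fully absorbed.

```python
import numpy as np

def residual_in_absorbed_window(ifg, shift, window):
    grid = np.arange(ifg.size)
    resampled = np.interp(grid + shift, grid, ifg)   # shifted sampling comb
    spectrum = np.abs(np.fft.rfft(resampled))
    return spectrum[window].mean()                   # should be ~0 if sampling is exact

def estimate_lse(ifg, window, shifts=np.linspace(-0.5, 0.5, 201)):
    costs = [residual_in_absorbed_window(ifg, s, window) for s in shifts]
    return shifts[int(np.argmin(costs))]

# usage (hypothetical window of fully absorbed spectral bins):
# lse = estimate_lse(interferogram, window=slice(1200, 1300))
```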

  9. Sampling and analysis methods for geothermal fluids and gases

    Energy Technology Data Exchange (ETDEWEB)

    Watson, J.C.

    1978-07-01

    The sampling procedures for geothermal fluids and gases include: sampling hot springs, fumaroles, etc.; sampling condensed brine and entrained gases; sampling steam-lines; low pressure separator systems; high pressure separator systems; two-phase sampling; downhole samplers; and miscellaneous methods. The recommended analytical methods compiled here cover physical properties, dissolved solids, and dissolved and entrained gases. The sequences of methods listed for each parameter are: wet chemical, gravimetric, colorimetric, electrode, atomic absorption, flame emission, x-ray fluorescence, inductively coupled plasma-atomic emission spectroscopy, ion exchange chromatography, spark source mass spectrometry, neutron activation analysis, and emission spectrometry. Material on correction of brine component concentrations for steam loss during flashing is presented. (MHR)

  10. Research into the sampling methods of digital beam position measurement

    Institute of Scientific and Technical Information of China (English)

    邬维浩; 赵雷; 陈二雷; 刘树彬; 安琪

    2015-01-01

    A fully digital beam position monitoring (DBPM) system has been designed for SSRF (Shanghai Synchrotron Radiation Facility). As the analog-to-digital converter (ADC) is a crucial part of the DBPM system, the sampling methods should be studied to achieve optimum performance. Different sampling modes were used and compared through tests. Long-term variation among the four sampling channels, which would introduce errors in beam position measurement, is investigated. An interleaved distribution scheme was designed to address this issue. To evaluate the sampling methods, in-beam tests were conducted at SSRF. Test results indicate that with proper sampling methods, a turn-by-turn (TBT) position resolution better than 1 µm is achieved, and the slow-acquisition (SA) position resolution is improved from 4.28 µm to 0.17 µm.
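    The abstract does not give the position formula; for context, the textbook difference-over-sum estimate for a four-button BPM pickup (with an assumed calibration constant k, in mm) is:

```python
def beam_position(a: float, b: float, c: float, d: float, k: float = 10.0):
    """a..d are the four button amplitudes; returns (x, y) in mm."""
    s = a + b + c + d
    x = k * ((a + d) - (b + c)) / s
    y = k * ((a + b) - (c + d)) / s
    return x, y

# A centred beam gives (0, 0); asymmetric amplitudes shift the estimate.
print(beam_position(1.0, 1.0, 1.0, 1.0))
print(beam_position(1.1, 0.9, 0.9, 1.1))
```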

  11. Selection bias in dynamically-measured super-massive black hole samples: consequences for pulsar timing arrays

    CERN Document Server

    Sesana, A; Bernardi, M; Sheth, R K

    2016-01-01

    Supermassive black hole -- host galaxy relations are key to the computation of the expected gravitational wave background (GWB) in the pulsar timing array (PTA) frequency band. It has been recently pointed out that standard relations adopted in GWB computations are in fact biased high. We show that when this selection bias is taken into account, the expected GWB in the PTA band is a factor of about three smaller than previously estimated. Compared to other scaling relations recently published in the literature, the median amplitude of the signal at $f = 1$ yr$^{-1}$ drops from $1.3\times10^{-15}$ to $4\times10^{-16}$. Although this solves any potential tension between theoretical predictions and recent PTA limits without invoking other dynamical effects (such as stalling, eccentricity or strong coupling with the galactic environment), it also makes the GWB detection more challenging.

  12. Effect of additional sample bias in Meshed Plasma Immersion Ion Deposition (MPIID) on microstructural, surface and mechanical properties of Si-DLC films

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Mingzhong [State Key Laboratory of Advanced Welding & Joining, Harbin Institute of Technology, Harbin 150001 (China); School of Materials Science & Engineering, Jiamusi University, Jiamusi 154007 (China)]; Tian, Xiubo, E-mail: xiubotian@163.com [State Key Laboratory of Advanced Welding & Joining, Harbin Institute of Technology, Harbin 150001 (China)]; Li, Muqin [School of Materials Science & Engineering, Jiamusi University, Jiamusi 154007 (China)]; Gong, Chunzhi [State Key Laboratory of Advanced Welding & Joining, Harbin Institute of Technology, Harbin 150001 (China)]; Wei, Ronghua [Southwest Research Institute, San Antonio, TX 78238 (United States)]

    2016-07-15

    Highlights: • A novel Meshed Plasma Immersion Ion Deposition is proposed. • The deposited Si-DLC films possess denser structures and a high deposition rate. • This is attributed to ion bombardment of the deposited films. • The ion energy can be independently controlled by an additional bias (novel set-up). - Abstract: Meshed Plasma Immersion Ion Deposition (MPIID) using cage-like hollow cathode discharge is a modified process of conventional PIID, but it allows the deposition of thick diamond-like carbon (DLC) films (up to 50 μm) at a high deposition rate (up to 6.5 μm/h). To further improve the DLC film properties, a new approach to the MPIID process is proposed, in which the energy of ions incident on the sample surface can be independently controlled by an additional voltage applied between the samples and the meshed metal cage. In this study, the meshed cage was biased with a pulsed DC power supply at −1350 V peak voltage for the plasma generation, while the samples inside the cage were biased with a DC voltage from 0 V to −500 V with respect to the cage to study its effect. Si-DLC films were synthesized with a mixture of Ar, C₂H₂ and tetramethylsilane (TMS). After the depositions, scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS), Raman spectroscopy and nanoindentation were used to study the morphology, surface roughness, chemical bonding and structure, and the surface hardness as well as the modulus of elasticity of the Si-DLC films. It was observed that the intense ion bombardment significantly densified the films, reduced the surface roughness, reduced the H and Si contents, and increased the nanohardness (H) and modulus of elasticity (E), whereas the deposition rate decreased slightly. Using the H and E data, high values of H³/E² and H/E were obtained for the biased films, indicating the potential excellent mechanical and tribological properties of the films. In this …

  13. A comprehensive comparison of perpendicular distance sampling methods for sampling downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams

    2013-01-01

    Many new methods for sampling down coarse woody debris have been proposed in the last dozen or so years. One of the most promising in terms of field application, perpendicular distance sampling (PDS), has several variants that have been progressively introduced in the literature. In this study, we provide an overview of the different PDS variants and comprehensive...

  14. A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases

    Science.gov (United States)

    Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.

    2013-01-01

    How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent discrimination capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship, due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses. PMID:23326357

  15. A novel method to handle the effect of uneven sampling effort in biodiversity databases.

    Science.gov (United States)

    Pardo, Iker; Pata, María P; Gómez, Daniel; García, María B

    2013-01-01

    How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent discrimination capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship, due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses.
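    FIDEGAM's internals are not detailed in these records. As a generic baseline in the same spirit, survey completeness can be scored from the final slope of a species accumulation curve (a simplistic variant of our own, not the published method):

```python
import numpy as np

def completeness(occurrences: np.ndarray) -> float:
    """occurrences: records-by-species 0/1 matrix, rows in sampling order."""
    seen = np.cumsum(occurrences, axis=0) > 0
    richness = seen.sum(axis=1)               # species accumulation curve
    tail = max(1, len(richness) // 10)        # final 10% of the records
    slope = (richness[-1] - richness[-tail]) / tail
    return 1.0 - min(1.0, float(slope))       # near 1 = curve has flattened
```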

  16. Nominal Weights Mean Equating: A Method for Very Small Samples

    Science.gov (United States)

    Babcock, Ben; Albano, Anthony; Raymond, Mark

    2012-01-01

    The authors introduced nominal weights mean equating, a simplified version of Tucker equating, as an alternative for dealing with very small samples. The authors then conducted three simulation studies to compare nominal weights mean equating to six other equating methods under the nonequivalent groups anchor test design with sample sizes of 20,…

  17. Field evaluation of personal sampling methods for multiple bioaerosols.

    Directory of Open Access Journals (Sweden)

    Chi-Hsun Wang

    Full Text Available Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  18. Comparison of Methods for Soil Sampling and Carbon Content Determination

    Directory of Open Access Journals (Sweden)

    Željka Zgorelec

    2011-03-01

    Full Text Available In this paper methods for sampling and analysis of total carbon in soil were compared. Soil sampling was done according to a sampling scheme following agricultural soil monitoring recommendations. Soil samples were collected as single (four individual probe patterns) and composite (16 individual probe patterns) samples from agricultural soil. In the soil samples, the mass ratio of total soil carbon was analyzed by the dry combustion method (according to Dumas; HRN ISO 10694:2004) in the Analytical Laboratory of the Department of General Agronomy, Faculty of Agriculture University of Zagreb (FAZ), and by the oxidation method with chromium sulfuric acid (modified HRN ISO 14235:2004) in the Analytical Laboratory of the Croatian Center for Agriculture, Food and Rural Affairs, Department of Soil and Land Conservation (ZZT). The observed data showed very strong correlation (r = 0.8943; n = 42) between the two studied methods of analysis. Very strong correlation was also noted between the different sampling procedures for single and composite samples in both laboratories, with coefficients of correlation of 0.9697 and 0.9950 (n = 8), respectively.

  19. Field evaluation of personal sampling methods for multiple bioaerosols.

    Science.gov (United States)

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  1. A distance limited method for sampling downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams

    2012-01-01

    A new sampling method for down coarse woody debris is proposed based on limiting the perpendicular distance from individual pieces to a randomly chosen sample point. Two approaches are presented that allow different protocols to be used to determine field measurements; estimators for each protocol are also developed. Both protocols are compared via simulation against...

  2. Adaptive cluster sampling: An efficient method for assessing inconspicuous species

    Science.gov (United States)

    Andrea M. Silletti; Joan Walker

    2003-01-01

    Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...

  3. A microRNA isolation method from clinical samples

    Directory of Open Access Journals (Sweden)

    Sepideh Zununi Vahed

    2016-03-01

    Conclusion: The current isolation method can be applied to most clinical samples, including cells, formalin-fixed and paraffin-embedded (FFPE) tissues, and even body fluids, with wide applicability in molecular biology investigations.

  4. [Biological Advisory Subcommittee Sampling Methods : Results, Resolutions, and Correspondences : 2002]

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This document contains a variety of information concerning Biological Advisory Subcommittee sampling methods at the Rocky Mountain Arsenal Refuge in 2002. Multiple...

  5. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    Science.gov (United States)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, their use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, the PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all …
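    For reference, hedged sketches of two of the precipitation corrections compared in this study, linear scaling (LS) and empirical quantile mapping (QM); implementation details such as monthly stratification and wet-day thresholds are omitted here.

```python
import numpy as np

def linear_scaling(sim_hist, obs_hist, sim):
    """Scale simulated precipitation so its historical mean matches observations."""
    return sim * (obs_hist.mean() / sim_hist.mean())

def quantile_mapping(sim_hist, obs_hist, sim):
    """Map each simulated value through the observed empirical distribution."""
    # Empirical CDF position of each value within the historical simulation,
    # then the observed quantile at that same position.
    ranks = np.searchsorted(np.sort(sim_hist), sim, side="right") / len(sim_hist)
    return np.quantile(obs_hist, np.clip(ranks, 0.0, 1.0))
```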

  6. Separation methods for taurine analysis in biological samples.

    Science.gov (United States)

    Mou, Shifen; Ding, Xiaojing; Liu, Yongjian

    2002-12-05

    Taurine plays an important role in a variety of physiological functions, pharmacological actions and pathological conditions. Many methods for taurine analysis, therefore, have been reported to monitor its levels in biological samples. This review discusses the following techniques: sample preparation; separation and determination methods including high-performance liquid chromatography, gas chromatography, ion chromatography, capillary electrophoresis and hyphenation procedures. It covers articles published between 1990 and 2001.

  7. Comparison of non-landslide sampling strategies to counteract inventory-based biases within national-scale statistical landslide susceptibility models

    Science.gov (United States)

    Lima, Pedro; Steger, Stefan; Glade, Thomas

    2017-04-01

    Landslides can represent a significant threat to people and infrastructure in hilly and mountainous landscapes worldwide. Understanding and predicting these geomorphic processes is crucial to avoid economic losses or even casualties among people and damage to their properties. Statistically based landslide susceptibility models are well known for being highly reliant on the quality, representativeness and availability of input data. In this context, several studies indicate that the landslide inventory represents the most important input data, yet each landslide mapping technique or data collection has its drawbacks. Consequently, biased landslide inventories may commonly be introduced into statistical models, especially at regional or even national scale. It falls to the researcher to be aware of potential limitations and to design strategies that avoid or reduce the propagation of input data errors and biases into the modelling outcomes. Previous studies have proven that such erroneous landslide inventories may lead to unrealistic landslide susceptibility maps. We assume that one possibility to tackle systematic landslide inventory-based biases is to concentrate on sampling strategies for the distribution of non-landslide locations. For this purpose, we test an approach for the Austrian territory that relies on a modified non-landslide sampling strategy instead of the traditionally applied random sampling. The way non-landslide locations are represented (e.g. equally over the area, or only within those areas where mapping campaigns have been conducted) is expected to be important in reducing a bias-induced over- or underestimation of landslide susceptibility within specific areas, since presumably every landslide inventory is systematically incomplete, especially in areas where no mapping campaign was previously conducted. This also applies to the inventory currently available for the Austrian territory …

  8. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, each with an associated cost, to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, which is a good proxy for paleoclimatic variability in monsoon activity around the maritime continent.
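    A loose, greedy illustration of the transformation-cost idea; the paper's actual operation set and cost optimization are richer. Nearby points in consecutive segments are matched with time-shift and amplitude costs, and unmatched points incur a fixed insertion/deletion cost (all cost values here are arbitrary choices).

```python
import numpy as np

def transformation_cost(t_a, x_a, t_b, x_b, c_shift=1.0, c_amp=1.0, c_indel=2.0):
    """Cost to turn segment (t_a, x_a) into (t_b, x_b); inputs are 1-D arrays."""
    used, cost = set(), 0.0
    for t, x in zip(t_a, x_a):
        j = int(np.argmin(np.abs(t_b - t)))      # nearest point in the target segment
        if j not in used:
            used.add(j)
            cost += c_shift * abs(t_b[j] - t) + c_amp * abs(x_b[j] - x)
        else:
            cost += c_indel                       # no free partner: delete the point
    cost += c_indel * (len(t_b) - len(used))      # create the still-unmatched points
    return cost

# usage on two short, irregularly sampled segments:
print(transformation_cost(np.array([0.0, 1.2, 2.1]), np.array([0.5, 0.7, 0.2]),
                          np.array([0.1, 1.0, 2.4, 3.0]), np.array([0.4, 0.8, 0.1, 0.3])))
```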

  9. Using Category Structures to Test Iterated Learning as a Method for Identifying Inductive Biases

    Science.gov (United States)

    Griffiths, Thomas L.; Christian, Brian R.; Kalish, Michael L.

    2008-01-01

    Many of the problems studied in cognitive science are inductive problems, requiring people to evaluate hypotheses in the light of data. The key to solving these problems successfully is having the right inductive biases--assumptions about the world that make it possible to choose between hypotheses that are equally consistent with the observed…

  10. On-capillary sample cleanup method for the electrophoretic determination of carbohydrates in juice samples.

    Science.gov (United States)

    Morales-Cid, Gabriel; Simonet, Bartolomé M; Cárdenas, Soledad; Valcárcel, Miguel

    2007-05-01

    On many occasions, sample treatment is a critical step in electrophoretic analysis. As an alternative to batch procedures, in this work a new strategy is presented with a view to developing an on-capillary sample cleanup method. This strategy is based on the partial filling of the capillary with carboxylated single-walled carbon nanotubes (c-SWNTs). The nanoparticles retain interferences from the matrix, allowing the determination and quantification of carbohydrates (viz. glucose, maltose and fructose). The precision of the method for the analysis of real samples ranged from 5.3 to 6.4%. The proposed method was compared with a method based on batch filtration of the juice sample through diatomaceous earth and further electrophoretic determination; this method was also validated in this work, with RSDs ranging from 5.1 to 6%. The results obtained by both methods were statistically comparable, demonstrating the accuracy of the proposed methods and their effectiveness. Electrophoretic separation of carbohydrates was achieved using 200 mM borate solution as a buffer at pH 9.5 and applying 15 kV. During separation, the capillary temperature was kept constant at 40 °C. For the on-capillary cleanup method, a solution containing 50 mg/L of c-SWNTs prepared in 300 mM borate solution at pH 9.5 was introduced for 60 s into the capillary just before sample introduction. For the electrophoretic analysis of samples cleaned in batch with diatomaceous earth, it is also recommended to introduce into the capillary, just before the sample, a 300 mM borate solution, as it enhances the sensitivity and electrophoretic resolution.

  11. A quantitative sampling method for Oncomelania quadrasi by filter paper.

    Science.gov (United States)

    Tanaka, H; Santos, M J; Matsuda, H; Yasuraoka, K; Santos, A T

    1975-08-01

    Filter paper was found to attract Oncomelania quadrasi in waters in the same way as fallen dried banana leaves, although fewer snails of other species were collected on the former than on the latter. Snails were collected in limited areas using a tube sampler (85 cm² cross-sectional area) and a filter-paper sampler (20 × 20 cm). Each sheet of filter paper was placed close to the spot where a tube sample was taken, and recovered after 24 hours. At each sampling, 30 samples were taken by each method in an area, and sampling was carried out four times. The correlation between the number of snails collected by the tube and that by filter paper was studied. The ratio of the snail counts by the tube sampler to those by the filter paper was 1.18. A loose correlation was observed between the snail counts of the two methods, as shown by the correlation coefficient r = 0.6502. The formulas for the regression line were Y = 0.77X + 1.6 and X = 0.55Y + 1.35 for the three experiments, where Y is the number of snails collected by tube sampling and X is the number of snails collected on the sheet of filter paper. The type of snail distribution was studied in the 30 samples taken by each method and was observed to be nearly the same for both sampling methods. All sampling data were found to fit the negative binomial distribution, with the value of the constant k in (q − p)^−k varying widely, from 0.5775 to 5.9186. In each experiment, the constant k was always larger in tube sampling than in filter-paper sampling. This indicates that the uneven distribution of snails on the soil surface becomes more conspicuous with filter-paper sampling.
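    The negative binomial fit can be checked with the standard method-of-moments estimator for the dispersion constant k, computed from the sample mean and variance; the counts below are invented, not the paper's data.

```python
import numpy as np

counts = np.array([0, 1, 0, 3, 2, 0, 5, 1, 0, 2, 7, 1])  # illustrative counts per sample
m, v = counts.mean(), counts.var(ddof=1)

# Method of moments for the negative binomial: k = mean^2 / (variance - mean).
# v <= m would indicate no over-dispersion (Poisson-like or under-dispersed counts).
k = m**2 / (v - m) if v > m else float("inf")
print(f"mean={m:.2f}  variance={v:.2f}  estimated k={k:.3f}")
```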

  12. Capillary microextraction: A new method for sampling methamphetamine vapour.

    Science.gov (United States)

    Nair, M V; Miskelly, G M

    2016-11-01

    Clandestine laboratories pose a serious health risk to first responders, investigators, decontamination companies, and the public, who may be inadvertently exposed to methamphetamine and other chemicals used in its manufacture. Therefore there is an urgent need for reliable methods to detect and measure methamphetamine at such sites. The most common method for determining methamphetamine contamination at former clandestine laboratory sites is selected surface wipe sampling, followed by analysis with gas chromatography-mass spectrometry (GC-MS). We are investigating the use of sampling for methamphetamine vapour to complement such wipe sampling. In this study, we report the use of capillary microextraction (CME) devices for sampling airborne methamphetamine, and compare their sampling efficiency with a previously reported dynamic SPME method. The CME devices consisted of PDMS-coated glass filter strips inside a glass tube. The devices were used to dynamically sample methamphetamine vapour in the range of 0.42-4.2 μg m^-3, generated by a custom-built vapour dosing system, for 1-15 min, and methamphetamine was analysed using a GC-MS fitted with a ChromatoProbe thermal desorption unit. The devices showed good reproducibility (RSD …) and a relationship between sampling times and peak area which can be utilised for calibration. Under identical sampling conditions, the CME devices were approximately 30 times more sensitive than the dynamic SPME method. The CME devices could be stored for up to 3 days after sampling prior to analysis. Consecutive sampling of methamphetamine and its isotopic substitute, d-9 methamphetamine, showed no competitive displacement. This suggests that CME devices, pre-loaded with an internal standard, could be a feasible method for sampling airborne methamphetamine at former clandestine laboratories. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Statistical methods to correct for verification bias in diagnostic studies are inadequate when there are few false negatives: a simulation study

    OpenAIRE

    Vickers Andrew J; Cronin Angel M

    2008-01-01

    Abstract Background A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives. Methods A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the...

  14. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    Science.gov (United States)

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. It is difficult to approach this kind of population survey-methodologically, because the response rate is low and members are not quite honest in their responses when probability sampling is used. The only alternative known to address the problems caused by previous methods such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when we survey a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the bias of RDS's chain-referral sampling tends to diminish as the sample gets bigger, and the estimate stabilizes as the waves progress. This shows that the final sample can be essentially independent of the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative that improves upon both key informant sampling and ethnographic surveys, and it should be applied to various cases domestically as well.
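    For context, the widely used RDS-II (Volz-Heckathorn) estimator weights each respondent by the inverse of their reported network degree, on the reasoning that inclusion probability grows roughly with degree; the abstract does not name this estimator, and the data below are invented.

```python
import numpy as np

degrees   = np.array([3, 10, 5, 2, 8, 4])   # reported personal network sizes
has_trait = np.array([1, 0, 1, 1, 0, 1])    # indicator for the trait of interest

w = 1.0 / degrees                            # down-weight highly connected respondents
estimate = (w * has_trait).sum() / w.sum()
print(f"estimated population proportion: {estimate:.3f}")
```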

  15. SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS

    Science.gov (United States)

    Frisch, H. P.

    1994-01-01

    SAMSAN was developed to aid the control system analyst by providing a self-consistent set of computer algorithms that support large-order control system design and evaluation studies, with an emphasis placed on sampled system analysis. Control system analysts have access to a vast array of published algorithms to solve an equally large spectrum of controls-related computational problems. The analyst usually spends considerable time and effort bringing these published algorithms to an integrated operational status and often finds them less general than desired. SAMSAN reduces the burden on the analyst by providing a set of algorithms that have been well tested and documented, and that can be readily integrated for solving control system problems. Algorithm selection for SAMSAN has been biased toward numerical accuracy for large-order systems, with computational speed and portability being considered important but not paramount. In addition to containing relevant subroutines from EISPACK for eigen-analysis and from LINPACK for the solution of linear systems and related problems, SAMSAN contains the following not so generally available capabilities: 1) reduction of a real non-symmetric matrix to block diagonal form via a real similarity transformation matrix which is well conditioned with respect to inversion, 2) solution of the generalized eigenvalue problem with balancing and grading, 3) computation of all zeros of the determinant of a matrix of polynomials, 4) matrix exponentiation and the evaluation of integrals involving the matrix exponential, with the option to first block diagonalize, 5) root locus and frequency response for single-variable transfer functions in the S, Z, and W domains, 6) several methods of computing zeros for linear systems, and 7) the ability to generate documentation "on demand". All matrix operations in the SAMSAN algorithms assume non-symmetric matrices with real double-precision elements. There is no fixed size limit on any matrix in any …

  16. Integrated Hamiltonian sampling: a simple and versatile method for free energy simulations and conformational sampling.

    Science.gov (United States)

    Mori, Toshifumi; Hamers, Robert J; Pedersen, Joel A; Cui, Qiang

    2014-07-17

    Motivated by specific applications and the recent work of Gao and co-workers on integrated tempering sampling (ITS), we have developed a novel sampling approach referred to as integrated Hamiltonian sampling (IHS). IHS is straightforward to implement and complementary to existing methods for free energy simulation and enhanced configurational sampling. The method carries out sampling using an effective Hamiltonian constructed by integrating the Boltzmann distributions of a series of Hamiltonians. By judiciously selecting the weights of the different Hamiltonians, one achieves rapid transitions among the energy landscapes that underlie different Hamiltonians and therefore an efficient sampling of important regions of the conformational space. Along this line, IHS shares similar motivations as the enveloping distribution sampling (EDS) approach of van Gunsteren and co-workers, although the ways that distributions of different Hamiltonians are integrated are rather different in IHS and EDS. Specifically, we report efficient ways for determining the weights using a combination of histogram flattening and weighted histogram analysis approaches, which make it straightforward to include many end-state and intermediate Hamiltonians in IHS so as to enhance its flexibility. Using several relatively simple condensed phase examples, we illustrate the implementation and application of IHS as well as potential developments for the near future. The relation of IHS to several related sampling methods such as Hamiltonian replica exchange molecular dynamics and λ-dynamics is also briefly discussed.
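    The core construction can be written compactly: IHS samples on an effective potential obtained by integrating the Boltzmann factors of the component Hamiltonians. A one-dimensional toy version, with potentials and weights of our own choosing:

```python
import numpy as np

kT = 1.0
weights = (0.5, 0.5)                        # illustrative integration weights
def u1(x): return (x - 1.0) ** 2            # end-state potential 1
def u2(x): return (x + 1.0) ** 2 + 0.5      # end-state potential 2

def u_eff(x):
    """U_eff(x) = -kT * ln( sum_i w_i * exp(-U_i(x)/kT) )."""
    boltz = weights[0] * np.exp(-u1(x) / kT) + weights[1] * np.exp(-u2(x) / kT)
    return -kT * np.log(boltz)

x = np.linspace(-3.0, 3.0, 7)
print(np.round(u_eff(x), 3))   # the barrier between the two wells is reduced
```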

  17. System and method for measuring fluorescence of a sample

    Energy Technology Data Exchange (ETDEWEB)

    Riot, Vincent J.

    2017-06-27

    The present disclosure provides a system and a method for measuring the fluorescence of a sample. The sample may be a polymerase chain reaction (PCR) array, a loop-mediated isothermal amplification array, etc. LEDs are used to excite the sample, and a photodiode is used to collect the sample's fluorescence. An electronic offset signal is used to reduce the effects of background fluorescence and the noise of the measurement system. An integrator integrates the difference between the output of the photodiode and the electronic offset signal over a given period of time. The resulting integral is then converted into the digital domain for further processing and storage.

  18. System and method for measuring fluorescence of a sample

    Energy Technology Data Exchange (ETDEWEB)

    Riot, Vincent J

    2015-03-24

    The present disclosure provides a system and a method for measuring the fluorescence of a sample. The sample may be a polymerase chain reaction (PCR) array, a loop-mediated isothermal amplification array, etc. LEDs are used to excite the sample, and a photodiode is used to collect the sample's fluorescence. An electronic offset signal is used to reduce the effects of background fluorescence and the noise of the measurement system. An integrator integrates the difference between the output of the photodiode and the electronic offset signal over a given period of time. The resulting integral is then converted into the digital domain for further processing and storage.

  19. Soil separator and sampler and method of sampling

    Science.gov (United States)

    O'Brien, Barry H [Idaho Falls, ID]; Ritter, Paul D [Idaho Falls, ID]

    2010-02-16

    A soil sampler includes a fluidized bed for receiving a soil sample. The fluidized bed may be in communication with a vacuum for drawing air through the fluidized bed and suspending particulate matter of the soil sample in the air. In a method of sampling, the air may be drawn across a filter, separating the particulate matter. Optionally, a baffle or a cyclone may be included within the fluidized bed for disentrainment, or dedusting, so only the finest particulate matter, including asbestos, will be trapped on the filter. The filter may be removable, and may be tested to determine the content of asbestos and other hazardous particulate matter in the soil sample.

  20. Method of separate determination of high-ohmic sample resistance and contact resistance

    Directory of Open Access Journals (Sweden)

    Vadim A. Golubiatnikov

    2015-09-01

    Full Text Available A method for the separate determination of the volume resistance and the contact resistance of a two-pole sample is suggested. The method is applicable to high-ohmic semiconductor samples: semi-insulating gallium arsenide, detector cadmium-zinc telluride (CZT), etc. It is based on illuminating the near-contact region with monochromatic radiation of variable intensity from light-emitting diodes with quantum energies exceeding the band gap of the material. One measures the dependence of the sample photocurrent on the light-emitting-diode current and finds the linear portion of this dependence. Extrapolation of this linear portion to the Y-axis gives the cut-off current. As the bias voltage is known, it is easy to calculate the sample volume resistance. Then, using the dark current value, one can determine the total contact resistance. The method was tested on n-type semi-insulating GaAs. The contact resistance value was shown to be approximately equal to the sample volume resistance. Thus, the influence of contacts must be taken into account when electrophysical data are analyzed.
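    The arithmetic of the separation, with invented numbers: the extrapolated cut-off current fixes the volume resistance alone, while the dark current reflects volume and contacts in series, so their difference gives the contact resistance.

```python
bias_v   = 100.0     # applied bias (V)
i_cutoff = 1.0e-8    # extrapolated cut-off current (A)
i_dark   = 0.5e-8    # measured dark current (A)

r_volume  = bias_v / i_cutoff        # volume resistance alone
r_total   = bias_v / i_dark          # volume plus contacts in series
r_contact = r_total - r_volume
print(f"R_volume = {r_volume:.2e} ohm, R_contact = {r_contact:.2e} ohm")
```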

  1. Reliability analysis method for slope stability based on sample weight

    Directory of Open Access Journals (Sweden)

    Zhi-gang YANG

    2009-09-01

    Full Text Available The single-safety-factor criterion for slope stability evaluation, derived from the rigid limit equilibrium method or the finite element method (FEM), may miss some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of the random variables, a minimal sample size for every random variable is extracted according to a small-sample t-distribution under a certain expected value, and the weight coefficient of each extracted sample is considered to be its contribution to the random variable. Then, the weight coefficients of the random sample combinations are determined using the Bayes formula, and different sample combinations are taken as the input for slope stability analysis. Given the one-to-one mapping between an input sample combination and the output safety coefficient, the reliability index of slope stability can be obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.

  2. Using re-sampling methods in mortality studies.

    Directory of Open Access Journals (Sweden)

    Igor Itskovich

    Full Text Available Traditional methods of computing standardized mortality ratios (SMR) in mortality studies rely upon a number of conventional statistical propositions to estimate confidence intervals for the obtained values. Those propositions include a common but arbitrary choice of the confidence level and the assumption that the observed number of deaths in the test sample is a purely random quantity. The latter assumption may not be fully justified for a series of periodic "overlapping" studies. We propose a new approach to evaluating the SMR, along with its confidence interval, based on a simple re-sampling technique. The proposed method is straightforward and requires neither the use of the above assumptions nor any rigorous technique, employed by modern re-sampling theory, for selection of a sample set. Instead, we include in the re-sampling analysis all possible samples that correspond to the specified time window of the study. As a result, directly obtained confidence intervals for repeated overlapping studies may be tighter than those yielded by conventional methods. The proposed method is illustrated by evaluating mortality due to a hypothetical risk factor in a life insurance cohort. With this method, SMR values can be forecast more precisely than with the traditional approach. As a result, the corresponding risk assessment would have smaller uncertainties.
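    The abstract leaves the mechanics implicit; one plausible reading, sketched below with invented counts, computes the SMR (observed over expected deaths) in every overlapping window of the study period and takes the spread of those values as the uncertainty, rather than assuming a Poisson observed count.

```python
import numpy as np

observed = np.array([4, 6, 3, 5, 7, 2, 5, 6])   # deaths per year (invented)
expected = np.array([5, 5, 4, 5, 6, 3, 5, 5])   # expected deaths per year
width = 5                                        # window length in years

# SMR for every overlapping window of the given width.
smrs = [observed[i:i + width].sum() / expected[i:i + width].sum()
        for i in range(len(observed) - width + 1)]
print("window SMRs:", np.round(smrs, 2),
      " spread:", round(max(smrs) - min(smrs), 2))
```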

  3. Method of monitoring core sampling during borehole drilling

    Energy Technology Data Exchange (ETDEWEB)

    Duckworth, A.; Barnes, D.; Gennings, T.L.

    1991-04-30

    This paper describes a method of monitoring the acquisition of a core sample obtained from a coring tool on a drillstring in a borehole. It comprises: measuring the weight of a portion of the drillstring at a pre-selected borehole depth when the drillstring is off bottom, to define a first measurement; drilling to acquire a core sample; measuring the weight of a portion of the drillstring at a pre-selected borehole depth when the drillstring is off bottom, to define a second measurement; determining the difference between the first and second measurements, the difference corresponding to the weight of the core sample; and comparing the measured core sample weight to a calculated weight of a full core sample to determine whether the core sample has been acquired.

  4. 7 CFR 58.812 - Methods of sample analysis.

    Science.gov (United States)

    2010-01-01

    Samples shall be analyzed according to the applicable methods of the Agricultural Marketing Service, Dairy Programs, or the Official Methods of Analysis of the Association of Official Analytical Chemists. (7 CFR 58.812, Methods of sample analysis; Regulations of the Department of Agriculture, Agricultural Marketing Service; 2010-01-01.)

  5. Heat-capacity measurements on small samples: The hybrid method

    NARCIS (Netherlands)

    Klaasse, J.C.P.; Brück, E.H.

    2008-01-01

    A newly developed method is presented for measuring heat capacities on small samples, particularly where thermal isolation is not sufficient for the use of the traditional semiadiabatic heat-pulse technique. This "hybrid technique" is a modification of this heat-pulse method in case the temperature …

  6. Clean up or mess up: the effect of sampling biases on measurements of degree distributions in mobile phone datasets

    CERN Document Server

    Decuyper, Adeline; Traag, Vincent; Blondel, Vincent D; Delvenne, Jean-Charles

    2016-01-01

    Mobile phone data have been extensively used in recent years to study social behavior. However, most of these studies are based on only partial data whose coverage is limited both in space and time. In this paper, we point out that the bias due to the limited coverage in time may have an important influence on the results of the analyses performed. In particular, we observe significant differences, both qualitative and quantitative, in the degree distribution of the network, depending on the way the dataset is pre-processed, and we present a possible explanation for the emergence of Double Pareto LogNormal (DPLN) degree distributions in temporal data.

  8. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio

    Science.gov (United States)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco

    2016-01-01

    Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient the sampling methods commonly used in biodiversity surveys are at estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio, and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample the sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns. PMID:27441554
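
    A stripped-down version of such a virtual-ecologist experiment fits in a few lines; the sketch below (hypothetical detection probabilities, equal true sex ratio) shows the bias from sex-specific detectability shrinking as survey days accumulate.

      import numpy as np

      rng = np.random.default_rng(1)

      def estimated_sex_ratio(n_males=500, n_females=500,
                              p_detect_male=0.6, p_detect_female=0.4,
                              n_days=20):
          """Each day every individual is captured with a sex-specific
          probability; recaptures are recognized, as in mark-recapture."""
          seen_m = np.zeros(n_males, dtype=bool)
          seen_f = np.zeros(n_females, dtype=bool)
          for _ in range(n_days):
              seen_m |= rng.random(n_males) < p_detect_male
              seen_f |= rng.random(n_females) < p_detect_female
          return seen_m.sum() / (seen_m.sum() + seen_f.sum())

      for days in (1, 5, 20, 100):
          print(days, round(estimated_sex_ratio(n_days=days), 3))
      # Few survey days: the estimate leans toward the more detectable sex;
      # with more effort it converges to the true ratio of 0.5.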

  10. Efficiency of snake sampling methods in the Brazilian semiarid region.

    Science.gov (United States)

    Mesquita, Paula C M D; Passos, Daniel C; Cechin, Sonia Z

    2013-09-01

    The choice of sampling methods is a crucial step in every field survey in herpetology. In countries where time and financial support are limited, this choice is critical. The methods used to sample snakes often lack objective criteria, and tradition has apparently weighed more heavily in the choice than performance. Consequently, studies using non-standardized methods are frequent in the literature. We compared four methods commonly used to sample snake assemblages in a semiarid area in Brazil, evaluating the cost-benefit of each method with regard to the number of individuals and species captured, time, and financial investment. We found that pitfall traps were the least effective method in all aspects evaluated, and that they were not complementary to the other methods in terms of species abundance and assemblage structure. We conclude that methods can only be considered complementary if they are standardized to the objectives of the study. The use of pitfall traps in short-term surveys of the snake fauna in areas with shrubby vegetation and stony soil is not recommended.

  11. Comparative Estimation of Genetic Diversity in Population Studies using Molecular Sampling and Traditional Sampling Methods.

    Science.gov (United States)

    Saeb, Amr Tm; David, Satish Kumar

    2014-01-01

    Entomopathogenic nematodes (EPN) are efficient biological pest control agents, but population genetics studies on EPN are scarce. It is therefore of interest to evaluate the molecular sampling method (MSM) against the traditional sampling method (TSM) for accuracy, time needed, and cost effectiveness. The study was conducted at the Mohican Hills golf course in the state of Ohio, where the EPN H. bacteriophora has been monitored for 18 years. The nematode population occupies an area of approximately 3700 m(2) with a density range of 0.25-2 per gram of soil. Genetic diversity of EPN was studied by MSM and TSM using the mitochondrial gene pcox1. MSM recovered cox1 sequences from 88% of samples, compared with only 30% for TSM. All studied genetic polymorphism measures (sequence and haplotype) showed higher levels of genetic diversity for MSM than for TSM. MSM minimizes the chance of amplifying mitochondrial genes from non-target organisms (insects or other contaminating microorganisms). Moreover, it allows the sampling of more individuals with a reliable and credible representative sample size. Thus, MSM outperforms TSM in labour intensity, time consumption, required expertise, and efficiency.

  12. An investigation of the structural transitions between different forms of DNA using the Adaptively Biased (ABMD) and Steered Molecular Dynamics Methods

    Science.gov (United States)

    Moradi, Mahmoud; Babin, Volodymyr; Roland, Christopher; Darden, Thomas A.; Sagui, Celeste

    2008-10-01

    Right-handed A-DNA and B-DNA, along with left-handed Z-DNA, are believed to be the three main biologically active double-helix structures associated with DNA. The free energy differences associated with the A- to B-DNA and B- to Z-DNA transitions in an implicit solvent environment have been investigated using the recently developed Adaptively Biased Molecular Dynamics (ABMD) method, with the RMSD as the collective variable for the former transition, and handedness and radius of gyration as the collective variables for the latter. The ABMD method belongs to the general category of umbrella sampling methods with a time-dependent potential, and allows for an accurate estimation of the free energy barriers associated with the transitions. The results are compared to those obtained using the Steered Molecular Dynamics method, and ultimately are used to gain insight into the microscopics of the DNA transitions.
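
    To make the "umbrella sampling with a time-dependent potential" idea concrete, here is a minimal one-dimensional sketch, closer to plain metadynamics than to ABMD proper: overdamped Langevin dynamics on a double well, with small Gaussians deposited along the trajectory so that the accumulated bias flattens the wells. The negative of the bias then roughly approximates the free energy profile; all parameters are illustrative, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      U = lambda x: (x**2 - 1.0)**2          # double well, minima at x = +/-1
      dU = lambda x: 4.0 * x * (x**2 - 1.0)

      dt, beta = 1e-3, 4.0
      w, sig, stride = 0.05, 0.15, 100       # Gaussian height, width, deposit rate
      centers = []

      def bias_grad(x):
          if not centers:
              return 0.0
          c = np.array(centers)
          return np.sum(-w * (x - c) / sig**2 * np.exp(-(x - c)**2 / (2 * sig**2)))

      x = -1.0
      for step in range(50_000):
          x += -(dU(x) + bias_grad(x)) * dt + np.sqrt(2 * dt / beta) * rng.normal()
          if step % stride == 0:
              centers.append(x)              # the biasing potential grows with time

      # -bias (up to a constant) approximates the free energy, so the
      # well-to-barrier difference of the bias estimates the barrier height.
      grid = np.linspace(-1.5, 1.5, 301)
      c = np.array(centers)
      V = (w * np.exp(-(grid[:, None] - c[None, :])**2 / (2 * sig**2))).sum(axis=1)
      i0, i1 = np.abs(grid).argmin(), np.abs(grid - 1.0).argmin()
      print("rough barrier estimate:", round(V[i1] - V[i0], 2))   # true value: 1.0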

  13. Influences of diurnal sampling bias on fixed-point monitoring of plankton biodiversity determined using a massively parallel sequencing-based technique.

    Science.gov (United States)

    Nagai, Satoshi; Hida, Kohsuke; Urushizaki, Shingo; Onitsuka, Goh; Yasuike, Motoshige; Nakamura, Yoji; Fujiwara, Atushi; Tajimi, Seisuke; Kimoto, Katsunori; Kobayashi, Takanori; Gojobori, Takashi; Ototake, Mitsuru

    2016-02-01

    In this study, we investigated the influence of diurnal sampling bias on the community structure of plankton by comparing the biodiversity among seawater samples (n=9) obtained every 3 h for 24 h using massively parallel sequencing (MPS)-based plankton monitoring at a fixed point at Himedo seaport in Yatsushiro Sea, Japan. The number of raw operational taxonomic units (OTUs) and OTUs after re-sampling was 507-658 (558 ± 104, mean ± standard deviation) and 448-544 (467 ± 81), respectively, indicating high plankton biodiversity at the sampling location. The relative abundance of the top 20 OTUs in the samples from Himedo seaport was 48.8-67.7% (58.0 ± 5.8%), and the highest-ranked OTU was a Pseudo-nitzschia species (Bacillariophyta) with a relative abundance of 17.3-39.2%, followed by Oithona sp. 1 and Oithona sp. 2 (Arthropoda). During seawater sampling, the semidiurnal tidal current with an amplitude of 0.3 m s(-1) was dominant, and the westward residual current driven by the northeasterly wind was observed continuously during the 24-h monitoring. The relative abundance of plankton species fluctuated among the samples, but no significant difference was noted according to a G-test (p>0.05). Significant differences were observed between samples obtained from a different locality (Kusuura in Yatsushiro Sea) and on different dates, suggesting that the influence of diurnal sampling bias on plankton diversity, determined using the MPS-based survey, was not significant and is acceptable.

  14. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size: e105825

    National Research Council Canada - National Science Library

    Anton Kühberger; Astrid Fritz; Thomas Scherndl

    2014-01-01

    .... We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values...

  15. Method for fractional solid-waste sampling and chemical analysis

    DEFF Research Database (Denmark)

    Riber, Christian; Rodushkin, I.; Spliid, Henrik

    2007-01-01

    to repeated particle-size reduction, mixing, and mass reduction until a sufficiently small but representative sample was obtained for digestion prior to chemical analysis. The waste-fraction samples were digested according to their properties for maximum recognition of all the studied substances. By combining...... four subsampling methods and five digestion methods, paying attention to the heterogeneity and the material characteristics of the waste fractions, it was possible to determine 61 substances with low detection limits, reasonable variance, and high accuracy. For most of the substances of environmental...... concern, the waste-sample concentrations were above the detection limit (e.g. Cd > 0.001 mg kg-1, Cr > 0.01 mg kg-1, Hg > 0.002 mg kg-1, Pb > 0.005 mg kg-1). The variance was in the range of 5-100%, depending on material fraction and substance, as documented by repeated sampling of two highly......

  16. Examination of Hydrate Formation Methods: Trying to Create Representative Samples

    Energy Technology Data Exchange (ETDEWEB)

    Kneafsey, T.J.; Rees, E.V.L.; Nakagawa, S.; Kwon, T.-H.

    2011-04-01

    Forming representative gas hydrate-bearing laboratory samples is important so that the properties of these materials may be measured, while controlling the composition and other variables. Natural samples are rare, and have often experienced pressure and temperature changes that may affect the property to be measured [Waite et al., 2008]. Forming methane hydrate samples in the laboratory has been done a number of ways, each having advantages and disadvantages. The ice-to-hydrate method [Stern et al., 1996] contacts melting ice with methane at the appropriate pressure to form hydrate. The hydrate can then be crushed and mixed with mineral grains under controlled conditions, and then compacted to create laboratory samples of methane hydrate in a mineral medium. The hydrate in these samples will be part of the load-bearing frame of the medium. In the excess gas method [Handa and Stupin, 1992], water is distributed throughout a mineral medium (e.g. packed moist sand, drained sand, moistened silica gel, other porous media) and the mixture is brought to hydrate-stable conditions (chilled and pressurized with gas), allowing hydrate to form. This method typically produces grain-cementing hydrate from pendular water in sand [Waite et al., 2004]. In the dissolved gas method [Tohidi et al., 2002], water with sufficient dissolved guest molecules is brought to hydrate-stable conditions where hydrate forms. In the laboratory, this can be done by pre-dissolving the gas of interest in water and then introducing it to the sample under the appropriate conditions. With this method, it is easier to form hydrate from more soluble gases such as carbon dioxide. It is thought that this method more closely simulates the way most natural gas hydrate has formed. Laboratory implementation, however, is difficult, and sample formation is prohibitively time consuming [Minagawa et al., 2005; Spangenberg and Kulenkampff, 2005]. In another version of this technique, a specified quantity of gas

  17. Fluidics platform and method for sample preparation and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Benner, W. Henry; Dzenitis, John M.; Bennet, William J.; Baker, Brian R.

    2014-08-19

    Herein provided are a fluidics platform and method for sample preparation and analysis. The fluidics platform is capable of analyzing DNA from blood samples using amplification assays such as polymerase-chain-reaction assays and loop-mediated-isothermal-amplification assays. The fluidics platform can also be used for other types of assays and analyses. In some embodiments, a sample in a sealed tube can be inserted directly, and the subsequent isolation, detection, and analyses can be performed without user intervention. The disclosed platform also comprises a sample preparation system with a magnetic actuator, a heater, and an air-drying mechanism, as well as fluid manipulation processes for extraction, washing, elution, assay assembly, assay detection, and cleaning after reactions and between samples.

  18. Self-contained cryogenic gas sampling apparatus and method

    Science.gov (United States)

    McManus, G.J.; Motes, B.G.; Bird, S.K.; Kotter, D.K.

    1996-03-26

    Apparatus for obtaining a whole gas sample is composed of: a sample vessel having an inlet for receiving a gas sample; a controllable valve mounted for controllably opening and closing the inlet; a valve control coupled to the valve for opening and closing it at selected times; a portable power source connected to supply operating power to the valve control; and a cryogenic coolant in thermal communication with the vessel for cooling its interior to cryogenic temperatures. A method is described for obtaining an air sample using this apparatus by placing the apparatus at the location where the sample is to be obtained; operating the valve control to open the valve at a selected time and close it at a selected subsequent time; and, between those times, maintaining the vessel at a cryogenic temperature by heat exchange with the coolant. 3 figs.

  19. A Review of Methods for Detecting Melamine in Food Samples.

    Science.gov (United States)

    Lu, Yang; Xia, Yinqiang; Liu, Guozhen; Pan, Mingfei; Li, Mengjuan; Lee, Nanju Alice; Wang, Shuo

    2017-01-02

    Melamine is a synthetic chemical used in the manufacture of resins, pigments, and superplasticizers. Human beings can be exposed to melamine through various sources such as migration from related products into foods, pesticide contamination, and illegal addition to foods. Toxicity studies suggest that prolonged consumption of melamine could lead to the formation of kidney stones or even death. Therefore, reliable and accurate detection methods are essential to prevent human exposure to melamine. Sample preparation is of critical importance, since it could directly affect the performance of analytical methods. Some methods for the detection of melamine include instrumental analysis, immunoassays, and sensor methods. In this paper, we have summarized the state-of-the-art methods used for food sample preparation as well as the various detection techniques available for melamine. Combinations of multiple techniques and new materials used in the detection of melamine have also been reviewed. Finally, future perspectives on the applications of microfluidic devices have also been provided.

  20. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    Energy Technology Data Exchange (ETDEWEB)

    Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear

    2015-07-01

    Sample size and computational uncertainty were varied in order to investigate the sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and map uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range, and skewness were verified in order to obtain a better representation of uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 n-sample replicates was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k{sub eff} was obtained using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
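
    The sketch below mimics the sampling-based propagation with a cheap stand-in for the transport code: the response of k_eff to the uncertain radius is a hypothetical linear function, and per-run Monte Carlo noise plays the role of the computational uncertainty. Numbers are chosen only to echo the scales quoted in the abstract.

      import numpy as np

      rng = np.random.default_rng(3)

      def k_eff(radius, mc_sigma):
          """Toy stand-in for an MCNPX run (hypothetical response + MC noise)."""
          return 1.0 + 0.07 * (radius - 0.50) + rng.normal(0.0, mc_sigma)

      def propagate(n_samples, mc_sigma_pcm, radius_mu=0.50, radius_sigma=0.01):
          radii = rng.normal(radius_mu, radius_sigma, n_samples)
          ks = np.array([k_eff(r, mc_sigma_pcm * 1e-5) for r in radii])
          return ks.std(ddof=1) * 1e5          # propagated sigma in pcm

      # Replicating the propagation and checking the spread of the estimate
      # mimics the paper's convergence criterion (5 pcm over replicates).
      reps = [propagate(n_samples=93, mc_sigma_pcm=28) for _ in range(10)]
      print(round(np.mean(reps), 1), round(np.std(reps, ddof=1), 1))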

  1. Uncertainty in biological monitoring: a framework for data collection and analysis to account for multiple sources of sampling bias

    Science.gov (United States)

    Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.

    2016-01-01

    Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates when a subset of field observations cannot be validated a posteriori or assumed to be perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantities of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for
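
    The scale of the false-positive problem is easy to reproduce with a naive simulation (all rates hypothetical, no hierarchical correction): a small per-visit false-positive probability badly inflates the apparent occupancy of a rare species.

      import numpy as np

      rng = np.random.default_rng(4)

      psi, p_det, p_fp = 0.10, 0.5, 0.03   # occupancy, detection, false positive
      n_sites, n_visits = 10_000, 3

      occupied = rng.random(n_sites) < psi
      true_det = occupied[:, None] & (rng.random((n_sites, n_visits)) < p_det)
      false_det = rng.random((n_sites, n_visits)) < p_fp
      observed = true_det | false_det

      naive = observed.any(axis=1).mean()  # "occupied" if detected on any visit
      print(psi, round(naive, 3))          # ~0.17 here: a large overestimate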

  2. Methods for Sampling and Measurement of Compressed Air Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Stroem, L.

    1976-10-15

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study was made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants was injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  3. A two-step semiparametric method to accommodate sampling weights in multiple imputation.

    Science.gov (United States)

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trviellore E

    2016-03-01

    Multiple imputation (MI) is a well-established method to handle item nonresponse in sample surveys. Survey data obtained from complex sampling designs often involve features such as unequal probabilities of selection. MI requires the imputation to be congenial: the imputations must come from a Bayesian predictive distribution, and the observed- and complete-data estimators must equal the posterior mean (and the corresponding variance estimators the posterior variance) given the observed or complete data; more colloquially, the analyst and the imputer make similar modeling assumptions. Yet multiply imputed data sets from complex sample designs with unequal sampling weights are typically imputed under simple random sampling assumptions and then analyzed using methods that account for the sampling weights. This is a setting in which the analyst assumes more than the imputer, which can lead to biased estimates and anti-conservative inference. Less commonly used alternatives, such as including case weights as predictors in the imputation model, typically require interaction terms for more complex estimators such as regression coefficients, and can be vulnerable to model misspecification and difficult to implement. We develop a simple two-step MI framework that accounts for sampling weights using a weighted finite population Bayesian bootstrap method to validly impute the whole population (including item nonresponse) from the observed data. In the second step, having generated posterior predictive distributions of the entire population, we use standard IID imputation to handle the item nonresponse. Simulation results show that the proposed method has good frequentist properties and is robust to model misspecification compared to alternative approaches. We apply the proposed method to accommodate missing data in the Behavioral Risk Factor Surveillance System when estimating means and parameters of
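
    A deliberately simplified stand-in for step one is shown below: synthetic populations are generated by resampling units with probability proportional to their weights (the published method uses a Polya-urn scheme; step two would then run ordinary IID imputation within each synthetic population). Data and weights are hypothetical.

      import numpy as np

      rng = np.random.default_rng(5)

      def synthetic_population(y, w, pop_size):
          """Crude weighted-resampling stand-in for the weighted finite
          population Bayesian bootstrap: expand the sample into a
          pseudo-population using the normalized sampling weights."""
          probs = np.asarray(w, float) / np.sum(w)
          idx = rng.choice(len(y), size=pop_size, replace=True, p=probs)
          return np.asarray(y)[idx]

      # A sample that over-represents large values; weights are the inverse
      # selection probabilities.
      y = np.array([1.0, 2.0, 3.0, 8.0, 9.0, 10.0])
      w = np.array([10.0, 10.0, 10.0, 2.0, 2.0, 2.0])
      pop = synthetic_population(y, w, pop_size=10_000)
      print(y.mean(), round(pop.mean(), 2))  # naive vs design-consistent mean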

  4. Applying 2D Bias Correction Method to Gridded Simulations of Precipitation and Temperature over Southeastern South America.

    Science.gov (United States)

    Piani, C.; Montroull, N.; Saurral, R. I.

    2014-12-01

    The two-dimensional bias correction methodology for temperature and precipitation, developed by Piani et al. (2012) for station data, was applied to the CCSM4 (NCAR) model gridded output from the CMIP5 dataset and a 40-year gridded dataset over Southeastern South America (Tencer et al., 2011; Jones et al., 2012). Copula density functions of observed temperature and precipitation showed significant structure when subsets of sixteen gridpoints were pooled together. By contrast, no structure is detectable in copulas of GCM data. By construction, independent one-dimensional bias correction of temperature and precipitation cannot correct copula density distributions; hence, the 2D method is applied. The method is validated, as customary, by calibrating and subsequently validating the methodology with non-overlapping 20-year time periods. Visual inspection of single copula density functions for all grid points is unfeasible. Hence the bias correction method is validated by calculating a Kolmogorov-Smirnov (KS) type statistic measuring the distance between observed and simulated and between observed and corrected copulas at each grid point. Results for the KS statistic are plotted in the figure shown. The methodology clearly shows great potential for application to climate impact modeling. References: Jones, P. D., Lister, D. H., Harpham, C., Rusticucci, M. and Penalba, O. (2013), Construction of a daily precipitation grid for southeastern South America for the period 1961-2000. Int. J. Climatol., 33: 2508-2519. doi: 10.1002/joc.3605. Piani, C., & Haerter, J. O. (2012). Two dimensional bias correction of temperature and precipitation copulas in climate models. Geophysical Research Letters, 39(20). Tencer, B., Rusticucci, M., Jones, P., & Lister, D. (2011). A Southeastern South American Daily Gridded Dataset of Observed Surface Minimum and Maximum Temperature for 1961-2000. Bulletin of the American Meteorological Society, 92(10). Figure: Kolmogorov-Smirnov type statistic.
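
    A minimal version of the validation statistic might look like the sketch below: build empirical copulas from rank-transformed (temperature, precipitation) pairs and take the maximum absolute difference of their CDFs. The exact form of the published statistic may differ; the data here are synthetic.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)

      def copula_cdf(x, y, grid):
          u = stats.rankdata(x) / (len(x) + 1)   # probability-integral ranks
          v = stats.rankdata(y) / (len(y) + 1)
          return np.array([[np.mean((u <= a) & (v <= b)) for b in grid]
                           for a in grid])

      grid = np.linspace(0.05, 1.0, 20)
      # correlated "observations" vs an independent "model"
      t_o = rng.normal(size=2000)
      p_o = 0.6 * t_o + 0.8 * rng.normal(size=2000)
      t_m, p_m = rng.normal(size=2000), rng.normal(size=2000)

      d = np.abs(copula_cdf(t_o, p_o, grid) - copula_cdf(t_m, p_m, grid)).max()
      print(round(d, 3))   # a large distance flags missing T-P dependence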

  5. Optimizing the atmospheric sampling sites using fuzzy mathematic methods

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new approach applying fuzzy mathematical theorems, including the Primary Matrix Element Theorem and the Fisher Classification Method, was established to solve the optimization problem of atmospheric environmental sampling sites. On this basis, an application to the optimization of sampling sites in atmospheric environmental monitoring was discussed. The method was proven to be suitable and effective, and the results were accepted and applied by the Environmental Protection Bureaus (EPB) of many cities in China. A complete software implementation of this approach was also compiled and used.

  6. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    Science.gov (United States)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
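
    The simplest instance of the idea is a first-order Taylor control variate: the derivative gives a cheap surrogate whose expectation is known exactly, so subtracting its fluctuations removes most of the sampling noise. The sketch below uses a hypothetical scalar response, not one of the paper's examples.

      import numpy as np

      rng = np.random.default_rng(7)

      def f(x):                    # stand-in for an expensive analysis code
          return np.sin(x) + 0.1 * x**2

      def df(x):                   # its (inexpensive) sensitivity derivative
          return np.cos(x) + 0.2 * x

      mu, sigma, n = 0.7, 0.2, 1000
      x = rng.normal(mu, sigma, n)

      plain = f(x)
      taylor = f(mu) + df(mu) * (x - mu)   # E[taylor] = f(mu), known exactly
      cv = plain - taylor + f(mu)          # same mean, much smaller variance

      print(plain.mean(), plain.std(ddof=1) / np.sqrt(n))
      print(cv.mean(), cv.std(ddof=1) / np.sqrt(n))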

  7. Recording 2-D Nutation NQR Spectra by Random Sampling Method.

    Science.gov (United States)

    Glotova, Olga; Sinyavsky, Nikolaj; Jadzyn, Maciej; Ostafin, Michal; Nogaj, Boleslaw

    2010-10-01

    The method of random sampling was introduced for the first time in the nutation nuclear quadrupole resonance (NQR) spectroscopy where the nutation spectra show characteristic singularities in the form of shoulders. The analytic formulae for complex two-dimensional (2-D) nutation NQR spectra (I = 3/2) were obtained and the condition for resolving the spectral singularities for small values of an asymmetry parameter η was determined. Our results show that the method of random sampling of a nutation interferogram allows significant reduction of time required to perform a 2-D nutation experiment and does not worsen the spectral resolution.

  8. A comparison of sampling methods for examining the laryngeal microbiome

    Science.gov (United States)

    Hanshew, Alissa S.; Jetté, Marie E.; Tadayon, Stephanie; Thibeault, Susan L.

    2017-01-01

    Shifts in healthy human microbial communities have now been linked to disease in numerous body sites. Noninvasive swabbing remains the sampling technique of choice in most locations; however, it is not well known if this method samples the entire community, or only those members that are easily removed from the surface. We sought to compare the communities found via swabbing and biopsied tissue in true vocal folds, a location that is difficult to sample without causing potential damage and impairment to tissue function. A secondary aim of this study was to determine if swab sampling of the false vocal folds could be used as proxy for true vocal folds. True and false vocal fold mucosal samples (swabbed and biopsied) were collected from six pigs and used for 454 pyrosequencing of the V3–V5 region of the 16S rRNA gene. Most of the alpha and beta measures of diversity were found to be significantly similar between swabbed and biopsied tissue samples. Similarly, the communities found in true and false vocal folds did not differ considerably. These results suggest that samples taken via swabs are sufficient to assess the community, and that samples taken from the false vocal folds may be used as proxies for the true vocal folds. Assessment of these techniques opens an avenue to less traumatic means to explore the role microbes play in the development of diseases of the vocal folds, and perhaps the rest of the respiratory tract. PMID:28362810

  9. Identifying social learning in animal populations: a new 'option-bias' method.

    Directory of Open Access Journals (Sweden)

    Rachel L Kendal

    Full Text Available BACKGROUND: Studies of natural animal populations reveal widespread evidence for the diffusion of novel behaviour patterns, and for intra- and inter-population variation in behaviour. However, claims that these are manifestations of animal 'culture' remain controversial because alternative explanations to social learning remain difficult to refute. This inability to identify social learning in social settings has also contributed to the failure to test evolutionary hypotheses concerning the social learning strategies that animals deploy. METHODOLOGY/PRINCIPAL FINDINGS: We present a solution to this problem, in the form of a new means of identifying social learning in animal populations. The method is based on the well-established premise of social learning research, that--when ecological and genetic differences are accounted for--social learning will generate greater homogeneity in behaviour between animals than expected in its absence. Our procedure compares the observed level of homogeneity to a sampling distribution generated utilizing randomization and other procedures, allowing claims of social learning to be evaluated according to consensual standards. We illustrate the method on data from groups of monkeys provided with novel two-option extractive foraging tasks, demonstrating that social learning can indeed be distinguished from unlearned processes and asocial learning, and revealing that the monkeys only employed social learning for the more difficult tasks. The method is further validated against published datasets and through simulation, and exhibits higher statistical power than conventional inferential statistics. CONCLUSIONS/SIGNIFICANCE: The method is potentially a significant technological development, which could prove of considerable value in assessing the validity of claims for culturally transmitted behaviour in animal groups. It will also be of value in enabling investigation of the social learning strategies deployed in

  10. Identifying social learning in animal populations: a new 'option-bias' method.

    Science.gov (United States)

    Kendal, Rachel L; Kendal, Jeremy R; Hoppitt, Will; Laland, Kevin N

    2009-08-06

    Studies of natural animal populations reveal widespread evidence for the diffusion of novel behaviour patterns, and for intra- and inter-population variation in behaviour. However, claims that these are manifestations of animal 'culture' remain controversial because alternative explanations to social learning remain difficult to refute. This inability to identify social learning in social settings has also contributed to the failure to test evolutionary hypotheses concerning the social learning strategies that animals deploy. We present a solution to this problem, in the form of a new means of identifying social learning in animal populations. The method is based on the well-established premise of social learning research, that--when ecological and genetic differences are accounted for--social learning will generate greater homogeneity in behaviour between animals than expected in its absence. Our procedure compares the observed level of homogeneity to a sampling distribution generated utilizing randomization and other procedures, allowing claims of social learning to be evaluated according to consensual standards. We illustrate the method on data from groups of monkeys provided with novel two-option extractive foraging tasks, demonstrating that social learning can indeed be distinguished from unlearned processes and asocial learning, and revealing that the monkeys only employed social learning for the more difficult tasks. The method is further validated against published datasets and through simulation, and exhibits higher statistical power than conventional inferential statistics. The method is potentially a significant technological development, which could prove of considerable value in assessing the validity of claims for culturally transmitted behaviour in animal groups. It will also be of value in enabling investigation of the social learning strategies deployed in captive and natural animal populations.
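
    The randomization logic can be sketched compactly: compute a homogeneity statistic on the observed group-by-option data, then compare it with the distribution obtained by shuffling choices across individuals. This is a generic reimplementation of the premise, not the authors' exact statistic, and the data are invented.

      import numpy as np

      rng = np.random.default_rng(8)

      def option_bias_test(choices, group, n_perm=10_000):
          """choices: 0/1 option per individual; group: group labels."""
          def homogeneity(c, g):
              # mean within-group majority share, averaged over groups
              return np.mean([max(np.mean(c[g == k]), 1 - np.mean(c[g == k]))
                              for k in np.unique(g)])
          obs = homogeneity(choices, group)
          null = np.array([homogeneity(rng.permutation(choices), group)
                           for _ in range(n_perm)])
          return obs, np.mean(null >= obs)   # statistic, permutation p-value

      choices = np.array([0]*9 + [1]*1 + [1]*8 + [0]*2)   # two groups of 10
      group = np.array([0]*10 + [1]*10)
      print(option_bias_test(choices, group))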

  11. Evaluation of sample preservation methods for poultry manure.

    Science.gov (United States)

    Pan, J; Fadel, J G; Zhang, R; El-Mashad, H M; Ying, Y; Rumsey, T

    2009-08-01

    When poultry manure is collected but cannot be analyzed immediately, a method for storing the manure is needed to ensure accurate subsequent analyses. This study had 3 objectives: (1) to investigate effects of 4 poultry manure sample preservation methods (refrigeration, freezing, acidification, and freeze-drying) on the compositional characteristics of poultry manure; (2) to determine compositional differences between fresh manure and manure samples at 1, 2, and 3 d of accumulation under bird cages; and (3) to assess the influence of 14-d freezing storage on the composition of manure later exposed to 25 degrees C for 7 d, as compared with fresh manure. All manure samples were collected from a layer house. Analyses performed on the manure samples included total Kjeldahl nitrogen, uric acid nitrogen, ammonia nitrogen, and urea nitrogen. In experiment 1, the storage methods most similar to fresh manure, in order of preference, were freezing, freeze-drying, acidification, and refrigeration. Thoroughly mixing manure samples and compressing them to 2 to 3 mm is important for the frozen and freeze-dried samples. In general, refrigeration was found unacceptable for nitrogen analyses. In experiment 2, total Kjeldahl nitrogen and uric acid nitrogen were significantly lower (P < 0.05) for 1, 2, and 3 d of accumulation compared with fresh manure; manure after 1, 2, and 3 d of accumulation had similar nitrogen compositions. The results from experiment 3 show that nitrogen components from fresh manure samples and thawed samples from 14 d of freezing are similar at 7 d, but the high variability of nitrogen compositions at intermediate times from 0 to 7 d prevents recommending freezing manure for use in subsequent experiments and warrants future experimentation. In conclusion, fresh poultry manure can be frozen for accurate subsequent nitrogen compositional analyses, but this same frozen manure may not be a reliable substitute for fresh manure if a subsequent experiment

  12. A molecular method to assess Phytophthora diversity in environmental samples.

    Science.gov (United States)

    Scibetta, Silvia; Schena, Leonardo; Chimento, Antonio; Cacciola, Santa O; Cooke, David E L

    2012-03-01

    Current molecular detection methods for the genus Phytophthora are specific to a few key species rather than the whole genus and this is a recognized weakness of protocols for ecological studies and international plant health legislation. In the present study a molecular approach was developed to detect Phytophthora species in soil and water samples using novel sets of genus-specific primers designed against the internal transcribed spacer (ITS) regions. Two different rDNA primer sets were tested: one assay amplified a long product including the ITS1, 5.8S and ITS2 regions (LP) and the other a shorter product including the ITS1 only (SP). Both assays specifically amplified products from Phytophthora species without cross-reaction with the related Pythium s. lato, however the SP assay proved the more sensitive and reliable. The method was validated using woodland soil and stream water from Invergowrie, Scotland. On-site use of a knapsack sprayer and in-line water filters proved more rapid and effective than centrifugation at sampling Phytophthora propagules. A total of 15 different Phytophthora phylotypes were identified which clustered within the reported ITS-clades 1, 2, 3, 6, 7 and 8. The range and type of the sequences detected varied from sample to sample and up to three and five different Phytophthora phylotypes were detected within a single sample of soil or water, respectively. The most frequently detected sequences were related to members of ITS-clade 6 (i.e. P. gonapodyides-like). The new method proved very effective at discriminating multiple species in a given sample and can also detect as yet unknown species. The reported primers and methods will prove valuable for ecological studies, biosecurity and commercial plant, soil or water (e.g. irrigation water) testing as well as the wider metagenomic sampling of this fascinating component of microbial pathogen diversity.

  13. Comparison of sampling methods for radiocarbon dating of carbonyls in air samples via accelerator mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Schindler, Matthias, E-mail: matthias.schindler@physik.uni-erlangen.de; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander

    2016-05-15

    Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline carbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO{sub 2} and reduced to graphite to determine {sup 14}C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).

  14. Comparison of sampling methods for radiocarbon dating of carbonyls in air samples via accelerator mass spectrometry

    Science.gov (United States)

    Schindler, Matthias; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander

    2016-05-01

    Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline carbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).

  15. Applying surrogate species presences to correct sample bias in species distribution models: a case study using the Pilbara population of the Northern Quoll

    Directory of Open Access Journals (Sweden)

    Shaun W. Molloy

    2017-05-01

    Full Text Available The management of populations of threatened species requires the capacity to identify areas of high habitat value. We developed a high-resolution species distribution model (SDM) for the endangered Pilbara population of the northern quoll, Dasyurus hallucatus, using MaxEnt software and a combined suite of bioclimatic and landscape variables. Once common throughout much of northern Australia, this marsupial carnivore has recently declined throughout much of its former range and is listed as endangered by the IUCN. Beyond the potential threats presented by climate change and the invasive cane toad Rhinella marina (which has not yet arrived in the Pilbara), the Pilbara population is also impacted by introduced predators and by pastoral and mining activities. To account for sample bias resulting from targeted surveys unevenly spread through the region, a pseudo-absence bias layer was developed from presence records of other critical-weight-range non-volant mammals. The resulting model was then tested using the biomod2 package, which produces ensemble models from individual models created with different algorithms. This ensemble model supported the distribution determined by the bias-compensated MaxEnt model, with a covariance of 86% between models and with both models largely identifying the same areas as high-priority habitat. The primary product of this exercise is a high-resolution SDM which corroborates and elaborates on our understanding of the ecology and habitat preferences of the Pilbara northern quoll population, thereby improving our capacity to manage this population in the face of future threats.
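
    The bias-layer construction amounts to turning surrogate presence records into a smoothed survey-effort grid, which MaxEnt can then consume (via its bias-file setting) so that background points mirror survey effort. A bare-bones sketch with invented records and grid size:

      import numpy as np

      rng = np.random.default_rng(9)

      def bias_layer(presences, shape, sigma=3.0):
          """Grid of relative survey effort from surrogate presence records
          ((row, col) pairs), smoothed with a separable Gaussian kernel."""
          grid = np.zeros(shape)
          np.add.at(grid, (presences[:, 0], presences[:, 1]), 1.0)
          x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
          k = np.exp(-x**2 / (2 * sigma**2))
          k /= k.sum()
          grid = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, grid)
          grid = np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, grid)
          return grid / grid.max()           # relative effort in [0, 1]

      # hypothetical presence records of other surveyed non-volant mammals
      records = rng.integers(0, 50, size=(200, 2))
      effort = bias_layer(records, (50, 50))
      print(effort.shape, round(float(effort.max()), 2))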

  16. Blue noise sampling method based on mixture distance

    Science.gov (United States)

    Qin, Hongxing; Hong, XiaoYang; Xiao, Bin; Zhang, Shaoting; Wang, Guoyin

    2014-11-01

    Blue noise sampling is a core component for a large number of computer graphics applications such as imaging, modeling, animation, and rendering. However, most existing methods concentrate on preserving spatial-domain properties like density and anisotropy, while ignoring feature preservation. In order to solve this problem, we present a new distance metric called mixture distance for blue noise sampling, which is a combination of geodesic and feature distances. Based on mixture distance, the blue noise property and features can be preserved by controlling the ratio of the geodesic distance to the feature distance. With the intention of meeting different requirements from various applications, an adaptive adjustment of parameters is also proposed to achieve a balance between the preservation of features and spatial properties. Finally, an implementation on a graphics processing unit is introduced to improve the efficiency of computation. The efficacy of the method is demonstrated by the results of image stippling, surface sampling, and remeshing.
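
    For contrast with the paper's mixture distance, the baseline blue-noise generator is plain dart throwing under Euclidean distance: accept a random point only if it keeps a minimum spacing to all accepted points. A naive (quadratic-time) sketch:

      import numpy as np

      rng = np.random.default_rng(10)

      def dart_throwing(r, n_target, max_tries=100_000):
          """Poisson-disk sampling of the unit square: points at least r apart."""
          pts = []
          tries = 0
          while len(pts) < n_target and tries < max_tries:
              p = rng.random(2)
              tries += 1
              if all(np.hypot(*(p - q)) >= r for q in pts):
                  pts.append(p)
          return np.array(pts)

      samples = dart_throwing(r=0.05, n_target=150)
      print(len(samples))   # no clumps, no large gaps: the blue-noise property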

  17. A General Linear Method for Equating with Small Samples

    Science.gov (United States)

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
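
    For orientation, classical linear observed-score equating needs only two means and two standard deviations, which is why linear families remain attractive with small samples. A generic sketch (not the article's general linear method; scores are invented):

      import numpy as np

      def linear_equate(x_scores, y_scores):
          """Return a function mapping form-X scores onto the form-Y scale by
          matching means and standard deviations."""
          x = np.asarray(x_scores, float)
          y = np.asarray(y_scores, float)
          slope = y.std(ddof=1) / x.std(ddof=1)
          return lambda s: y.mean() + slope * (s - x.mean())

      eq = linear_equate([12, 15, 18, 20, 25], [14, 16, 19, 23, 27])
      print(round(eq(18.0), 2))   # hypothetical score conversion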

  18. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

    A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfy specific design criteria. The new design method will graphically show how the discrete...

  19. 7 CFR 58.245 - Method of sample analysis.

    Science.gov (United States)

    2010-01-01

    ... Samples shall be tested according to the applicable methods of laboratory analysis contained in either DA Instruction 918-RL, as issued by the USDA, Agricultural Marketing... [7 CFR 58.245 - Method of sample analysis; Agriculture Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service; 2010-01-01 edition.]

  20. Performance of sampling methods to estimate log characteristics for wildlife.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2004-01-01

    Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...

  1. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    The proportionator is a novel and radically different approach to sampling with microscopes based on well-known statistical theory (probability proportional to size - PPS sampling). It uses automatic image analysis, with a large range of options, to assign a weight to every field of view in the section...; the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections... Because of its entirely different sampling strategy, based on known but non-uniform sampling probabilities, the proportionator for the first time allows the real CE at the section level to be automatically estimated (not just predicted), unbiased - for all estimators and at no extra cost to the user.
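
    The statistical core is ordinary PPS estimation: if field i is chosen with probability p_i proportional to its weight, the counts divided by p_i average to an unbiased total. A sketch in Hansen-Hurwitz form, on a synthetic "section" with invented weights:

      import numpy as np

      rng = np.random.default_rng(11)

      def pps_total(weights, count_fn, n_fields):
          """Unbiased total from PPS sampling with replacement."""
          w = np.asarray(weights, float)
          p = w / w.sum()                      # selection probabilities
          picks = rng.choice(len(w), size=n_fields, replace=True, p=p)
          est = np.array([count_fn(i) / p[i] for i in picks])
          return est.mean(), est.std(ddof=1) / np.sqrt(n_fields)

      # PPS pays off exactly when the weights track the true counts.
      true_counts = rng.poisson(lam=np.linspace(0.2, 8.0, 400))
      weights = true_counts + rng.uniform(0.5, 1.5, size=400)  # imperfect proxy
      est, se = pps_total(weights, lambda i: true_counts[i], n_fields=50)
      print(true_counts.sum(), round(est, 1), round(se, 1))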

  2. Comparison of fitting methods and b-value sampling strategies for intravoxel incoherent motion in breast cancer.

    Science.gov (United States)

    Cho, Gene Young; Moy, Linda; Zhang, Jeff L; Baete, Steven; Lattanzi, Riccardo; Moccaldi, Melanie; Babb, James S; Kim, Sungheon; Sodickson, Daniel K; Sigmund, Eric E

    2015-10-01

    To compare fitting methods and sampling strategies, including the implementation of an optimized b-value selection for improved estimation of intravoxel incoherent motion (IVIM) parameters in breast cancer. Fourteen patients (age, 48.4 ± 14.27 years) with cancerous lesions underwent 3 Tesla breast MRI examination for a HIPAA-compliant, institutional review board approved diffusion MR study. IVIM biomarkers were calculated using "free" versus "segmented" fitting for conventional or optimized (repetitions of key b-values) b-value selection. Monte Carlo simulations were performed over a range of IVIM parameters to evaluate methods of analysis. Relative bias values, relative error, and coefficients of variation (CV) were obtained for assessment of methods. Statistical paired t-tests were used for comparison of experimental mean values and errors from each fitting and sampling method. Comparison of the different analysis/sampling methods in simulations and experiments showed that the "segmented" analysis and the optimized method have higher precision and accuracy, in general, compared with "free" fitting of conventional sampling when considering all parameters. Regarding relative bias, IVIM parameters fp and Dt differed significantly between "segmented" and "free" fitting methods. IVIM analysis may improve using optimized selection and "segmented" analysis, potentially enabling better differentiation of breast cancer subtypes and monitoring of treatment. © 2014 Wiley Periodicals, Inc.
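
    A minimal "segmented" IVIM fit on synthetic signals might proceed as below: estimate the tissue diffusivity Dt from the high-b log-linear tail (where perfusion is assumed to have decayed), then fix Dt and fit the perfusion fraction fp and pseudo-diffusivity Dp over all b-values. The b-values, cutoff, and noise level are illustrative only.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(12)

      def ivim(b, fp, Dt, Dp):
          return (1 - fp) * np.exp(-b * Dt) + fp * np.exp(-b * Dp)

      b = np.array([0, 10, 20, 30, 50, 80, 100, 200, 400, 600, 800], float)
      truth = dict(fp=0.12, Dt=1.0e-3, Dp=2.0e-2)        # units: mm^2/s
      sig = ivim(b, **truth) * (1 + 0.01 * rng.normal(size=b.size))

      tail = b >= 200                                    # assumed cutoff
      slope, intercept = np.polyfit(b[tail], np.log(sig[tail]), 1)
      Dt_est = -slope
      fp_init = 1 - np.exp(intercept)
      (fp_est, Dp_est), _ = curve_fit(
          lambda bb, fp, Dp: ivim(bb, fp, Dt_est, Dp),
          b, sig, p0=[max(fp_init, 0.05), 1e-2])
      print(Dt_est, fp_est, Dp_est)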

  3. Development of an automated data processing method for sample to sample comparison of seized methamphetamines.

    Science.gov (United States)

    Choe, Sanggil; Lee, Jaesin; Choi, Hyeyoung; Park, Yujin; Lee, Heesang; Pyo, Jaesung; Jo, Jiyeong; Park, Yonghoon; Choi, Hwakyung; Kim, Suncheun

    2012-11-30

    Information about sources of supply, trafficking routes, distribution patterns, and conspiracy links can be obtained from methamphetamine profiling. The precursor and synthetic method used in clandestine manufacture can be estimated from the analysis of minor impurities contained in methamphetamine, and the similarity between samples can be evaluated using the peaks that appear in their chromatograms. In South Korea, methamphetamine was the most popular drug, but the total amount seized throughout the country was very small; it was therefore more important to find links between samples than to pursue the other uses of methamphetamine profiling. Many Asian countries, including Japan and South Korea, have been using the method developed by the National Research Institute of Police Science of Japan, which uses a gas chromatography-flame ionization detector (GC-FID), a DB-5 column, and four internal standards, and was designed to increase the amount of impurities and minimize the amount of methamphetamine. After GC-FID analysis, the raw data have to be processed; the data processing steps are complex and require a lot of time and effort. In this study, Microsoft Visual Basic for Applications (VBA) modules were developed to handle these data processing steps. The modules collect the results into an Excel file and then correct the retention time shift and response deviation generated during sample preparation and instrumental analysis. The developed modules were tested for their performance using 10 samples from 5 different cases. The processed results were analyzed with the Pearson correlation coefficient for similarity assessment, and the correlation coefficient of two samples from the same case was more than 0.99. When the modules were applied to 131 seized methamphetamine samples, four samples from two different cases were found to have a common origin, and the chromatograms of the four samples appeared visually identical
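
    The final similarity step reduces to a correlation between normalized impurity profiles; the sketch below shows the flavor of the computation with invented peak areas for eight impurities.

      import numpy as np

      def similarity(profile_a, profile_b):
          """Pearson correlation between response-normalized peak profiles."""
          a = np.asarray(profile_a, float)
          b = np.asarray(profile_b, float)
          a, b = a / a.sum(), b / b.sum()
          return np.corrcoef(a, b)[0, 1]

      same_case_1 = [120, 40, 5, 300, 80, 10, 60, 25]
      same_case_2 = [115, 43, 6, 290, 77, 12, 58, 27]
      other_case = [10, 200, 90, 15, 5, 160, 30, 70]
      print(round(similarity(same_case_1, same_case_2), 4))  # close to 1
      print(round(similarity(same_case_1, other_case), 4))   # much lower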

  4. Bias analysis and the simulation-extrapolation method for survival data with covariate measurement error under parametric proportional odds models.

    Science.gov (United States)

    Yi, Grace Y; He, Wenqing

    2012-05-01

    It has been well known that ignoring measurement error may result in substantially biased estimates in many contexts including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates has the risk of model misspecification, hence yielding invalid inferences if this happens. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality for resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
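
    SIMEX itself is compact enough to sketch: add extra measurement error of known variance at several levels, track how the naive estimate degrades, and extrapolate the trend back to zero total error. Shown here for an attenuated regression slope on synthetic data; the paper's survival-model setting is analogous in spirit.

      import numpy as np

      rng = np.random.default_rng(13)

      n, beta, sigma_u = 2000, 1.0, 0.5     # sigma_u: known error SD
      x = rng.normal(size=n)
      w = x + rng.normal(0, sigma_u, n)     # error-prone covariate
      y = beta * x + rng.normal(0, 0.3, n)

      def naive_slope(w_lam):
          return np.polyfit(w_lam, y, 1)[0]

      lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      slopes = [np.mean([naive_slope(w + np.sqrt(l) * rng.normal(0, sigma_u, n))
                         for _ in range(50)]) for l in lams]

      # Extrapolate the trend back to lambda = -1 (i.e., no measurement error).
      coef = np.polyfit(lams, slopes, 2)
      print(round(naive_slope(w), 3), round(np.polyval(coef, -1.0), 3))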

  5. Sample Selected Averaging Method for Analyzing the Event Related Potential

    Science.gov (United States)

    Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki

    The event related potential (ERP) is often measured through the oddball task, in which subjects are given a "rare stimulus" and a "frequent stimulus". Measured ERPs are analyzed by the averaging technique, and the amplitude of the ERP P300 becomes large when the "rare stimulus" is given. However, measured ERPs include trials that lack the original features of the ERP, so it is necessary to reject unsuitable measured ERPs when using the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs for the averaging technique. Moreover, we combine the proposed method with Woody's adaptive filter method.
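
    One simple rejection rule (not necessarily the paper's) drops trials that correlate poorly with the grand average before re-averaging; the sketch below applies it to synthetic single-trial data with a planted P300-like component.

      import numpy as np

      rng = np.random.default_rng(14)

      def selective_average(epochs, z_thresh=2.0):
          """epochs: (n_trials, n_samples). Reject trials whose correlation
          with the grand average is unusually low, then average the rest."""
          grand = epochs.mean(axis=0)
          r = np.array([np.corrcoef(e, grand)[0, 1] for e in epochs])
          keep = r > r.mean() - z_thresh * r.std()
          return epochs[keep].mean(axis=0), keep

      t = np.linspace(0.0, 0.8, 200)
      p300 = np.exp(-((t - 0.3) / 0.05)**2)            # idealized component
      good = p300 + 0.5 * rng.normal(size=(40, 200))
      bad = 3.0 * rng.normal(size=(5, 200))            # artifact-laden trials
      avg, keep = selective_average(np.vstack([good, bad]))
      print(int(keep.sum()), "of", len(keep), "trials kept")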

  6. Evaluation of a bias correction method applied to downscaled precipitation and temperature reanalysis data for the Rhine basin

    Directory of Open Access Journals (Sweden)

    W. Terink

    2010-01-01

    Full Text Available In many climate impact studies, hydrological models are forced with meteorological data without an attempt to assess the quality of these data. The objective of this study is to compare downscaled ERA15 (ECMWF reanalysis) precipitation and temperature with observed precipitation and temperature, and to apply a bias correction to these forcing variables. Precipitation is corrected by fitting the mean and coefficient of variation (CV) of the observations; temperature is corrected by fitting the mean and standard deviation of the observations. It appears that the uncorrected ERA15 is too warm and too wet for most of the Rhine basin. The bias correction leads to satisfactory results: precipitation and temperature differences decreased significantly, although there are a few years for which the correction of precipitation is less satisfactory. Corrections were largest during summer for both precipitation and temperature, and in September and October for precipitation only. Besides the statistics the correction method was intended to correct for, it was also found to improve the correlations for the fraction of wet days and the lag-1 autocorrelations between ERA15 and the observations. For the validation period, temperature is corrected very well, but for precipitation the RMSE of the daily difference between modeled and observed precipitation increased after correction. When taking random years for calibration and the remaining years for validation, the spread in the mean bias error (MBE) becomes larger for the corrected precipitation during validation, but the overall average MBE decreased.
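
    The correction itself is two small transforms. In the sketch below (synthetic data), temperature gets a linear rescaling that matches mean and standard deviation exactly, and precipitation gets a power-law rescaling whose exponent is found by a crude search so that the mean and CV match; the published implementation may differ in detail.

      import numpy as np

      rng = np.random.default_rng(15)

      def correct_temperature(t_mod, t_obs):
          a = t_obs.std() / t_mod.std()
          return t_obs.mean() + a * (t_mod - t_mod.mean())

      def correct_precipitation(p_mod, p_obs):
          cv_obs = p_obs.std() / p_obs.mean()
          # choose exponent b so that CV(p_mod**b) matches the observed CV
          bs = np.linspace(0.2, 3.0, 281)
          errs = [abs((p_mod**b).std() / (p_mod**b).mean() - cv_obs) for b in bs]
          q = p_mod ** bs[int(np.argmin(errs))]
          return q * (p_obs.mean() / q.mean())   # then match the mean

      t_obs, t_mod = rng.normal(9, 7, 5000), rng.normal(10.5, 6, 5000)
      p_obs, p_mod = rng.gamma(0.7, 4.0, 5000), rng.gamma(1.2, 3.5, 5000)
      tc = correct_temperature(t_mod, t_obs)
      pc = correct_precipitation(p_mod, p_obs)
      print(round(tc.mean(), 2), round(tc.std(), 2))
      print(round(pc.mean(), 2), round(pc.std() / pc.mean(), 2))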

  7. An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations

    KAUST Repository

    Xu, Zhongfeng

    2012-09-01

    An improved dynamical downscaling method (IDD) with general circulation model (GCM) bias corrections is developed and assessed over North America. A set of regional climate simulations is performed with the Weather Research and Forecasting Model (WRF) version 3.3 embedded in the National Center for Atmospheric Research's (NCAR's) Community Atmosphere Model (CAM). The GCM climatological means and the amplitudes of interannual variations are adjusted based on the National Centers for Environmental Prediction (NCEP)-NCAR global reanalysis products (NNRP) before using them to drive WRF. In this study, the WRF downscaling experiments are identical except the initial and lateral boundary conditions derived from the NNRP, original GCM output, and bias-corrected GCM output, respectively. The analysis finds that the IDD greatly improves the downscaled climate in both climatological means and extreme events relative to the traditional dynamical downscaling approach (TDD). The errors of downscaled climatological mean air temperature, geopotential height, wind vector, moisture, and precipitation are greatly reduced when the GCM bias corrections are applied. In the meantime, IDD also improves the downscaled extreme events characterized by the reduced errors in 2-yr return levels of surface air temperature and precipitation. In comparison with TDD, IDD is also able to produce a more realistic probability distribution in summer daily maximum temperature over the central U.S.-Canada region as well as in summer and winter daily precipitation over the middle and eastern United States.

  8. Child mortality estimation: methods used to adjust for bias due to AIDS in estimating trends in under-five mortality.

    Science.gov (United States)

    Walker, Neff; Hill, Kenneth; Zhao, Fengmin

    2012-01-01

    In most low- and middle-income countries, child mortality is estimated from data provided by mothers concerning the survival of their children using methods that assume no correlation between the mortality risks of the mothers and those of their children. This assumption is not valid for populations with generalized HIV epidemics, however, and in this review, we show how the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) uses a cohort component projection model to correct for AIDS-related biases in the data used to estimate trends in under-five mortality. In this model, births in a given year are identified as occurring to HIV-positive or HIV-negative mothers, the lives of the infants and mothers are projected forward using survivorship probabilities to estimate survivors at the time of a given survey, and the extent to which excess mortality of children goes unreported because of the deaths of HIV-infected mothers prior to the survey is calculated. Estimates from the survey for past periods can then be adjusted for the estimated bias. The extent of the AIDS-related bias depends crucially on the dynamics of the HIV epidemic, on the length of time before the survey that the estimates are made for, and on the underlying non-AIDS child mortality. This simple methodology (which does not take into account the use of effective antiretroviral interventions) gives results qualitatively similar to those of other studies.
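
    The direction and size of the bias can be illustrated with a deliberately simplified cohort calculation (my own toy version, not the UN IGME projection model): children whose mothers died before the survey are reported neither as alive nor as dead, and those children have the highest mortality, so the survey understates true under-five mortality.

```python
def q5_reporting_bias(n_pos, n_neg, q5_neg, q5_pos_alive, q5_pos_dead,
                      p_mom_dead):
    """Return (true q5, reported q5, adjustment factor) for one birth
    cohort; n_pos/n_neg are births to HIV-positive/negative mothers,
    p_mom_dead is the probability the mother died before the survey."""
    births = n_pos + n_neg
    true_deaths = (n_neg * q5_neg
                   + n_pos * (1 - p_mom_dead) * q5_pos_alive
                   + n_pos * p_mom_dead * q5_pos_dead)
    # children of dead mothers drop out of both numerator and denominator
    rep_deaths = n_neg * q5_neg + n_pos * (1 - p_mom_dead) * q5_pos_alive
    rep_births = n_neg + n_pos * (1 - p_mom_dead)
    true_q5 = true_deaths / births
    reported_q5 = rep_deaths / rep_births
    return true_q5, reported_q5, true_q5 / reported_q5
```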

  9. Child mortality estimation: methods used to adjust for bias due to AIDS in estimating trends in under-five mortality.

    Directory of Open Access Journals (Sweden)

    Neff Walker

    Full Text Available In most low- and middle-income countries, child mortality is estimated from data provided by mothers concerning the survival of their children using methods that assume no correlation between the mortality risks of the mothers and those of their children. This assumption is not valid for populations with generalized HIV epidemics, however, and in this review, we show how the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) uses a cohort component projection model to correct for AIDS-related biases in the data used to estimate trends in under-five mortality. In this model, births in a given year are identified as occurring to HIV-positive or HIV-negative mothers, the lives of the infants and mothers are projected forward using survivorship probabilities to estimate survivors at the time of a given survey, and the extent to which excess mortality of children goes unreported because of the deaths of HIV-infected mothers prior to the survey is calculated. Estimates from the survey for past periods can then be adjusted for the estimated bias. The extent of the AIDS-related bias depends crucially on the dynamics of the HIV epidemic, on the length of time before the survey that the estimates are made for, and on the underlying non-AIDS child mortality. This simple methodology (which does not take into account the use of effective antiretroviral interventions) gives results qualitatively similar to those of other studies.

  10. Transuranic waste characterization sampling and analysis methods manual. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Suermann, J.F.

    1996-04-01

    This Methods Manual provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Transuranic Waste Characterization Quality Assurance Program Plan (QAPP) for the Waste Isolation Pilot Plant (WIPP) Transuranic (TRU) Waste Characterization Program (the Program) and the WIPP Waste Analysis Plan. This Methods Manual includes all of the testing, sampling, and analytical methodologies accepted by DOE for use in implementing the Program requirements specified in the QAPP and the WIPP Waste Analysis Plan. The procedures in this Methods Manual are comprehensive and detailed and are designed to provide the necessary guidance for the preparation of site-specific procedures. With some analytical methods, such as Gas Chromatography/Mass Spectrometry, the Methods Manual procedures may be used directly. With other methods, such as nondestructive characterization, the Methods Manual provides guidance rather than a step-by-step procedure. Sites must meet all of the specified quality control requirements of the applicable procedure. Each DOE site must document the details of the procedures it will use and demonstrate the efficacy of such procedures to the Manager, National TRU Program Waste Characterization, during Waste Characterization and Certification audits.

  12. Media Bias

    OpenAIRE

    Sendhil Mullainathan; Andrei Shleifer

    2002-01-01

    There are two different types of media bias. One bias, which we refer to as ideology, reflects a news outlet's desire to affect reader opinions in a particular direction. The second bias, which we refer to as spin, reflects the outlet's attempt to simply create a memorable story. We examine competition among media outlets in the presence of these biases. Whereas competition can eliminate the effect of ideological bias, it actually exaggerates the incentive to spin stories.

  13. Reducing bias in population and landscape genetic inferences: the effects of sampling related individuals and multiple life stages.

    Science.gov (United States)

    Peterman, William; Brocato, Emily R; Semlitsch, Raymond D; Eggert, Lori S

    2016-01-01

    In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (F_ST and D_C) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using D_C, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.
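
    Isolation-by-distance tests of this kind are typically Mantel tests between genetic and geographic distance matrices. A generic permutation sketch in Python (not the authors' code; the matrix names are placeholders):

```python
import numpy as np

def mantel_test(gen_dist, geo_dist, n_perm=9999, seed=None):
    """Correlate the upper triangles of two symmetric distance matrices
    and build a permutation null by relabelling populations."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(gen_dist, k=1)
    r_obs = np.corrcoef(gen_dist[iu], geo_dist[iu])[0, 1]
    n = gen_dist.shape[0]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(gen_dist[p][:, p][iu], geo_dist[iu])[0, 1]
        hits += r >= r_obs
    return r_obs, (hits + 1) / (n_perm + 1)   # one-sided p-value
```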

  14. Low-Resolution Spectroscopy of Gamma-ray Burst Optical Afterglows: Biases in the Swift Sample and Characterization of the Absorbers

    CERN Document Server

    Fynbo, J P U; Prochaska, J X; Malesani, D; Ledoux, C; Postigo, A de Ugarte; Nardini, M; Vreeswijk, P M; Hjorth, J; Sollerman, J; Chen, H -W; Thoene, C C; Bjoernsson, G; Bloom, J S; Castro-Tirado, A J; Christensen, L; De Cia, A; Gorosabel, J U; Jaunsen, A; Jensen, B L; Levan, A; Maund, J; Masetti, N; Milvang-Jensen, B; Palazzi, E; Perley, D A; Pian, E; Rol, E; Schady, P; Starling, R; Tanvir, N; Watson, D J; Wiersema, K; Xu, D; Augusteijn, T; Grundahl, F; Telting, J; Quirion, P -O

    2009-01-01

    (Abridged). We present a sample of 77 optical afterglows (OAs) of Swift detected GRBs for which spectroscopic follow-up observations have been secured. We provide linelists and equivalent widths for all detected lines redward of Ly-alpha. We discuss to what extent the current sample of Swift bursts with OA spectroscopy is a biased subsample of all Swift detected GRBs. For that purpose we define an X-ray selected sample of Swift bursts with optimal conditions for ground-based follow up from the period March 2005 to September 2008; 146 bursts fulfill our sample criteria. We derive the redshift distribution for this sample and conclude that less than 19% of Swift bursts are at z>7. We compare the high energy properties for three sub-samples of bursts in the sample: i) bursts with redshifts measured from OA spectroscopy, ii) bursts with detected OA, but no OA-based redshift, and iii) bursts with no detection of the OA. The bursts in group i) have significantly less excess X-ray absorption than bursts in the other...

  15. COMPARISON OF SAMPLE PREPARATION METHODS FOR CHIP-CHIP ASSAYS

    OpenAIRE

    O'Geen, Henriette; Nicolet, Charles M.; Blahnik, Kim; Green, Roland; Farnham, Peggy J.

    2006-01-01

    A single ChIP sample does not provide enough DNA for hybridization to a genomic tiling array. A commonly used technique for amplifying the DNA obtained from ChIP assays is linker-mediated PCR (LMPCR). However, using this amplification method, we could not identify Oct4 binding sites on genomic tiling arrays representing 1% of the human genome (ENCODE arrays). In contrast, hybridization of a pool of 10 ChIP samples to the arrays produced reproducible binding patterns and low background signals...

  16. On-line sample processing methods in flow analysis

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald

    2008-01-01

    In this chapter, the state of the art of flow injection and related approaches for automation and miniaturization of sample processing, regardless of the aggregate state of the sample medium, is overviewed. The potential of the various generations of flow injection for implementation of in-line dilution, derivatization, separation and preconcentration methods encompassing solid reactors, solvent extraction, sorbent extraction, precipitation/coprecipitation, hydride/vapor generation and digestion/leaching protocols, as hyphenated to a plethora of detection devices, is discussed in detail...

  17. Rational Construction of Stochastic Numerical Methods for Molecular Sampling

    CERN Document Server

    Leimkuhler, Benedict

    2012-01-01

    In this article, we focus on the sampling of the configurational Gibbs-Boltzmann distribution, that is, the calculation of averages of functions of the position coordinates of a molecular $N$-body system modelled at constant temperature. We show how a formal series expansion of the invariant measure of a Langevin dynamics numerical method can be obtained in a straightforward way using the Baker-Campbell-Hausdorff lemma. We then compare Langevin dynamics integrators in terms of their invariant distributions and demonstrate a superconvergence property (4th order accuracy where only 2nd order would be expected) of one method in the high friction limit; this method, moreover, can be reduced to a simple modification of the Euler-Maruyama method for Brownian dynamics involving a non-Markovian (coloured noise) random process. In the Brownian dynamics case, 2nd order accuracy of the invariant density is achieved. All methods considered are efficient for molecular applications (requiring one force evaluation per times...
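
    The splitting schemes analysed in this line of work can be written in a few lines; the sketch below shows the BAOAB ordering, the scheme reported to be superconvergent for configurational averages in the high-friction limit. This is an illustration rather than the paper's code, and the parameter names are mine.

```python
import numpy as np

def baoab_step(q, p, force, m, dt, gamma, kT, rng):
    """One BAOAB step of Langevin dynamics: half kick (B), half drift
    (A), exact Ornstein-Uhlenbeck momentum update (O), half drift,
    half kick."""
    p = p + 0.5 * dt * force(q)
    q = q + 0.5 * dt * p / m
    c = np.exp(-gamma * dt)
    p = c * p + np.sqrt((1.0 - c * c) * m * kT) * rng.standard_normal(p.shape)
    q = q + 0.5 * dt * p / m
    p = p + 0.5 * dt * force(q)
    return q, p
```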

  18. A Novel Fast Method for Point-sampled Model Simplification

    Directory of Open Access Journals (Sweden)

    Cao Zhi

    2016-01-01

    Full Text Available A novel fast simplification method for point-sampled statue models is proposed. Simplification for 3D model reconstruction is a hot topic in the field of 3D surface construction, but it is difficult because the point clouds of many 3D models are very large, so running times become very long. In this paper, a two-stage simplification method is proposed. Firstly, a feature-preserving non-uniform simplification method for cloud points is presented, which simplifies the data set to remove redundancy while preserving the features of the model. Secondly, an affinity-clustering simplification method is used to classify each cloud point as a sharp point or a simple point. The advantage of affinity propagation clustering is that it passes messages among data points and processes them quickly. Together with re-sampling, it can dramatically reduce the duration of the process while keeping memory costs low. Both theoretical analysis and experimental results show that the proposed method is efficient and that the details of the surface are preserved well.
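
    As a rough illustration of the clustering stage only, scikit-learn's AffinityPropagation can select exemplars of a point set; the sketch keeps one representative per cluster and omits the paper's feature-preserving first stage and re-sampling. Note that affinity propagation is O(N^2) in memory, so it suits an already-reduced cloud.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def keep_exemplars(points, damping=0.9):
    """Cluster the cloud and return the exemplar points as the
    simplified set; points is an (N, 3) array."""
    ap = AffinityPropagation(damping=damping, random_state=0).fit(points)
    return points[ap.cluster_centers_indices_]
```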

  19. Horvitz-Thompson survey sample methods for estimating large-scale animal abundance

    Science.gov (United States)

    Samuel, M.D.; Garton, E.O.

    1994-01-01

    Large-scale surveys to estimate animal abundance can be useful for monitoring population status and trends, for measuring responses to management or environmental alterations, and for testing ecological hypotheses about abundance. However, large-scale surveys may be expensive and logistically complex. To ensure resources are not wasted on unattainable targets, the goals and uses of each survey should be specified carefully and alternative methods for addressing these objectives always should be considered. During survey design, the importance of each survey error component (spatial design, proportion of detected animals, precision in detection) should be considered carefully to produce a complete statistically based survey. Failure to address these three survey components may produce population estimates that are inaccurate (biased low), have unrealistic precision (too precise) and do not satisfactorily meet the survey objectives. Optimum survey design requires trade-offs in these sources of error relative to the costs of sampling plots and detecting animals on plots, considerations that are specific to the spatial logistics and survey methods. The Horvitz-Thompson estimators provide a comprehensive framework for considering all three survey components during the design and analysis of large-scale wildlife surveys. Problems of spatial and temporal (especially survey to survey) heterogeneity in detection probabilities have received little consideration, but failure to account for heterogeneity produces biased population estimates. The goal of producing unbiased population estimates is in conflict with the increased variation from heterogeneous detection in the population estimate. One solution to this conflict is to use an MSE-based approach to achieve a balance between bias reduction and increased variation. Further research is needed to develop methods that address spatial heterogeneity in detection, evaluate the effects of temporal heterogeneity on survey
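
    The estimator itself is compact. A minimal sketch of the Horvitz-Thompson total with an optional detection-probability correction (the argument names are mine):

```python
import numpy as np

def horvitz_thompson(counts, incl_prob, det_prob=None):
    """Estimate total abundance: weight each plot count by the inverse
    of its inclusion probability; optionally correct counts for
    incomplete detection first."""
    counts = np.asarray(counts, dtype=float)
    if det_prob is not None:
        counts = counts / np.asarray(det_prob, dtype=float)
    return np.sum(counts / np.asarray(incl_prob, dtype=float))

# e.g. two plots with inclusion probabilities 0.1 and 0.05 and
# detection probabilities 0.8 and 0.6:
# horvitz_thompson([12, 5], [0.1, 0.05], [0.8, 0.6])
```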

  20. Genetic distance sampling: a novel sampling method for obtaining core collections using genetic distances with an application to cultivated lettuce

    NARCIS (Netherlands)

    Jansen, J.; Hintum, van T.J.L.

    2007-01-01

    This paper introduces a novel sampling method for obtaining core collections, entitled genetic distance sampling. The method incorporates information about distances between individual accessions into a random sampling procedure. A basic feature of the method is that automatically larger samples are

  1. Microfluidic Sample Preparation Methods for the Analysis of Milk Contaminants

    Directory of Open Access Journals (Sweden)

    Andrea Adami

    2016-01-01

    Full Text Available In systems for food analysis, one of the major challenges is related to the quantification of specific species within the complex chemical and physical composition of foods, that is, the effect of the "matrix"; sample preparation is often the key to a successful application of biosensors to real measurements, but little attention is traditionally paid to such aspects in sensor research. In this critical review, we discuss several microfluidic concepts that can play a significant role in sample preparation, highlighting the importance of sample preparation for efficient detection of food contamination. As a case study, we focus on the challenges related to the detection of aflatoxin M1 in milk and we evaluate possible approaches based on inertial microfluidics, electrophoresis, and acoustic separation, compared with traditional laboratory and industrial methods for phase separation as a baseline of trusted and well-established techniques.

  2. A Mixed Methods Investigation of Mixed Methods Sampling Designs in Social and Health Science Research

    Science.gov (United States)

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.

    2007-01-01

    A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…

  3. Galaxies in the Illustris simulation as seen by the Sloan Digital Sky Survey - I: Bulge+disc decompositions, methods, and biases.

    Science.gov (United States)

    Bottrell, Connor; Torrey, Paul; Simard, Luc; Ellison, Sara L.

    2017-05-01

    We present an image-based method for comparing the structural properties of galaxies produced in hydrodynamical simulations to real galaxies in the Sloan Digital Sky Survey. The key feature of our work is the introduction of extensive observational realism, such as object crowding, noise and viewing angle, to the synthetic images of simulated galaxies, so that they can be fairly compared to real galaxy catalogues. We apply our methodology to the dust-free synthetic image catalogue of galaxies from the Illustris simulation at z = 0, which are then fit with bulge+disc models to obtain morphological parameters. In this first paper in a series, we detail our methods, quantify observational biases and present publicly available bulge+disc decomposition catalogues. We find that our bulge+disc decompositions are largely robust to the observational biases that affect decompositions of real galaxies. However, we identify a significant population of galaxies (roughly 30 per cent of the full sample) in Illustris that are prone to internal segmentation, leading to systematically reduced flux estimates by up to a factor of 6, smaller half-light radii by up to a factor of ˜2 and generally erroneous bulge-to-total fractions of (B/T) = 0.

  4. Galaxies in the Illustris simulation as seen by the Sloan Digital Sky Survey - I: Bulge+disc decompositions, methods, and biases.

    Science.gov (United States)

    Bottrell, Connor; Torrey, Paul; Simard, Luc; Ellison, Sara L.

    2017-01-01

    We present an image-based method for comparing the structural properties of galaxies produced in hydrodynamical simulations to real galaxies in the Sloan Digital Sky Survey. The key feature of our work is the introduction of extensive observational realism, such as object crowding, noise and viewing angle, to the synthetic images of simulated galaxies, so that they can be fairly compared to real galaxy catalogs. We apply our methodology to the dust-free synthetic image catalog of galaxies from the Illustris simulation at z = 0, which are then fit with bulge+disc models to obtain morphological parameters. In this first paper in a series, we detail our methods, quantify observational biases, and present publicly available bulge+disc decomposition catalogs. We find that our bulge+disc decompositions are largely robust to the observational biases that affect decompositions of real galaxies. However, we identify a significant population of galaxies (roughly 30% of the full sample) in Illustris that are prone to internal segmentation, leading to systematically reduced flux estimates by up to a factor of 6, smaller half-light radii by up to a factor of ˜ 2, and generally erroneous bulge-to-total fractions of (B/T)=0.

  5. An improved method for concentrating rotavirus from water samples

    Directory of Open Access Journals (Sweden)

    Kittigul Leera

    2001-01-01

    Full Text Available A modified adsorption-elution method for the concentration of seeded rotavirus from water samples was used to determine various factors which affected the virus recovery. An enzyme-linked immunosorbent assay was used to detect the rotavirus antigen after concentration. Of the various eluents compared, 0.05M glycine, pH 11.5 gave the highest rotavirus antigen recovery using negatively charged membrane filtration whereas 2.9% tryptose phosphate broth containing 6% glycine; pH 9.0 was found to give the greatest elution efficiency when a positively charged membrane was used. Reconcentration of water samples by a speedVac concentrator showed significantly higher rotavirus recovery than polyethylene glycol precipitation through both negatively and positively charged filters (p-value 1,800 MPN/100 ml were observed but rotavirus was not detected in any sample. This study suggests that the speedVac reconcentration method gives the most efficient rotavirus recovery from water samples.

  6. A direct method for e-cigarette aerosol sample collection.

    Science.gov (United States)

    Olmedo, Pablo; Navas-Acien, Ana; Hess, Catherine; Jarmul, Stephanie; Rule, Ana

    2016-08-01

    E-cigarette use is increasing in populations around the world. Recent evidence has shown that the aerosol produced by e-cigarettes can contain a variety of toxicants. Published studies characterizing toxicants in e-cigarette aerosol have relied on filters, impingers or sorbent tubes, which are methods that require diluting or extracting the sample in a solution during collection. We have developed a collection system that directly condenses e-cigarette aerosol samples for chemical and toxicological analyses. The collection system consists of several cut pipette tips connected with short pieces of tubing. The pipette tip-based collection system can be connected to a peristaltic pump, a vacuum pump, or directly to an e-cigarette user for the e-cigarette aerosol to flow through the system. The pipette tip-based system condenses the aerosol produced by the e-cigarette and collects a liquid sample that is ready for analysis without the need of intermediate extraction solutions. We tested a total of 20 e-cigarettes from 5 different brands commercially available in Maryland. The pipette tip-based collection system condensed between 0.23 and 0.53 mL of post-vaped e-liquid after 150 puffs. The proposed method is highly adaptable, can be used during field work and in experimental settings, and allows collecting aerosol samples from a wide variety of e-cigarette devices, yielding a condensate of the likely exact substance that is being delivered to the lungs.

  7. Microbial diversity in fecal samples depends on DNA extraction method

    DEFF Research Database (Denmark)

    Mirsepasi, Hengameh; Persson, Søren; Struve, Carsten

    2014-01-01

    BACKGROUND: There are challenges when extracting bacterial DNA from specimens for molecular diagnostics, since fecal samples also contain DNA from human cells and many different substances derived from food, cell residues and medication that can inhibit downstream PCR. The purpose of the study was to evaluate two different DNA extraction methods in order to choose the most efficient method for studying intestinal bacterial diversity using Denaturing Gradient Gel Electrophoresis (DGGE). FINDINGS: In this study, a semi-automatic DNA extraction system (easyMag®, BioMérieux, Marcy l'Etoile, France) and a manual one (QIAamp DNA Stool Mini Kit, Qiagen, Hilden, Germany) were tested on stool samples collected from 3 patients with Inflammatory Bowel Disease (IBD) and 5 healthy individuals. DNA extracts obtained by the QIAamp DNA Stool Mini Kit yielded a higher amount of DNA compared to DNA extracts obtained...

  8. A Method for Separating PCBs and OCPs in Biota Samples

    Directory of Open Access Journals (Sweden)

    GONG Fu-qiang

    2014-10-01

    Full Text Available A chromatographic fractionation and cleanup method was developed for PCBs and OCPs in biota samples, using a self-developed chromatographic fractionation instrument and a solid-phase mixture. The solid phase was composed of florisil (30%-35%), acid-treated silica gel (50%-60%) and anhydrous sodium sulphate (10%-15%). The recoveries of spiked PCBs and OCPs in the column ranged from 96.4% to 119% and from 78.4% to 103% respectively, while in fish fat tissue they ranged from 74.4% to 100% and from 78.3% to 102% respectively. This approach proved to be an efficient, fast, simple and cost-effective method for fractionation and cleanup of PCBs and OCPs in biota samples.

  9. Method of determining an electrical property of a test sample

    DEFF Research Database (Denmark)

    2010-01-01

    A method of obtaining an electrical property of a test sample, comprising a non-conductive area and a conductive or semi-conductive test area, by performing multiple measurements using a multi-point probe. The method comprises the steps of providing a magnetic field having field lines passing perpendicularly through the test area, bringing the probe into a first position on the test area, the conductive tips of the probe being in contact with the test area, determining a position for each tip relative to the boundary between the non-conductive area and the test area, determining distances between each tip, selecting one tip to be a current source positioned between conductive tips being used for determining a voltage in the test sample, performing a first measurement, moving the probe and performing a second measurement, calculating on the basis of the first and second measurement...

  10. Post-Decontamination Vapor Sampling and Analytical Test Methods

    Science.gov (United States)

    2015-08-12

    presence of a vapor hazard. Bag-and-sample test methods provide an indication that off-gassing may be present, but because the airflow rate is...number, nomenclature, identifier, manufacturer, lot number, and other pertinent information/indicators, if applicable, will be recorded in the...data. c. The container will be sealed with a lid lined with Teflon® polytetrafluoroethylene (PTFE) (DuPont™, E.I. du Pont de Nemours and Company

  11. PURE CULTURE METHOD: GIARDIA LAMBLIA FROM DIFFERENT STOOL SAMPLES

    Directory of Open Access Journals (Sweden)

    H.A YOUSEFI

    2000-03-01

    Full Text Available Introduction. Giardiasis is one of the health problems in the world, including Iran. To determine the biochemical and biological properties and also to identify various strains, it is essential to obtain a pure culture and then mass-produce Giardia lamblia. The goal of this study was to isolate this protozoan in pure culture.
    Methods. Giardia lamblia cysts were isolated from 50 stool samples by flotation on a four-layer sucrose gradient. The cysts were transferred to an inducing solution. Subsequently, they were cultured in a modified culture medium (TYI-S-33). Following excystation of trophozoites and their multiplication, the parasite was cultured and purified.
    Findings. Excystation of trophozoites was observed in 40 samples (80 percent), of which 22 samples (55 percent) yielded pure cultures. The doubling time was approximately 13 h and the peak of parasite growth was observed between the third and fourth days.
    Conclusion. The proliferation and growth rate of Giardia lamblia have enabled us to use this method widely. Cysteine and ascorbic acid, which are present in the induction solution, have a key role in excystation of trophozoites. Purification and passage of samples facilitated the culture of this parasite in vitro. Therefore this method yielded better results in comparison with other studies, probably due to a decrease in the amount of bovine bile or the use of different strains of Giardia lamblia in the present study.

  12. Global metabolite analysis of yeast: evaluation of sample preparation methods

    DEFF Research Database (Denmark)

    Villas-Bôas, Silas Granato; Højer-Pedersen, Jesper; Åkesson, Mats Fredrik;

    2005-01-01

    , which is the analysis of a large number of metabolites with very diverse chemical and physical properties. This work reports the leakage of intracellular metabolites observed during quenching yeast cells with cold methanol solution, the efficacy of six different methods for the extraction of intracellular metabolites, and the losses noticed during sample concentration by lyophilization and solvent evaporation. A more reliable procedure is suggested for quenching yeast cells with cold methanol solution, followed by extraction of intracellular metabolites by pure methanol. The method can be combined...

  13. A direct sampling method to an inverse medium scattering problem

    KAUST Repository

    Ito, Kazufumi

    2012-01-10

    In this work we present a novel sampling method for time harmonic inverse medium scattering problems. It provides a simple tool to directly estimate the shape of the unknown scatterers (inhomogeneous media), and it is applicable even when the measured data are only available for one or two incident directions. A mathematical derivation is provided for its validation. Two- and three-dimensional numerical simulations are presented, which show that the method is accurate even with a few sets of scattered field data, computationally efficient, and very robust with respect to noise in the data. © 2012 IOP Publishing Ltd.
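
    For the two-dimensional Helmholtz case, direct sampling indicators of this family reduce to a normalised inner product of the measured scattered field with the free-space fundamental solution Phi(x, z) = (i/4) H0^(1)(k|x - z|) at each sampling point z. A sketch under that assumption (not the authors' code; names are mine):

```python
import numpy as np
from scipy.special import hankel1

def sampling_index(u_s, receivers, grid, k):
    """u_s: complex scattered field at the receivers (N,); receivers:
    (N, 2) positions; grid: iterable of (2,) sampling points. Peaks of
    the returned index indicate scatterer support."""
    out = []
    u_norm = np.linalg.norm(u_s)
    for z in grid:
        phi = 0.25j * hankel1(0, k * np.linalg.norm(receivers - z, axis=1))
        out.append(abs(np.vdot(phi, u_s)) / (u_norm * np.linalg.norm(phi)))
    return np.array(out)
```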

  14. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  15. Microextraction Methods for Preconcentration of Aluminium in Urine Samples

    Directory of Open Access Journals (Sweden)

    Farzad Farajbakhsh, Mohammad Amjadi, Jamshid Manzoori, Mohammad R. Ardalan, Abolghasem Jouyban

    2016-07-01

    Full Text Available Background: Analysis of aluminium (Al) in urine samples is required in management of a number of diseases including patients with renal failure. This work aimed to present dispersive liquid-liquid microextraction (DLLME) and ultrasound-assisted emulsification microextraction (USAEME) methods for the preconcentration of ultra-trace amounts of aluminum in human urine prior to its determination by graphite furnace atomic absorption spectrometry (GFAAS). Methods: The microextraction methods were based on the complex formation of Al3+ with 8-hydroxyquinoline. The effect of various experimental parameters on the efficiencies of the methods and their optimum values were studied. Results: Under the optimal conditions, the limits of detection for USAEME-GFAAS and DLLME-GFAAS were 0.19 and 0.30 ng mL−1, respectively, and the corresponding relative standard deviations (RSD, n=5) for the determination of 40 ng mL−1 Al3+ were 5.9% and 4.9%. Conclusion: Both methods could be successfully used for the analysis of ultra-trace concentrations of Al in urine samples of dialysis patients.

  16. Recent advances in sample preparation techniques for effective bioanalytical methods.

    Science.gov (United States)

    Kole, Prashant Laxman; Venkatesh, Gantala; Kotecha, Jignesh; Sheshala, Ravi

    2011-01-01

    This paper reviews the recent developments in bioanalysis sample preparation techniques and gives an update on basic principles, theory, applications and possibilities for automation, and a comparative discussion on the advantages and limitations of each technique. Conventional liquid-liquid extraction (LLE), protein precipitation (PP) and solid-phase extraction (SPE) techniques are now considered methods of the past. The last decade has witnessed a rapid development of novel sample preparation techniques in bioanalysis. Developments in SPE techniques such as selective sorbents and in the overall approach to SPE, such as hybrid SPE and molecularly imprinted polymer SPE, have been addressed. Considerable literature has been published in the area of solid-phase micro-extraction and its different versions, e.g. stir bar sorptive extraction, and their application in the development of selective and sensitive bioanalytical methods. Techniques such as dispersive solid-phase extraction, disposable pipette extraction and micro-extraction by packed sorbent offer a variety of extraction phases and provide unique advantages to bioanalytical methods. On-line SPE utilizing column-switching techniques is rapidly gaining acceptance in bioanalytical applications. PP sample preparation techniques such as PP filter plates/tubes offer many advantages like removal of phospholipids and proteins in plasma/serum. Newer approaches to conventional LLE techniques (salting-out LLE) are also covered in this review article.

  17. Analytical Bias Exceeding Desirable Quality Goal in 4 out of 5 Common Immunoassays: Results of a Native Single Serum Sample External Quality Assessment Program for Cobalamin, Folate, Ferritin, Thyroid-Stimulating Hormone, and Free T4 Analyses.

    Science.gov (United States)

    Kristensen, Gunn B B; Rustad, Pål; Berg, Jens P; Aakre, Kristin M

    2016-09-01

    We undertook this study to evaluate method differences for 5 components analyzed by immunoassays, to explore whether the use of method-dependent reference intervals may compensate for method differences, and to investigate commutability of external quality assessment (EQA) materials. Twenty fresh native single serum samples, a fresh native serum pool, Nordic Federation of Clinical Chemistry Reference Serum X (serum X) (serum pool), and 2 EQA materials were sent to 38 laboratories for measurement of cobalamin, folate, ferritin, free T4, and thyroid-stimulating hormone (TSH) by 5 different measurement procedures [Roche Cobas (n = 15), Roche Modular (n = 4), Abbott Architect (n = 8), Beckman Coulter Unicel (n = 2), and Siemens ADVIA Centaur (n = 9)]. The target value for each component was calculated based on the mean of method means or measured by a reference measurement procedure (free T4). Quality specifications were based on biological variation. Local reference intervals were reported from all laboratories. Method differences that exceeded acceptable bias were found for all components except folate. Free T4 differences from the uncommonly used reference measurement procedure were large. Reference intervals differed between measurement procedures but also within 1 measurement procedure. The serum X material was commutable for all components and measurement procedures, whereas the EQA materials were noncommutable in 13 of 50 occasions (5 components, 5 methods, 2 EQA materials). The bias between the measurement procedures was unacceptably large in 4/5 tested components. Traceability to reference materials as claimed by the manufacturers did not lead to acceptable harmonization. Adjustment of reference intervals in accordance with method differences and use of commutable EQA samples are not implemented commonly. © 2016 American Association for Clinical Chemistry.
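
    The pass/fail judgement used in studies like this one is easy to reproduce: compare each method's mean on a commutable sample with the all-method target and with the desirable bias specification derived from biological variation, commonly taken as 0.25*sqrt(CV_I^2 + CV_G^2) (within- and between-subject CVs). A sketch, with the names and the exact criterion as assumptions on my part:

```python
import numpy as np

def bias_check(method_means, cv_i, cv_g):
    """Return each method's percent bias from the mean of method means
    and whether it is within the desirable specification."""
    means = np.asarray(method_means, dtype=float)
    target = means.mean()                       # mean of method means
    bias_pct = 100.0 * (means - target) / target
    limit = 0.25 * np.sqrt(cv_i ** 2 + cv_g ** 2)
    return bias_pct, limit, np.abs(bias_pct) <= limit
```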

  18. Sediment sampling and processing methods in Hungary, and possible improvements

    Science.gov (United States)

    Tamas, Eniko Anna; Koch, Daniel; Varga, Gyorgy

    2016-04-01

    The importance of monitoring sediment processes is unquestionable: the sediment balance of regulated rivers suffered substantial alterations in the past century, affecting navigation, energy production, fish habitats and floodplain ecosystems alike; infiltration times to our drinking water wells have shortened, exposing them to eventual pollution events and making them vulnerable; and sediment-attached contaminants accumulate in floodplains and reservoirs, threatening our healthy environment. The changes in flood characteristics and rating curves of our rivers are regularly researched and described, involving state-of-the-art measurement methods, modeling tools and traditional statistics. Sediment processes, however, are much less known. Unlike the investigation of flow processes, sediment-related research is scarce, which is partly due to the outdated methodology and poor database background in this specific field. Sediment-related data, information and analyses form an important and integral part of civil engineering in relation to rivers all over the world. In relation to the second largest river of Europe, the Danube, it is widely known in the expert community and has long been discussed at expert forums that the sediment balance of the river has changed drastically over the past century. Sediment monitoring on the river Danube started as early as the end of the 19th century, with scattered measurements carried out. Regular sediment sampling was developed in the first half of the 20th century all along the river, with different station densities and monitoring frequencies in different countries. After the first few decades of regular sampling, the concept of (mainly industrial) development changed along the river and data needs changed as well; furthermore, the complicated and inexact methods of sampling bed load on the alluvial reach of the river were not developed further. The frequency of suspended sediment sampling is very low along the river

  19. Evaluation of enhanced sampling provided by accelerated molecular dynamics with Hamiltonian replica exchange methods.

    Science.gov (United States)

    Roe, Daniel R; Bergonzo, Christina; Cheatham, Thomas E

    2014-04-03

    Many problems studied via molecular dynamics require accurate estimates of various thermodynamic properties, such as the free energies of different states of a system, which in turn requires well-converged sampling of the ensemble of possible structures. Enhanced sampling techniques are often applied to provide faster convergence than is possible with traditional molecular dynamics simulations. Hamiltonian replica exchange molecular dynamics (H-REMD) is a particularly attractive method, as it allows the incorporation of a variety of enhanced sampling techniques through modifications to the various Hamiltonians. In this work, we study the enhanced sampling of the RNA tetranucleotide r(GACC) provided by H-REMD combined with accelerated molecular dynamics (aMD), where a boosting potential is applied to torsions, and compare this to the enhanced sampling provided by H-REMD in which torsion potential barrier heights are scaled down to lower force constants. We show that H-REMD and multidimensional REMD (M-REMD) combined with aMD does indeed enhance sampling for r(GACC), and that the addition of the temperature dimension in the M-REMD simulations is necessary to efficiently sample rare conformations. Interestingly, we find that the rate of convergence can be improved in a single H-REMD dimension by simply increasing the number of replicas from 8 to 24 without increasing the maximum level of bias. The results also indicate that factors beyond replica spacing, such as round trip times and time spent at each replica, must be considered in order to achieve optimal sampling efficiency.
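
    The aMD boost used in these simulations has a standard closed form: when the (torsion) potential V falls below a threshold E, a bias DeltaV = (E - V)^2 / (alpha + E - V) is added, raising the basins and flattening the landscape. A one-function sketch (generic aMD, not the authors' exact setup):

```python
def amd_boost(V, E, alpha):
    """Accelerated-MD boost energy added to the potential when V < E;
    alpha tunes how aggressively basins are flattened."""
    if V >= E:
        return 0.0
    return (E - V) ** 2 / (alpha + E - V)
```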

  20. Rapid separation method for actinides in emergency air filter samples.

    Science.gov (United States)

    Maxwell, Sherrod L; Culligan, Brian K; Noyes, Gary W

    2010-12-01

    A new rapid method for the determination of actinides and strontium in air filter samples has been developed at the Savannah River Site Environmental Lab (Aiken, SC, USA) that can be used in emergency response situations. The actinides and strontium in air filter method utilizes a rapid acid digestion method and a streamlined column separation process with stacked TEVA, TRU and Sr Resin cartridges. Vacuum box technology and rapid flow rates are used to reduce analytical time. Alpha emitters are prepared using cerium fluoride microprecipitation for counting by alpha spectrometry. The purified (90)Sr fractions are mounted directly on planchets and counted by gas flow proportional counting. The method showed high chemical recoveries and effective removal of interferences. This new procedure was applied to emergency air filter samples received in the NRIP Emergency Response exercise administered by the National Institute for Standards and Technology (NIST) in April, 2009. The actinide and (90)Sr in air filter results were reported in less than 4 h with excellent quality. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. Developing a Method for Resolving NOx Emission Inventory Biases Using Discrete Kalman Filter Inversion, Direct Sensitivities, and Satellite-Based Columns

    Science.gov (United States)

    An inverse method was developed to integrate satellite observations of atmospheric pollutant column concentrations and direct sensitivities predicted by a regional air quality model in order to discern biases in the emissions of the pollutant precursors.
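
    From the abstract, the state is a set of emission scaling factors and the observation operator is built from the model's direct sensitivities of column concentrations to emissions. Below is a generic discrete Kalman filter update consistent with that description; the matrix names are the standard ones, and this is not the paper's implementation.

```python
import numpy as np

def dkf_update(x, P, y_obs, H, R):
    """One Kalman update: x = emission scaling factors, P = their
    covariance, y_obs = satellite columns, H = sensitivity of modelled
    columns to x, R = observation-error covariance."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (y_obs - H @ x)         # innovation update
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```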

  2. A time domain sampling method for inverse acoustic scattering problems

    Science.gov (United States)

    Guo, Yukun; Hömberg, Dietmar; Hu, Guanghui; Li, Jingzhi; Liu, Hongyu

    2016-06-01

    This work concerns the inverse scattering problems of imaging unknown/inaccessible scatterers by transient acoustic near-field measurements. Based on the analysis of the migration method, we propose efficient and effective sampling schemes for imaging small and extended scatterers from knowledge of time-dependent scattered data due to incident impulsive point sources. Though the inverse scattering problems are known to be nonlinear and ill-posed, the proposed imaging algorithms are totally "direct" involving only integral calculations on the measurement surface. Theoretical justifications are presented and numerical experiments are conducted to demonstrate the effectiveness and robustness of our methods. In particular, the proposed static imaging functionals enhance the performance of the total focusing method (TFM) and the dynamic imaging functionals show analogous behavior to the time reversal inversion but without solving time-dependent wave equations.

  3. Progressive prediction method for failure data with small sample size

    Institute of Scientific and Technical Information of China (English)

    WANG Zhi-hua; FU Hui-min; LIU Cheng-rui

    2011-01-01

    The small-sample prediction problem, which commonly exists in reliability analysis, was discussed with the progressive prediction method in this paper. The modeling and estimation procedure, as well as the forecast and confidence limit formulas, of the progressive autoregressive (PAR) method are discussed in great detail. The PAR model not only inherits the simple linear features of the autoregressive (AR) model, but is also applicable to nonlinear systems. An application is illustrated for predicting future fatigue failures of tantalum electrolytic capacitors. Forecasting results of the PAR model were compared with an autoregressive moving average (ARMA) model, and it can be seen that the PAR method performs well and shows promise for future applications.
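
    The abstract gives no equations, but the AR backbone the method builds on is easy to sketch: fit x_t = c + sum_i a_i x_{t-i} by least squares and iterate the recursion forward for forecasts. The code below is a generic AR(p) stand-in, not the PAR recursion itself; the names are mine.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model with intercept; returns
    [c, a_1, ..., a_p]."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def forecast(x, coef, steps):
    """Iterate the fitted recursion forward for multi-step forecasts."""
    h, p = list(x), len(coef) - 1
    for _ in range(steps):
        h.append(coef[0] + np.dot(coef[1:], h[-1:-p - 1:-1]))
    return h[len(x):]
```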

  4. A Novel Method for Sampling Alpha-Helical Protein Backbones

    Science.gov (United States)

    Fain, Boris; Levitt, Michael

    2001-01-01

    We present a novel technique of sampling the configurations of helical proteins. Assuming knowledge of native secondary structure, we employ assembly rules gathered from a database of existing structures to enumerate the geometrically possible 3-D arrangements of the constituent helices. We produce a library of possible folds for 25 helical protein cores. In each case the method finds significant numbers of conformations close to the native structure. In addition we assign coordinates to all atoms for 4 of the 25 proteins. In the context of database-driven exhaustive enumeration our method performs extremely well, yielding significant percentages of structures (0.02%-82%) within 6 Å of the native structure. The method's speed and efficiency make it a valuable contribution towards the goal of predicting protein structure.

  5. A method to combine non-probability sample data with probability sample data in estimating spatial means of environmental variables.

    Science.gov (United States)

    Brus, D J; de Gruijter, J J

    2003-04-01

    In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be increased by interpolating the values at the nonprobability sample points to the probability sample points, and using these interpolated values as an auxiliary variable in the difference or regression estimator. These estimators are (approximately) unbiased, even when the nonprobability sample is severely biased such as in preferential samples. The gain in precision compared to the pi estimator in combination with Simple Random Sampling is controlled by the correlation between the target variable and interpolated variable. This correlation is determined by the size (density) and spatial coverage of the nonprobability sample, and the spatial continuity of the target variable. In a case study the average ratio of the variances of the simple regression estimator and pi estimator was 0.68 for preferential samples of size 150 with moderate spatial clustering, and 0.80 for preferential samples of similar size with strong spatial clustering. In the latter case the simple regression estimator was substantially more precise than the simple difference estimator.
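
    The simple regression estimator used here combines the probability sample (target y, interpolated auxiliary z) with the regional mean of the interpolated surface. A small sketch under those definitions (the variable names are mine):

```python
import numpy as np

def regression_estimator(y_prob, z_prob, z_region_mean):
    """y_prob: target values at the probability-sample points; z_prob:
    values interpolated from the non-probability sample at those same
    points; z_region_mean: mean of the interpolated surface over the
    whole region."""
    b = np.cov(y_prob, z_prob)[0, 1] / np.var(z_prob, ddof=1)
    return np.mean(y_prob) + b * (z_region_mean - np.mean(z_prob))
```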

  6. Comparison between powder and slices diffraction methods in teeth samples

    Energy Technology Data Exchange (ETDEWEB)

    Colaco, Marcos V.; Barroso, Regina C. [Universidade do Estado do Rio de Janeiro (IF/UERJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Aplicada; Porto, Isabel M. [Universidade Estadual de Campinas (FOP/UNICAMP), Piracicaba, SP (Brazil). Fac. de Odontologia. Dept. de Morfologia; Gerlach, Raquel F. [Universidade de Sao Paulo (FORP/USP), Ribeirao Preto, SP (Brazil). Fac. de Odontologia. Dept. de Morfologia, Estomatologia e Fisiologia; Costa, Fanny N. [Coordenacao dos Programas de Pos-Graduacao de Engenharia (LIN/COPPE/UFRJ), RJ (Brazil). Lab. de Instrumentacao Nuclear

    2011-07-01

    Proposing different methods to obtain crystallographic information about biological materials is important, since the powder method is nondestructive. Slices are an approximation of what would be an in vivo analysis. Effects of sample preparation cause differences in scattering profiles compared with the powder method. The main inorganic component of bones and teeth is a calcium phosphate mineral whose structure closely resembles hydroxyapatite (HAp). The hexagonal symmetry, however, seems to work well with the powder diffraction data, and the crystal structure of HAp is usually described in space group P63/m. Ten third molar teeth were analyzed. Five teeth were separated into enamel, dentin and circumpulpal dentin powder and five into slices. All the scattering profile measurements were carried out at the X-ray diffraction beamline (XRD1) at the National Synchrotron Light Laboratory - LNLS, Campinas, Brazil. The LNLS synchrotron light source is composed of a 1.37 GeV electron storage ring, delivering approximately 4x10^10 photons/s at 8 keV. A double-crystal Si(111) pre-monochromator, upstream of the beamline, was used to select a small energy bandwidth at 11 keV. Scattering signatures were obtained at intervals of 0.04 deg for angles from 24 deg to 52 deg. The human enamel experimental crystallite sizes obtained in this work were 30(3) nm (112 reflection) and 30(3) nm (300 reflection). These values were obtained from measurements of powdered enamel. When comparing the slice enamel diffraction patterns, which gave 58(8) nm (112 reflection) and 37(7) nm (300 reflection), with those generated by the powder specimens, a few differences emerge. This work shows differences between the powder and slice methods, separating characteristics of the sample from the method's influence. (author)

  7. Exponentially-Biased Ground-State Sampling of Quantum Annealing Machines with Transverse-Field Driving Hamiltonians

    CERN Document Server

    Mandrà, Salvatore; Katzgraber, Helmut G

    2016-01-01

    We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground-state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited in identifying all degenerate ground-state configurations associated to a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009)]. These results suggest that more complex driving Hamiltonians, which introduce transitions between all states with equal weights, are needed in future quantum annealing machines to ensure a fair sampling of the ground-state manifold.

  8. Methods for parasitic protozoans detection in the environmental samples

    Directory of Open Access Journals (Sweden)

    Skotarczak B.

    2009-09-01

    Full Text Available The environmental route of transmission of many parasitic protozoa and their potential for producing large numbers of transmissive stages constitute persistent threats to public and veterinary health. Conventional and new immunological and molecular methods enable assessment of the occurrence, prevalence, levels and sources of waterborne protozoa. Concentration, purification, and detection are the three key steps in all methods that have been approved for routine monitoring of waterborne cysts and oocysts. These steps have been optimized to such an extent that low levels of naturally occurring (oo)cysts of protozoa can be efficiently recovered from water. Ten years have passed since the United States Environmental Protection Agency (USEPA) introduced methods 1622 and 1623 and used them to concentrate and detect the oocysts of Cryptosporidium and cysts of Giardia in water samples. Nevertheless, the methods still need study and improvement. Pre-PCR processing procedures have been developed and are still being improved to remove or reduce the effects of PCR inhibitors. Progress in molecular methods allows more precise distinction of species and simultaneous detection of several parasites; however, these methods are still not routinely used and need standardization. Standardized methods are required to maximize public health surveillance.

  9. Methods for parasitic protozoans detection in the environmental samples.

    Science.gov (United States)

    Skotarczak, B

    2009-09-01

    The environmental route of transmission of many parasitic protozoa and their potential for producing large numbers of transmissive stages constitute persistent threats to public and veterinary health. Conventional and new immunological and molecular methods enable assessment of the occurrence, prevalence, levels and sources of waterborne protozoa. Concentration, purification, and detection are the three key steps in all methods that have been approved for routine monitoring of waterborne cysts and oocysts. These steps have been optimized to such an extent that low levels of naturally occurring (oo)cysts of protozoa can be efficiently recovered from water. Ten years have passed since the United States Environmental Protection Agency (USEPA) introduced methods 1622 and 1623 and used them to concentrate and detect the oocysts of Cryptosporidium and cysts of Giardia in water samples. Nevertheless, the methods still need study and improvement. Pre-PCR processing procedures have been developed and are still being improved to remove or reduce the effects of PCR inhibitors. Progress in molecular methods allows more precise distinction of species and simultaneous detection of several parasites; however, these methods are still not routinely used and need standardization. Standardized methods are required to maximize public health surveillance.

  10. A direct sampling method for inverse electromagnetic medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-09-01

    In this paper, we study the inverse electromagnetic medium scattering problem of estimating the support and shape of medium scatterers from scattered electric/magnetic near-field data. We shall develop a novel direct sampling method based on an analysis of electromagnetic scattering and the behavior of the fundamental solution. It is applicable to a few incident fields and needs only to compute inner products of the measured scattered field with the fundamental solutions located at sampling points. Hence, it is strictly direct, computationally very efficient and highly robust to the presence of data noise. Two- and three-dimensional numerical experiments indicate that it can provide reliable support estimates for multiple scatterers in the case of both exact and highly noisy data. © 2013 IOP Publishing Ltd.

  11. [Wound microbial sampling methods in surgical practice, imprint techniques].

    Science.gov (United States)

    Chovanec, Z; Veverková, L; Votava, M; Svoboda, J; Peštál, A; Doležel, J; Jedlička, V; Veselý, M; Wechsler, J; Čapov, I

    2012-12-01

    A wound is damage to tissue. The process of healing is influenced by many systemic and local factors. The most crucial and most discussed local factor in wound healing is infection. Surgical site infection in the wound is caused by micro-organisms. This has been known for many years; however, the conditions leading to the occurrence of an infection have not been sufficiently described yet. Correct sampling technique, correct storage, transportation, evaluation, and valid interpretation of these data are very important in clinical practice. There are many methods for microbiological sampling, but the best one has not yet been identified and validated. We aim to discuss the problem with a focus on the imprint technique.

  12. Evaluating outcome-correlated recruitment and geographic recruitment bias in a respondent-driven sample of people who inject drugs in Tijuana, Mexico.

    Science.gov (United States)

    Rudolph, Abby E; Gaines, Tommi L; Lozada, Remedios; Vera, Alicia; Brouwer, Kimberly C

    2014-12-01

    Respondent-driven sampling's (RDS) widespread use and reliance on untested assumptions suggest a need for new exploratory/diagnostic tests. We assessed geographic recruitment bias and outcome-correlated recruitment among 1,048 RDS-recruited people who inject drugs (Tijuana, Mexico). Surveys gathered demographics, drug/sex behaviors, activity locations, and recruiter-recruit pairs. Simulations assessed geographic and network clustering of active syphilis (RPR titers ≥1:8). Gender-specific predicted probabilities were estimated using logistic regression with GEE and robust standard errors. Active syphilis prevalence was 7 % (crude: men = 5.7 % and women = 16.6 %; RDS-adjusted: men = 6.7 % and women = 7.6 %). Syphilis clustered in the Zona Norte, a neighborhood known for drug and sex markets. Network simulations revealed geographic recruitment bias and non-random recruitment by syphilis status. Gender-specific prevalence estimates accounting for clustering were highest among those living/working/injecting/buying drugs in the Zona Norte and directly/indirectly connected to syphilis cases (men: 15.9 %, women: 25.6 %) and lowest among those with neither exposure (men: 3.0 %, women: 6.1 %). Future RDS analyses should assess/account for network and spatial dependencies.

  13. Selection bias in dynamically-measured super-massive black hole samples: its consequences and the quest for the most fundamental relation

    CERN Document Server

    Shankar, Francesco; Sheth, Ravi K; Ferrarese, Laura; Graham, Alister W; Savorgnan, Giulia; Allevato, Viola; Marconi, Alessandro; Laesker, Ronald; Lapi, Andrea

    2016-01-01

    We compare the set of local galaxies having dynamically measured black holes with a large, unbiased sample of galaxies extracted from the Sloan Digital Sky Survey. We confirm earlier work showing that the majority of black hole hosts have significantly higher velocity dispersions sigma than local galaxies of similar stellar mass. We use Monte Carlo simulations to illustrate the effect on black hole scaling relations if this bias arises from the requirement that the black hole sphere of influence must be resolved to measure black hole masses with spatially resolved kinematics. We find that this selection effect artificially increases the normalization of the Mbh-sigma relation by a factor of at least ~3; the bias for the Mbh-Mstar relation is even larger. Our Monte Carlo simulations and analysis of the residuals from scaling relations both indicate that sigma is more fundamental than Mstar or effective radius. In particular, the Mbh-Mstar relation is mostly a consequence of the Mbh-sigma and sigma-Mstar relations.

  14. Statistical methods to correct for verification bias in diagnostic studies are inadequate when there are few false negatives: a simulation study

    Directory of Open Access Journals (Sweden)

    Vickers Andrew J

    2008-11-01

    Full Text Available Abstract Background A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives. Methods A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the area under the curve (AUC) corrected for verification bias, varying both the rate and mechanism of verification. Results In a single simulated data set, varying the number of false negatives from 0 to 4 led to verification-bias-corrected AUCs ranging from 0.550 to 0.852. Excess variation associated with low numbers of false negatives was confirmed in simulation studies and by analyses of published studies that incorporated verification bias correction. The 2.5th–97.5th centile range constituted as much as 60% of the possible range of AUCs for some simulations. Conclusion Screening programs are designed such that there are few false negatives. Standard statistical methods for verification bias correction are inadequate in this circumstance.
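
    The instability the authors describe is easy to reproduce. The sketch below is a toy simulation (all parameters invented) of a screening study in which every test-positive but only a fraction of test-negatives is verified, with the correction done by inverse-verification-rate reweighting in the spirit of Begg and Greenes; for simplicity it corrects sensitivity rather than the AUC.

```python
import numpy as np

rng = np.random.default_rng(1)
n, prev, sens, spec = 1000, 0.05, 0.95, 0.90
verify_neg = 0.10            # only 10% of test-negatives get the gold standard

est = []
for _ in range(2000):
    disease = rng.random(n) < prev
    test = np.where(disease, rng.random(n) < sens, rng.random(n) > spec)
    verified = test | (rng.random(n) < verify_neg)
    tp = np.sum(disease & test & verified)
    fn_seen = np.sum(disease & ~test & verified)
    fn_hat = fn_seen / verify_neg            # inverse-verification-rate reweighting
    if tp + fn_hat > 0:
        est.append(tp / (tp + fn_hat))       # corrected sensitivity estimate

lo, med, hi = np.percentile(est, [2.5, 50, 97.5])
print(f"corrected sensitivity: {med:.3f} (2.5th-97.5th centile {lo:.3f}-{hi:.3f})")
```

    With so few diseased test-negatives, the handful of verified false negatives is multiplied by a large weight, so the centile range of the corrected estimate is very wide, which is the paper's point.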

  15. Miniaturized sample preparation method for determination of amphetamines in urine.

    Science.gov (United States)

    Nishida, Manami; Namera, Akira; Yashiki, Mikio; Kimura, Kojiro

    2004-07-16

    A simple and miniaturized sample preparation method for determination of amphetamines in urine was developed using on-column derivatization and gas chromatography-mass spectrometry (GC-MS). Urine was directly applied to the extraction column that was pre-packed with Extrelut and sodium carbonate. Amphetamine (AP) and methamphetamine (MA) in urine were adsorbed on the surface of Extrelut. AP and MA were then converted to a free base and derivatized to N-propoxycarbonyl derivatives using propylchloroformate on the column. Pentadeuterated MA was used as an internal standard. The recoveries of AP and MA from urine were 100 and 102%, respectively. The calibration curves showed linearity in the range of 0.50-50 microg/mL for AP and MA in urine. When urine samples containing two different concentrations (0.50 and 5.0 microg/mL) of AP and MA were analyzed, the intra-day and inter-day coefficients of variation were 1.4-7.7%. This method was applied to 14 medico-legal cases of MA intoxication; the results showed good agreement with an HPLC method.
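
    The quantification step described here is a standard internal-standard linear calibration. A minimal sketch with invented numbers (the paper does not report its raw calibration points) shows how a case sample would be read off the curve:

```python
import numpy as np

# Hypothetical calibration data: peak-area ratio of analyte to the
# deuterated internal standard at spiked concentrations (ug/mL).
conc  = np.array([0.50, 1.0, 5.0, 10.0, 25.0, 50.0])
ratio = np.array([0.049, 0.10, 0.51, 1.02, 2.48, 5.03])

slope, intercept = np.polyfit(conc, ratio, 1)   # linear calibration fit
unknown = 0.76                                  # area ratio of a case sample
print(f"estimated concentration: {(unknown - intercept) / slope:.2f} ug/mL")
```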

  16. Vadose Zone Sampling Methods for Detection of Preferential Pesticides Transport

    Science.gov (United States)

    Peranginangin, N.; Richards, B. K.; Steenhuis, T. S.

    2003-12-01

    Leaching of agriculturally applied chemicals through the vadose zone is a major cause of the occurrence of agrichemicals in groundwater. Accurate soil water sampling methods are needed to ensure meaningful monitoring results, especially for soils that have significant preferential flow paths. The purpose of this study was to assess the capability and effectiveness of various soil water sampling methods in detecting preferential transport of pesticides in a strongly structured silty clay loam (Hudson series) soil. The soil water sampling devices tested were wick pan and gravity pan lysimeters, tile lines, porous ceramic cups, and pipe lysimeters, all installed 45 to 105 cm below the ground surface. A reasonable worst-case scenario was tested by applying a simulated rain storm soon after pesticides were sprayed at agronomic rates. The herbicides atrazine (6-chloro-N2-ethyl-N4-isopropyl-1,3,5-triazine-2,4-diamine) and 2,4-D (2,4-dichlorophenoxyacetic acid) were chosen as model compounds. A chloride (KCl) tracer was used to determine the spatial and temporal distribution of non-reactive solute and water, as well as a basis for determining the retardation of pesticide movement. Results show that observed pesticide mobility was much greater than would be predicted by uniform flow. Under relatively high soil moisture conditions, gravity and wick pan lysimeters had comparably good collection efficiencies, whereas the wick samplers had an advantage over gravity-driven samplers when the soil moisture content was below field capacity. Pipe lysimeters had breakthrough patterns similar to those of the pan samplers. At the small plot scale, tile line samplers tended to underestimate solute concentration because of water dilution around the samplers. Porous cup samplers performed poorly because of their sensitivity to local profile characteristics: only by chance can they intercept and sample the preferential flow paths that are critical to transport. The wick sampler had the least

  17. A GPU code for analytic continuation through a sampling method

    Science.gov (United States)

    Nordström, Johan; Schött, Johan; Locht, Inka L. M.; Di Marco, Igor

    We here present a code for performing analytic continuation of fermionic Green's functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.

  18. Uncertainties in Air Exchange using Continuous-Injection, Long-Term Sampling Tracer-Gas Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sherman, Max H.; Walker, Iain S.; Lunden, Melissa M.

    2013-12-01

    The perfluorocarbon tracer (PFT) method is a low-cost approach commonly used for measuring air exchange in buildings using tracer gases. It is a specific application of the more general Continuous-Injection, Long-Term Sampling (CILTS) method. The technique is widely used, but there has been little work on understanding the uncertainties (both precision and bias) associated with its use, particularly given that it is typically deployed by untrained or lightly trained people to minimize experimental costs. In this article we conduct a first-principles error analysis to estimate the uncertainties and then compare that analysis to CILTS measurements that were over-sampled, through the use of multiple tracers and emitter and sampler distribution patterns, in three houses. We find that the CILTS method can have an overall uncertainty of 10-15% in ideal circumstances, but that even in highly controlled field experiments done by trained experimenters the expected uncertainties are about 20%. In addition, there are many field conditions (such as open windows) where CILTS is not likely to provide any quantitative data. Even avoiding the worst violations of its assumptions, CILTS should be considered as having something like a "factor of two" uncertainty for the broad field trials in which it is typically used. We provide guidance on how to deploy CILTS and design the experiment to minimize uncertainties.
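
    The core of a first-principles analysis like this is quadrature error propagation for the steady-state tracer estimate, air flow Q = S / C (emission rate over mean concentration). The sketch below uses invented example values, not numbers from the study:

```python
import numpy as np

# Quadrature (first-order) error propagation for the steady-state CILTS
# estimate Q = S / C: relative errors of the tracer emission rate S and
# the time-averaged concentration C add in quadrature when independent.
# All numbers are illustrative.
S, dS = 2.0e-6, 0.10e-6     # emission rate and uncertainty (m3/h of tracer)
C, dC = 1.0e-9, 0.15e-9     # mean concentration and uncertainty (volume fraction)

Q = S / C                                  # inferred air flow (m3/h)
rel_dQ = np.hypot(dS / S, dC / C)          # sqrt((dS/S)^2 + (dC/C)^2)
print(f"Q = {Q:.0f} m3/h +/- {100 * rel_dQ:.0f}%")
```

    Even these modest component uncertainties combine to roughly 16%, consistent with the 10-20% range quoted in the abstract before any assumption violations are considered.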

  19. Artifact free denuder method for sampling of carbonaceous aerosols

    Science.gov (United States)

    Mikuška, P.; Vecera, Z.; Broškovicová, A.

    2003-04-01

    Over the past decade, growing attention has been focused on carbonaceous aerosols. Although they may account for 30-60% of the total fine aerosol mass, their concentrations and formation mechanisms are not well understood, particularly in comparison with the major fine-particle inorganic species. The deficiency in knowledge of carbonaceous aerosols results from their complexity and from problems associated with their collection. Conventional sampling techniques for carbonaceous aerosols, which use filters and backup adsorbents, suffer from sampling artifacts. Positive artifacts are mainly due to adsorption of gas-phase organic compounds by the filter material or by the already collected particles, whereas negative artifacts arise from the volatilization of already collected organic compounds from the filter. Furthermore, in the course of sampling, the composition of the collected organic compounds may be modified by oxidants (O3, NO2, PAN, peroxides) present in the air passing through the sampler. It is clear that a new, artifact-free method for sampling carbonaceous aerosols is needed. A combination of a diffusion denuder and a filter in series is very promising in this respect. The denuder is expected to collect gaseous oxidants and gas-phase organic compounds from the sample air stream prior to collection of aerosol particles on filters, thus eliminating both positive and negative sampling artifacts for carbonaceous aerosols. This combination is the subject of the presentation. Several designs of diffusion denuders (cylindrical, annular, parallel plate, multi-channel) in combination with various types of wall coatings (dry, liquid) were examined. Special attention was given to preservation of long-term collection efficiency. Different adsorbents (activated charcoal, molecular sieve, porous polymers) and sorbents coated with various chemical reagents (KI, Na2SO3, MnO2, ascorbic acid) or chromatographic stationary phases (silicon oils

  20. CPI Bias in Korea

    Directory of Open Access Journals (Sweden)

    Chul Chung

    2007-12-01

    Full Text Available We estimate the CPI bias in Korea by employing the approach of Engel's Law suggested by Hamilton (2001). This paper is the first attempt to estimate the bias using Korean panel data, the Korean Labor and Income Panel Study (KLIPS). Following Hamilton's model with nonlinear specification correction, our estimation results show that the cumulative CPI bias over the sample period (2000-2005) was 0.7 percent annually. This CPI bias implies that about 21 percent of the inflation rate during the period can be attributed to the bias. In light of purchasing power parity, we provide an interpretation of the estimated bias.
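
    Hamilton's idea is that the food budget share follows a stable Engel curve in true real expenditure, so any drift absorbed by year dummies after conditioning on (CPI-deflated) measured expenditure recovers the cumulative CPI bias. The sketch below is a heavily simplified linear version with synthetic data (all coefficients and the 0.7%/year bias are invented to mirror the abstract); it is not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Engel-curve sketch: food share w = a + b*log(real expenditure) + year
# effects. If expenditure is deflated with a biased CPI, the year-dummy
# coefficients absorb a drift of size -b * (cumulative bias).
n, years = 400, np.arange(2000, 2006)
yr = rng.choice(years, n)
true_bias = 0.007 * (yr - 2000)                  # 0.7%/yr cumulative CPI bias
log_x = rng.normal(10, 0.5, n)                   # true log expenditure
w = 0.8 - 0.05 * log_x + rng.normal(0, 0.01, n)  # Engel's Law: share falls with x
log_x_meas = log_x + true_bias                   # deflated with the biased CPI

D = (yr[:, None] == years[1:]).astype(float)     # year dummies (base year 2000)
X = np.column_stack([np.ones(n), log_x_meas, D])
beta, *_ = np.linalg.lstsq(X, w, rcond=None)
print("implied cumulative bias by year:", np.round(beta[2:] / -beta[1], 4))
```

    The printed vector should recover approximately 0.007, 0.014, ..., 0.035, i.e., the simulated cumulative bias path.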

  1. Some methods to regulate low-bias negative differential resistance in σ barrier separating nanoscale molecular transport systems

    Science.gov (United States)

    Shen, Ji-Mei; Liu, Jing; Min, Yi; Zhou, Li-Ping

    2016-12-01

    Using a first-principles method that combines the nonequilibrium Green's function (NEGF) approach with density functional theory (DFT), the roles of defects, dopants, barrier length and geometric deformation in low-bias negative differential resistance (NDR) in two capped armchair carbon nanotubes (CNTs) sandwiching a σ barrier are systematically analyzed. We found that these factors can regulate NDR features such as the current peak and peak position. The adjusting mechanism may originate from orbital interaction and orbital reconstruction. Our calculations attempt to manipulate the transport characteristics in energy space by simply manipulating the structure in real space, which may promise potential applications in molecular nanoelectronics in the future.

  2. Episodic outbreaks bias estimates of age-specific force of infection: a corrected method using measles as an example.

    Science.gov (United States)

    Ferrari, M J; Djibo, A; Grais, R F; Grenfell, B T; Bjørnstad, O N

    2010-01-01

    Understanding age-specific differences in infection rates can be important in predicting the magnitude of, and mortality in, outbreaks and in targeting age groups for vaccination programmes. Standard methods to estimate age-specific rates assume that the age-specific force of infection is constant in time. However, this assumption may easily be violated in the face of a highly variable outbreak history, as recently observed for acute immunizing infections like measles in strongly seasonal settings. Here we investigate the biases that result from ignoring such fluctuations in incidence and present a correction based on the epidemic history. We apply the method to data from a measles outbreak in Niamey, Niger, and show that, despite a bimodal age distribution of cases, the estimated age-specific force of infection is unimodal and concentrated in young children (<5 years), consistent with previous analyses of age-specific rates in the region.
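
    A stylized catalytic-model sketch (not the authors' exact estimator; all numbers invented) shows why the correction matters: when outbreaks are episodic, a cohort's exposure is better measured by the epidemic intensity it actually lived through than by its age.

```python
import numpy as np

rng = np.random.default_rng(2)

# Relative epidemic intensity over the past 10 years (episodic outbreaks).
# A cohort of age a has lived through the a most recent years, so its
# exposure is the sum of those intensities; the naive model takes
# exposure proportional to age itself.
intensity = np.array([0.1, 0.1, 3.0, 0.1, 0.1, 0.2, 2.5, 0.1, 0.1, 0.7])
ages = np.arange(1, 11)
exposure = np.array([intensity[-a:].sum() for a in ages])

lam_true, n_per_age = 0.15, 200
seropos = rng.binomial(n_per_age, 1 - np.exp(-lam_true * exposure))

def fit_foi(expo):
    # Grid-search maximum likelihood for the catalytic model
    # P(infected | exposure e) = 1 - exp(-lambda * e).
    lams = np.linspace(0.01, 1.0, 2000)
    loglik = [np.sum(seropos * np.log1p(-np.exp(-l * expo))
                     - (n_per_age - seropos) * l * expo) for l in lams]
    return lams[int(np.argmax(loglik))]

print(f"naive age-based FOI: {fit_foi(ages):.3f}")
print(f"history-weighted FOI: {fit_foi(exposure):.3f} (true {lam_true})")
```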

  3. Mapping transmission risk of Lassa fever in West Africa: the importance of quality control, sampling bias, and error weighting.

    Science.gov (United States)

    Peterson, A Townsend; Moses, Lina M; Bausch, Daniel G

    2014-01-01

    Lassa fever is a disease that has been reported from sites across West Africa; it is caused by an arenavirus that is hosted by the rodent M. natalensis. Although it is confined to West Africa, and has been documented in detail in some well-studied areas, the details of the distribution of risk of Lassa virus infection remain poorly known at the level of the broader region. In this paper, we explored the effects of certainty of diagnosis, oversampling in well-studied regions, and error balance on the results of mapping exercises. Each of the three factors assessed in this study had clear and consistent influences on model results, overestimating risk in southern, humid zones of West Africa and underestimating risk in drier, more northern areas. The final, adjusted risk map indicates broad risk areas across much of West Africa. Although risk maps are increasingly easy to develop from disease occurrence data and raster data sets summarizing aspects of environments and landscapes, this process is highly sensitive to issues of data quality, sampling design, and design of analysis, with macrogeographic implications of each of these issues and the potential for misrepresenting real patterns of risk.

  4. Mapping transmission risk of Lassa fever in West Africa: the importance of quality control, sampling bias, and error weighting.

    Directory of Open Access Journals (Sweden)

    A Townsend Peterson

    Full Text Available Lassa fever is a disease that has been reported from sites across West Africa; it is caused by an arenavirus that is hosted by the rodent M. natalensis. Although it is confined to West Africa, and has been documented in detail in some well-studied areas, the details of the distribution of risk of Lassa virus infection remain poorly known at the level of the broader region. In this paper, we explored the effects of certainty of diagnosis, oversampling in well-studied regions, and error balance on the results of mapping exercises. Each of the three factors assessed in this study had clear and consistent influences on model results, overestimating risk in southern, humid zones of West Africa and underestimating risk in drier, more northern areas. The final, adjusted risk map indicates broad risk areas across much of West Africa. Although risk maps are increasingly easy to develop from disease occurrence data and raster data sets summarizing aspects of environments and landscapes, this process is highly sensitive to issues of data quality, sampling design, and design of analysis, with macrogeographic implications of each of these issues and the potential for misrepresenting real patterns of risk.

  5. Intergroup bias.

    Science.gov (United States)

    Hewstone, Miles; Rubin, Mark; Willis, Hazel

    2002-01-01

    This chapter reviews the extensive literature on bias in favor of in-groups at the expense of out-groups. We focus on five issues and identify areas for future research: (a) measurement and conceptual issues (especially in-group favoritism vs. out-group derogation, and explicit vs. implicit measures of bias); (b) modern theories of bias highlighting motivational explanations (social identity, optimal distinctiveness, uncertainty reduction, social dominance, terror management); (c) key moderators of bias, especially those that exacerbate bias (identification, group size, status and power, threat, positive-negative asymmetry, personality and individual differences); (d) reduction of bias (individual vs. intergroup approaches, especially models of social categorization); and (e) the link between intergroup bias and more corrosive forms of social hostility.

  6. The curvHDR method for gating flow cytometry samples

    Directory of Open Access Journals (Sweden)

    Wand Matthew P

    2010-01-01

    Full Text Available Abstract Background High-throughput flow cytometry experiments produce hundreds of large multivariate samples of cellular characteristics. These samples require specialized processing to obtain clinically meaningful measurements. A major component of this processing is a form of cell subsetting known as gating. Manual gating is time-consuming and subjective. Good automatic and semi-automatic gating algorithms are therefore very beneficial for high-throughput flow cytometry. Results We develop a statistical procedure, named curvHDR, for automatic and semi-automatic gating. The method combines the notions of significant high negative curvature regions and highest density regions, and it has the ability to adapt well to human-perceived gates. The underlying principles apply to dimensions of arbitrary size, although we focus on dimensions up to three. Accompanying software, compatible with contemporary flow cytometry informatics, is developed. Conclusion The method is seen to adapt well to nuances in the data and, to a reasonable extent, match human perception of useful gates. It offers large savings in human labour when processing high-throughput flow cytometry data whilst retaining a good degree of efficacy.

  7. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    MIHAELA BRATU (SIMIONESCU

    2012-12-01

    Full Text Available In this study, alternative forecasts of the USA unemployment rate made by four institutions (International Monetary Fund (IMF), Organization for Economic Co-operation and Development (OECD), Congressional Budget Office (CBO) and Blue Chips (BC)) are evaluated with regard to accuracy and bias. The most accurate predictions over the forecasting horizon 2001-2011 were provided by the IMF, followed by the OECD, CBO and BC. These results were obtained using Theil's U1 statistic and a new method that has not previously been used in this context in the literature. Multi-criteria ranking was applied to build a hierarchy of the institutions with regard to accuracy, taking five important accuracy measures into account at the same time: mean error, mean squared error, root mean squared error, and Theil's U1 and U2 statistics. The IMF, OECD and CBO predictions are unbiased. Combining the institutions' predictions is a suitable strategy for improving the accuracy of the IMF and OECD forecasts under all combination schemes, the INV scheme being the best. Filtering or smoothing the original predictions, with the Hodrick-Prescott filter and the Holt-Winters technique respectively, is a good strategy for improving only the BC expectations. The proposed strategies for improving accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy make an important contribution to raising the quality of the decision-making process.

  8. BMAA extraction of cyanobacteria samples: which method to choose?

    Science.gov (United States)

    Lage, Sandra; Burian, Alfred; Rasmussen, Ulla; Costa, Pedro Reis; Annadotter, Heléne; Godhe, Anna; Rydberg, Sara

    2016-01-01

    β-N-Methylamino-L-alanine (BMAA), a neurotoxin reportedly produced by cyanobacteria, diatoms and dinoflagellates, is proposed to be linked to the development of neurological diseases. BMAA has been found in aquatic and terrestrial ecosystems worldwide, both in its phytoplankton producers and in several invertebrate and vertebrate organisms that bioaccumulate it. LC-MS/MS is the most frequently used analytical technique in BMAA research due to its high selectivity, though consensus is lacking as to the best extraction method to apply. This study accordingly surveys the efficiency of three extraction methods regularly used in BMAA research to extract BMAA from cyanobacteria samples. The results obtained provide insights into possible reasons for the discrepancies in BMAA concentrations across previous publications. In addition, in accordance with method validation guidelines for analysing cyanotoxins, the TCA protein precipitation method, followed by AQC derivatization and LC-MS/MS analysis, is now validated for extracting protein-bound (after protein hydrolysis) and free BMAA from a cyanobacteria matrix. The biological variability of BMAA was also tested through the extraction of diatom and cyanobacteria species, revealing high variance in BMAA levels (0.0080-2.5797 μg/g DW).

  9. Field evaluation of broiler gait score using different sampling methods

    Directory of Open Access Journals (Sweden)

    AFS Cordeiro

    2009-09-01

    Full Text Available Brazil is today the world's largest broiler meat exporter; however, in order to keep this position, it must comply with welfare regulations while maintaining low production costs. Locomotion problems restrain bird movements, limiting access to drinking and feeding equipment, and therefore affect survival and productivity. The objective of this study was to evaluate locomotion deficiency in broiler chickens reared under stressful temperature conditions using three different sampling methods and birds of three different ages. The experiment consisted of determining the gait score of 28, 35, 42 and 49-day-old broilers using three known gait scoring methods: M1, birds were randomly selected, enclosed in a circle, and then stimulated to walk out of the circle; M2, ten birds were randomly selected and gait scored; and M3, birds were randomly selected, enclosed in a circle, and then observed while walking away from the circle without any stimulus to walk. Environmental temperature, relative humidity, and light intensity inside the poultry houses were recorded. No evidence of an interaction between scoring method and age was found; however, both method and age influenced gait score. Gait score was found to be lower at 28 days of age. Evaluation of ten randomly selected birds within the house was the method that produced the least reliable results. Gait scores obtained when birds were stimulated to walk were lower than when they were not stimulated, independently of age. The gait scores obtained with the three tested methods and ages were higher than those considered acceptable. The highest frequency of the normal gait score (0) represented 50% of the flock. These results may be related to heat stress during rearing. Average gait score increased with average ambient temperature, relative humidity, and light intensity. The evaluation of gait score to detect locomotion problems of broilers under rearing conditions seems subjective and

  10. Selecting a Sample

    Science.gov (United States)

    Ritter, Lois A., Ed.; Sue, Valerie M., Ed.

    2007-01-01

    This chapter provides an overview of sampling methods that are appropriate for conducting online surveys. The authors review some of the basic concepts relevant to online survey sampling, present some probability and nonprobability techniques for selecting a sample, and briefly discuss sample size determination and nonresponse bias. Although some…

  11. Several common biases and control measures during sampling survey of eye diseases in China

    Institute of Scientific and Technical Information of China (English)

    管怀进

    2008-01-01

    Bias is a common human-introduced error in sampling surveys of eye diseases and a major factor affecting the validity and reliability of survey results. The causes of, and control measures for, several biases encountered in current sampling surveys of eye diseases in China are analyzed and discussed, including sampling bias, non-respondent bias, and diagnostic bias. This review emphasizes that controlling bias is the key to ensuring the quality of a sampling survey. Random sampling, a sufficient sample size, careful examination and history taking, a high examination rate, accurate diagnosis, strict training and preliminary studies, as well as quality control, can eliminate biases or reduce them to a minimum, thereby improving the quality of sampling surveys of eye diseases in China.

  12. An A-T linker adapter polymerase chain reaction method for chromosome walking without restriction site cloning bias.

    Science.gov (United States)

    Trinh, Quoclinh; Xu, Wentao; Shi, Hui; Luo, Yunbo; Huang, Kunlun

    2012-06-01

    A-T linker adapter polymerase chain reaction (PCR) was modified and employed for the isolation of genomic fragments adjacent to a known DNA sequence. The improvements to the method focus on two points. The first is the modification of the PO4 and NH2 groups of the adapter to inhibit self-ligation of the adapter and the generation of nonspecific products. The second is the use of the capacity of rTaq DNA polymerase to add an adenosine overhang at the 3' ends of digested DNA, which suppresses self-ligation of the digested DNA and simultaneously resolves restriction site cloning bias. The combination of modifications to the adapter and the digested DNA leads to T/A-specific ligation, which enhances the flexibility of this method and makes it feasible to use many different restriction enzymes with a single adapter. This novel A-T linker adapter PCR overcomes the inherent limitations of the original ligation-mediated PCR method, such as low specificity and a lack of choice of restriction enzymes. Moreover, this method also offers higher amplification efficiency, greater flexibility, and easier manipulation compared with other PCR methods for chromosome walking. Experimental results from 143 Arabidopsis mutants illustrate that this method is reliable and efficient in high-throughput experiments.

  13. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
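
    The first study's simulation design is straightforward to reproduce in outline. The sketch below (invented probabilities; scikit-learn and SciPy implementations standing in for whatever software the authors used) generates binary code profiles for two latent participant groups at n = 50 and scores cluster-assignment accuracy for k-means and Ward hierarchical clustering:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Two latent participant groups with different probabilities of endorsing
# each of 10 qualitative codes; n = 50 mimics a small prevention study.
n, p_codes = 50, 10
truth = rng.integers(0, 2, n)
profile = np.where(truth[:, None] == 1, 0.7, 0.2)      # P(code present)
X = (rng.random((n, p_codes)) < profile).astype(float)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
hc = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust") - 1

def accuracy(labels):
    # Cluster labels are arbitrary, so align them to truth before scoring.
    acc = np.mean(labels == truth)
    return max(acc, 1 - acc)

print("k-means accuracy:", accuracy(km), " hierarchical:", accuracy(hc))
```

    Repeating this over many seeds gives the kind of accuracy comparison the article reports, namely that both methods behave similarly on binary data even at small n.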

  14. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  15. Biases of chamber methods for measuring soil CO2 efflux demonstrated with a laboratory apparatus.

    Science.gov (United States)

    S. Mark Nay; Kim G. Mattson; Bernard T. Bormann

    1994-01-01

    Investigators have historically measured soil CO2 efflux as an indicator of soil microbial and root activity and more recently in calculations of carbon budgets. The most common methods estimate CO2 efflux by placing a chamber over the soil surface and quantifying the amount of CO2 entering the...

  16. An Elephant in the Room: Bias in Evaluating a Required Quantitative Methods Course

    Science.gov (United States)

    Fletcher, Joseph F.; Painter-Main, Michael A.

    2014-01-01

    Undergraduate Political Science programs often require students to take a quantitative research methods course. Such courses are typically among the most poorly rated. This can be due, in part, to the way in which courses are evaluated. Students are generally asked to provide an overall rating, which, in turn, is widely used by students, faculty,…

  17. An Elephant in the Room: Bias in Evaluating a Required Quantitative Methods Course

    Science.gov (United States)

    Fletcher, Joseph F.; Painter-Main, Michael A.

    2014-01-01

    Undergraduate Political Science programs often require students to take a quantitative research methods course. Such courses are typically among the most poorly rated. This can be due, in part, to the way in which courses are evaluated. Students are generally asked to provide an overall rating, which, in turn, is widely used by students, faculty,…

  18. Fundamentals of bias temperature instability in MOS transistors characterization methods, process and materials impact, DC and AC modeling

    CERN Document Server

    2016-01-01

    This book covers different aspects of bias temperature instability (BTI), which remains an important reliability concern for CMOS transistors and circuits. Development of BTI-resilient technology relies on utilizing artefact-free stress and measurement methods, suitable physics-based models for accurate determination of degradation at end-of-life, and an understanding of the impact of the gate insulator process on BTI. The book discusses different ultra-fast characterization techniques for recovery-artefact-free BTI measurements. It also covers different direct measurement techniques to access pre-existing and newly generated gate insulator traps responsible for BTI. The book provides a consistent physical framework for NBTI and PBTI, respectively for p- and n-channel MOSFETs, consisting of trap generation and trapping. A physics-based compact model is presented to estimate measured BTI degradation in planar Si MOSFETs having differently processed SiON and HKMG gate insulators, in planar SiGe MOSFETs and also...

  19. The processes of neuronal recycling under the bias of implicit learning: literacy methods in focus

    OpenAIRE

    Guaresi, Ronei

    2011-01-01

    Based on advances in neuroscience and the literature resulting from them, this text reflects on the acquisition of writing, specifically on phonetic and global literacy methods, from the perspective of implicit and explicit learning in the acquisition of human language, which is essentially complex and arbitrary. This is done by recovering the notion of connectionist learning and the understanding of implicit and explicit learning. This theoretical recovery arises from discoveries...

  20. Fitting a distribution to censored contamination data using Markov Chain Monte Carlo methods and samples selected with unequal probabilities.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D

    2014-11-18

    The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
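
    The reweighting idea at the heart of the paper can be sketched without the censoring and MCMC machinery: resample observations with probability proportional to their sampling weights (inverse selection probabilities) and refit the distribution each replicate. Everything below (lognormal data, the risk-based selection rule, the parameter values) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Population of contamination levels and a risk-based sampling design that
# preferentially selects high values (selection probability pi rises with x).
x = rng.lognormal(mean=1.0, sigma=0.8, size=300)
pi = np.clip(0.2 + 0.6 * (x / x.max()), 0.05, 1.0)
keep = rng.random(x.size) < pi
x_obs, w = x[keep], 1.0 / pi[keep]          # sampling weights w = 1/pi

boot_mu = []
for _ in range(2000):
    # Weighted bootstrap: resample with probability proportional to w.
    idx = rng.choice(x_obs.size, x_obs.size, p=w / w.sum())
    boot_mu.append(np.log(x_obs[idx]).mean())   # lognormal mu via log-moments

print("naive mu: %.3f  weighted-bootstrap mu: %.3f  (true 1.0)"
      % (np.log(x_obs).mean(), np.mean(boot_mu)))
```

    The naive estimate is pulled upward by the preferential selection of contaminated samples; the weighted bootstrap pulls it back toward the population value, which is the bias the framework is designed to remove.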

  1. Study of biological communities subject to imperfect detection: Bias and precision of community N-mixture abundance models in small-sample situations

    Science.gov (United States)

    Yamaura, Yuichi; Kery, Marc; Royle, Andy

    2016-01-01

    Community N-mixture abundance models for replicated counts provide a powerful and novel framework for drawing inferences related to species abundance within communities subject to imperfect detection. To assess the performance of these models, and to compare them to related community occupancy models in situations with marginal information, we used simulation to examine the effects of mean abundance (λ̄: 0.1, 0.5, 1, 5), detection probability (p̄: 0.1, 0.2, 0.5), and numbers of sampling sites (n_site: 10, 20, 40) and visits (n_visit: 2, 3, 4) on the bias and precision of species-level parameters (mean abundance and covariate effect) and a community-level parameter (species richness). Bias and imprecision of estimates decreased when any of the four variables (λ̄, p̄, n_site, n_visit) increased. Detection probability p̄ was most important for the estimates of mean abundance, while λ̄ was most influential for covariate effect and species richness estimates. For all parameters, increasing n_site was more beneficial than increasing n_visit. Minimal conditions for obtaining adequate performance of community abundance models were n_site ≥ 20, p̄ ≥ 0.2, and λ̄ ≥ 0.5. At lower abundance, the performance of community abundance and community occupancy models as species richness estimators was comparable. We then used additive partitioning analysis to reveal that raw species counts can overestimate β diversity both for species richness and for the Shannon index, while community abundance models yielded better estimates. Community N-mixture abundance models thus have great potential for use in community ecology or conservation applications, provided that replicated counts are available.
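
    The data-generating process behind an N-mixture model is compact enough to show directly. This sketch simulates one of the paper's "marginal" scenarios (parameter values taken from the ranges above; the rest is illustrative) and shows why a naive count index fails when detection is poor:

```python
import numpy as np

rng = np.random.default_rng(5)

# N-mixture data-generating process: latent abundance N_i at each site,
# binomial counts y_ij over repeat visits with detection probability p.
n_site, n_visit, lam, p = 20, 3, 0.5, 0.2
N = rng.poisson(lam, n_site)                        # latent true abundance
y = rng.binomial(N[:, None], p, (n_site, n_visit))  # observed counts

# The maximum count per site, a common naive index, badly underestimates
# N at low p, which is what the model-based estimator is meant to fix.
print("true total N:", N.sum(), " sum of site maxima:", y.max(axis=1).sum())
```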

  2. Distinguishing Selection Bias and Confounding Bias in Comparative Effectiveness Research.

    Science.gov (United States)

    Haneuse, Sebastien

    2016-04-01

    Comparative effectiveness research (CER) aims to provide patients and physicians with evidence-based guidance on treatment decisions. As researchers conduct CER they face myriad challenges. Although inadequate control of confounding is the most-often cited source of potential bias, selection bias that arises when patients are differentially excluded from analyses is a distinct phenomenon with distinct consequences: confounding bias compromises internal validity, whereas selection bias compromises external validity. Despite this distinction, however, the label "treatment-selection bias" is being used in the CER literature to denote the phenomenon of confounding bias. Motivated by an ongoing study of treatment choice for depression on weight change over time, this paper formally distinguishes selection and confounding bias in CER. By formally distinguishing selection and confounding bias, this paper clarifies important scientific, design, and analysis issues relevant to ensuring validity. First is that the 2 types of biases may arise simultaneously in any given study; even if confounding bias is completely controlled, a study may nevertheless suffer from selection bias so that the results are not generalizable to the patient population of interest. Second is that the statistical methods used to mitigate the 2 biases are themselves distinct; methods developed to control one type of bias should not be expected to address the other. Finally, the control of selection and confounding bias will often require distinct covariate information. Consequently, as researchers plan future studies of comparative effectiveness, care must be taken to ensure that all data elements relevant to both confounding and selection bias are collected.
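
    A toy simulation (not from the paper; all mechanisms and numbers invented) makes the distinction concrete. Both arms below have a true null treatment effect, yet the naive contrast is biased in each case, once by a common cause and once by outcome-dependent dropout:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000

# Confounding: treatment uptake and outcome share a common cause c, so the
# crude treated-vs-untreated contrast is biased even with everyone analyzed.
c = rng.normal(size=n)
t = rng.random(n) < 1 / (1 + np.exp(-c))      # treatment depends on c
y = 1.0 * c + rng.normal(size=n)              # true treatment effect = 0
print("confounding bias:", round(y[t].mean() - y[~t].mean(), 3))

# Selection: treatment is randomized (no confounding), but dropout depends
# jointly on treatment and outcome, biasing the completer-only analysis.
t2 = rng.random(n) < 0.5
y2 = rng.normal(size=n)                       # true treatment effect = 0
stay = rng.random(n) < 1 / (1 + np.exp(-(1.0 - 1.5 * t2 * y2)))
print("selection bias:", round(y2[stay & t2].mean() - y2[stay & ~t2].mean(), 3))
```

    Note that adjusting for c fixes only the first bias, and analyzing completers fixes neither; the two problems require distinct design and analysis remedies, which is the paper's central point.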

  3. A simple capacitive method to evaluate ethanol fuel samples

    Science.gov (United States)

    Vello, Tatiana P.; de Oliveira, Rafael F.; Silva, Gustavo O.; de Camargo, Davi H. S.; Bufon, Carlos C. B.

    2017-01-01

    Ethanol is a biofuel used worldwide. However, the presence of excessive water either during the distillation process or by fraudulent adulteration is a major concern in the use of ethanol fuel. High water levels may cause engine malfunction, in addition to being considered illegal. Here, we describe the development of a simple, fast and accurate platform based on nanostructured sensors to evaluate ethanol samples. The device fabrication is facile, based on standard microfabrication and thin-film deposition methods. The sensor operation relies on capacitance measurements employing a parallel plate capacitor containing a conformational aluminum oxide (Al2O3) thin layer (15 nm). The sensor operates over the full range water concentration, i.e., from approximately 0% to 100% vol. of water in ethanol, with water traces being detectable down to 0.5% vol. These characteristics make the proposed device unique with respect to other platforms. Finally, the good agreement between the sensor response and analyses performed by gas chromatography of ethanol biofuel endorses the accuracy of the proposed method. Due to the full operation range, the reported sensor has the technological potential for use as a point-of-care analytical tool at gas stations or in the chemical, pharmaceutical, and beverage industries, to mention a few. PMID:28240312
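
    A crude back-of-the-envelope sketch shows why capacitance is a sensitive probe of water in ethanol: water's relative permittivity (~80) far exceeds ethanol's (~25). Linear volume-weighted mixing and the plate geometry below are simplifying assumptions, and the series capacitance of the Al2O3 layer itself is ignored; none of these numbers come from the paper.

```python
# Parallel-plate estimate with a volume-weighted mixture permittivity.
EPS0 = 8.854e-12                     # vacuum permittivity, F/m
AREA, GAP = 1.0e-6, 15e-9            # plate area (m2) and gap (m), illustrative

def capacitance(water_frac):
    eps_mix = 80.1 * water_frac + 24.5 * (1.0 - water_frac)
    return EPS0 * eps_mix * AREA / GAP

for f in (0.0, 0.005, 0.5, 1.0):
    print(f"{100 * f:5.1f}% water -> C = {capacitance(f) * 1e9:.2f} nF")
```

    Even 0.5% vol. of water shifts the estimated capacitance measurably, qualitatively consistent with the detection limit quoted in the abstract.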

  4. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  5. A simple capacitive method to evaluate ethanol fuel samples

    Science.gov (United States)

    Vello, Tatiana P.; de Oliveira, Rafael F.; Silva, Gustavo O.; de Camargo, Davi H. S.; Bufon, Carlos C. B.

    2017-02-01

    Ethanol is a biofuel used worldwide. However, the presence of excess water, whether introduced during the distillation process or by fraudulent adulteration, is a major concern in the use of ethanol fuel. High water levels may cause engine malfunction, in addition to being considered illegal. Here, we describe the development of a simple, fast and accurate platform based on nanostructured sensors to evaluate ethanol samples. The device fabrication is facile, based on standard microfabrication and thin-film deposition methods. The sensor operation relies on capacitance measurements employing a parallel plate capacitor containing a conformal aluminum oxide (Al2O3) thin layer (15 nm). The sensor operates over the full water concentration range, i.e., from approximately 0% to 100% vol. of water in ethanol, with water traces being detectable down to 0.5% vol. These characteristics make the proposed device unique with respect to other platforms. Finally, the good agreement between the sensor response and analyses performed by gas chromatography of ethanol biofuel endorses the accuracy of the proposed method. Due to the full operation range, the reported sensor has the technological potential for use as a point-of-care analytical tool at gas stations or in the chemical, pharmaceutical, and beverage industries, to mention a few.

  6. Longitudinal relations between cognitive bias and adolescent alcohol use

    NARCIS (Netherlands)

    Janssen, T.; Larsen, H.; Vollebergh, W.A.M.; Wiers, R.W.

    2015-01-01

    Introduction: To prospectively predict the development of adolescent alcohol use with alcohol-related cognitive biases, and to predict the development of alcohol-related cognitive biases with aspects of impulsivity. Methods: Data were used from a two-year, four-wave online sample of 378 Dutch young

  7. Longitudinal relations between cognitive bias and adolescent alcohol use

    NARCIS (Netherlands)

    Janssen, Tim; Larsen, Helle; Vollebergh, Wilma A. M.; Wiers, Reinout W.

    Introduction: To prospectively predict the development of adolescent alcohol use with alcohol-related cognitive biases, and to predict the development of alcohol-related cognitive biases with aspects of impulsivity. Methods: Data were used from a two-year, four-wave online sample of 378 Dutch young

  8. Longitudinal relations between cognitive bias and adolescent alcohol use

    NARCIS (Netherlands)

    Janssen, Tim; Larsen, Helle; Vollebergh, Wilma A. M.|info:eu-repo/dai/nl/090632893; Wiers, Reinout W.

    2015-01-01

    Introduction: To prospectively predict the development of adolescent alcohol use with alcohol-related cognitive biases, and to predict the development of alcohol-related cognitive biases with aspects of impulsivity. Methods: Data were used from a two-year, four-wave online sample of 378 Dutch young

  9. Geothermal water and gas: collected methods for sampling and analysis. Comment issue. [Compilation of methods

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, J.G.; Serne, R.J.; Shannon, D.W.; Woodruff, E.M.

    1976-08-01

    A collection of methods for sampling and analysis of geothermal fluids and gases is presented. Compilations of analytical options for constituents in water and gases are given, together with a survey of published methods of laboratory water analysis. No recommendation of the applicability of the methods to geothermal brines should be assumed, since the intent of the tables is to encourage and solicit comments and discussion leading to recommended analytical procedures for geothermal waters and research. (WHK)

  10. Selling creates a loss while buying generates a gain:Capturing the implicit irrational bias by the IAT method

    Institute of Scientific and Technical Information of China (English)

    HUANG YunHui; SHI JunQi; WANG Lei

    2008-01-01

    The endowment effect is the tendency for a person to demand more in return for selling an object than he or she would be willing to pay for the same object. Thaler suggested that the endowment effect arises because selling creates a loss while buying generates a gain. Although prior research has demonstrated the existence of loss aversion, few researchers have focused on the connection between "selling vs. buying" and "losing vs. gaining", and recent research with a self-report method failed to find solid evidence for such a connection. The present research applied the implicit association test (IAT), a latency-based method, to confirm this connection. The results demonstrated that selling was more closely connected to losing while buying was more closely connected to gaining. The bias resulting from loss aversion and gain preference was thus confirmed to be the underlying mechanism of the endowment effect. The previous failure to find such an association may be due to the insensitivity of the self-report method. The implications of the findings and the method for experimental economic psychology are discussed.

  11. AN EVALUATION OF USA UNEMPLOYMENT RATE FORECASTS IN TERMS OF ACCURACY AND BIAS. EMPIRICAL METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    BRATU (SIMIONESCU MIHAELA

    2013-02-01

    Full Text Available The most accurate forecasts of the USA unemployment rate over the horizon 2001-2012, according to Theil's U1 coefficient and to multi-criteria ranking methods, were provided by the International Monetary Fund (IMF), followed by other institutions: the Organization for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO) and Blue Chips (BC). The multi-criteria ranking methods were applied to resolve the divergence in assessing accuracy that was observed when computing five chosen accuracy measures: Theil's U1 and U2 statistics, mean error, mean squared error, and root mean squared error. Some strategies for improving the accuracy of the predictions provided by the four institutions, which are biased in all cases except BC, were proposed. However, these methods did not generate unbiased forecasts. The predictions made by the IMF and OECD for 2001-2012 can be improved by constructing combined forecasts, with the INV approach and the scheme proposed by the author providing the most accurate expectations. The BC forecasts can be improved by smoothing the predictions using the Holt-Winters method and the Hodrick-Prescott filter.
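
    The two computations at the core of this evaluation are easy to sketch. Below, one common form of Theil's U1 (definitions vary in the literature) and an inverse-MSE forecast combination, which is my assumption about the form of the "INV" scheme; the actuals and forecast series are hypothetical:

```python
import numpy as np

def theil_u1(y, f):
    # One common form of Theil's U1: RMSE normalized by the root mean
    # squares of the actuals and the forecasts (bounded in [0, 1]).
    rmse = np.sqrt(np.mean((y - f) ** 2))
    return rmse / (np.sqrt(np.mean(y ** 2)) + np.sqrt(np.mean(f ** 2)))

def inv_combination(y, forecasts):
    # INV-style combination: weights inversely proportional to each
    # forecaster's historical MSE (assumed form of the scheme).
    mse = np.array([np.mean((y - f) ** 2) for f in forecasts])
    w = (1.0 / mse) / np.sum(1.0 / mse)
    return np.tensordot(w, forecasts, axes=1), w

# Hypothetical actuals and two institutions' forecast series
y  = np.array([4.7, 4.6, 5.8, 9.3, 9.6, 8.9])
f1 = y + np.array([0.2, -0.1, 0.4, 0.9, -0.3, 0.1])
f2 = y + np.array([0.5, 0.3, -0.6, 1.4, 0.8, -0.2])

comb, w = inv_combination(y, np.array([f1, f2]))
print("U1:", [round(theil_u1(y, f), 4) for f in (f1, f2, comb)],
      "weights:", np.round(w, 3))
```

    The combined series typically attains a lower U1 than either input, which is why combination is proposed as an accuracy-improvement strategy.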

  12. Selection bias in dynamically-measured super-massive black hole samples: dynamical masses and dependence on Sérsic index

    Science.gov (United States)

    Shankar, Francesco; Bernardi, Mariangela; Sheth, Ravi K.

    2017-01-01

    We extend the comparison between the set of local galaxies having dynamically measured black holes and galaxies in the Sloan Digital Sky Survey (SDSS). We first show that the most up-to-date local black hole samples of early-type galaxies with measurements of effective radii, luminosities, and Sérsic indices of the bulges of their host galaxies have dynamical mass and Sérsic index distributions consistent with those of SDSS early-type galaxies of similar bulge stellar mass. The host galaxies of local black hole samples thus do not appear structurally different from SDSS galaxies, sharing similar dynamical masses, light profiles and light distributions. Analysis of the residuals reveals that velocity dispersion is more fundamental than the Sérsic index n_sph in the scaling relations between black holes and galaxies. Indeed, residuals with n_sph could be ascribed to the (weak) correlation with bulge mass or even velocity dispersion. Finally, targeted Monte Carlo simulations that include the effects of the sphere of influence of the black hole, and are tuned to reproduce the observed residuals and scaling relations in terms of velocity dispersion and stellar mass, show that, at least for galaxies with Mbulge ≳ 10^10 M⊙ and n_sph ≳ 5, the observed mean black hole mass at fixed Sérsic index is biased significantly higher than the intrinsic value.
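
    The sphere-of-influence selection effect invoked here can be demonstrated with a toy Monte Carlo (all relation coefficients, scatter, distances, and the 0.1 arcsec resolution below are illustrative, not the paper's tuned values):

```python
import numpy as np

rng = np.random.default_rng(6)

# Galaxies scatter around an intrinsic Mbh-sigma relation, but a black
# hole mass is only "measurable" if its sphere of influence exceeds the
# instrument resolution at the galaxy's distance.
G = 4.30e-6                                   # kpc (km/s)^2 / Msun
n = 200_000
sigma = 10 ** rng.uniform(1.9, 2.5, n)        # velocity dispersion, km/s
log_mbh = 8.3 + 4.4 * np.log10(sigma / 200.0) + rng.normal(0.0, 0.4, n)
dist_mpc = rng.uniform(5.0, 100.0, n)

r_infl = G * 10 ** log_mbh / sigma ** 2       # sphere of influence, kpc
res_kpc = dist_mpc * 1e3 * (0.1 / 206265.0)   # 0.1 arcsec resolution, in kpc
resolved = r_infl > res_kpc

resid = log_mbh - (8.3 + 4.4 * np.log10(sigma / 200.0))
print(f"kept {resolved.mean():.1%} of galaxies; mean offset of the resolved "
      f"subsample from the intrinsic relation: {resid[resolved].mean():+.2f} dex")
```

    Because r_infl grows with Mbh at fixed sigma, the resolution cut preferentially keeps over-massive black holes, biasing the observed normalization high, which is the mechanism the abstract describes.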

  13. Stellar populations from spectroscopy of a large sample of quiescent galaxies at z > 1: Measuring the contribution of progenitor bias to early size growth

    CERN Document Server

    Belli, Sirio; Ellis, Richard S

    2014-01-01

    We analyze the stellar populations of a sample of 62 massive (log Mstar/Msun > 10.7) galaxies in the redshift range 1 < z < 1.6, with the main goal of investigating the role of recent quenching in the size growth of quiescent galaxies. We demonstrate that our sample is not biased toward bright, compact, or young galaxies, and thus is representative of the overall quiescent population. Our high signal-to-noise ratio Keck LRIS spectra probe the rest-frame Balmer break region which contains important absorption line diagnostics of recent star formation activity. We show that improved measures of the stellar population parameters, including the star-formation timescale tau, age and dust extinction, can be determined by fitting templates jointly to our spectroscopic and broad-band photometric data. These parameter fits allow us to backtrack the evolving trajectory of individual galaxies on the UVJ color-color plane. In addition to identifying which quiescent galaxies were recently quenched, we discover impor...

  14. Comparison of two adult mosquito sampling methods with human landing catches in south-central Ethiopia.

    Science.gov (United States)

    Kenea, Oljira; Balkew, Meshesha; Tekie, Habte; Gebre-Michael, Teshome; Deressa, Wakgari; Loha, Eskindir; Lindtjørn, Bernt; Overgaard, Hans J

    2017-01-13

    The human landing catch (HLC) is the standard reference method for measuring human exposure to mosquito bites. However, HLC is labour-intensive, exposes collectors to infectious mosquito bites and is subject to collector bias. These drawbacks necessitate local calibration and application of alternative methods. This study was undertaken to determine the relative sampling efficiency (RSE) of light traps with or without yeast-produced carbon dioxide bait vs. HLC in south-central Ethiopia. The experiment was conducted for 39 nights in a 3 × 3 Latin square randomized design with Anopheles arabiensis as the target species, between July and November 2014 in Edo Kontola village, south-central Ethiopia. Centers for Disease Control and Prevention light trap catches (LTC) and yeast-generated carbon dioxide-baited light trap catches (CB-LTC) were each evaluated against HLC. The total nightly mosquito catches for each Anopheles species in either method were compared with HLC by Pearson correlation and simple linear regression analysis on log-transformed [log10(x + 1)] values. To test whether the RSE of each alternative method was affected by mosquito density, the ratio of the number of mosquitoes caught by each method to the number caught by HLC was plotted against the average mosquito abundance. Overall, 7606 female Anopheles were collected by the three sampling methods. Among these, 5228 (68.7%) were Anopheles ziemanni, 1153 (15.2%) An. arabiensis, 883 (11.6%) Anopheles funestus s.l., and 342 (4.5%) Anopheles pharoensis. HLC yielded 3392 (44.6%), CB-LTC 2150 (28.3%), and LTC 2064 (27.1%) female Anopheles. The catches of LTC and HLC for An. arabiensis were significantly correlated, indicating that LTC is a suitable alternative method for sampling An. arabiensis. LTC can be used for large-scale indoor An. arabiensis surveillance and monitoring when it is difficult to use HLC. CB-LTC does not substantially improve sampling of this major vector compared to LTC in this setting. Trial registration PACTR201411000882128

  15. Method and apparatus for sensing a desired component of an incident magnetic field using magneto resistive elements biased in different directions

    Science.gov (United States)

    Pant, Bharat B. (Inventor); Wan, Hong (Inventor)

    1999-01-01

    A method and apparatus for sensing a desired component of a magnetic field using an isotropic magnetoresistive material. This is preferably accomplished by providing a bias field that is parallel to the desired component of the applied magnetic field. The bias field is applied in a first direction relative to a first set of magnetoresistive sensor elements, and in an opposite direction relative to a second set of magnetoresistive sensor elements. In this configuration, the desired component of the incident magnetic field adds to the bias field incident on the first set of magnetoresistive sensor elements, and subtracts from the bias field incident on the second set of magnetoresistive sensor elements. The magnetic field sensor may then sense the desired component of the incident magnetic field by simply sensing the difference in resistance of the first set of magnetoresistive sensor elements and the second set of magnetoresistive sensor elements.
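
    To make the add/subtract geometry concrete, here is a toy numerical sketch (Python) of the differential read-out; the linear response and all values are hypothetical simplifications of a real magnetoresistive element.

    ```python
    # The applied field adds to the bias on one element set and subtracts
    # on the other, so the resistance difference isolates the desired
    # field component while the bias and baseline resistance cancel.
    SENS = 0.5  # hypothetical sensitivity, ohms per field unit

    def resistance(field, r0=1000.0):
        """Toy magnetoresistive response: linear in the local field."""
        return r0 + SENS * field

    def sense_component(h_applied, h_bias=10.0):
        r_plus = resistance(h_bias + h_applied)   # bias parallel set
        r_minus = resistance(h_bias - h_applied)  # bias antiparallel set
        return (r_plus - r_minus) / (2 * SENS)    # bias and r0 cancel

    print(sense_component(3.0))  # -> 3.0, the desired component
    ```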

  16. Sequential sampling: a novel method in farm animal welfare assessment.

    Science.gov (United States)

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd-level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed-size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed-size scheme but with much smaller average sample sizes. For the third scheme, an overall…
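
    A minimal simulation of the 'basic' two-stage idea can be written in a few lines. The Python sketch below uses invented herd data, sample sizes and stopping thresholds; the paper's actual schemes use the Welfare Quality herd-size-based sample sizes.

    ```python
    # Two-stage sequential classification: score half the full sample,
    # stop early if the estimate is decisive, otherwise score the rest.
    import random

    def two_stage_classify(herd_lameness, full_n=60, threshold=0.15):
        """herd_lameness: list of 0/1 lameness indicators, one per cow."""
        cows = random.sample(herd_lameness, full_n)
        half = full_n // 2
        p1 = sum(cows[:half]) / half
        if p1 >= 2 * threshold:        # clearly bad: stop after stage 1
            return "fail", half
        if p1 <= threshold / 2:        # clearly good: stop after stage 1
            return "pass", half
        p_full = sum(cows) / full_n    # ambiguous: use the full sample
        return ("fail" if p_full >= threshold else "pass"), full_n

    # Average sample size on a hypothetical farm with 20% prevalence
    herd = [1] * 40 + [0] * 160
    results = [two_stage_classify(herd) for _ in range(10000)]
    print(sum(n for _, n in results) / len(results))
    ```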

  17. CoMaLit-V. Mass forecasting with proxies. Method and application to weak lensing calibrated samples

    CERN Document Server

    Sereno, Mauro

    2016-01-01

    Mass measurements of astronomical objects are most wanted but still elusive. We need them to trace the formation and evolution of cosmic structure but we can get direct measurements only for a minority. This lack can be circumvented with a proxy and a scaling relation. The twofold goal of estimating the unbiased relation and finding the right proxy value to plug in can be hampered by systematics, selection effects, Eddington/Malmquist biases and time evolution. We present a Bayesian hierarchical method which deals with these issues. Masses to be predicted are treated as missing data in the regression and are estimated together with the scaling parameters. The calibration subsample with measured masses does not need to be representative of the full sample. We apply the method to forecast weak lensing calibrated masses of the Planck, redMaPPer and MCXC clusters. Planck masses are biased low with respect to weak lensing calibrated masses, with a bias more pronounced for high redshift clusters. MCXC masses are un...
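
    The full method is a Bayesian hierarchical regression; as a much simpler illustration of the forecasting step only, the Python sketch below predicts masses from a proxy through an already-calibrated scaling relation with intrinsic scatter. All parameter values are invented.

    ```python
    # Toy forecast: log M = alpha + beta * log(proxy), with intrinsic
    # scatter propagated into a simple forecast uncertainty.
    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta, scatter = 0.2, 1.3, 0.15     # hypothetical calibration

    log_proxy = rng.normal(0.0, 0.3, size=5)  # e.g. rescaled log richness
    log_mass_pred = alpha + beta * log_proxy  # point forecasts
    draws = log_mass_pred[:, None] + rng.normal(0, scatter, (5, 1000))
    print(log_mass_pred, draws.std(axis=1))   # forecasts and spreads
    ```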

  18. Comparison of two methods of tear sampling for protein quantification by Bradford method

    Directory of Open Access Journals (Sweden)

    Eliana Farias

    2013-02-01

    Full Text Available The aim of this study was to compare two methods of tear sampling for protein quantification. Tear samples were collected from 29 healthy dogs (58 eyes) using Schirmer tear test (STT) strips and microcapillary tubes. The samples were frozen at -80ºC and analyzed by the Bradford method. Results were analyzed by Student's t test. The average protein concentration and standard deviation from tears collected with microcapillary tubes were 4.45 mg/mL ±0.35 and 4.52 mg/mL ±0.29 for the right and left eyes, respectively. The average protein concentration and standard deviation from tears collected with STT strips were 54.5 mg/mL ±0.63 and 54.15 mg/mL ±0.65 for the right and left eyes, respectively. Statistically significant differences (p<0.001) were found between the methods. Under the conditions in which this study was conducted, the average protein concentration obtained with the Bradford test from tear samples collected by STT strip was higher than that obtained with the microcapillary tube. Reference values for tear protein concentration should therefore be interpreted according to the method used to collect the tear samples.
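
    The statistical comparison itself is a standard two-sample Student's t test; the Python sketch below reproduces its shape on synthetic data drawn to mimic the reported means and standard deviations (not the study's raw measurements).

    ```python
    # Student's t test comparing protein concentrations from the two
    # tear-collection methods; synthetic data mimic the reported scale.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    capillary = rng.normal(4.45, 0.35, size=29)  # mg/mL, microcapillary
    stt_strip = rng.normal(54.5, 0.63, size=29)  # mg/mL, Schirmer strip

    t, p = stats.ttest_ind(capillary, stt_strip)
    print(f"t = {t:.1f}, p = {p:.2e}")  # huge difference, as reported
    ```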

  19. LOMA: A fast method to generate efficient tagged-random primers despite amplification bias of random PCR on pathogens

    Directory of Open Access Journals (Sweden)

    Lee Wah

    2008-09-01

    Full Text Available Abstract. Background: Pathogen detection using DNA microarrays has the potential to become a fast and comprehensive diagnostics tool. However, since pathogen detection chips currently utilize random primers rather than specific primers for the RT-PCR step, bias inherent in random PCR amplification becomes a serious problem that causes large inaccuracies in hybridization signals. Results: In this paper, we study how the efficiency of random PCR amplification affects hybridization signals. We describe a model that predicts the amplification efficiency of a given random primer on a target viral genome. The prediction allows us to filter false-negative probes of the genome that lie in regions of poor random PCR amplification and improves the accuracy of pathogen detection. Subsequently, we propose LOMA, an algorithm to generate random primers that have good amplification efficiency. Wet-lab validation showed that the generated random primers improve the amplification efficiency significantly. Conclusion: The blind use of a random primer with attached universal tag (random-tagged primer) in a PCR reaction on a pathogen sample may not lead to a successful amplification. Thus, the design of random-tagged primers is an important consideration when performing PCR.
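
    As a toy illustration of the underlying idea (a primer amplifies only where it can bind), the Python sketch below counts near-matching sites of a primer along a target sequence. For brevity it uses direct string matching rather than reverse-complement binding, and it is not the LOMA algorithm itself.

    ```python
    # Toy proxy for amplification efficiency: count sites where the
    # primer matches the target within a small Hamming distance.
    def binding_sites(genome, primer, max_mismatch=1):
        k = len(primer)
        hits = 0
        for i in range(len(genome) - k + 1):
            mismatches = sum(a != b for a, b in zip(genome[i:i + k], primer))
            if mismatches <= max_mismatch:
                hits += 1
        return hits

    # Regions with zero hits would be candidates for false-negative probes.
    print(binding_sites("ATGGCGTACGATGGCGTT", "GGCGT"))
    ```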

  20. The bias, accuracy and precision of faecal egg count reduction test results in cattle using McMaster, Cornell-Wisconsin and FLOTAC egg counting methods.

    Science.gov (United States)

    Levecke, B; Rinaldi, L; Charlier, J; Maurelli, M P; Bosco, A; Vercruysse, J; Cringoli, G

    2012-08-13

    The faecal egg count reduction test (FECRT) is the recommended method to monitor anthelmintic drug efficacy in cattle. There is large variation in the faecal egg count (FEC) methods applied to determine FECRT, and it remains unclear whether FEC methods with equal analytic sensitivity but different methodologies yield equal FECRT results. We therefore compared the bias, accuracy and precision of FECRT results for the Cornell-Wisconsin (analytic sensitivity = 1 egg per gram of faeces (EPG)), FLOTAC (analytic sensitivity = 1 EPG) and McMaster (analytic sensitivity = 10 EPG) methods across four levels of egg excretion (1-49 EPG; 50-149 EPG; 150-299 EPG; 300-600 EPG). Finally, we assessed the sensitivity of the FEC methods to detect a truly reduced efficacy, using two different criteria to define reduced efficacy based on FECR, including those described in the WAAVP guidelines. The FECRT results were affected by the level of egg excretion; this effect was greatest for McMaster and least for Cornell-Wisconsin. The sensitivity of the three methods to detect a truly reduced efficacy was high (>90%), yet the sensitivity of McMaster and Cornell-Wisconsin may drop when drugs show only sub-optimal efficacy. Overall, the study indicates that the precision of FECRT is affected by the methodology of FEC, and that the level of egg excretion should be considered in the final interpretation of the FECRT. However, more comprehensive studies are required to provide more insight into the complex interplay of factors inherent to study design (sample size and FEC method) and host-parasite interactions (level of egg excretion and aggregation across the host population).
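
    For reference, a FECRT estimate is commonly computed from group mean counts as FECRT = 100 × (1 − post-treatment mean / pre-treatment mean). The Python sketch below applies this common formula to hypothetical counts and mimics the effect of analytic sensitivity by rounding counts down to the method's detection unit; it is not the paper's simulation framework.

    ```python
    # FECRT from group means, with counts quantized to the analytic
    # sensitivity of the FEC method (e.g. 10 EPG for McMaster).
    import numpy as np

    def fecrt(pre_epg, post_epg, sensitivity=1):
        # Counts below the analytic sensitivity are recorded as zero
        pre = np.floor(np.asarray(pre_epg) / sensitivity) * sensitivity
        post = np.floor(np.asarray(post_epg) / sensitivity) * sensitivity
        return 100 * (1 - post.mean() / pre.mean())

    pre = [120, 300, 45, 220, 90]   # hypothetical pre-treatment EPG
    post = [6, 12, 2, 9, 4]         # hypothetical post-treatment EPG
    print(fecrt(pre, post, sensitivity=1), fecrt(pre, post, sensitivity=10))
    ```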

  1. High-tech or field techs: Radio-telemetry is a cost-effective method for reducing bias in songbird nest searching

    Science.gov (United States)

    Peterson, Sean M.; Streby, Henry M.; Lehman, Justin A.; Kramer, Gunnar R.; Fish, Alexander C.; Andersen, David E.

    2015-01-01

    We compared the efficacy of standard nest-searching methods with finding nests via radio-tagged birds to assess how search technique influenced our determination of nest-site characteristics and nest success for Golden-winged Warblers (Vermivora chrysoptera). We also evaluated the cost-effectiveness of using radio-tagged birds to find nests. Using standard nest-searching techniques for 3 populations, we found 111 nests in locations with habitat characteristics similar to those described in previous studies: edges between forest and relatively open areas of early successional vegetation or shrubby wetlands, with 43% within 5 m of forest edge. The 83 nests found using telemetry were about half as likely (23%) to be within 5 m of forest edge. We spent little time searching >25 m into forest because published reports state that Golden-winged Warblers do not nest there. However, 14 nests found using telemetry (18%) were >25 m into forest. We modeled nest success using nest-searching method, nest age, and distance to forest edge as explanatory variables. Nest-searching method explained nest success better than nest age alone; we estimated that nests found using telemetry were 10% more likely to fledge young than nests found using standard nest-searching methods. Although radio-telemetry was more expensive than standard nest searching, the cost-effectiveness of both methods differed depending on searcher experience, amount of equipment owned, and bird population density. Our results demonstrate that telemetry can be an effective method for reducing bias in Golden-winged Warbler nest samples, can be cost competitive with standard nest-searching methods in some situations, and is likely to be a useful approach for finding nests of other forest-nesting songbirds.

  2. Method of 14C Sample Preparation for AMS Measurement

    Directory of Open Access Journals (Sweden)

    YANG Xu-ran

    2015-02-01

    Full Text Available In order to carry out applied research on 14C by accelerator mass spectrometry (AMS), the principles of sample preparation were systematically studied, with particular attention paid to improving the efficiency of the preparation process. An integrated sample-preparation system was built in the course of this research. The experimental results showed that the sample preparation scheme is able to meet the demands of AMS measurement.

  3. Photoacoustic spectroscopy sample array vessels and photoacoustic spectroscopy methods for using the same

    Science.gov (United States)

    Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.

    2006-02-14

    Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.

  4. A novel sampling method for the investigation of gut microbiota

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    AIM: To characterize, qualitatively and quantitatively, the microorganisms at different sites of the lower digestive tract (LDT) in healthy volunteers, a specific technique was developed for collecting mucous of the distal ileum, colon and rectum. METHODS: A polyethylene tube was designed to go through the colonoscope channel with a No. 8 French tube. To avoid internal contamination, the distal extremity was protected with a membrane of microfilm after being sterilized in ethylene oxide. To facilitate the aspiration of a precise volume, its interior was coated with silicone. A one hundred microliter (0.1 mL) sample of mucous was collected and transferred into an Eppendorf tube containing nine hundred microliters (0.9 mL) of VMGA-3 (viable medium of Goteborg). This procedure was repeated at each site of the LDT with a new sterilized catheter. RESULTS: All sites revealed the "non-pathogenic" anaerobic bacterium Veillonella sp. (average 10^5 colony-forming units/mL, CFU/mL), indicating an environment of low oxidation-reduction (redox) potential in the LDT. The presence of Klebsiella sp. was also characterized, with significant statistical predominance (SSP) in the ileum. Enterobacter sp. was found with SSP in the sigmoid colon; non-pigmented (npg) Bacteroides sp. and E. coli with SSP in the sigmoid colon and rectum; and Enterococcus sp. and Lactobacillus sp. with SSP in the rectum, all at a mean concentration of 10^5 CFU/mL. CONCLUSION: This procedure is feasible and efficient and points to a similar distribution of aerobic and anaerobic bacteria, with the presence of biological markers of normal microbiota in the LDT.

  5. Multi-Scaling Sampling: An Adaptive Sampling Method for Discovering Approximate Association Rules

    Institute of Scientific and Technical Information of China (English)

    Cai-Yan Jia; Xie-Ping Gao

    2005-01-01

    One of the obstacles to efficient association rule mining is the explosive expansion of data sets, since it is costly or impossible to scan large databases, especially multiple times. A popular solution to improve the speed and scalability of association rule mining is to run the algorithm on a random sample instead of the entire database. But how to effectively define and efficiently estimate the degree of error in the outcome of the algorithm, and how to determine the sample size needed, have remained open research problems. In this paper, an effective and efficient algorithm is given, based on PAC (Probably Approximately Correct) learning theory, to measure and estimate the sample error. Then a new adaptive, on-line, fast sampling strategy - multi-scaling sampling - is presented, inspired by MRA (Multi-Resolution Analysis) and the Shannon sampling theorem, for quickly obtaining acceptably approximate association rules at an appropriate sample size. Both theoretical analysis and empirical study show that this sampling strategy achieves a very good speed-accuracy trade-off.
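
    The PAC ingredient can be illustrated with the standard Hoeffding bound on sample size, n ≥ ln(2/δ) / (2ε²), which guarantees the estimated support of an itemset is within ε of the truth with probability 1 − δ. The Python sketch below is this generic bound plus a multi-scaling-flavoured refinement loop, not the paper's exact algorithm.

    ```python
    # Hoeffding-based sample size for estimating itemset support to
    # within epsilon with confidence 1 - delta.
    import math

    def pac_sample_size(epsilon, delta):
        return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

    # Multi-scaling flavour: successively tighten epsilon, growing the
    # sample (and reusing earlier samples) at each refinement step.
    for eps in (0.04, 0.02, 0.01):
        print(eps, pac_sample_size(eps, delta=0.05))
    ```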

  6. Application and investigation of a bound for outcome reporting bias

    Directory of Open Access Journals (Sweden)

    Gamble Carrol

    2007-03-01

    Full Text Available Abstract. Background: Direct empirical evidence for the existence of outcome reporting bias is accumulating, and this source of bias is recognised as a potential threat to the validity of meta-analysis of randomised clinical trials. Methods: A method for calculating the maximum bias in a meta-analysis due to publication bias is adapted to the setting where within-study selective non-reporting of outcomes is suspected, and compared to the alternative approach of missing-data imputation. The properties of both methods are investigated in realistic small-sample situations. Results: The results suggest that the adapted Copas and Jackson approach is the preferred method for reviewers to apply as an initial assessment of robustness to within-study selective non-reporting. Conclusion: The Copas and Jackson approach is a useful method for systematic reviewers to apply to assess robustness to outcome reporting bias.

  7. Quantifying the degree of bias from using county-scale data in species distribution modeling: Can increasing sample size or using county-averaged environmental data reduce distributional overprediction?

    Science.gov (United States)

    Collins, Steven D; Abbott, John C; McIntyre, Nancy E

    2017-08-01

    Citizen-science databases have been used to develop species distribution models (SDMs), although many taxa may be georeferenced only to county. It is tacitly assumed that SDMs built from county-scale data should be less precise than those built with more accurate localities, but the extent of the bias is currently unknown. Our aims in this study were to illustrate the effects of using county-scale data on the spatial extent and accuracy of SDMs relative to true locality data and to compare potential compensatory methods (including increased sample size and using overall county environmental averages rather than point-locality environmental data). To do so, we developed SDMs in MAXENT with PRISM-derived BIOCLIM parameters for 283 and 230 species of odonates (dragonflies and damselflies) and butterflies, respectively, for five subsets from the OdonataCentral and Butterflies and Moths of North America citizen-science databases: (1) a true locality dataset, (2) a corresponding sister dataset of county-centroid coordinates, (3) a dataset where the average environmental conditions within each county were assigned to each record, (4) a 50/50% mix of true localities and county-centroid coordinates, and (5) a 50/50% mix of true localities and records assigned the average environmental conditions within each county. These mixtures allowed us to quantify the degree of bias from county-scale data. Models developed with county centroids overpredicted the extent of suitable habitat by 15% on average compared to true locality models, although larger sample sizes (>100 locality records) reduced this disparity. Assigning county-averaged environmental conditions did not offer consistent improvement, however. Because county-level data are of limited value for developing SDMs except for species that are widespread and well collected or that inhabit regions where small, climatically uniform counties predominate, we suggest three means of encouraging more accurate georeferencing in citizen-science…
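
    The headline bias metric (percent overprediction of suitable area) is easy to state in code. The Python sketch below compares thresholded suitability maps from two hypothetical models; the arrays merely stand in for real MAXENT outputs.

    ```python
    # Percent overprediction: extra cells predicted suitable by the
    # county-centroid model relative to the true-locality model.
    import numpy as np

    rng = np.random.default_rng(2)
    suit_true = rng.random((100, 100))               # stand-in map
    suit_centroid = np.clip(suit_true + 0.08, 0, 1)  # inflated surrogate

    threshold = 0.5
    area_true = (suit_true >= threshold).sum()
    area_centroid = (suit_centroid >= threshold).sum()
    overprediction = 100 * (area_centroid - area_true) / area_true
    print(f"{overprediction:.1f}% more cells predicted suitable")
    ```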

  8. Validated Test Method 5030C: Purge-and-Trap for Aqueous Samples

    Science.gov (United States)

    This method describes a purge-and-trap procedure for the analysis of volatile organic compounds in aqueous samples and water-miscible liquid samples. It also describes the analysis of high-concentration soil and waste sample extracts prepared in Method 5035.

  9. Theoretical analysis and an improvement method of the bias effect on the linearity of RF linear power amplifiers

    Institute of Scientific and Technical Information of China (English)

    Wu Tuo; Chen Hongyi; Qian Dahong

    2009-01-01

    Based on the Gummel-Poon model of the BJT, the change of the DC bias as a function of the AC input signal in RF linear power amplifiers is theoretically derived, so that the linearity of different DC bias circuits can be interpreted and compared. According to the analysis results, a quantitative adaptive DC bias circuit is proposed, which can improve the linearity and efficiency. From the simulation and test results, we draw conclusions on how to improve the design of linear power amplifiers.
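
    As a numerical illustration of why the DC bias shifts with AC drive (a textbook property of the exponential BJT characteristic, not this paper's specific model): for I = Is·exp(Vbe/Vt), the period-average of the collector current under a sinusoidal input of amplitude v grows by the factor I0(v/Vt), where I0 is the modified Bessel function. The Python sketch below checks this numerically; all values are hypothetical.

    ```python
    # Average of exp(v*cos(wt)/Vt) over one period equals I0(v/Vt); this
    # is the factor by which the mean collector current (and hence the
    # effective DC operating point) rises with AC amplitude.
    import numpy as np
    from scipy.special import i0

    Vt = 0.026                                   # thermal voltage, volts
    t = np.linspace(0, 2 * np.pi, 100000, endpoint=False)
    for v in (0.005, 0.026, 0.1):                # AC amplitudes, volts
        avg = np.exp(v * np.cos(t) / Vt).mean()  # numerical period average
        print(f"v = {v:.3f} V: factor {avg:.3f} vs I0 = {i0(v / Vt):.3f}")
    ```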

  10. SAR imaging method based on coprime sampling and nested sparse sampling

    Institute of Scientific and Technical Information of China (English)

    Hongyin Shi; Baojing Jia

    2015-01-01

    As the signal bandwidth and the number of channels increase, the synthetic aperture radar (SAR) imaging system produces a huge amount of data according to the Shannon-Nyquist theorem, causing a heavy burden for data transmission. This paper concerns coprime sampling and nested sparse sampling, which were proposed recently but have never been applied to real-world target detection, and proposes a novel approach that utilizes these new sub-Nyquist sampling structures for SAR sampling in azimuth and reconstructs the SAR data by compressive sensing (CS). Both simulated and real data are processed to test the algorithm, and the results indicate that combining these new undersampling structures with CS achieves effective SAR imaging with much less data than conventional methods require. Finally, the influence of small sampling jitter on SAR imaging is analyzed theoretically and experimentally, concluding that small sampling jitter has no noticeable effect on SAR image quality.
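
    As a small illustration of the coprime sampling structure mentioned above, the sketch below builds the union of two uniform sub-Nyquist index sets with coprime periods; the compressive-sensing reconstruction itself is omitted.

    ```python
    # Two uniform samplers with coprime periods M and N; their union
    # covers far fewer positions than Nyquist sampling while their
    # difference set spans all residues modulo M*N, enabling recovery.
    from math import gcd

    def coprime_indices(M, N, span):
        assert gcd(M, N) == 1, "periods must be coprime"
        s1 = set(range(0, span, M))   # sampler 1: every M-th position
        s2 = set(range(0, span, N))   # sampler 2: every N-th position
        return sorted(s1 | s2)

    idx = coprime_indices(3, 5, span=30)
    print(idx, f"-> {len(idx)} of 30 positions sampled")
    ```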

  11. Discrete angle biasing in Monte Carlo radiation transport

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1988-05-01

    An angular biasing procedure is presented for use in Monte Carlo radiation transport with discretized scattering angle data. As in more general studies, the method is shown to reduce statistical weight fluctuations when it is combined with the exponential transformation. This discrete data application has a simple analytic form which is problem independent. The results from a sample problem illustrate the variance reduction and efficiency characteristics of the combined biasing procedures, and a large neutron and gamma-ray integral experiment is also calculated. A proposal is given for the possible code generation of the biasing parameter p and the preferential direction Ω_0 used in the combined biasing schemes.
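
    The weight-correction mechanics behind any such biasing scheme follow the generic importance-sampling rule: draw the scattering-angle bin from a biased distribution q and multiply the particle weight by p/q so the estimate stays unbiased. The Python sketch below shows this rule with made-up bin probabilities; it is not the paper's specific parameterization.

    ```python
    # Discrete angular biasing as generic importance sampling: sample a
    # bin from q, correct the particle weight by p/q.
    import random

    p = [0.10, 0.20, 0.30, 0.40]   # physical discrete-angle probabilities
    q = [0.05, 0.15, 0.30, 0.50]   # biased toward preferred (forward) bins

    def sample_angle_bin(weight):
        u, cum = random.random(), 0.0
        for k, qk in enumerate(q):
            cum += qk
            if u < cum:
                return k, weight * p[k] / q[k]   # weight correction
        return len(q) - 1, weight * p[-1] / q[-1]

    print(sample_angle_bin(1.0))
    ```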

  12. Method for Hot Real-Time Sampling of Gasification Products

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a highly instrumented half-ton/day pilot-scale plant capable of demonstrating industrially relevant thermochemical technologies for lignocellulosic biomass conversion, including gasification. Gasification creates primarily syngas (a mixture of hydrogen and carbon monoxide) that can be utilized with synthesis catalysts to form transportation fuels and other valuable chemicals. Biomass-derived gasification products are a very complex mixture of chemical components that typically contain sulfur and nitrogen species which can act as catalyst poisons for tar reforming and synthesis catalysts. Real-time hot online sampling techniques, such as molecular beam mass spectrometry (MBMS), and gas chromatographs with sulfur- and nitrogen-specific detectors can provide real-time analysis and operational indicators of performance. Sampling typically requires coated sampling lines to minimize interactions of trace sulfur with steel surfaces. Other in-line materials have also been shown to convert sulfur species into new components and must be minimized. Residence time within the sampling lines must likewise be kept to a minimum to limit further reaction chemistry. Solids from ash and char contribute to plugging and must be filtered at temperature. Experience at NREL has shown several key factors to consider when designing and installing an analytical sampling system for biomass gasification products: minimizing sampling distance, filtering effectively as close to the source as possible, proper line sizing, proper line materials or coatings, even heating of all components, minimizing pressure drops, and additional filtering or traps after pressure drops.

  13. Comparison of DNA extraction methods for microbial community profiling with an application to pediatric bronchoalveolar lavage samples.

    Directory of Open Access Journals (Sweden)

    Dana Willner

    Full Text Available Barcoded amplicon sequencing is rapidly becoming a standard method for profiling microbial communities, including the human respiratory microbiome. While this approach has less bias than standard cultivation, several steps can introduce variation, including the type of DNA extraction method used. Here we assessed five different extraction methods on pediatric bronchoalveolar lavage (BAL) samples and a mock community comprised of nine bacterial genera to determine method reproducibility and detection limits for these typically low-complexity communities. Additionally, using the mock community, we were able to evaluate contamination and select a relative abundance cut-off threshold, based on the geometric distribution, that optimizes the trade-off between detecting bona fide operational taxonomic units and filtering out spurious ones. Using this threshold, the majority of genera in the mock community were predictably detected by all extraction methods, including the hard-to-lyse Gram-positive genus Staphylococcus. Differences between extraction methods were significantly greater than between technical replicates for both the mock community and BAL samples, emphasizing the importance of using a standardized methodology for microbiome studies. However, regardless of method used, individual patients retained unique diagnostic profiles. Furthermore, despite being stored as raw frozen samples for over five years, community profiles from BAL samples were consistent with historical culturing results. The culture-independent profiling of these samples also identified a number of anaerobic genera that are gaining acceptance as being part of the respiratory microbiome. This study should help guide researchers to formulate sampling, extraction and analysis strategies for respiratory and other human microbiome samples.
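
    The relative-abundance filtering step can be sketched directly; in the Python example below the cut-off value is arbitrary rather than derived from the geometric distribution as in the study.

    ```python
    # Drop OTUs whose relative abundance falls below a chosen cut-off,
    # separating likely-real taxa from spurious low-count ones.
    def filter_otus(counts, threshold=0.005):
        total = sum(counts.values())
        return {otu: n for otu, n in counts.items() if n / total >= threshold}

    mock = {"Staphylococcus": 950, "Streptococcus": 800, "spurious_1": 3,
            "Haemophilus": 420, "spurious_2": 1}
    print(filter_otus(mock))  # spurious singletons are removed
    ```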

  14. Development and application of spatial and temporal statistical methods for unbiased wildlife sampling

    NARCIS (Netherlands)

    Khaemba, W.M.

    2000-01-01

    Current methods of obtaining information on wildlife populations are based on monitoring programmes using periodic surveys. In most cases aerial techniques are applied. Reported numbers are, however, often biased and imprecise, making it difficult to use this information for management purposes. This…

  15. Reliability of a method of sampling stream invertebrates

    CSIR Research Space (South Africa)

    Chutter, FM

    1966-05-01

    Full Text Available In field ecological studies, inferences must often be drawn from dissimilarities in the numbers and species of organisms found in biological samples collected at different times and under various conditions...