WorldWideScience

Sample records for sampling protocol estimation

  1. Samples and Sampling Protocols for Scientific Investigations | Joel ...

    African Journals Online (AJOL)

    ... from sampling, through sample preparation and calibration, to final measurement and reporting. This paper therefore offers practical guidance on sampling protocols in line with best practice and international standards. Keywords: Sampling, sampling protocols, chain of custody, analysis, documentation ...

  2. The perils of straying from protocol: sampling bias and interviewer effects.

    Directory of Open Access Journals (Sweden)

    Carrie J Ngongo

    Full Text Available Fidelity to research protocol is critical. In a contingent valuation study in an informal urban settlement in Nairobi, Kenya, participants responded differently to the three trained interviewers. Interviewer effects were present during the survey pilot, then magnified at the start of the main survey after a seemingly slight adaptation of the survey sampling protocol allowed interviewers to speak with the "closest neighbor" in the event that no one was home at a selected household. This slight degree of interviewer choice led to inferred sampling bias. Multinomial logistic regression and post-estimation tests revealed that the three interviewers' samples differed significantly from one another according to six demographic characteristics. The two female interviewers were 2.8 and 7.7 times less likely to talk with respondents of low socio-economic status than the male interviewer. Systematic error renders it impossible to determine which of the survey responses might be "correct." This experience demonstrates why researchers must take care to strictly follow sampling protocols, consistently train interviewers, and monitor responses by interviewer to ensure similarity between interviewers' groups and produce unbiased estimates of the parameters of interest.

  3. Minimal sampling protocol for accurate estimation of urea production: a study with oral [13C]urea in fed and fasted piglets

    NARCIS (Netherlands)

    Oosterveld, Michiel J. S.; Gemke, Reinoud J. B. J.; Dainty, Jack R.; Kulik, Willem; Jakobs, Cornelis; de Meer, Kees

    2005-01-01

    An oral [13C]urea protocol may provide a simple method for measurement of urea production. The validity of single pool calculations in relation to a reduced sampling protocol was assessed. In eight fed and five fasted piglets, plasma urea enrichments from a 10 h sampling protocol were measured

  4. Protocol for Microplastics Sampling on the Sea Surface and Sample Analysis

    Science.gov (United States)

    Kovač Viršek, Manca; Palatinus, Andreja; Koren, Špela; Peterlin, Monika; Horvat, Petra; Kržan, Andrej

    2016-01-01

    Microplastic pollution in the marine environment is a scientific topic that has received increasing attention over the last decade. The majority of scientific publications address microplastic pollution of the sea surface. The protocol below describes the methodology for sampling, sample preparation, separation and chemical identification of microplastic particles. A manta net fixed on an »A frame« attached to the side of the vessel was used for sampling. Microplastic particles caught in the cod end of the net were separated from samples by visual identification and use of stereomicroscopes. Particles were analyzed for their size using an image analysis program and for their chemical structure using ATR-FTIR and micro FTIR spectroscopy. The described protocol is in line with recommendations for microplastics monitoring published by the Marine Strategy Framework Directive (MSFD) Technical Subgroup on Marine Litter. This written protocol with video guide will support the work of researchers that deal with microplastics monitoring all over the world. PMID:28060297

  5. Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey

    2005-01-01

    Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...

  6. Joint estimation and contention-resolution protocol for wireless random access

    DEFF Research Database (Denmark)

    Stefanovic, Cedomir; Trillingsgaard, Kasper Fløe; Kiilerich Pratas, Nuno

    2013-01-01

    We propose a contention-based random-access protocol, designed for wireless networks where the number of users is not a priori known. The protocol operates in rounds divided into equal-duration slots, performing at the same time estimation of the number of users and resolution of their transmissions. ... successive interference cancellation which, coupled with the use of the optimized access probabilities, enables throughputs that are substantially higher than the traditional slotted ALOHA-like protocols. The key feature of the proposed protocol is that the round durations are not a priori set...

  7. A rapid and efficient DNA extraction protocol from fresh and frozen human blood samples.

    Science.gov (United States)

    Guha, Pokhraj; Das, Avishek; Dutta, Somit; Chaudhuri, Tapas Kumar

    2018-01-01

    Different methods available for extraction of human genomic DNA suffer from one or more drawbacks, including low yield, compromised quality, cost, time consumption, and the use of toxic organic solvents. Herein, we aimed to develop a method to extract DNA from 500 μL of fresh or frozen human blood. Five hundred microliters of fresh and frozen human blood samples were used for standardization of the extraction procedure. Absorbance at 260 and 280 nm (A260/A280) was estimated to check the quality and quantity of the extracted DNA sample. Qualitative assessment of the extracted DNA was performed by polymerase chain reaction and double digestion of the DNA sample. Our protocol resulted in average yields of 22±2.97 μg and 20.5±3.97 μg from 500 μL of fresh and frozen blood, respectively, which are comparable to many reference protocols and kits. Besides yielding a large amount of DNA, our protocol is rapid, economical, and avoids toxic organic solvents such as phenol. Due to the unaffected quality, the DNA is suitable for downstream applications. The protocol may also be useful for basic molecular research in laboratories with limited funds. © 2017 Wiley Periodicals, Inc.

  8. Reliability of single aliquot regenerative protocol (SAR) for dose estimation in quartz at different burial temperatures: A simulation study

    International Nuclear Information System (INIS)

    Koul, D.K.; Pagonis, V.; Patil, P.

    2016-01-01

    The single aliquot regenerative protocol (SAR) is a well-established technique for estimating naturally acquired radiation doses in quartz. This simulation work examines the reliability of SAR protocol for samples which experienced different ambient temperatures in nature in the range of −10 to 40 °C. The contribution of various experimental variables used in SAR protocols to the accuracy and precision of the method is simulated for different ambient temperatures. Specifically the effects of paleo-dose, test dose, pre-heating temperature and cut-heat temperature on the accuracy of equivalent dose (ED) estimation are simulated by using random combinations of the concentrations of traps and centers using a previously published comprehensive quartz model. The findings suggest that the ambient temperature has a significant bearing on the reliability of natural dose estimation using SAR protocol, especially for ambient temperatures above 0 °C. The main source of these inaccuracies seems to be thermal sensitization of the quartz samples caused by the well-known thermal transfer of holes between luminescence centers in quartz. The simulations suggest that most of this inaccuracy in the dose estimation can be removed by delivering the laboratory doses in pulses (pulsed irradiation procedures). - Highlights: • Ambient temperatures affect the reliability of SAR. • It overestimates the dose with increase in burial temperature and burial time periods. • Elevated temperature irradiation does not correct for these overestimations. • Inaccuracies in dose estimation can be removed by incorporating pulsed irradiation procedures.

  9. Estimation of the Thurstonian model for the 2-AC protocol

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Lee, Hye-Seong; Brockhoff, Per B.

    2012-01-01

    The 2-AC protocol is a 2-AFC protocol with a “no-difference” option and is technically identical to the paired preference test with a “no-preference” option. The Thurstonian model for the 2-AC protocol is parameterized by δ and a decision parameter τ, the estimates of which can be obtained by fairly simple well-known methods. In this paper we describe how standard errors of the parameters can be obtained and how exact power computations can be performed. We also show how the Thurstonian model for the 2-AC protocol is closely related to a statistical model known as a cumulative probit model. This relationship makes it possible to extract estimates and standard errors of δ and τ from general statistical software, and furthermore, it makes it possible to combine standard regression modelling with the Thurstonian model for the 2-AC protocol. A model for replicated 2-AC data is proposed using cumulative...

  10. Improved protocol and data analysis for accelerated shelf-life estimation of solid dosage forms.

    Science.gov (United States)

    Waterman, Kenneth C; Carella, Anthony J; Gumkowski, Michael J; Lukulay, Patrick; MacDonald, Bruce C; Roy, Michael C; Shamblin, Sheri L

    2007-04-01

    To propose and test a new accelerated aging protocol for solid-state, small molecule pharmaceuticals which provides faster predictions for drug substance and drug product shelf-life. The concept of an isoconversion paradigm, where times in different temperature and humidity-controlled stability chambers are set to provide a critical degradant level, is introduced for solid-state pharmaceuticals. Reliable estimates for temperature and relative humidity effects are handled using a humidity-corrected Arrhenius equation, where temperature and relative humidity are assumed to be orthogonal. Imprecision is incorporated into a Monte-Carlo simulation to propagate the variations inherent in the experiment. In early development phases, greater imprecision in predictions is tolerated to allow faster screening with reduced sampling. Early development data are then used to design appropriate test conditions for more reliable later stability estimations. Examples are reported showing that predicted shelf-life values for lower temperatures and different relative humidities are consistent with the measured shelf-life values at those conditions. The new protocols and analyses provide accurate and precise shelf-life estimations in a reduced time from current state of the art.
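
    The humidity-corrected Arrhenius model described above is commonly written as ln k = ln A − Ea/(R·T) + B·RH, with temperature and relative humidity treated as orthogonal. A minimal Python sketch of the idea follows; the function names and the parameter values (ln A, Ea, B, critical degradant level) are illustrative assumptions, not values from the study.

        from math import exp

        R = 8.314e-3  # gas constant in kJ/(mol*K)

        def degradation_rate(temp_c, rh_pct, ln_a, ea_kj_mol, b):
            """Humidity-corrected Arrhenius rate: ln k = ln A - Ea/(R*T) + B*RH."""
            t_kelvin = temp_c + 273.15
            return exp(ln_a - ea_kj_mol / (R * t_kelvin) + b * rh_pct)

        def shelf_life_days(temp_c, rh_pct, critical_pct, **params):
            """Days to reach the critical degradant level, assuming the degradant
            grows at a constant (pseudo-zero-order) rate at the isoconversion level."""
            return critical_pct / degradation_rate(temp_c, rh_pct, **params)

        # illustrative (not fitted) parameters: ln A, Ea [kJ/mol], humidity coefficient B
        params = dict(ln_a=29.0, ea_kj_mol=100.0, b=0.04)
        print(shelf_life_days(50, 75, 0.5, **params))  # accelerated condition
        print(shelf_life_days(25, 60, 0.5, **params))  # ambient prediction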

  11. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  12. Development of bull trout sampling protocols

    Science.gov (United States)

    R. F. Thurow; J. T. Peterson; J. W. Guzevich

    2001-01-01

    This report describes results of research conducted in Washington in 2000 through Interagency Agreement #134100H002 between the U.S. Fish and Wildlife Service (USFWS) and the U.S. Forest Service Rocky Mountain Research Station (RMRS). The purpose of this agreement is to develop a bull trout (Salvelinus confluentus) sampling protocol by integrating...

  13. The impact of fecal sample processing on prevalence estimates for antibiotic-resistant Escherichia coli.

    Science.gov (United States)

    Omulo, Sylvia; Lofgren, Eric T; Mugoh, Maina; Alando, Moshe; Obiya, Joshua; Kipyegon, Korir; Kikwai, Gilbert; Gumbi, Wilson; Kariuki, Samuel; Call, Douglas R

    2017-05-01

    Investigators often rely on studies of Escherichia coli to characterize the burden of antibiotic resistance in a clinical or community setting. To determine if prevalence estimates for antibiotic resistance are sensitive to sample handling and interpretive criteria, we collected presumptive E. coli isolates (24 or 95 per stool sample) from a community in an urban informal settlement in Kenya. Isolates were tested for susceptibility to nine antibiotics using agar breakpoint assays and results were analyzed using generalized linear mixed models. We observed a … (P > 0.1). Prevalence estimates did not differ for five distinct E. coli colony morphologies on MacConkey agar plates (P > 0.2). Successive re-plating of samples for up to five consecutive days had little to no impact on prevalence estimates. Finally, culturing E. coli under different conditions (with 5% CO2 or micro-aerobic) did not affect estimates of prevalence. For the conditions tested in these experiments, minor modifications in sample processing protocols are unlikely to bias estimates of the prevalence of antibiotic resistance for fecal E. coli. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Evaluation of storage and filtration protocols for alpine/subalpine lake water quality samples

    Science.gov (United States)

    John L. Korfmacher; Robert C. Musselman

    2007-01-01

    Many government agencies and other organizations sample natural alpine and subalpine surface waters using varying protocols for sample storage and filtration. Simplification of protocols would be beneficial if it could be shown that sample quality is unaffected. In this study, samples collected from low ionic strength waters in alpine and subalpine lake inlets...

  15. A Flow Cytometry Protocol to Estimate DNA Content in the Yellowtail Tetra Astyanax altiparanae

    Directory of Open Access Journals (Sweden)

    Pedro L. P. Xavier

    2017-09-01

    Full Text Available The production of triploid yellowtail tetra Astyanax altiparanae is a key factor to obtain permanently sterile individuals by chromosome set manipulation. Flow cytometric analysis is the main tool for confirmation of the resultant triploid individuals, but very few protocols are specific to A. altiparanae. The current study has developed a protocol to estimate DNA content in this species. Furthermore, a protocol for long-term storage of dorsal fins used for flow cytometry analysis was established. The combination of five solutions with three detergents (Nonidet P-40 Substitute, Tween 20, and Triton X-100) at 0.1, 0.2, and 0.4% concentration was evaluated. Using the best solution from this first experiment, the addition of trypsin (0.125, 0.25, and 0.5%) and sucrose (74 mM) and the effects of increased concentrations of the detergents at 0.6 and 1.2% were also evaluated. After adjustment of the protocol for flow cytometry, preservation of somatic tissue or isolated nuclei was also evaluated by freezing (at −20°C) and fixation in saturated NaCl solution, acetic methanol (1:3), ethanol, and formalin at 10% for 30 or 60 days of storage at 25°C. Flow cytometry analysis in the yellowtail tetra species was optimized using the following conditions: lysis solution: 9.53 mM MgCl2·7H2O; 47.67 mM KCl; 15 mM Tris; 74 mM sucrose; 0.6% Triton X-100; pH 8.0; staining solution: Dulbecco's PBS with DAPI 1 μg mL−1; preservation procedure: somatic cells (dorsal fin samples) frozen at −20°C. Using this protocol, samples may be stored up to 60 days with good accuracy for flow cytometry analysis.

  16. Sampling and estimating recreational use.

    Science.gov (United States)

    Timothy G. Gregoire; Gregory J. Buhyoff

    1999-01-01

    Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.

  17. A multigear protocol for sampling crayfish assemblages in Gulf of Mexico coastal streams

    Science.gov (United States)

    William R. Budnick; William E. Kelso; Susan B. Adams; Michael D. Kaller

    2018-01-01

    Identifying an effective protocol for sampling crayfish in streams that vary in habitat and physical/chemical characteristics has proven problematic. We evaluated an active, combined-gear (backpack electrofishing and dipnetting) sampling protocol in 20 Coastal Plain streams in Louisiana. Using generalized linear models and rarefaction curves, we evaluated environmental...

  18. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
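
    The co-prime restriction described above can be illustrated directly: if the ADC runs P times slower than the desired rate and successive pulses are separated by N desired-rate periods, the P slow-rate sample streams cover every desired-rate phase exactly when gcd(N, P) = 1. A small Python sketch follows (the numbers are hypothetical and this is an illustration of the interleaving principle, not the paper's estimator).

        from math import gcd

        def covered_phases(p, n_periods):
            """Phases (mod p) of the desired-rate sample grid hit by an ADC running p
            times slower, over p successive pulses spaced n_periods desired-rate
            periods apart. All p phases are covered iff gcd(n_periods, p) == 1."""
            return sorted({(k * n_periods) % p for k in range(p)})

        for n in (7, 10):
            print(n, gcd(n, 5), covered_phases(5, n))
        # 7  gcd=1 -> [0, 1, 2, 3, 4]  (every phase of the fast grid is observed)
        # 10 gcd=5 -> [0]              (the same phase is observed repeatedly)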

  19. Lead Sampling Protocols: Why So Many and What Do They Tell You?

    Science.gov (United States)

    Sampling protocols can be broadly categorized based on their intended purpose of 1) Pb regulatory compliance/corrosion control efficacy, 2) Pb plumbing source determination or Pb type identification, and 3) Pb exposure assessment. Choosing the appropriate protocol is crucial to p...

  20. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
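
    As a worked example of the sample size considerations above, a minimal Python sketch for a single proportion follows, using the standard formula n = z²·p·(1−p)/d² with an optional finite-population correction; the example values are illustrative.

        from math import ceil
        from statistics import NormalDist

        def sample_size_proportion(expected_p, margin, confidence=0.95, population=None):
            """n = z^2 * p * (1 - p) / d^2, with an optional finite-population correction."""
            z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
            n = z ** 2 * expected_p * (1 - expected_p) / margin ** 2
            if population is not None:
                n = n / (1 + (n - 1) / population)   # finite-population correction
            return ceil(n)

        # e.g. expected prevalence 30%, +/-5% precision, 95% confidence
        print(sample_size_proportion(0.30, 0.05))    # -> 323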

  1. The Gas Sampling Interval Effect on V˙O2peak Is Independent of Exercise Protocol.

    Science.gov (United States)

    Scheadler, Cory M; Garver, Matthew J; Hanson, Nicholas J

    2017-09-01

    There is a plethora of gas sampling intervals available during cardiopulmonary exercise testing to measure peak oxygen consumption (V˙O2peak). Different intervals can lead to altered V˙O2peak. Whether differences are affected by the exercise protocol or subject sample is not clear. The purpose of this investigation was to determine whether V˙O2peak differed because of the manipulation of sampling intervals and whether differences were independent of the protocol and subject sample. The first subject sample (24 ± 3 yr; V˙O2peak via 15-breath moving averages: 56.2 ± 6.8 mL·kg−1·min−1) completed the Bruce and the self-paced V˙O2max protocols. The second subject sample (21.9 ± 2.7 yr; V˙O2peak via 15-breath moving averages: 54.2 ± 8.0 mL·kg−1·min−1) completed the Bruce and the modified Astrand protocols. V˙O2peak was identified using five sampling intervals: 15-s block averages, 30-s block averages, 15-breath block averages, 15-breath moving averages, and 30-s block averages aligned to the end of exercise. Differences in V˙O2peak between intervals were determined using repeated-measures ANOVAs. The influence of subject sample on the sampling effect was determined using independent t-tests. There was a significant main effect of sampling interval on V˙O2peak in both subject samples (first sample: Bruce and self-paced V˙O2max; second sample: Bruce and modified Astrand). V˙O2peak across sampling intervals followed a similar pattern for each protocol and subject sample, with the 15-breath moving average presenting the highest V˙O2peak. The effect of manipulating gas sampling intervals on V˙O2peak appears to be protocol and sample independent. These findings highlight our recommendation that the clinical and scientific community request and report the sampling interval whenever metabolic data are presented. The standardization of reporting would assist in the comparison of V˙O2peak.
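
    For readers comparing gas sampling intervals, a brief Python sketch (using pandas, with hypothetical breath-by-breath input data) shows how three of the intervals named above, the 15-breath moving average, 15-breath block average and 30-s block average, could be computed from the same series; it is an illustration, not the authors' analysis code.

        import numpy as np
        import pandas as pd

        def vo2peak_by_interval(breath_times_s, vo2_ml_kg_min):
            """Illustrative computation of V'O2peak under three of the sampling
            intervals discussed: 15-breath moving average, 15-breath block average,
            and 30-s block average (time-aligned to the start of exercise)."""
            s = pd.Series(vo2_ml_kg_min,
                          index=pd.to_timedelta(breath_times_s, unit="s"))
            return {
                "15-breath moving avg": s.rolling(15).mean().max(),
                "15-breath block avg": s.groupby(np.arange(len(s)) // 15).mean().max(),
                "30-s block avg": s.resample("30s").mean().max(),
            }

        # usage: vo2peak_by_interval(times, vo2) with breath-by-breath test data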

  2. Investigations of the post-IR IRSL protocol applied to single K-feldspar grains from fluvial sediment samples

    International Nuclear Information System (INIS)

    Nian, Xiaomei; Bailey, Richard M.; Zhou, Liping

    2012-01-01

    The post-IR IRSL protocol with single K-feldspar grains was applied to three samples taken from a fluvial sedimentary sequence at the archaeological site of the Dali Man, Shaanxi Province, China. K-feldspar coarse grains were extracted for measurement. Approximately 30–40% of the grains were sufficiently bright to measure, and after application of rejection criteria based on signal strength, recuperation, recycling ratio and saturation dose, ∼10–15% of the grains were used for De calculation. The relationship of signal decay rate and the form of De(t) with the recovery dose was investigated. The dose recovery ratios of the samples after initial bleaching with the four different light sources were within uncertainties of unity. No anomalous fading was observed. The over-dispersion of the recovered dose and De values were similar, suggesting neither incomplete resetting of the post-IR IRSL signals nor spatially heterogeneous dose rates significantly affected the natural dose estimates. The values of De obtained with the single K-feldspar grain post-IR IRSL protocol were in the range ∼400–490 Gy. Combining all of the measured single-grain signals for each of the individual samples (into a ‘synthetic single aliquot’) increased the De estimates to the range ∼700–900 Gy, suggesting that the grains screened out by the rejection criteria may have the potential to cause palaeodose over-estimation, although this finding requires a more extensive investigation. Thermally transferred signals were found in the single K-feldspar grain post-IR IRSL protocol, and the proportion of thermally transferred signal to test-dose OSL signal (stimulation at 290 °C) from the natural dose was higher than from regenerative doses, and the proportion was grain- and dose-dependent. As such, TT-post-IR IRSL signals at 290 °C have the potential to cause dose underestimation, although this may be reduced by using larger test-dose irradiations. Our study demonstrates...

  3. Effects of lek count protocols on greater sage-grouse population trend estimates

    Science.gov (United States)

    Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.

    2016-01-01

    Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count

  4. A proportional integral estimator-based clock synchronization protocol for wireless sensor networks.

    Science.gov (United States)

    Yang, Wenlun; Fu, Minyue

    2017-11-01

    Clock synchronization is an issue of vital importance in applications of WSNs. This paper proposes a proportional integral estimator-based protocol (EBP) to achieve clock synchronization for wireless sensor networks. As each local clock skew gradually drifts, synchronization accuracy will decline over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, as a fully synchronous protocol is unrealistic in practice. Numerical simulations are shown to illustrate the performance of the proposed protocol. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
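
    The proportional-integral idea can be illustrated with a toy simulation: a node measures its offset error against a reference each round and applies a rate correction with proportional and integral terms, so the integral term converges to the true skew. This is a generic sketch under assumed gains, not the EBP protocol from the paper.

        def simulate_pi_sync(true_skew=50e-6, rounds=200, dt=1.0, kp=0.6, ki=0.2):
            """Toy proportional-integral compensation of a drifting local clock: each
            round the node measures its offset error against a reference and applies a
            rate correction u = kp*e + ki*sum(e). Generic illustration only."""
            t_ref = t_local = 0.0
            integral = correction = 0.0
            for _ in range(rounds):
                t_ref += dt
                t_local += dt * (1.0 + true_skew - correction)
                error = t_local - t_ref                  # synchronization error
                integral += error
                correction = kp * error + ki * integral  # PI rate correction
            return error, correction                     # error -> 0, correction -> true_skew

        print(simulate_pi_sync())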

  5. A modified FASP protocol for high-throughput preparation of protein samples for mass spectrometry.

    Directory of Open Access Journals (Sweden)

    Jeremy Potriquet

    Full Text Available To facilitate high-throughput proteomic analyses we have developed a modified FASP protocol which improves the rate at which protein samples can be processed prior to mass spectrometry. Adapting the original FASP protocol to a 96-well format necessitates extended spin times for buffer exchange due to the low centrifugation speeds tolerated by these devices. However, by using 96-well plates with a more robust polyethersulfone molecular weight cutoff membrane, instead of the cellulose membranes typically used in these devices, we could use isopropanol as a wetting agent, decreasing spin times required for buffer exchange from an hour to 30 minutes. In a typical work flow used in our laboratory this equates to a reduction of 3 hours per plate, providing processing times similar to FASP for the processing of up to 96 samples per plate. To test whether our modified protocol produced similar results to FASP and other FASP-like protocols we compared the performance of our modified protocol to the original FASP and the more recently described eFASP and MStern-blot. We show that all FASP-like methods, including our modified protocol, display similar performance in terms of proteins identified and reproducibility. Our results show that our modified FASP protocol is an efficient method for the high-throughput processing of protein samples for mass spectral analysis.

  6. A Robust PCR Protocol for HIV Drug Resistance Testing on Low-Level Viremia Samples

    Directory of Open Access Journals (Sweden)

    Shivani Gupta

    2017-01-01

    Full Text Available The prevalence of drug resistance (DR) mutations in people with HIV-1 infection, particularly those with low-level viremia (LLV), supports the need to improve the sensitivity of amplification methods for HIV DR genotyping in order to optimize antiretroviral regimens and facilitate HIV-1 DR surveillance and relevant research. Here we report on a fully validated PCR-based protocol that achieves consistent amplification of the protease (PR) and reverse transcriptase (RT) regions of the HIV-1 pol gene across many HIV-1 subtypes from LLV plasma samples. HIV-spiked plasma samples from the External Quality Assurance Program Oversight Laboratory (EQAPOL), covering various HIV-1 subtypes, as well as clinical specimens were used to optimize and validate the protocol. Our results demonstrate that this protocol has a broad HIV-1 subtype coverage and viral load span with high sensitivity and reproducibility. Moreover, the protocol is robust even when plasma sample volumes are limited, the HIV viral load is unknown, and/or the HIV subtype is undetermined. Thus, the protocol is applicable for the initial amplification of the HIV-1 PR and RT genes required for subsequent genotypic DR assays.

  7. A simplified field protocol for genetic sampling of birds using buccal swabs

    Science.gov (United States)

    Vilstrup, Julia T.; Mullins, Thomas D.; Miller, Mark P.; McDearman, Will; Walters, Jeffrey R.; Haig, Susan M.

    2018-01-01

    DNA sampling is an essential prerequisite for conducting population genetic studies. For many years, blood sampling has been the preferred method for obtaining DNA in birds because of their nucleated red blood cells. Nonetheless, use of buccal swabs has been gaining favor because they are less invasive yet still yield adequate amounts of DNA for amplifying mitochondrial and nuclear markers; however, buccal swab protocols often include steps (e.g., extended air-drying and storage under frozen conditions) not easily adapted to field settings. Furthermore, commercial extraction kits and swabs for buccal sampling can be expensive for large population studies. We therefore developed an efficient, cost-effective, and field-friendly protocol for sampling wild birds after comparing DNA yield among 3 inexpensive buccal swab types (2 with foam tips and 1 with a cotton tip). Extraction and amplification success was high (100% and 97.2% respectively) using inexpensive generic swabs. We found foam-tipped swabs provided higher DNA yields than cotton-tipped swabs. We further determined that omitting a drying step and storing swabs in Longmire buffer increased efficiency in the field while still yielding sufficient amounts of DNA for detailed population genetic studies using mitochondrial and nuclear markers. This new field protocol allows time- and cost-effective DNA sampling of juveniles or small-bodied birds for which drawing blood may cause excessive stress to birds and technicians alike.

  8. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. Data source Protocols and journal publications of published randomised parallel group trials initially approved...... in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between...... of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials...

  9. Urine sample collection protocols for bioassay samples

    Energy Technology Data Exchange (ETDEWEB)

    MacLellan, J.A.; McFadden, K.M.

    1992-11-01

    In vitro radiobioassay analyses are used to measure the amount of radioactive material excreted by personnel exposed to the potential intake of radioactive material. The analytical results are then used with various metabolic models to estimate the amount of radioactive material in the subject's body and the original intake of radioactive material. Proper application of these metabolic models requires knowledge of the excretion period. It is normal practice to design the bioassay program based on a 24-hour excretion sample. The Hanford bioassay program simulates a total 24-hour urine excretion sample with urine collection periods lasting from one-half hour before retiring to one-half hour after rising on two consecutive days. Urine passed during the specified periods is collected in three 1-L bottles. Because the daily excretion volume given in Publication 23 of the International Commission on Radiological Protection (ICRP 1975, p. 354) for Reference Man is 1.4 L, it was proposed to use only two 1-L bottles as a cost-saving measure. This raised the broader question of what should be the design capacity of a 24-hour urine sample kit.

  11. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has recently been advocated for different arthropod taxa instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. The approaches were tested for their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimations of true richness; and (3) meaningful comparisons between undersampled areas.
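
    Since the abstract discusses singleton proportions and richness estimators, a short Python sketch of one common nonparametric richness estimator (bias-corrected Chao1) is included for illustration; the abundance vector is hypothetical and the choice of estimator is an assumption, not necessarily the one used by the authors.

        from collections import Counter

        def chao1(abundances):
            """Bias-corrected Chao1 richness estimate from per-species counts:
            S_obs + f1*(f1 - 1) / (2*(f2 + 1)), with f1/f2 the singleton/doubleton counts."""
            counts = [a for a in abundances if a > 0]
            s_obs = len(counts)
            freq = Counter(counts)
            f1, f2 = freq.get(1, 0), freq.get(2, 0)
            return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

        print(chao1([5, 3, 1, 1, 1, 2, 2, 1, 8]))  # 9 observed species -> 11.0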

  12. Protocol for the estimation of average indoor radon-daughter concentrations: Second edition

    International Nuclear Information System (INIS)

    Langner, G.H. Jr.; Pacer, J.C.

    1988-05-01

    The Technical Measurements Center has developed a protocol which specifies the procedures to be used for determining indoor radon-daughter concentrations in support of Department of Energy remedial action programs. This document is the central part of the protocol and is to be used in conjunction with the individual procedure manuals. The manuals contain the information and procedures required to implement the proven methods for estimating average indoor radon-daughter concentration. Proven in this case means that these methods have been determined to provide reasonable assurance that the average radon-daughter concentration within a structure is either above, at, or below the standards established for remedial action programs. This document contains descriptions of the generic aspects of methods used for estimating radon-daughter concentration and provides guidance with respect to method selection for a given situation. It is expected that the latter section of this document will be revised whenever another estimation method is proven to be capable of satisfying the criteria of reasonable assurance and cost minimization. 22 refs., 6 figs., 3 tabs

  13. Pilot studies for the North American Soil Geochemical Landscapes Project - Site selection, sampling protocols, analytical methods, and quality control protocols

    Science.gov (United States)

    Smith, D.B.; Woodruff, L.G.; O'Leary, R. M.; Cannon, W.F.; Garrett, R.G.; Kilburn, J.E.; Goldhaber, M.B.

    2009-01-01

    In 2004, the US Geological Survey (USGS) and the Geological Survey of Canada sampled and chemically analyzed soils along two transects across Canada and the USA in preparation for a planned soil geochemical survey of North America. This effort was a pilot study to test and refine sampling protocols, analytical methods, quality control protocols, and field logistics for the continental survey. A total of 220 sample sites were selected at approximately 40-km intervals along the two transects. The ideal sampling protocol at each site called for a sample from a depth of 0-5 cm and a composite of each of the O, A, and C horizons. A single size fraction of each sample was analyzed for Ca, Fe, K, Mg, Na, S, Ti, Ag, As, Ba, Be, Bi, Cd, Ce, Co, Cr, Cs, Cu, Ga, In, La, Li, Mn, Mo, Nb, Ni, P, Pb, Rb, Sb, Sc, Sn, Sr, Te, Th, Tl, U, V, W, Y, and Zn by inductively coupled plasma-mass spectrometry and inductively coupled plasma-atomic emission spectrometry following a near-total digestion in a mixture of HCl, HNO3, HClO4, and HF. Separate methods were used for Hg, Se, total C, and carbonate-C on this same size fraction. Only Ag, In, and Te had a large percentage of concentrations below the detection limit. Quality control (QC) of the analyses was monitored at three levels: the laboratory performing the analysis, the USGS QC officer, and the principal investigator for the study. This level of review resulted in an average of one QC sample for every 20 field samples, which proved to be minimally adequate for such a large-scale survey. Additional QC samples should be added to monitor within-batch quality to the extent that no more than 10 samples are analyzed between a QC sample. Only Cr (77%), Y (82%), and Sb (80%) fell outside the acceptable limits of accuracy (% recovery between 85 and 115%) because of likely residence in mineral phases resistant to the acid digestion. A separate sample of 0-5-cm material was collected at each site for determination of organic compounds. A subset of 73 of these samples was analyzed for a suite of

  14. Evaluation of sampling strategies to estimate crown biomass

    Directory of Open Access Journals (Sweden)

    Krishna P Poudel

    2015-01-01

    Full Text Available Background Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and evaluate the effect of sample size on the estimates. Methods Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was the best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but it further decreased RMSE when information on branch diameter was used in the design and estimation phases. Conclusions Use of
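
    As an illustration of the ratio estimation mentioned in the results, a minimal Python sketch follows: branch biomass measured on a subsample is scaled to the whole crown through an auxiliary variable measured on every branch. The auxiliary variable (branch basal diameter squared) and the numbers are hypothetical.

        def ratio_estimate_crown_biomass(sampled_y, sampled_x, all_x):
            """Ratio estimator: scale the sampled biomass/auxiliary ratio up to the
            whole crown. y = measured branch biomass; x = auxiliary variable measured
            on every branch (here, branch basal diameter squared)."""
            r = sum(sampled_y) / sum(sampled_x)
            return r * sum(all_x)

        # hypothetical branch data: biomass (kg) for 3 sampled branches, diameter^2 (cm^2)
        sampled_biomass = [4.2, 2.8, 6.1]
        sampled_d2      = [16.0, 9.0, 25.0]
        all_d2          = [16.0, 9.0, 25.0, 4.0, 12.3, 30.2, 7.1]
        print(ratio_estimate_crown_biomass(sampled_biomass, sampled_d2, all_d2))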

  15. Robowell: An automated process for monitoring ground water quality using established sampling protocols

    Science.gov (United States)

    Granato, G.E.; Smith, K.P.

    1999-01-01

    Robowell is an automated process for monitoring selected ground water quality properties and constituents by pumping a well or multilevel sampler. Robowell was developed and tested to provide a cost-effective monitoring system that meets protocols expected for manual sampling. The process uses commercially available electronics, instrumentation, and hardware, so it can be configured to monitor ground water quality using the equipment, purge protocol, and monitoring well design most appropriate for the monitoring site and the contaminants of interest. A Robowell prototype was installed on a sewage treatment plant infiltration bed that overlies a well-studied unconfined sand and gravel aquifer at the Massachusetts Military Reservation, Cape Cod, Massachusetts, during a time when two distinct plumes of constituents were released. The prototype was operated from May 10 to November 13, 1996, and quality-assurance/quality-control measurements demonstrated that the data obtained by the automated method were equivalent to data obtained by manual sampling methods using the same sampling protocols. Water level, specific conductance, pH, water temperature, dissolved oxygen, and dissolved ammonium were monitored by the prototype as the wells were purged according to U.S. Geological Survey (USGS) ground water sampling protocols. Remote access to the data record, via phone modem communications, indicated the arrival of each plume over a few days and the subsequent geochemical reactions over the following weeks. Real-time availability of the monitoring record provided the information needed to initiate manual sampling efforts in response to changes in measured ground water quality, which proved the method and characterized the screened portion of the plume in detail through time. The methods and the case study described are presented to document the process for future use.

  16. Reducing the sampling periods required in protocols for establishing ammonia emissions from pig fattening buildings using measurements and modelling

    NARCIS (Netherlands)

    Mosquera Losada, J.; Ogink, N.W.M.

    2011-01-01

    Ammonia (NH(3)) emission factors for animal housing systems in the Netherlands are based on measurements using standardised measurement protocols. Both the original Green Label (GL) protocol and the newly developed multi-site sampling protocol are based on year-round sampling periods. The objective

  17. Validation of a protocol for the estimation of three-dimensional body center of mass kinematics in sport.

    Science.gov (United States)

    Mapelli, Andrea; Zago, Matteo; Fusini, Laura; Galante, Domenico; Colombo, Andrea; Sforza, Chiarella

    2014-01-01

    Since it is strictly related to balance and stability control, body center of mass (CoM) kinematics is a relevant quantity in sport surveys. Many methods have been proposed to estimate CoM displacement. Among them, the segmental method appears to be suitable to investigate CoM kinematics in sport: the human body is assumed to be a system of rigid bodies, hence the whole-body CoM is calculated as the weighted average of the CoM of each segment. The number of landmarks represents a crucial choice in the protocol design process: one has to find the proper compromise between accuracy and invasiveness. In this study, using a motion analysis system, a protocol based upon the segmental method is validated, adopting an anatomical model comprising 14 landmarks. Two sets of experiments were conducted. First, our protocol was compared to the ground reaction force (GRF) method, accounted as a standard in CoM estimation. In the second experiment, we investigated the aerial phase typical of many disciplines, comparing our protocol with: (1) an absolute reference, the parabolic regression of the vertical CoM trajectory during the time of flight; and (2) two common approaches to estimate CoM kinematics in gait, known as the sacrum and reconstructed pelvis methods. Recognized accuracy indexes proved that the results obtained were comparable to the GRF method; what is more, during the aerial phases our protocol showed to be significantly more accurate than the two other methods. The protocol assessed can therefore be adopted as a reliable tool for CoM kinematics estimation in further sport research. Copyright © 2013 Elsevier B.V. All rights reserved.
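
    The segmental method summarized above reduces to a mass-weighted average of segment CoM positions. A minimal Python sketch follows; the segment positions and mass fractions in the example are hypothetical, not the study's anthropometric model.

        import numpy as np

        def whole_body_com(segment_coms, mass_fractions):
            """Segmental method: whole-body CoM as the mass-weighted average of the
            segment CoM positions (one 3-D point per body segment per frame)."""
            segment_coms = np.asarray(segment_coms)        # shape (n_segments, 3)
            w = np.asarray(mass_fractions, dtype=float)
            w = w / w.sum()                                # normalise mass fractions
            return w @ segment_coms                        # weighted average, shape (3,)

        # two-segment toy example with hypothetical mass fractions
        print(whole_body_com([[0.0, 0.0, 1.0], [0.0, 0.0, 0.5]], [0.6, 0.4]))  # -> z = 0.8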

  18. Adaptive control of theophylline therapy: importance of blood sampling times.

    Science.gov (United States)

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
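
    Chiou's method mentioned above estimates clearance from two concentrations drawn during a constant-rate infusion; it is commonly written as CL = 2·R0/(C1+C2) + 2·Vd·(C1−C2)/((C1+C2)·(t2−t1)). A small Python sketch with hypothetical theophylline values follows.

        def chiou_clearance(infusion_rate, c1, c2, t1, t2, v_d):
            """Chiou's two-point clearance estimate during a constant-rate infusion:
                CL = 2*R0/(C1 + C2) + 2*Vd*(C1 - C2) / ((C1 + C2)*(t2 - t1))
            Units must be consistent (e.g., mg/h, mg/L, h, L -> CL in L/h)."""
            return (2 * infusion_rate / (c1 + c2)
                    + 2 * v_d * (c1 - c2) / ((c1 + c2) * (t2 - t1)))

        # hypothetical values: 40 mg/h infusion, levels 8 then 10 mg/L, Vd = 35 L
        print(chiou_clearance(40, 8.0, 10.0, t1=2, t2=8, v_d=35))  # clearance in L/h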

  19. Development of a protocol for sampling and analysis of ballast water in Jamaica

    Directory of Open Access Journals (Sweden)

    Achsah A Mitchell

    2014-09-01

    Full Text Available The transfer of ballast water by the international shipping industry has negatively impacted the environment. To design a ballast water sampling and analysis protocol for the area, the ballast water tanks of seven bulk cargo vessels entering a Jamaican port were sampled between January 28, 2010 and August 17, 2010. Vessels originated from five ports and used three main routes, some of which conducted ballast water exchange. Twenty-six preserved and 22 live replicate zooplankton samples were obtained. Abundance and richness were higher than at temperate ports. Exchange did not alter the biotic composition but reduced the abundance. Two of the live sample replicates, containing 31.67 and 16.75 viable individuals m-3, were non-compliant with the International Convention for the Control and Management of Ships’ Ballast Water and Sediments. Approximately 12% of the species identified in the ballast water were present in the waters nearest the port in 1995 and 11% were present in the entire bay in 2005. The protocol designed from this study can be used to aid the establishment of a ballast water management system in the Caribbean or used as a foundation for the development of further protocols.

  20. Evaluation of sample preparation protocols for spider venom profiling by MALDI-TOF MS.

    Science.gov (United States)

    Bočánek, Ondřej; Šedo, Ondrej; Pekár, Stano; Zdráhal, Zbyněk

    2017-07-01

    Spider venoms are highly complex mixtures containing biologically active substances with potential for use in biotechnology or pharmacology. Fingerprinting of venoms by Matrix-Assisted Laser Desorption-Ionization - Time of Flight Mass Spectrometry (MALDI-TOF MS) is a thriving technology, enabling the rapid detection of peptide/protein components that can provide comparative information. In this study, we evaluated the effects of sample preparation procedures on MALDI-TOF mass spectral quality to establish a protocol providing the most reliable analytical outputs. We adopted initial sample preparation conditions from studies already published in this field. Three different MALDI matrixes, three matrix solvents, two sample deposition methods, and different acid concentrations were tested. As a model sample, venom from Brachypelma albopilosa was used. The mass spectra were evaluated on the basis of absolute and relative signal intensities, and signal resolution. By conducting three series of analyses at three weekly intervals, the reproducibility of the mass spectra were assessed as a crucial factor in the selection for optimum conditions. A sample preparation protocol based on the use of an HCCA matrix dissolved in 50% acetonitrile with 2.5% TFA deposited onto the target by the dried-droplet method was found to provide the best results in terms of information yield and repeatability. We propose that this protocol should be followed as a standard procedure, enabling the comparative assessment of MALDI-TOF MS spider venom fingerprints. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A distance limited method for sampling downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams

    2012-01-01

    A new sampling method for down coarse woody debris is proposed based on limiting the perpendicular distance from individual pieces to a randomly chosen sample point. Two approaches are presented that allow different protocols to be used to determine field measurements; estimators for each protocol are also developed. Both protocols are compared via simulation against...

  2. Design unbiased estimation in line intersect sampling using segmented transects

    Science.gov (United States)

    David L.R. Affleck; Timothy G. Gregoire; Harry T. Valentine

    2005-01-01

    In many applications of line intersect sampling, transects consist of multiple, connected segments in a prescribed configuration. The relationship between the transect configuration and the selection probability of a population element is illustrated and a consistent sampling protocol, applicable to populations composed of arbitrarily shaped elements, is proposed. It...

  3. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most...
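
    A per-stream building block for the kind of sample maintenance described above is classic reservoir sampling, which keeps a uniform sample of fixed size from a stream of unknown length in O(k) memory; the communication-efficient coordination across distributed sites is the paper's contribution and is not shown here. A minimal Python sketch:

        import random

        def reservoir_sample(stream, k):
            """Maintain a uniform random sample of size k (without replacement) from a
            stream of unknown length, using O(k) memory (classic Algorithm R)."""
            sample = []
            for i, item in enumerate(stream):
                if i < k:
                    sample.append(item)
                else:
                    j = random.randint(0, i)     # inclusive upper bound
                    if j < k:
                        sample[j] = item
            return sample

        print(reservoir_sample(range(10_000), k=5))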

  4. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
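
    One widely used design-based estimator of the encounter rate variance treats the transect lines as the sampling units; a short Python sketch follows. The specific formula in the docstring (the form often labelled R2 in this literature) is an assumption about which estimator is meant, and the counts and line lengths are hypothetical.

        def encounter_rate_variance(counts, lengths):
            """Design-based estimator of var(n/L) with k lines as sampling units
            (the form often labelled 'R2'):
                var(n/L) = k / (L^2 * (k - 1)) * sum_i l_i^2 * (n_i/l_i - n/L)^2
            counts: detections per line; lengths: line lengths."""
            k = len(counts)
            total_n, total_l = sum(counts), sum(lengths)
            er = total_n / total_l                      # overall encounter rate n/L
            ss = sum(l ** 2 * (n / l - er) ** 2 for n, l in zip(counts, lengths))
            return k * ss / (total_l ** 2 * (k - 1))

        print(encounter_rate_variance([4, 7, 2, 9, 5], [1.0, 2.0, 0.5, 2.5, 1.5]))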

  5. A study on pre-heat conditions in equivalent-dose estimation of holocene loess using single-aliquot regenerative-dose (SAR) protocol

    International Nuclear Information System (INIS)

    Jia Yaofeng; Huang Chunchang; Pang Jiangli; Lu Xinwei; Zhang Xu

    2007-01-01

    Using various combinations of pre-heat and cut-heat temperatures in the equivalent-dose estimation of Holocene loess with a Double-SAR dating protocol, this paper estimated the equivalent doses of several loess samples from the IRSL and post-IR OSL signals, respectively. The results show that the equivalent dose depends on the heating temperatures, particularly the cut-heat temperature, with the equivalent dose increasing as the cut-heat temperature rises. A plateau in the equivalent dose appears at pre-heat temperatures of 200-300 °C and cut-heat temperatures of 200-240 °C; moreover, within this plateau the equivalent doses estimated from the IRSL and post-IR OSL signals are close to each other, which results from the similar directions and small magnitudes of the sensitivity changes of the optically stimulated signals over the measurement cycles at these pre-heat and cut-heat temperatures. This suggests that pre-heat temperatures of 200-300 °C and cut-heat temperatures of 200-240 °C are suitable for dating young Holocene loess samples. (authors)

  6. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
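
    As a rough illustration of the multiplier method described above, the sketch below computes the population size estimate N = M / P and an approximate confidence interval via a delta-method standard error, inflating the variance of P by an assumed design effect for the respondent-driven sampling survey. It treats M as known and is not the authors' sample size procedure; all parameter values are hypothetical.

```python
import math

def multiplier_estimate(M, p_hat, n, deff=2.0, z=1.96):
    """Population size N = M / P with an approximate CI. The variance of p_hat is
    inflated by an assumed RDS design effect (deff); the CI uses a simple
    delta-method approximation and treats M as known."""
    N_hat = M / p_hat
    var_p = deff * p_hat * (1 - p_hat) / n
    se_N = M * math.sqrt(var_p) / p_hat**2      # delta method: |dN/dp| * se(p)
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

# e.g. 600 unique objects distributed, 15% of 500 respondents report receiving one
print(multiplier_estimate(M=600, p_hat=0.15, n=500))
```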

  7. Comparison of Channel Estimation Protocols for Coherent AF Relaying Networks in the Presence of Additive Noise and LO Phase Noise

    Directory of Open Access Journals (Sweden)

    Stefan Berger

    2010-01-01

    Channel estimation protocols for wireless two-hop networks with amplify-and-forward (AF) relays are compared. We consider multiuser relaying networks, where the gain factors are chosen such that the signals from all relays add up coherently at the destinations. While the destinations require channel knowledge in order to decode, our focus lies on the channel estimates that are used to calculate the relay gains. Since knowledge of the compound two-hop channels is generally not sufficient to do this, the protocols considered here measure all single-hop coefficients in the network. We start from the observation that the direction in which the channels are measured determines (1) the number of channel uses required to estimate all coefficients and (2) the need for a global carrier phase reference. Four protocols are identified that differ in the direction in which the first-hop and the second-hop channels are measured. We derive a sensible measure for the accuracy of the channel estimates in the presence of additive noise and phase noise and compare the protocols based on this measure. Finally, we provide a quantitative performance comparison for a simple single-user application example. It is important to note that the results can be used to compare the channel estimation protocols for any two-hop network configuration and gain allocation scheme.

  8. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

    In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002) , Raj (1972) and Raj and Chandhok (1999).

  9. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  10. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods for estimating sample size and testing power for difference tests under a design of one factor with two levels, including the estimation formulas and their realization based on the formulas and on the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents worked examples, which will help researchers implement the repetition principle during the research design phase.
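
    The record above refers to formulas and SAS POWER-procedure realizations; as a simple stand-in, the sketch below applies the standard normal-approximation formula for the per-group sample size of a two-sided, two-group comparison of means. It is a generic textbook formula, not the article's SAS code, and the numbers are illustrative.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided, two-sample
    test of a mean difference delta with common SD sigma."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

print(n_per_group(delta=5.0, sigma=10.0))   # about 63 per group
```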

  11. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Science.gov (United States)

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  12. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
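
    For context, the sketch below draws a linear systematic sample and applies the classical ratio estimator of a population mean using a known auxiliary mean; it does not implement the authors' generalized estimator or their non-response adjustment, and the simulated population is invented for illustration.

```python
import numpy as np

def systematic_sample(N, n, rng):
    """Linear systematic sample of n indices from 0..N-1 with a random start."""
    k = N // n                                 # sampling interval
    start = rng.integers(0, k)
    return np.arange(start, start + n * k, k)[:n]

def ratio_estimator(y_s, x_s, X_bar):
    """Classical ratio estimator of the population mean of y, using the known
    population mean X_bar of an auxiliary variable x (not the generalized
    non-response estimator proposed in the paper)."""
    return y_s.mean() * X_bar / x_s.mean()

rng = np.random.default_rng(1)
x = rng.gamma(4.0, 10.0, size=2000)            # auxiliary variable, known for all units
y = 2.5 * x + rng.normal(0, 10, size=2000)     # study variable, correlated with x
idx = systematic_sample(len(y), 100, rng)
print(ratio_estimator(y[idx], x[idx], x.mean()), y.mean())
```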

  13. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  14. Zoonoses action plan Salmonella monitoring programme: an investigation of the sampling protocol.

    Science.gov (United States)

    Snary, E L; Munday, D K; Arnold, M E; Cook, A J C

    2010-03-01

    The Zoonoses Action Plan (ZAP) Salmonella Programme was established by the British Pig Executive to monitor Salmonella prevalence in quality-assured British pigs at slaughter by testing a sample of pigs with a meat juice enzyme-linked immunosorbent assay for antibodies against group B and C1 Salmonella. Farms were assigned a ZAP level (1 to 3) depending on the monitored prevalence, and ZAP 2 or 3 farms were required to act to reduce the prevalence. The ultimate goal was to reduce the risk of human salmonellosis attributable to British pork. A mathematical model has been developed to describe the ZAP sampling protocol. Results show that the probability of assigning a farm the correct ZAP level was high, except for farms that had a seroprevalence close to the cutoff points between different ZAP levels. Sensitivity analyses identified that the probability of assigning a farm to the correct ZAP level was dependent on the sensitivity and specificity of the test, the number of batches taken to slaughter each quarter, and the number of samples taken per batch. The variability of the predicted seroprevalence was reduced as the number of batches or samples increased and, away from the cutoff points, the probability of being assigned the correct ZAP level increased as the number of batches or samples increased. In summary, the model described here provided invaluable insight into the ZAP sampling protocol. Further work is required to understand the impact of the programme on Salmonella infection on British pig farms and therefore on human health.
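
    The sampling behaviour described above can be illustrated with a small Monte Carlo sketch: given an assumed test sensitivity and specificity, number of batches, and samples per batch, it estimates the probability that the observed seroprevalence falls in the same band as the true prevalence. The cutoffs and parameter values below are hypothetical, not the actual ZAP thresholds or the authors' model.

```python
import numpy as np

def p_correct_level(true_prev, se=0.8, sp=0.95, batches=4, per_batch=15,
                    cutoffs=(0.10, 0.25), sims=20_000, rng=None):
    """Monte Carlo sketch: probability that the observed seroprevalence over a
    quarter falls in the same band as the true prevalence. Cutoffs, sensitivity,
    specificity and sample numbers are illustrative only."""
    rng = rng or np.random.default_rng(0)
    n = batches * per_batch
    p_pos = true_prev * se + (1 - true_prev) * (1 - sp)   # apparent prevalence
    obs = rng.binomial(n, p_pos, size=sims) / n
    band = lambda p: int(p > cutoffs[0]) + int(p > cutoffs[1])
    target = band(true_prev)
    return np.mean([band(o) == target for o in obs])

# far below, just above, and well above the first cutoff
print(p_correct_level(0.05), p_correct_level(0.11), p_correct_level(0.40))
```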

  15. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory for parameter estimation with small samples. The Bayes Bootstrap method is commonly used in engineering practice, although it has its own limitations. In this article an improvement to the Bayes Bootstrap method is given: the method extends the number of samples by numerical simulation without changing the circumstances of the original small sample, and the new method can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is used to model specific small-sample problems, and the effectiveness and practicability of the improved Bootstrap method are demonstrated.
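
    For comparison with the improved method described above, the sketch below implements only the plain percentile bootstrap interval that such methods build on; it is not the article's improved Bayes Bootstrap, and the sample values are invented.

```python
import numpy as np

def bootstrap_ci(sample, stat=np.mean, level=0.95, B=10_000, rng=None):
    """Plain percentile-bootstrap interval (the baseline method, not the
    improved Bayes Bootstrap proposed in the article)."""
    rng = rng or np.random.default_rng(42)
    sample = np.asarray(sample, float)
    boot = np.array([stat(rng.choice(sample, size=len(sample), replace=True))
                     for _ in range(B)])
    lo, hi = np.percentile(boot, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

print(bootstrap_ci([9.8, 10.4, 9.6, 10.9, 10.1, 9.7]))   # small sample of 6
```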

  16. A geostatistical estimation of zinc grade in bore-core samples

    International Nuclear Information System (INIS)

    Starzec, A.

    1987-01-01

    Possibilities and preliminary results of geostatistical interpretation of the XRF determination of zinc in bore-core samples are considered. For the spherical model of the variogram, the estimation variance of the grade in a disk-shaped sample (estimated from the grade of the circumference sample) is calculated. Variograms of zinc grade in core samples are presented and examples of the grade estimation are discussed. 4 refs., 7 figs., 1 tab. (author)

  17. Estimating Return on Investment in Translational Research: Methods and Protocols

    Science.gov (United States)

    Trochim, William; Dilts, David M.; Kirk, Rosalind

    2014-01-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706

  18. Estimating return on investment in translational research: methods and protocols.

    Science.gov (United States)

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind

    2013-12-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

  19. Poisson sampling - The adjusted and unadjusted estimator revisited

    Science.gov (United States)

    Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas

    1998-01-01

    The prevailing assumption, that for Poisson sampling the adjusted estimator Ŷ_a is always substantially more efficient than the unadjusted estimator Ŷ_u, is shown to be incorrect. Some well known theoretical results are applicable since Ŷ_a is a ratio-of-means estimator and Ŷ_u a simple unbiased estimator...

  20. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous

  1. Desorption isotherms of cementitious materials: study of an accelerated protocol and estimation of RVE

    International Nuclear Information System (INIS)

    Wu, Qier

    2014-01-01

    In the framework of French radioactive waste management and storage, the evaluation and prediction of the durability of concrete structures requires knowledge of the desorption isotherm of concrete. The aim of the present study is to develop an accelerated experimental method to obtain the desorption isotherm of cementitious materials more quickly and to estimate the Representative Volume Element (RVE) size related to the desorption isotherm of concrete. In order to ensure that experimental results can be statistically considered representative, a great number of sliced samples of cementitious materials with three different thicknesses (1 mm, 2 mm and 3 mm) have been de-saturated. The effect of slice thickness and saturation condition on the mass variation kinetics and the desorption isotherms is analyzed. The influence of the aggregate distribution on the water content and the water saturation degree is also analyzed. A method based on statistical analysis of water content and water saturation degree is proposed to estimate the RVE for the water desorption experiment on concrete. The evolution of shrinkage with relative humidity is also followed for each material during the water desorption experiment. A protocol of rapid desaturation/re-saturation cycles is applied and shows the existence of hysteresis between desorption and adsorption. (author)

  2. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, C. M.; Buchhave, P.; K. George, W.

    algorithms: sample-and-hold and the direct spectral estimator without residence time weighting. The computer-generated signal is a Poisson process with a sample rate proportional to velocity magnitude and consists of well-defined frequency content, which makes bias easy to spot. The idea...

  3. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  4. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, their realization based on the formulas and on the POWER procedure of SAS software, and elaborated them with examples, which will help researchers implement the repetition principle.

  5. Protocols for the analytical characterization of therapeutic monoclonal antibodies. II - Enzymatic and chemical sample preparation.

    Science.gov (United States)

    Bobaly, Balazs; D'Atri, Valentina; Goyon, Alexandre; Colas, Olivier; Beck, Alain; Fekete, Szabolcs; Guillarme, Davy

    2017-08-15

    The analytical characterization of therapeutic monoclonal antibodies and related proteins usually incorporates various sample preparation methodologies. Indeed, quantitative and qualitative information can be enhanced by simplifying the sample, thanks to the removal of sources of heterogeneity (e.g. N-glycans) and/or by decreasing the molecular size of the tested protein by enzymatic or chemical fragmentation. These approaches make the sample more suitable for chromatographic and mass spectrometric analysis. Structural elucidation and quality control (QC) analysis of biopharmaceutics are usually performed at intact, subunit and peptide levels. In this paper, general sample preparation approaches used to attain peptide, subunit and glycan level analysis are overviewed. Protocols are described to perform tryptic proteolysis, IdeS and papain digestion, reduction as well as deglycosylation by PNGase F and EndoS2 enzymes. Both historical and modern sample preparation methods were compared and evaluated using rituximab and trastuzumab, two reference therapeutic mAb products approved by Food and Drug Administration (FDA) and European Medicines Agency (EMA). The described protocols may help analysts to develop sample preparation methods in the field of therapeutic protein analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  7. Protocol for sampling and analysis of bone specimens

    International Nuclear Information System (INIS)

    Aras, N.K.

    2000-01-01

    The iliac crest of the hip bone was chosen as the most suitable sampling site for several reasons: local variation in elemental concentration along the iliac crest is minimal; iliac crest biopsies are commonly taken clinically; the cortical part of the sample is small (∼2 mm) and can easily be separated from the trabecular bone; and the use of the trabecular part of the iliac crest for trace element analysis has the advantage of rapidly reflecting changes in bone composition due to external parameters, including medication. Biopsy studies, although in some ways more difficult than autopsy studies because of the need to obtain the informed consent of the subjects, are potentially more useful: many problems of postmortem migration of elements can be avoided, and reliable dietary and other data can be collected simultaneously. Subjects should be selected among patients undergoing orthopedic surgery for any reason other than osteoporosis, and an established protocol should be followed to obtain bone biopsies. Patients undergoing surgery should fill in the 'Osteoporosis Project Questionnaire Form', which includes information on lifestyle variables, dietary intakes, the reason for surgery, etc. If possible, bone mineral density (BMD) should be measured prior to removal of the biopsy sample; however, it may not be possible to obtain BMD results for all subjects because of the difficulty of DEXA measurement after an accident.

  8. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights.

    Directory of Open Access Journals (Sweden)

    Patrick Habecker

    Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates, via traditional survey tools such as telephone or mail surveys, for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations), by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation "trimming" to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
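
    The starting point for the improvements described above is the classic scale-up estimator, sketched below with optional survey weights: the hidden-population size is the total population times the ratio of reported hidden-population contacts to reported personal network sizes. This is the traditional estimator, not the revised one proposed in the article, and the toy numbers are invented.

```python
import numpy as np

def nsum_estimate(m, c, N_total, weights=None):
    """Classic network scale-up estimate of a hidden subpopulation:
    N_hidden ~= N_total * sum(m_i) / sum(c_i), where m_i is the number of
    hidden-population members respondent i reports knowing and c_i is
    respondent i's personal network size. Optional survey weights."""
    m, c = np.asarray(m, float), np.asarray(c, float)
    w = np.ones_like(m) if weights is None else np.asarray(weights, float)
    return N_total * np.sum(w * m) / np.sum(w * c)

# toy data: 5 respondents in a state of 1.9 million adults
print(nsum_estimate(m=[1, 0, 2, 0, 1], c=[300, 250, 400, 150, 500], N_total=1_900_000))
```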

  9. Effect of sampling schedule on pharmacokinetic parameter estimates of promethazine in astronauts

    Science.gov (United States)

    Boyd, Jason L.; Wang, Zuwei; Putcha, Lakshmi

    2005-08-01

    Six astronauts on the Shuttle Transport System (STS) participated in an investigation of the pharmacokinetics of promethazine (PMZ), a medication used for the treatment of space motion sickness (SMS) during flight. Each crewmember completed the protocol once during flight and repeated it thirty days after returning to Earth. Saliva samples were collected at scheduled times for 72 h after PMZ administration; owing to schedule constraints in flight, samples were collected more frequently on the ground than during flight. PMZ concentrations in saliva were determined by a liquid chromatographic/mass spectrometric (LC-MS) assay, and pharmacokinetic parameters (PKPs) were calculated using the actual flight and ground-based data sets and using a ground sampling schedule time-matched to that used during flight. Volume of distribution (Vc) and clearance (Cls) decreased during flight compared with the time-matched ground data set; however, Cls and Vc estimates were higher for all subjects when partial ground data sets were used for analysis. Area under the curve (AUC) normalized by administered dose was similar for flight and partial ground data; however, AUC was significantly lower using time-matched sampling than with the full data set on the ground. Half-life (t1/2) was longest during flight, shorter with the time-matched sampling schedule on the ground, and shortest when the complete ground data set was used. Maximum concentration (Cmax) and time to Cmax (tmax), parameters of drug absorption, showed a similar trend: lowest and longest, respectively, during flight; lower with time-matched ground data; and highest and shortest with the full ground data.
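
    The parameters discussed above (AUC, Cmax, tmax, t1/2) can be computed with standard non-compartmental formulas; the sketch below uses the linear trapezoidal rule for AUC and a log-linear fit to the terminal points for half-life. The concentration-time values are hypothetical and the code is not the study's analysis pipeline.

```python
import numpy as np

def nca(t, c, n_terminal=3):
    """Basic non-compartmental PK summaries from concentration-time data:
    AUC(0-last) by the linear trapezoidal rule, Cmax, tmax, and terminal
    half-life from a log-linear fit to the last few points (illustrative only)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    auc = np.trapz(c, t)
    cmax, tmax = c.max(), t[np.argmax(c)]
    slope = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)[0]
    t_half = np.log(2) / -slope
    return {"AUC": auc, "Cmax": cmax, "tmax": tmax, "t_half": t_half}

# hypothetical saliva concentrations (ng/mL) at 0.5-72 h post-dose
print(nca([0.5, 1, 2, 4, 8, 24, 48, 72],
          [2.0, 5.5, 8.1, 6.9, 4.8, 2.1, 0.9, 0.4]))
```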

  10. Validation of a novel protocol for calculating estimated energy requirements and average daily physical activity ratio for the US population: 2005-2006.

    Science.gov (United States)

    Archer, Edward; Hand, Gregory A; Hébert, James R; Lau, Erica Y; Wang, Xuewen; Shook, Robin P; Fayad, Raja; Lavie, Carl J; Blair, Steven N

    2013-12-01

    To validate the PAR protocol, a novel method for calculating population-level estimated energy requirements (EERs) and average physical activity ratio (APAR), in a nationally representative sample of US adults. Estimates of EER and APAR values were calculated via a factorial equation from a nationally representative sample of 2597 adults aged 20 to 74 years (US National Health and Nutrition Examination Survey; data collected between January 1, 2005, and December 31, 2006). Validation of the PAR protocol-derived EER (EER(PAR)) values was performed via comparison with values from the Institute of Medicine EER equations (EER(IOM)). The correlation between EER(PAR) and EER(IOM) was high (0.98; P [...] men to 148 kcal/d (5.7% higher) in obese women. The 2005-2006 EERs for the US population were 2940 kcal/d for men and 2275 kcal/d for women and ranged from 3230 kcal/d in obese (BMI ≥30) men to 2026 kcal/d in normal-weight (BMI [...]) women. There were significant inverse relationships between APAR and both obesity and age. For men and women, the APAR values were 1.53 and 1.52, respectively. Obese men and women had lower APAR values than normal-weight individuals (P=.023 and P=.015, respectively) [corrected], and younger individuals had higher APAR values than older individuals [...] physical activity and health. Copyright © 2013 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  11. Estimation of creatinine in Urine sample by Jaffe's method

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Arunkumar, Suja; Sawant, Pramilla D.; Rao, B.B.

    2012-01-01

    In-vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. A truly representative bioassay sample is one containing all the voids collected during a 24-h period; however, as this is technically difficult, overnight urine samples collected by the workers are analyzed. These overnight urine samples are collected over 10-16 h; however, in the absence of any specific information, a 12 h duration is assumed and the observed results are corrected accordingly to obtain the daily excretion rate. To reduce the uncertainty due to the unknown duration of sample collection, the IAEA has recommended two methods, viz., measurement of specific gravity and of the creatinine excretion rate in the urine sample. Creatinine is a final metabolic product of creatine phosphate in the body and is excreted at a steady rate by people with normally functioning kidneys. It is, therefore, often used as a normalization factor for estimating the duration of sample collection. The present study reports the chemical procedure standardized and its application for the estimation of creatinine in urine samples collected from occupational workers. The chemical procedure for estimation of creatinine in bioassay samples was standardized and applied successfully for its estimation in samples collected from the workers. The creatinine excretion rate observed for these workers is lower than values reported in the literature. Further work is in progress to generate a data bank of creatinine excretion rates for most of the workers and also to study the variability in the creatinine coefficient for the same individual based on the analysis of samples collected for different durations.
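
    As a simple illustration of creatinine normalization, the sketch below scales an analyte measured in a partial urine sample to an estimated daily excretion using an assumed reference creatinine excretion rate; the reference value and the sample numbers are placeholders, not results from this work.

```python
def normalize_to_daily(analyte_in_sample, creatinine_in_sample_g,
                       daily_creatinine_g=1.7):
    """Scale the analyte measured in a partial (e.g. overnight) urine sample to an
    estimated 24-h excretion, using creatinine as the normalization factor.
    The reference daily creatinine excretion (1.7 g/day here) is an assumed
    typical value; a worker-specific value from a data bank would be preferable."""
    fraction_of_day = creatinine_in_sample_g / daily_creatinine_g
    return analyte_in_sample / fraction_of_day

# e.g. 0.8 mBq of an analyte and 0.85 g creatinine in an overnight sample
print(normalize_to_daily(0.8, 0.85))   # ~1.6 mBq/day equivalent
```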

  12. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...

  13. Comparison of Four Estimators under sampling without Replacement

    African Journals Online (AJOL)

    The results were obtained using a program written in Microsoft Visual C++ programming language. It was observed that the two-stage sampling under unequal probabilities without replacement is always better than the other three estimators considered. Keywords: Unequal probability sampling, two-stage sampling, ...

  14. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
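
    The sketch below shows approximations of the kind evaluated in this literature for two of the scenarios: mean and SD from (minimum, median, maximum, n) and from the quartiles. The exact constants should be verified against the paper before use in a meta-analysis; the example inputs are invented.

```python
from scipy.stats import norm

def mean_sd_from_min_med_max(a, m, b, n):
    """Approximate sample mean and SD from (min, median, max, n).
    Formulas of the kind proposed in this line of work; verify the constants
    against the paper before relying on them."""
    mean = (a + 2 * m + b) / 4
    sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

def mean_sd_from_quartiles(q1, m, q3, n):
    """Approximate sample mean and SD from (Q1, median, Q3, n)."""
    mean = (q1 + m + q3) / 3
    sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd

print(mean_sd_from_min_med_max(2.0, 10.0, 25.0, 50))
print(mean_sd_from_quartiles(7.0, 10.0, 14.0, 50))
```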

  15. Assessing respiratory pathogen communities in bighorn sheep populations: Sampling realities, challenges, and improvements.

    Directory of Open Access Journals (Sweden)

    Carson J Butler

    Respiratory disease has been a persistent problem for the recovery of bighorn sheep (Ovis canadensis), but has uncertain etiology. The disease has been attributed to several bacterial pathogens, including Mycoplasma ovipneumoniae and Pasteurellaceae pathogens belonging to the Mannheimia, Bibersteinia, and Pasteurella genera. We estimated detection probability for these pathogens using protocols based on diagnostic tests that are offered by a fee-for-service laboratory and protocols based on tests that are not. We conducted 2861 diagnostic tests on swab samples collected from 476 bighorn sheep captured across Montana and Wyoming to gain inferences regarding detection probability, pathogen prevalence, and the power of different sampling methodologies to detect pathogens in bighorn sheep populations. Estimated detection probability using fee-for-service protocols was less than 0.50 for all Pasteurellaceae and 0.73 for Mycoplasma ovipneumoniae. Non-fee-for-service Pasteurellaceae protocols had higher detection probabilities, but no single protocol increased the detection probability of all Pasteurellaceae pathogens to greater than 0.50. At least one protocol resulted in an estimated detection probability of 0.80 for each pathogen except Mannheimia haemolytica, for which the highest detection probability was 0.45. In general, the power to detect Pasteurellaceae pathogens at low prevalence in populations was low unless many animals were sampled or replicate samples were collected per animal. Imperfect detection also resulted in low precision when estimating prevalence for any pathogen. Low and variable detection probabilities for respiratory pathogens using live-sampling protocols may lead to inaccurate conclusions regarding pathogen community dynamics and causes of bighorn sheep respiratory disease epizootics. We recommend that agencies collect multiple samples per animal for Pasteurellaceae detection, and one sample for Mycoplasma ovipneumoniae detection from
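
    The power statements above follow from simple probability arithmetic; assuming independent swabs, the sketch below gives the chance of detecting a pathogen at least once as a function of prevalence, per-swab detection probability, number of animals sampled, and replicate swabs per animal. The parameter values are illustrative only.

```python
def p_detect_in_population(prevalence, detection_prob, n_animals, swabs_per_animal=1):
    """Probability of detecting a pathogen at least once when sampling n_animals,
    each swabbed swabs_per_animal times, given true prevalence and a per-swab
    detection probability (assumes independence between swabs; illustrative only)."""
    p_miss_animal = 1 - prevalence * (1 - (1 - detection_prob) ** swabs_per_animal)
    return 1 - p_miss_animal ** n_animals

# low per-swab detection probability (0.45) and 10% prevalence
print(p_detect_in_population(0.10, 0.45, n_animals=30, swabs_per_animal=1))
print(p_detect_in_population(0.10, 0.45, n_animals=30, swabs_per_animal=3))
```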

  16. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    Science.gov (United States)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been devoted to monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased, equal-variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m3 for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. [Figure: comparison of proportions and variance across sample intervals using bootstrap sampling to achieve equal n; each trial was sampled with n=100, repeated 10,000 times and averaged, and all trials were then averaged to obtain an estimate for each sample interval; dashed lines represent values from the one minute dataset.]

  17. A protocol for measuring spatial variables in soft-sediment tide pools

    Directory of Open Access Journals (Sweden)

    Marina R. Brenha-Nunes

    2016-01-01

    We present a protocol for measuring spatial variables in large (>50 m2) soft-sediment tide pools. Secondarily, we present the fish capture efficiency of a sampling protocol that is based on such spatial variables to calculate relative abundances. The area of the pool is estimated by summing the areas of basic geometric forms; the depth, by taking representative measurements of the depth variability of each pool sector, previously determined according to its perimeter; and the volume, by considering the pool as a prism. These procedures are a trade-off between the acquisition of reliable estimates and the minimization of both the cost of operation and the time spent in the field. The fish sampling protocol is based on two consecutive stages: (1) two people search for fishes under structures (e.g., rocks and litter) in the pool and capture them with hand seines; (2) these structures are removed and then a beach seine is hauled over the whole pool. Our method is cheaper than others and fast to operate within the time available at low tide. The method to sample fish is quite efficient, resulting in a capture efficiency of 89%.
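
    A minimal sketch of the spatial bookkeeping described above: the pool area is approximated by summing basic geometric forms and the volume by treating the pool as a prism (area times the mean of the representative depths). The shapes and depths below are invented examples, not data from the study.

```python
import math

def pool_area(shapes):
    """Sum the areas of basic geometric forms describing the pool outline.
    Each shape is ('rect', w, h), ('circle', r) or ('triangle', base, height)."""
    area = 0.0
    for s in shapes:
        if s[0] == "rect":
            area += s[1] * s[2]
        elif s[0] == "circle":
            area += math.pi * s[1] ** 2
        elif s[0] == "triangle":
            area += 0.5 * s[1] * s[2]
    return area

def pool_volume(area, depths):
    """Treat the pool as a prism: area times the mean of representative depths."""
    return area * (sum(depths) / len(depths))

a = pool_area([("rect", 8, 5), ("triangle", 4, 3), ("circle", 1.5)])
print(a, pool_volume(a, [0.25, 0.30, 0.20, 0.35]))   # m2, m3
```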

  18. Effects of systematic sampling on satellite estimates of deforestation rates

    International Nuclear Information System (INIS)

    Steininger, M K; Godoy, F; Harper, G

    2009-01-01

    Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the

  19. ISS protocol for EPR tooth dosimetry

    International Nuclear Information System (INIS)

    Onori, S.; Aragno, D.; Fattibene, P.; Petetti, E.; Pressello, M.C.

    2000-01-01

    The accuracy of Electron Paramagnetic Resonance (EPR) dose reconstruction with tooth enamel is affected by sample preparation, dosimetric signal amplitude evaluation and unknown dose estimation. Worldwide efforts in the field of EPR dose reconstruction with tooth enamel are focused on the optimization of the three mentioned steps in dose assessment. In the present work, the protocol implemented at ISS in the framework of the European Community Nuclear Fission Safety project 'Dose Reconstruction' is presented. A combined mechanical-chemical procedure is used for the preparation of ground enamel samples. The signal intensity evaluation is carried out with a powder spectra simulation program. Finally, the unknown dose is evaluated individually for each sample with the additive dose method: the unknown dose is obtained by subtracting a mean native dose from the back-extrapolated dose. As an example of the capability of the ISS protocol in unknown dose evaluation, the results obtained in the framework of the 2nd International Intercomparison on EPR tooth enamel dosimetry are reported.
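
    The additive dose method mentioned above back-extrapolates a linear fit of EPR signal versus added laboratory dose; a minimal sketch, with hypothetical signal values and an assumed mean native dose, is given below. It is not the ISS spectrum-simulation software.

```python
import numpy as np

def additive_dose_estimate(added_doses_gy, signals, native_dose_gy=0.0):
    """Additive-dose back-extrapolation: fit signal = a*dose + b, take the
    accumulated dose as the magnitude of the x-intercept (b/a, signal
    extrapolated to zero), then subtract an assumed mean native dose.
    Numbers below are hypothetical, for illustration only."""
    a, b = np.polyfit(added_doses_gy, signals, 1)
    back_extrapolated = b / a        # magnitude of the x-intercept
    return back_extrapolated - native_dose_gy

doses = [0.0, 0.5, 1.0, 2.0, 4.0]     # added laboratory doses (Gy)
sig = [1.1, 1.6, 2.1, 3.2, 5.3]       # EPR signal amplitudes (arbitrary units)
print(additive_dose_estimate(doses, sig, native_dose_gy=0.2))
```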

  20. Comparison of Different Sample Preparation Protocols Reveals Lysis Buffer-Specific Extraction Biases in Gram-Negative Bacteria and Human Cells.

    Science.gov (United States)

    Glatter, Timo; Ahrné, Erik; Schmidt, Alexander

    2015-11-06

    We evaluated different in-solution and FASP-based sample preparation strategies for absolute protein quantification. Label-free quantification (LFQ) was employed to compare different sample preparation strategies in the bacterium Pseudomonas aeruginosa and human embryonic kidney cells (HEK), and organism-specific differences in general performance and enrichment of specific protein classes were noted. The original FASP protocol globally enriched for most proteins in the bacterial sample, whereas the sodium deoxycholate in-solution strategy was more efficient with HEK cells. Although detergents were found to be highly suited for global proteome analysis, higher intensities were obtained for high-abundant nucleic acid-associated protein complexes, like the ribosome and histone proteins, using guanidine hydrochloride. Importantly, we show for the first time that the observable total proteome mass of a sample strongly depends on the sample preparation protocol, with some protocols resulting in a significant underestimation of protein mass due to incomplete protein extraction of biased protein groups. Furthermore, we demonstrate that some of the observed abundance biases can be overcome by incorporating a nuclease treatment step or, alternatively, a correction factor for complementary sample preparation approaches.

  1. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    Science.gov (United States)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.

  2. A Tool for Estimating Variability in Wood Preservative Treatment Retention

    Science.gov (United States)

    Patricia K. Lebow; Adam M. Taylor; Timothy M. Young

    2015-01-01

    Composite sampling is standard practice for evaluation of preservative retention levels in preservative-treated wood. Current protocols provide an average retention value but no estimate of uncertainty. Here we describe a statistical method for calculating uncertainty estimates using the standard sampling regime with minimal additional chemical analysis. This tool can...

  3. Global warming potential estimates for the C1-C3 hydrochlorofluorocarbons (HCFCs) included in the Kigali Amendment to the Montreal Protocol

    Science.gov (United States)

    Papanastasiou, Dimitrios K.; Beltrone, Allison; Marshall, Paul; Burkholder, James B.

    2018-05-01

    Hydrochlorofluorocarbons (HCFCs) are ozone depleting substances and potent greenhouse gases that are controlled under the Montreal Protocol. However, the majority of the 274 HCFCs included in Annex C of the protocol do not have reported global warming potentials (GWPs) which are used to guide the phaseout of HCFCs and the future phase down of hydrofluorocarbons (HFCs). In this study, GWPs for all C1-C3 HCFCs included in Annex C are reported based on estimated atmospheric lifetimes and theoretical methods used to calculate infrared absorption spectra. Atmospheric lifetimes were estimated from a structure activity relationship (SAR) for OH radical reactivity and estimated O(1D) reactivity and UV photolysis loss processes. The C1-C3 HCFCs display a wide range of lifetimes (0.3 to 62 years) and GWPs (5 to 5330, 100-year time horizon) dependent on their molecular structure and the H-atom content of the individual HCFC. The results from this study provide estimated policy-relevant GWP metrics for the HCFCs included in the Montreal Protocol in the absence of experimentally derived metrics.

  4. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvest plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  5. MPLEx: a Robust and Universal Protocol for Single-Sample Integrative Proteomic, Metabolomic, and Lipidomic Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.; Burnum-Johnson, Kristin E.; Kim, Young-Mo; Kyle, Jennifer E.; Matzke, Melissa M.; Shukla, Anil K.; Chu, Rosalie K.; Schepmoes, Athena A.; Jacobs, Jon M.; Baric, Ralph S.; Webb-Robertson, Bobbie-Jo; Smith, Richard D.; Metz, Thomas O.; Chia, Nicholas

    2016-05-03

    Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical).

    IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated

  6. Devices used by automated milking systems are similarly accurate in estimating milk yield and in collecting a representative milk sample compared with devices used by farms with conventional milk recording

    NARCIS (Netherlands)

    Kamphuis, Claudia; Dela Rue, B.; Turner, S.A.; Petch, S.

    2015-01-01

    Information on accuracy of milk-sampling devices used on farms with automated milking systems (AMS) is essential for development of milk recording protocols. The hypotheses of this study were (1) devices used by AMS units are similarly accurate in estimating milk yield and in collecting

  7. Estimating abundance of mountain lions from unstructured spatial sampling

    Science.gov (United States)

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods for estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and
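
    Spatially explicit capture–recapture models of this kind typically relate detection probability to the distance between an animal's activity centre and a detector; the sketch below shows the standard half-normal encounter model in generic form, with illustrative parameter values rather than estimates from the study.

```python
import numpy as np

def halfnormal_detection(d_km, p0=0.2, sigma_km=3.0):
    """Half-normal encounter model commonly used in spatially explicit
    capture-recapture: detection probability declines with distance between
    an activity centre and a detector or search location. p0 and sigma_km
    are illustrative values, not estimates from the mountain lion study."""
    return p0 * np.exp(-d_km ** 2 / (2.0 * sigma_km ** 2))

distances = np.array([0.0, 1.0, 3.0, 6.0, 10.0])  # km from activity centre
print(halfnormal_detection(distances))
```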

  8. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    Science.gov (United States)

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. Proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. Results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
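
    A toy Monte Carlo sketch of the bias discussed above: under a curtailed rule, sampling stops as soon as the classification is determined, and the naive proportion of successes among samples drawn is biased. The sample size of 60 and decision rule of 33 echo the designs mentioned in the abstract; the stopping rule itself is a simplification, not the authors' estimators.

```python
import numpy as np

rng = np.random.default_rng(1)

def naive_curtailed_estimate(p_true, n=60, d=33, reps=20000):
    """Stop as soon as either d successes ('high coverage' classification) or
    n - d + 1 failures ('low coverage' classification) are reached, then use
    the naive estimate successes / samples drawn. Simplified sketch only."""
    estimates = []
    for _ in range(reps):
        succ = fail = 0
        while succ < d and fail < n - d + 1:
            if rng.random() < p_true:
                succ += 1
            else:
                fail += 1
        estimates.append(succ / (succ + fail))
    return float(np.mean(estimates))

p = 0.70
print(f"true coverage {p:.2f}, mean naive curtailed estimate {naive_curtailed_estimate(p):.3f}")
```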

  9. One Sample, One Shot - Evaluation of sample preparation protocols for the mass spectrometric proteome analysis of human bile fluid without extensive fractionation.

    Science.gov (United States)

    Megger, Dominik A; Padden, Juliet; Rosowski, Kristin; Uszkoreit, Julian; Bracht, Thilo; Eisenacher, Martin; Gerges, Christian; Neuhaus, Horst; Schumacher, Brigitte; Schlaak, Jörg F; Sitek, Barbara

    2017-02-10

    The proteome analysis of bile fluid represents a promising strategy to identify biomarker candidates for various diseases of the hepatobiliary system. However, to obtain substantive results in biomarker discovery studies, large patient cohorts necessarily need to be analyzed. Consequently, this would lead to an unmanageable number of samples to be analyzed if sample preparation protocols with extensive fractionation methods are applied. Hence, the performance of simple workflows allowing for "one sample, one shot" experiments has been evaluated in this study. In detail, sixteen different protocols involving modifications at the stages of desalting, delipidation, deglycosylation and tryptic digestion have been examined. Each method has been individually evaluated regarding various performance criteria and comparative analyses have been conducted to uncover possible complementarities. Here, the best performance in terms of proteome coverage was observed for a combination of acetone precipitation with in-gel digestion. Finally, a mapping of all obtained protein identifications with putative biomarkers for hepatocellular carcinoma (HCC) and cholangiocellular carcinoma (CCC) revealed several proteins easily detectable in bile fluid. These results can build the basis for future studies with large and well-defined patient cohorts in a more disease-related context. Human bile fluid is a proximal body fluid and supposed to be a potential source of disease markers. However, due to its biochemical composition, the proteome analysis of bile fluid still represents a challenging task and is therefore mostly conducted using extensive fractionation procedures. This in turn leads to a high number of mass spectrometric measurements for one biological sample. Considering the fact that in order to overcome the biological variability a high number of biological samples needs to be analyzed in biomarker discovery studies, this leads to the dilemma of an unmanageable number of

  10. Cost estimation of Kyoto Protocol

    International Nuclear Information System (INIS)

    Di Giulio, Enzo

    2005-01-01

    This article reflects on important aspects of estimating the costs of implementing the Kyoto Protocol. An evaluation of the main models highlights possible impacts on the economies involved. A key role in determining the cost is played by the assumptions made about emissions trading, CDM-JI projects, and the political capacity to implement measures at negative or zero cost [it

  11. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. Firstly, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than several known estimators, including the Gupta and Shabbir (2008) estimator.
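
    For readers unfamiliar with the baseline that such estimators improve upon, the classical ratio estimator of the population mean under simple random sampling can be written in a few lines; the population below is simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population: auxiliary variable x (population mean assumed known)
# and study variable y, roughly proportional to x.
N = 1000
x = rng.uniform(10, 50, N)
y = 3.0 * x + rng.normal(0, 5, N)
X_bar = x.mean()  # known population mean of the auxiliary variable

idx = rng.choice(N, size=50, replace=False)  # simple random sample without replacement
y_bar, x_bar = y[idx].mean(), x[idx].mean()

ratio_estimate = y_bar * (X_bar / x_bar)  # classical ratio estimator of the mean of y
print(ratio_estimate, y.mean())
```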

  12. The UK Biobank sample handling and storage protocol for the collection, processing and archiving of human blood and urine.

    Science.gov (United States)

    Elliott, Paul; Peakman, Tim C

    2008-04-01

    UK Biobank is a large prospective study in the UK to investigate the role of genetic factors, environmental exposures and lifestyle in the causes of major diseases of late and middle age. Extensive data and biological samples are being collected from 500,000 participants aged between 40 and 69 years. The biological samples that are collected and how they are processed and stored will have a major impact on the future scientific usefulness of the UK Biobank resource. The aim of the UK Biobank sample handling and storage protocol is to specify methods for the collection and storage of participant samples that give maximum scientific return within the available budget. Processing or storage methods that, as far as can be predicted, will preclude current or future assays have been avoided. The protocol was developed through a review of the literature on sample handling and processing, wide consultation within the academic community and peer review. Protocol development addressed which samples should be collected, how and when they should be processed and how the processed samples should be stored to ensure their long-term integrity. The recommended protocol was extensively tested in a series of validation studies. UK Biobank collects about 45 ml blood and 9 ml of urine with minimal local processing from each participant using the vacutainer system. A variety of preservatives, anti-coagulants and clot accelerators is used appropriate to the expected end use of the samples. Collection of other material (hair, nails, saliva and faeces) was also considered but rejected for the full cohort. Blood and urine samples from participants are transported overnight by commercial courier to a central laboratory where they are processed and aliquots of urine, plasma, serum, white cells and red cells stored in ultra-low temperature archives. Aliquots of whole blood are also stored for potential future production of immortalized cell lines. A standard panel of haematology assays is

  13. Comparison of sampling techniques for Bayesian parameter estimation

    Science.gov (United States)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
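
    To make the first of the compared samplers concrete, here is a minimal random-walk Metropolis-Hastings sketch on a one-dimensional Gaussian target; the step size and chain length are arbitrary illustration values, not tuned settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_post(theta):
    # Toy Gaussian log-posterior (mean 2, sd 1) standing in for a real likelihood.
    return -0.5 * (theta - 2.0) ** 2

def metropolis_hastings(n_steps=20000, step=0.8):
    chain = np.empty(n_steps)
    theta = 0.0
    for i in range(n_steps):
        proposal = theta + step * rng.normal()  # symmetric random-walk proposal
        if np.log(rng.random()) < log_post(proposal) - log_post(theta):
            theta = proposal  # accept; otherwise keep the current state
        chain[i] = theta
    return chain

chain = metropolis_hastings()
print(chain[5000:].mean(), chain[5000:].std())  # posterior mean and sd after burn-in
```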

  14. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  15. Bayesian Simultaneous Estimation for Means in k Sample Problems

    OpenAIRE

    Imai, Ryo; Kubokawa, Tatsuya; Ghosh, Malay

    2017-01-01

    This paper is concerned with the simultaneous estimation of k population means when one suspects that the k means are nearly equal. As an alternative to the preliminary test estimator based on the test statistics for testing hypothesis of equal means, we derive Bayesian and minimax estimators which shrink individual sample means toward a pooled mean estimator given under the hypothesis. Interestingly, it is shown that both the preliminary test estimator and the Bayesian minimax shrinkage esti...

  16. Turbidity-controlled sampling for suspended sediment load estimation

    Science.gov (United States)

    Jack Lewis

    2003-01-01

    Abstract - Automated data collection is essential to effectively measure suspended sediment loads in storm events, particularly in small basins. Continuous turbidity measurements can be used, along with discharge, in an automated system that makes real-time sampling decisions to facilitate sediment load estimation. The Turbidity Threshold Sampling method distributes...
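
    The core idea, triggering a pumped sample whenever turbidity crosses preset thresholds on the rising or falling limb of a storm, can be sketched as below; the thresholds and turbidity series are invented, and this is a simplification rather than the published Turbidity Threshold Sampling algorithm.

```python
def threshold_sampling_times(turbidity, thresholds):
    """Return indices at which a physical sample would be triggered: once when
    each turbidity threshold is first crossed on the rising limb and once when
    it is crossed again on the falling limb. Simplified sketch only."""
    triggers = set()
    for threshold in thresholds:
        above = False
        for i, value in enumerate(turbidity):
            if not above and value >= threshold:
                triggers.add(i)   # rising-limb crossing
                above = True
            elif above and value < threshold:
                triggers.add(i)   # falling-limb crossing
                above = False
    return sorted(triggers)

storm = [5, 12, 30, 80, 160, 120, 60, 25, 10]  # hypothetical turbidity trace (NTU)
print(threshold_sampling_times(storm, thresholds=[20, 50, 100]))
```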

  17. Estimating the dim light melatonin onset of adolescents within a 6-h sampling window: the impact of sampling rate and threshold method.

    Science.gov (United States)

    Crowley, Stephanie J; Suh, Christina; Molina, Thomas A; Fogg, Louis F; Sharkey, Katherine M; Carskadon, Mary A

    2016-04-01

    Circadian rhythm sleep-wake disorders (CRSWDs) often manifest during the adolescent years. Measurement of circadian phase such as the dim light melatonin onset (DLMO) improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and total sampling duration. A total of 66 healthy adolescents (26 males) aged 14.8-17.8 years participated in a study; they were required to sleep on a fixed baseline schedule for a week, after which they visited the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 min (13 samples) and the other from samples taken every 60 min (seven samples). Three standard thresholds (mean of the first three melatonin values + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from the 30-min and 60-min sampling rates was determined using Bland-Altman analysis; agreement between the sampling-rate DLMOs was defined as ± 1 h. Within a 6-h sampling window, 60-min sampling provided DLMO estimates within ± 1 h of the DLMO from 30-min sampling, but only when an absolute threshold (3 or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with CRSWDs. Copyright © 2016 Elsevier B.V. All rights reserved.
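
    With an absolute threshold, computing a DLMO from a sparse melatonin profile essentially reduces to interpolating the clock time at which the profile first crosses the threshold; the sketch below is a generic version with hypothetical hourly values, not the study's exact scoring procedure.

```python
import numpy as np

def dlmo_from_threshold(times_h, melatonin_pg_ml, threshold=4.0):
    """Linearly interpolate the clock time at which salivary melatonin first
    rises above an absolute threshold (e.g., 4 pg/mL). Generic sketch only."""
    for i in range(1, len(times_h)):
        lo, hi = melatonin_pg_ml[i - 1], melatonin_pg_ml[i]
        if lo < threshold <= hi:
            frac = (threshold - lo) / (hi - lo)
            return times_h[i - 1] + frac * (times_h[i] - times_h[i - 1])
    return None  # threshold never crossed within the sampling window

times = np.array([17, 18, 19, 20, 21, 22, 23])  # hourly samples (hypothetical)
melatonin = np.array([0.5, 0.8, 1.5, 3.0, 6.5, 12.0, 18.0])
print(dlmo_from_threshold(times, melatonin))  # roughly 20.3 h
```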

  18. Estimating mean change in population salt intake using spot urine samples.

    Science.gov (United States)

    Petersen, Kristina S; Wu, Jason H Y; Webster, Jacqui; Grimes, Carley; Woodward, Mark; Nowson, Caryl A; Neal, Bruce

    2017-10-01

    Spot urine samples are easier to collect than 24-h urine samples and have been used with estimating equations to derive the mean daily salt intake of a population. Whether equations using data from spot urine samples can also be used to estimate change in mean daily population salt intake over time is unknown. We compared estimates of change in mean daily population salt intake based upon 24-h urine collections with estimates derived using equations based on spot urine samples. Paired and unpaired 24-h urine samples and spot urine samples were collected from individuals in two Australian populations, in 2011 and 2014. Estimates of change in daily mean population salt intake between 2011 and 2014 were obtained directly from the 24-h urine samples and by applying established estimating equations (Kawasaki, Tanaka, Mage, Toft, INTERSALT) to the data from spot urine samples. Differences between 2011 and 2014 were calculated using mixed models. A total of 1000 participants provided a 24-h urine sample and a spot urine sample in 2011, and 1012 did so in 2014 (paired samples n = 870; unpaired samples n = 1142). The participants were community-dwelling individuals living in the State of Victoria or the town of Lithgow in the State of New South Wales, Australia, with a mean age of 55 years in 2011. The mean (95% confidence interval) difference in population salt intake between 2011 and 2014 determined from the 24-h urine samples was -0.48 g/day (-0.74 to -0.21); the corresponding difference estimated from the spot urine samples was -0.24 g/day (-0.42 to -0.06; P = 0.01) using the Tanaka equation, -0.42 g/day (-0.70 to -0.13; P = 0.004) using the Kawasaki equation, -0.51 g/day (-1.00 to -0.01; P = 0.046) using the Mage equation, -0.26 g/day (-0.42 to -0.10; P = 0.001) using the Toft equation, -0.20 g/day (-0.32 to -0.09; P = 0.001) using the INTERSALT equation and -0.27 g/day (-0.39 to -0.15; P  0.058). Separate analysis of the unpaired and paired data showed that detection of

  19. Estimating waste disposal quantities from raw waste samples

    International Nuclear Information System (INIS)

    Negin, C.A.; Urland, C.S.; Hitz, C.G.; GPU Nuclear Corp., Middletown, PA)

    1985-01-01

    Estimating the disposal quantity of waste resulting from stabilization of radioactive sludge is complex because of the many factors relating to sample analysis results, radioactive decay, allowable disposal concentrations, and options for disposal containers. To facilitate this estimation, a microcomputer spreadsheet template was created. The spreadsheet has saved considerable engineering hours. 1 fig., 3 tabs

  20. Iterative importance sampling algorithms for parameter estimation

    OpenAIRE

    Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

    2016-01-01

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
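
    A minimal self-normalised importance sampling sketch, with a Gaussian proposal standing in for the "suitable proposal distribution" the abstract refers to; the toy posterior and proposal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_unnorm_post(theta):
    # Toy unnormalised log-posterior: standard-normal prior times a Gaussian
    # likelihood with observation 1.5 and noise sd 0.5.
    return -0.5 * theta ** 2 - 0.5 * ((1.5 - theta) / 0.5) ** 2

# Gaussian proposal; draws are independent, which is what makes importance
# sampling easy to parallelise across cores.
mu_q, sd_q = 1.0, 1.0
theta = rng.normal(mu_q, sd_q, size=100_000)
log_q = -0.5 * ((theta - mu_q) / sd_q) ** 2 - np.log(sd_q * np.sqrt(2 * np.pi))

log_w = log_unnorm_post(theta) - log_q
w = np.exp(log_w - log_w.max())  # stabilised, then self-normalised weights
w /= w.sum()

print("posterior mean estimate:", np.sum(w * theta))
```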

  1. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Science.gov (United States)

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...

  2. Gamma-H2AX biodosimetry for use in large scale radiation incidents: comparison of a rapid ‘96 well lyse/fix’ protocol with a routine method

    Directory of Open Access Journals (Sweden)

    Jayne Moquet

    2014-03-01

    Following a radiation incident, preliminary dose estimates made by γ-H2AX foci analysis can supplement the early triage of casualties based on clinical symptoms. Sample processing time is important when many individuals need to be rapidly assessed. A protocol was therefore developed for high sample throughput that requires less than 0.1 ml blood, thus potentially enabling finger prick sampling. The technique combines red blood cell lysis and leukocyte fixation in one step on a 96 well plate, in contrast to the routine protocol, where lymphocytes in larger blood volumes are typically separated by Ficoll density gradient centrifugation with subsequent washing and fixation steps. The rapid ‘96 well lyse/fix’ method reduced the estimated sample processing time for 96 samples to about 4 h compared to 15 h using the routine protocol. However, scoring 20 cells in 96 samples prepared by the rapid protocol took longer than for the routine method (3.1 versus 1.5 h at zero dose; 7.0 versus 6.1 h for irradiated samples). Similar foci yields were scored for both protocols and consistent dose estimates were obtained for samples exposed to 0, 0.2, 0.6, 1.1, 1.2, 2.1 and 4.3 Gy of 250 kVp X-rays at 0.5 Gy/min and incubated for 2 h. Linear regression coefficients were 0.87 ± 0.06 (R2 = 97.6%) and 0.85 ± 0.05 (R2 = 98.3%) for estimated versus actual doses for the routine and lyse/fix method, respectively. The lyse/fix protocol can therefore facilitate high throughput processing for γ-H2AX biodosimetry for use in large scale radiation incidents, at the cost of somewhat longer foci scoring times.
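
    Dose estimation from foci counts rests on inverting a linear calibration curve fitted to the reference exposures; in the sketch below the dose points echo those listed in the abstract, while the foci yields are invented for illustration.

```python
import numpy as np

# Hypothetical calibration data: mean gamma-H2AX foci per cell scored 2 h after
# known doses of 250 kVp X-rays (yields are illustrative, not from the paper).
doses_gy = np.array([0.0, 0.2, 0.6, 1.1, 2.1, 4.3])
foci_per_cell = np.array([0.3, 1.1, 2.6, 4.8, 8.9, 17.5])

slope, intercept = np.polyfit(doses_gy, foci_per_cell, 1)  # linear calibration curve

def estimate_dose(observed_foci):
    """Invert the calibration curve to estimate dose for a triage sample."""
    return (observed_foci - intercept) / slope

print(f"{estimate_dose(6.0):.2f} Gy")  # dose estimate for a sample with 6 foci/cell
```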

  3. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    Science.gov (United States)

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim of developing a uniform sampling design that allows species richness, abundance and composition to be confidently estimated at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant to capture the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) get samples during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to

  4. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  5. Experimental Protocol to Determine the Chloride Threshold Value for Corrosion in Samples Taken from Reinforced Concrete Structures.

    Science.gov (United States)

    Angst, Ueli M; Boschmann, Carolina; Wagner, Matthias; Elsener, Bernhard

    2017-08-31

    The aging of reinforced concrete infrastructure in developed countries imposes an urgent need for methods to reliably assess the condition of these structures. Corrosion of the embedded reinforcing steel is the most frequent cause for degradation. While it is well known that the ability of a structure to withstand corrosion depends strongly on factors such as the materials used or the age, it is common practice to rely on threshold values stipulated in standards or textbooks. These threshold values for corrosion initiation (Ccrit) are independent of the actual properties of a certain structure, which clearly limits the accuracy of condition assessments and service life predictions. The practice of using tabulated values can be traced to the lack of reliable methods to determine Ccrit on-site and in the laboratory. Here, an experimental protocol to determine Ccrit for individual engineering structures or structural members is presented. A number of reinforced concrete samples are taken from structures and laboratory corrosion testing is performed. The main advantage of this method is that it ensures real conditions concerning parameters that are well known to greatly influence Ccrit, such as the steel-concrete interface, which cannot be representatively mimicked in laboratory-produced samples. At the same time, the accelerated corrosion test in the laboratory permits the reliable determination of Ccrit prior to corrosion initiation on the tested structure; this is a major advantage over all common condition assessment methods that only permit estimating the conditions for corrosion after initiation, i.e., when the structure is already damaged. The protocol yields the statistical distribution of Ccrit for the tested structure. This serves as a basis for probabilistic prediction models for the remaining time to corrosion, which is needed for maintenance planning. This method can potentially be used in material testing of civil infrastructures, similar to established

  6. Advanced Curation Protocols for Mars Returned Sample Handling

    Science.gov (United States)

    Bell, M.; Mickelson, E.; Lindstrom, D.; Allton, J.

    Introduction: Johnson Space Center has over 30 years experience handling precious samples, which include Lunar rocks and Antarctic meteorites. However, we recognize that future curation of samples from such missions as Genesis, Stardust, and Mars Sample Return will require a high degree of biosafety combined with extremely low levels of inorganic, organic, and biological contamination. To satisfy these requirements, research in the JSC Advanced Curation Lab is currently focused toward two major areas: preliminary examination techniques and cleaning and verification techniques. Preliminary Examination Techniques: In order to minimize the number of paths for contamination we are exploring the synergy between human & robotic sample handling in a controlled environment to help determine the limits of clean curation. Within the Advanced Curation Laboratory is a prototype, next-generation glovebox, which contains a robotic micromanipulator. The remotely operated manipulator has six degrees-of-freedom and can be programmed to perform repetitive sample handling tasks. Protocols are being tested and developed to perform curation tasks such as rock splitting, weighing, imaging, and storing. Techniques for sample transfer enabling more detailed remote examination without compromising the integrity of sample science are also being developed. The glovebox is equipped with a rapid transfer port through which samples can be passed without exposure. The transfer is accomplished by using a unique seal and engagement system which allows passage between containers while maintaining a first seal to the outside environment and a second seal to prevent the outside of the container cover and port door from becoming contaminated by the material being transferred. Cleaning and Verification Techniques: As part of the contamination control effort, innovative cleaning techniques are being identified and evaluated in conjunction with sensitive cleanliness verification methods. Towards this

  7. Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.

    Science.gov (United States)

    Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo

    2012-01-01

    The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
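
    Under plain simple random sampling, the empirical AUC is just the Mann-Whitney estimate of the probability that a diseased subject's marker exceeds a non-diseased subject's; the sketch below shows that baseline quantity (the paper's contribution is adjusting such estimators for test-result-dependent sampling). Marker values are hypothetical.

```python
import numpy as np

def empirical_auc(markers_diseased, markers_healthy):
    """Mann-Whitney estimate of AUC: the proportion of diseased/healthy pairs
    in which the diseased marker value is higher (ties count one half)."""
    d = np.asarray(markers_diseased, dtype=float)[:, None]
    h = np.asarray(markers_healthy, dtype=float)[None, :]
    return float(np.mean((d > h) + 0.5 * (d == h)))

diseased = [2.1, 3.4, 1.9, 4.0, 2.8]  # hypothetical biomarker values
healthy = [1.2, 2.0, 0.8, 1.5, 2.2, 1.1]
print(empirical_auc(diseased, healthy))
```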

  8. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
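
    The flavour of the Monte Carlo comparison can be reproduced with a toy population of hourly angler counts: repeatedly draw simple random and systematic samples of hours within a day and compare the bias and mean squared error of the expanded effort estimates. All numbers below are hypothetical, and the simulation is far simpler than the study's roving-creel designs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical instantaneous angler counts for the 24 hourly periods of one day.
counts = rng.poisson(lam=np.concatenate([np.full(8, 1), np.full(8, 12), np.full(8, 4)]))
true_effort = counts.sum()  # total angler-hours for the day
n = 6                       # hourly periods sampled per day

def srs_estimate():
    idx = rng.choice(24, size=n, replace=False)   # simple random sample of hours
    return counts[idx].mean() * 24

def sys_estimate():
    start = rng.integers(24 // n)                 # random start, then every 4th hour
    return counts[np.arange(start, 24, 24 // n)].mean() * 24

for name, estimator in [("SRS", srs_estimate), ("systematic", sys_estimate)]:
    sims = np.array([estimator() for _ in range(20000)])
    print(name, "bias:", sims.mean() - true_effort, "MSE:", ((sims - true_effort) ** 2).mean())
```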

  9. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using the bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSUs). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSUs), i.e., households, are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs, which are: Stratified Simple Random Sampling, Stratified Systematic Sampling, Stratified Ranked Set Sampling, and Stratified Two-Phase Sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). Jackknife and bootstrap are used for replication-based variance estimation. Simple Random Sampling with sample sizes of 462 to 561 gave moderate variances both by jackknife and bootstrap. By applying Systematic Sampling, we obtained moderate variance with a sample size of 467. In the jackknife with Systematic Sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes of 467 to 631. At a sample size of 952, the variance of the ratio estimator becomes greater than that of the regression estimator. The most efficient design turns out to be Ranked Set Sampling compared with the other designs. Ranked Set Sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-Phase Sampling gave poor performance. Multi-stage sampling applied by HIES gave large variances, especially if used with a single study variable.

  10. Estimating population salt intake in India using spot urine samples.

    Science.gov (United States)

    Petersen, Kristina S; Johnson, Claire; Mohan, Sailesh; Rogers, Kris; Shivashankar, Roopa; Thout, Sudhir Raj; Gupta, Priti; He, Feng J; MacGregor, Graham A; Webster, Jacqui; Santos, Joseph Alvin; Krishnan, Anand; Maulik, Pallab K; Reddy, K Srinath; Gupta, Ruby; Prabhakaran, Dorairaj; Neal, Bruce

    2017-11-01

    To compare estimates of mean population salt intake in North and South India derived from spot urine samples versus 24-h urine collections. In a cross-sectional survey, participants were sampled from slum, urban and rural communities in North and in South India. Participants provided 24-h urine collections, and random morning spot urine samples. Salt intake was estimated from the spot urine samples using a series of established estimating equations. Salt intake data from the 24-h urine collections and spot urine equations were weighted to provide estimates of salt intake for Delhi and Haryana, and Andhra Pradesh. A total of 957 individuals provided a complete 24-h urine collection and a spot urine sample. Weighted mean salt intake based on the 24-h urine collection, was 8.59 (95% confidence interval 7.73-9.45) and 9.46 g/day (8.95-9.96) in Delhi and Haryana, and Andhra Pradesh, respectively. Corresponding estimates based on the Tanaka equation [9.04 (8.63-9.45) and 9.79 g/day (9.62-9.96) for Delhi and Haryana, and Andhra Pradesh, respectively], the Mage equation [8.80 (7.67-9.94) and 10.19 g/day (95% CI 9.59-10.79)], the INTERSALT equation [7.99 (7.61-8.37) and 8.64 g/day (8.04-9.23)] and the INTERSALT equation with potassium [8.13 (7.74-8.52) and 8.81 g/day (8.16-9.46)] were all within 1 g/day of the estimate based upon 24-h collections. For the Toft equation, estimates were 1-2 g/day higher [9.94 (9.24-10.64) and 10.69 g/day (9.44-11.93)] and for the Kawasaki equation they were 3-4 g/day higher [12.14 (11.30-12.97) and 13.64 g/day (13.15-14.12)]. In urban and rural areas in North and South India, most spot urine-based equations provided reasonable estimates of mean population salt intake. Equations that did not provide good estimates may have failed because specimen collection was not aligned with the original method.

  11. An On-Target Desalting and Concentration Sample Preparation Protocol for MALDI-MS and MS/MS Analysis

    DEFF Research Database (Denmark)

    Zhang, Xumin; Wang, Quanhui; Lou, Xiaomin

    2012-01-01

    2DE coupled with MALDI-MS is one of the most widely used and powerful analytic technologies in proteomics study. The MALDI sample preparation method has been developed and optimized towards the combination of simplicity, sample-cleaning, and sample concentration since its introduction. Here we present a protocol of the so-called Sample loading, Matrix loading, and on-target Wash (SMW) method which fulfills the three criteria by taking advantage of the AnchorChip™ targets. Our method is extremely simple and no pre-desalting or concentration is needed when dealing with samples prepared from 2DE

  12. [Sampling and measurement methods of the protocol design of the China Nine-Province Survey for blindness, visual impairment and cataract surgery].

    Science.gov (United States)

    Zhao, Jia-liang; Wang, Yu; Gao, Xue-cheng; Ellwein, Leon B; Liu, Hu

    2011-09-01

    To design the protocol of the China nine-province survey for blindness, visual impairment and cataract surgery, in order to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery. Protocol design began after the task of the national survey for blindness, visual impairment and cataract surgery was accepted from the Department of Medicine, Ministry of Health, China, in November 2005. The protocols of the Beijing Shunyi Eye Study in 1996 and the Guangdong Doumen County Eye Study in 1997, both supported by the World Health Organization, were taken as the basis for the protocol design. Relevant experts were invited to discuss and refine the draft protocol. An international advisory committee was established to examine and approve the draft protocol. Finally, the survey protocol was checked and approved by the Department of Medicine, Ministry of Health, China, and the Prevention Program of Blindness and Deafness, WHO. The survey protocol was designed according to the characteristics and the scale of the survey. The contents of the protocol included determination of the target population and survey sites, calculation of the sample size, design of the random sampling, composition and organization of the survey teams, determination of the examinees, the flowchart of the field work, survey items and methods, diagnostic criteria for blindness and moderate and severe visual impairment, the measures for quality control, and the methods of data management. The designed protocol became the standard and practical protocol for the survey to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery.

  13. Networked Estimation for Event-Based Sampling Systems with Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Young Soo Suh

    2009-04-01

    This paper is concerned with a networked estimation problem in which sensor data are transmitted over the network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than the time-triggered one in some situations, especially in network bandwidth improvement. However, it cannot detect packet dropout situations because data transmission and reception do not use a periodical time-stamp mechanism as found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme called modified SOD in which sensor data are sent when either the change of sensor output exceeds a given threshold or the time elapses more than a given interval. Through simulation results, we show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts happen.
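
    The modified SOD rule described above, send when the value changes by more than a threshold or when too much time has passed since the last transmission, is easy to sketch; the delta and timeout values below are illustrative only.

```python
def modified_sod(samples, delta=0.5, max_gap=5):
    """Modified send-on-delta transmission rule: transmit the current sensor
    value if it differs from the last transmitted value by more than `delta`,
    or if at least `max_gap` sampling periods have passed since the last
    transmission (which lets the estimator notice packet dropouts)."""
    sent = []
    last_value = last_k = None
    for k, y in enumerate(samples):
        if last_value is None or abs(y - last_value) > delta or (k - last_k) >= max_gap:
            sent.append((k, y))
            last_value, last_k = y, k
    return sent

readings = [0.0, 0.1, 0.2, 0.9, 1.0, 1.05, 1.1, 1.1, 1.1, 1.1, 2.3]
print(modified_sod(readings))  # [(0, 0.0), (3, 0.9), (8, 1.1), (10, 2.3)]
```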

  14. Comparison of distance sampling estimates to a known population ...

    African Journals Online (AJOL)

    Line-transect sampling was used to obtain abundance estimates of an Ant-eating Chat Myrmecocichla formicivora population to compare these with the true size of the population. The population size was determined by a long-term banding study, and abundance estimates were obtained by surveying line transects.

  15. A method to combine non-probability sample data with probability sample data in estimating spatial means of environmental variables

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    2003-01-01

    In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be

  16. Stage migration after minor changes in histologic estimation of tumor burden in sentinel lymph nodes: the protocol trap

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Nyengaard, Jens R; Hamilton-Dutoit, Stephen J

    2009-01-01

    protocol trap"). This systematical bias makes it difficult to base treatment decisions on semiquantitative metastasis size estimates. Although based on metastatic melanoma, the principles described herein will apply when measuring nodal tumor burden in other metastasizing cancers, including breast...

  17. Spatially explicit population estimates for black bears based on cluster sampling

    Science.gov (United States)

    Humm, J.; McCown, J. Walter; Scheick, B.K.; Clark, Joseph D.

    2017-01-01

    We estimated abundance and density of the 5 major black bear (Ursus americanus) subpopulations (i.e., Eglin, Apalachicola, Osceola, Ocala-St. Johns, Big Cypress) in Florida, USA with spatially explicit capture-mark-recapture (SCR) by extracting DNA from hair samples collected at barbed-wire hair sampling sites. We employed a clustered sampling configuration with sampling sites arranged in 3 × 3 clusters spaced 2 km apart within each cluster and cluster centers spaced 16 km apart (center to center). We surveyed all 5 subpopulations encompassing 38,960 km2 during 2014 and 2015. Several landscape variables, most associated with forest cover, helped refine density estimates for the 5 subpopulations we sampled. Detection probabilities were affected by site-specific behavioral responses coupled with individual capture heterogeneity associated with sex. Model-averaged bear population estimates ranged from 120 (95% CI = 59–276) bears or a mean 0.025 bears/km2 (95% CI = 0.011–0.44) for the Eglin subpopulation to 1,198 bears (95% CI = 949–1,537) or 0.127 bears/km2 (95% CI = 0.101–0.163) for the Ocala-St. Johns subpopulation. The total population estimate for our 5 study areas was 3,916 bears (95% CI = 2,914–5,451). The clustered sampling method coupled with information on land cover was efficient and allowed us to estimate abundance across extensive areas that would not have been possible otherwise. Clustered sampling combined with spatially explicit capture-recapture methods has the potential to provide rigorous population estimates for a wide array of species that are extensive and heterogeneous in their distribution.

  18. Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach

    KAUST Repository

    Ballal, Tarig

    2014-01-01

    This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.

  19. The use of Thompson sampling to increase estimation precision

    NARCIS (Netherlands)

    Kaptein, M.C.

    2015-01-01

    In this article, we consider a sequential sampling scheme for efficient estimation of the difference between the means of two independent treatments when the population variances are unequal across groups. The sampling scheme proposed is based on a solution to bandit problems called Thompson

  20. Establishment of a protocol for the gene expression analysis of laser microdissected rat kidney samples with affymetrix genechips

    International Nuclear Information System (INIS)

    Stemmer, Kerstin; Ellinger-Ziegelbauer, Heidrun; Lotz, Kerstin; Ahr, Hans-J.; Dietrich, Daniel R.

    2006-01-01

    Laser microdissection in conjunction with microarray technology allows selective isolation and analysis of specific cell populations, e.g., preneoplastic renal lesions. To date, only limited information is available on sample preparation and preservation techniques that result in both optimal histomorphological preservation of sections and high-quality RNA for microarray analysis. Furthermore, amplification of minute amounts of RNA from microdissected renal samples allowing analysis with genechips has only scantily been addressed to date. The objective of this study was therefore to establish a reliable and reproducible protocol for laser microdissection in conjunction with microarray technology using kidney tissue from Eker rats p.o. treated for 7 days and 6 months with 10 and 1 mg Aristolochic acid/kg bw, respectively. Kidney tissues were preserved in RNAlater or snap frozen. Cryosections were cut and stained with either H and E or cresyl violet for subsequent morphological and RNA quality assessment and laser microdissection. RNA quality was comparable in snap frozen and RNAlater-preserved samples; however, the histomorphological preservation of renal sections was much better following cryopreservation. Moreover, the different staining techniques in combination with sample processing time at room temperature can have an influence on RNA quality. Different RNA amplification protocols were shown to have an impact on gene expression profiles as demonstrated with Affymetrix Rat Genome 230 2.0 arrays. Considering all the parameters analyzed in this study, a protocol for RNA isolation from laser microdissected samples with subsequent Affymetrix chip hybridization was established that was also successfully applied to preneoplastic lesions laser microdissected from Aristolochic acid-treated rats

  1. Estimation of river and stream temperature trends under haphazard sampling

    Science.gov (United States)

    Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao

    2015-01-01

    Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.
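
    The nonparametric bootstrap idea can be illustrated in a much simplified, single-level form: resample annual mean temperatures with replacement and refit the trend. The values are hypothetical, and the paper's actual models additionally include random day and year effects.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical annual mean water temperatures (deg C) for 20 years.
years = np.arange(1995, 2015)
temps = 14.0 + 0.03 * (years - years[0]) + rng.normal(0, 0.4, years.size)

def trend_slope(x, y):
    return np.polyfit(x, y, 1)[0]

# Nonparametric bootstrap of the trend: resample (year, temperature) pairs.
boot = [trend_slope(years[idx], temps[idx])
        for idx in (rng.integers(0, years.size, years.size) for _ in range(5000))]

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"trend = {trend_slope(years, temps):.3f} deg C/yr, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```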

  2. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
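
    The "5 users" heuristic comes from the cumulative problem-discovery model 1 - (1 - p)^n; the sketch below shows how quickly it breaks down when the per-user detection probability p is lower than the roughly 0.3 value often assumed (the p values are illustrative).

```python
def problems_found(p_detect, n_users):
    """Expected proportion of usability problems uncovered by n users under
    the classic discovery model 1 - (1 - p)^n."""
    return 1 - (1 - p_detect) ** n_users

for p in (0.31, 0.15, 0.05):  # illustrative per-user detection probabilities
    needed = next(n for n in range(1, 500) if problems_found(p, n) >= 0.80)
    print(f"p = {p:.2f}: 5 users find {problems_found(p, 5):.0%}, "
          f"{needed} users needed for 80%")
```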

  3. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  4. Density meter algorithm and system for estimating sampling/mixing uncertainty

    International Nuclear Information System (INIS)

    Shine, E.P.

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses

  5. Density meter algorithm and system for estimating sampling/mixing uncertainty

    International Nuclear Information System (INIS)

    Shine, E.P.

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses

  6. Comparative analysis of five DNA isolation protocols and three drying methods for leaves samples of Nectandra megapotamica (Spreng.) Mez.

    Directory of Open Access Journals (Sweden)

    Leonardo Severo da Costa

    2016-06-01

    The aim of the study was to establish a DNA isolation protocol for Nectandra megapotamica (Spreng.) Mez. able to obtain samples of high yield and quality for use in genomic analysis. A commercial kit and four classical methods of DNA extraction were tested, including three cetyltrimethylammonium bromide (CTAB)-based and one sodium dodecyl sulfate (SDS)-based method. Three drying methods for leaf samples were also evaluated, including drying at room temperature (RT), in an oven at 40ºC (S40), and in a microwave oven (FMO). The DNA solutions obtained from the different types of leaf samples using the five protocols were assessed in terms of cost, execution time, and quality and yield of extracted DNA. The commercial kit did not extract DNA with sufficient quantity or quality for successful PCR reactions. Among the classical methods, only the protocols of Dellaporta and of Khanuja yielded DNA extractions for all three types of foliar samples that resulted in successful PCR reactions and subsequent enzyme restriction assays. Based on the evaluated variables, the most appropriate DNA extraction method for Nectandra megapotamica (Spreng.) Mez. was that of Dellaporta, regardless of the method used to dry the samples. The selected method has a relatively low cost and total execution time. Moreover, the quality and quantity of DNA extracted using this method was sufficient for DNA sequence amplification using PCR reactions and to obtain restriction fragments.

  7. Finite Sample Comparison of Parametric, Semiparametric, and Wavelet Estimators of Fractional Integration

    DEFF Research Database (Denmark)

    Nielsen, Morten Ø.; Frederiksen, Per Houmann

    2005-01-01

    In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all [...], and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.

  8. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

    The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. Seventeen different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method for reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences between taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in collecting more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments are needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated with the 90% CI ratio of the estimated/measured SMCs (R² = 0.49; P [...]), which can be used to estimate the sampling frequency needed to accurately estimate SMCs of pollutants.
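
    The quantities at the core of the record above are simple to compute once the flow-weighted data are in hand: each event's EMC is its pollutant load divided by its runoff volume, and the SMC is the volume-weighted mean of the EMCs. A minimal sketch with hypothetical events:

```python
# Minimal sketch: Event Mean Concentrations (EMCs) from within-event
# concentration/flow pairs, then the Site Mean Concentration (SMC) as the
# volume-weighted average of the EMCs.  Events and numbers are hypothetical.
import numpy as np

events = [
    # (concentrations mg/L at 1-min steps, flows L/s at the same steps)
    (np.array([120.0, 150.0, 90.0, 60.0]), np.array([5.0, 12.0, 8.0, 3.0])),
    (np.array([200.0, 170.0, 80.0]),       np.array([2.0, 6.0, 4.0])),
    (np.array([60.0, 75.0, 50.0, 40.0]),   np.array([9.0, 15.0, 10.0, 6.0])),
]
dt = 60.0  # seconds per 1-min step

emcs, volumes = [], []
for conc, flow in events:
    volume = np.sum(flow * dt)                  # L discharged in the event
    load = np.sum(conc * flow * dt)             # mg exported in the event
    emcs.append(load / volume)                  # mg/L, flow-weighted EMC
    volumes.append(volume)

emcs, volumes = np.array(emcs), np.array(volumes)
smc = np.sum(emcs * volumes) / np.sum(volumes)  # volume-weighted mean of EMCs
print(f"EMCs: {emcs.round(1)} mg/L, SMC: {smc:.1f} mg/L")
```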

  9. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and the parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at the estimation of the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundance of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across the Azov-Black Seas localities was subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for searching for the optimum sample size and for understanding the expected precision level of the mean. Given the superior performance of the BLB relative to formulae, with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, with 1.6 and 1x mean of the CI widths, and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes in the range between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed to low values; a sample size of 10 host individuals yielded unreliable estimates.
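
    The analytic route mentioned above (a formula combining a predetermined precision level with the negative binomial parameter) is usually written as n = (1/D²)(1/m + 1/k), where m is the mean abundance, k the aggregation parameter, and D the target relative standard error of the mean. A minimal sketch, assuming that standard formula and hypothetical m, k, and D values:

```python
# Minimal sketch of the analytic sample-size route, assuming the standard
# formula for a negative binomial population:
#     n = (1 / D**2) * (1 / m + 1 / k)
# m = mean abundance, k = aggregation parameter, D = target relative
# standard error of the mean.  The m, k and D values are hypothetical.
import math

def hosts_needed(mean_abundance: float, k: float, rel_precision: float) -> int:
    n = (1.0 / rel_precision ** 2) * (1.0 / mean_abundance + 1.0 / k)
    return math.ceil(n)

for d in (0.2, 0.1, 0.05):
    print(f"D = {d:>4}: n = {hosts_needed(mean_abundance=12.0, k=0.4, rel_precision=d)}")
```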

  10. Development of a new protocol for rapid bacterial identification and susceptibility testing directly from urine samples.

    Science.gov (United States)

    Zboromyrska, Y; Rubio, E; Alejo, I; Vergara, A; Mons, A; Campo, I; Bosch, J; Marco, F; Vila, J

    2016-06-01

    The current gold standard method for the diagnosis of urinary tract infections (UTI) is urine culture that requires 18-48 h for the identification of the causative microorganisms and an additional 24 h until the results of antimicrobial susceptibility testing (AST) are available. The aim of this study was to shorten the time of urine sample processing by a combination of flow cytometry for screening and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) for bacterial identification followed by AST directly from urine. The study was divided into two parts. During the first part, 675 urine samples were processed by a flow cytometry device and a cut-off value of bacterial count was determined to select samples for direct identification by MALDI-TOF-MS at ≥5 × 10⁶ bacteria/mL. During the second part, 163 of 1029 processed samples reached the cut-off value. The sample preparation protocol for direct identification included two centrifugation and two washing steps. Direct AST was performed by the disc diffusion method if a reliable direct identification was obtained. Direct MALDI-TOF-MS identification was performed in 140 urine samples; 125 of the samples were positive by urine culture, 12 were contaminated and 3 were negative. Reliable direct identification was obtained in 108 (86.4%) of the 125 positive samples. AST was performed in 102 identified samples, and the results were fully concordant with the routine method among 83 monomicrobial infections. In conclusion, the turnaround time of the protocol described to diagnose UTI was about 1 h for microbial identification and 18-24 h for AST. Copyright © 2016 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  11. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    Full Text Available During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating the single flight testing of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The result shows that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.

  12. Influence of Sampling Effort on the Estimated Richness of Road-Killed Vertebrate Wildlife

    Science.gov (United States)

    Bager, Alex; da Rosa, Clarissa A.

    2011-05-01

    Road-killed mammals, birds, and reptiles were collected weekly from highways in southern Brazil in 2002 and 2005. The objective was to assess variation in estimates of road-kill impacts on species richness produced by different sampling efforts, and to provide information to aid in the experimental design of future sampling. Richness observed in weekly samples was compared with sampling for different periods. In each period, the list of road-killed species was evaluated based on estimates of the community structure derived from weekly samplings, and on the presence of the ten species most subject to road mortality, and also of threatened species. Weekly samples were sufficient only for reptiles and mammals, considered separately. Richness estimated from the biweekly samples was equal to that found in the weekly samples, and gave satisfactory results for sampling the most abundant and threatened species. The ten most affected species showed constant road-mortality rates, independent of sampling interval, and also maintained their dominance structure. Birds required greater sampling effort. When the composition of road-killed species varies seasonally, it is necessary to take biweekly samples for a minimum of one year. Weekly or more-frequent sampling for periods longer than two years is necessary to provide a reliable estimate of total species richness.

  13. Fast filtration sampling protocol for mammalian suspension cells tailored for phosphometabolome profiling by capillary ion chromatography - tandem mass spectrometry.

    Science.gov (United States)

    Kvitvang, Hans F N; Bruheim, Per

    2015-08-15

    Capillary ion chromatography (capIC) is the premium separation technology for low molecular weight phosphometabolites and nucleotides in biological extracts. Removal of excessive amounts of salt during sample preparation stages is a prerequisite to enable high quality capIC separation in combination with reproducible and sensitive MS detection. Existing sampling protocols for mammalian cells used for GC-MS and LC-MS metabolic profiling can therefore not be directly applied to capIC separations. Here, the development of a fast filtration sampling protocol for mammalian suspension cells tailored for quantitative profiling of the phosphometabolome on capIC-MS/MS is presented. The whole procedure, from sampling the culture to transfer of the filter to quenching and extraction solution, takes less than 10 s. To prevent leakage it is critical that a low vacuum pressure is applied, and satisfactory reproducibility was only obtained by usage of a vacuum pressure controlling device. A vacuum of 60 mbar was optimal for filtration of multiple myeloma JJN-3 cell cultures through 5 μm polyvinylidene fluoride (PVDF) filters. A quick deionized water (DI-water) rinse step prior to extraction was tested, and significantly higher metabolite yields were obtained during capIC-MS/MS analyses in this extract compared to extracts prepared by saline and reduced saline (25%) washing steps only. In addition, chromatographic performance was dramatically improved. Thus, it was verified that a quick DI-water rinse is tolerated by the cells and can be included as the final stage during filtration. Over 30 metabolites were quantitated in JJN-3 cell extracts by using the optimized sampling protocol with subsequent capIC-MS/MS analysis, and up to 2 million cells can be used in a single filtration step for the chosen filter and vacuum pressure. The technical set-up is also highly advantageous for microbial metabolome filtration protocols after optimization of vacuum pressure and washing solutions, and the reduced salt [...]

  14. Estimating rare events in biochemical systems using conditional sampling

    Science.gov (United States)

    Sundar, V. S.

    2017-01-01

    The paper focuses on development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
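
    The identity underlying the subset simulation method described above expresses the rare event F through nested intermediate events F_1 ⊃ F_2 ⊃ … ⊃ F_m = F, so that each factor is large enough to estimate by MCMC; the level probability of about 0.1 quoted below is the usual textbook choice rather than anything specific to this record.

```latex
% Rare event F written through nested intermediate events
% F_1 \supset F_2 \supset \dots \supset F_m = F
P(F) \;=\; P(F_1)\prod_{i=2}^{m} P\!\left(F_i \mid F_{i-1}\right),
\qquad
\widehat{P}(F) \;=\; \prod_{i=1}^{m} \widehat{p}_i ,
\qquad
\widehat{p}_i \approx 0.1 \ \text{per level.}
```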

  15. Estimation for small domains in double sampling for stratification ...

    African Journals Online (AJOL)

    In this article, we investigate the effect of randomness of the size of a small domain on the precision of an estimator of mean for the domain under double sampling for stratification. The result shows that for a small domain that cuts across various strata with unknown weights, the sampling variance depends on the within ...

  16. Sampling strategies for efficient estimation of tree foliage biomass

    Science.gov (United States)

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  17. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  18. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
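
    The quantity at issue in the two records above is the dominant eigenvalue of a stage-structured projection matrix. The sketch below, with a hypothetical three-stage matrix, re-estimates survival from samples of n individuals and shows how small n inflates the spread and bias of lambda; it illustrates the mechanism only and is not the study's own simulation.

```python
# Minimal sketch: population growth rate (lambda) as the dominant eigenvalue
# of a stage-structured projection matrix, and the spread of lambda estimates
# when survival rates are re-estimated from samples of n individuals.
# The 3-stage matrix and the sample sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
survival = np.array([0.5, 0.7, 0.9])   # true stage-specific survival
growth = np.array([0.3, 0.2])          # true stage-transition probabilities
fecundity = 1.2                        # offspring per stage-3 individual

def projection_matrix(surv, grow, fec):
    return np.array([
        [surv[0] * (1 - grow[0]), 0.0,                     fec],
        [surv[0] * grow[0],       surv[1] * (1 - grow[1]), 0.0],
        [0.0,                     surv[1] * grow[1],       surv[2]],
    ])

def dominant_eigenvalue(a):
    return np.max(np.real(np.linalg.eigvals(a)))

true_lambda = dominant_eigenvalue(projection_matrix(survival, growth, fecundity))

for n in (25, 100, 400):               # individuals sampled per stage
    lambdas = []
    for _ in range(2000):
        surv_hat = rng.binomial(n, survival) / n   # re-estimated survival
        lambdas.append(dominant_eigenvalue(
            projection_matrix(surv_hat, growth, fecundity)))
    lambdas = np.array(lambdas)
    print(f"n={n:4d}  mean lambda={lambdas.mean():.3f}  "
          f"bias={lambdas.mean() - true_lambda:+.3f}  sd={lambdas.std():.3f}")
```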

  19. A method for estimating radioactive cesium concentrations in cattle blood using urine samples.

    Science.gov (United States)

    Sato, Itaru; Yamagishi, Ryoma; Sasaki, Jun; Satoh, Hiroshi; Miura, Kiyoshi; Kikuchi, Kaoru; Otani, Kumiko; Okada, Keiji

    2017-12-01

    In the region contaminated by the Fukushima nuclear accident, radioactive contamination of live cattle should be checked before slaughter. In this study, we establish a precise method for estimating radioactive cesium concentrations in cattle blood using urine samples. Blood and urine samples were collected from a total of 71 cattle on two farms in the 'difficult-to-return zone'. Urine 137Cs, specific gravity, electrical conductivity, pH, sodium, potassium, calcium, and creatinine were measured and various estimation methods for blood 137Cs were tested. The average error rate of the estimation was 54.2% without correction. Correcting for urine creatinine, specific gravity, electrical conductivity, or potassium improved the precision of the estimation. Correcting for specific gravity using the following formula gave the most precise estimate (average error rate = 16.9%): [blood 137Cs] = [urinary 137Cs]/([specific gravity] - 1)/329. Urine samples are faster to measure than blood samples because urine can be obtained in larger quantities and has a higher 137Cs concentration than blood. These advantages of urine and the estimation precision demonstrated in our study indicate that estimation of blood 137Cs using urine samples is a practical means of monitoring radioactive contamination in live cattle. © 2017 Japanese Society of Animal Science.
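
    A worked application of the best-performing correction reported above; the urine activity and specific gravity below are hypothetical values, and the result is in the same activity units as the urine measurement.

```python
# Worked example of the specific-gravity correction reported above:
#     blood 137Cs = urinary 137Cs / (specific gravity - 1) / 329
# The urine activity and specific gravity below are hypothetical values.
def blood_cs137_from_urine(urine_cs137: float, specific_gravity: float) -> float:
    return urine_cs137 / (specific_gravity - 1.0) / 329.0

print(blood_cs137_from_urine(urine_cs137=450.0, specific_gravity=1.030))
# -> roughly 45.6, in the same activity units as the urine measurement
```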

  20. TU-H-207A-09: An Automated Technique for Estimating Patient-Specific Regional Imparted Energy and Dose From TCM CT Exams Across 13 Protocols

    International Nuclear Information System (INIS)

    Sanders, J; Tian, X; Segars, P; Boone, J; Samei, E

    2016-01-01

    Purpose: To develop an automated technique for estimating patient-specific regional imparted energy and dose from tube current modulated (TCM) computed tomography (CT) exams across a diverse set of head and body protocols. Methods: A library of 58 adult computational anthropomorphic extended cardiac-torso (XCAT) phantoms was used to model a patient population. A validated Monte Carlo program was used to simulate TCM CT exams on the entire library of phantoms for three head and 10 body protocols. The net imparted energy to the phantoms, normalized by dose length product (DLP), and the net tissue mass in each of the scan regions were computed. A knowledgebase containing relationships between normalized imparted energy and scanned mass was established. An automated computer algorithm was written to estimate the scanned mass from actual clinical CT exams. The scanned mass estimate, DLP of the exam, and knowledgebase were used to estimate the imparted energy to the patient. The algorithm was tested on 20 chest and 20 abdominopelvic TCM CT exams. Results: The normalized imparted energy increased with increasing kV for all protocols. However, the normalized imparted energy was relatively unaffected by the strength of the TCM. The average imparted energy was 681 ± 376 mJ for abdominopelvic exams and 274 ± 141 mJ for chest exams. Overall, the method was successful in providing patient-specific estimates of imparted energy for 98% of the cases tested. Conclusion: Imparted energy normalized by DLP increased with increasing tube potential. However, the strength of the TCM did not have a significant effect on the net amount of energy deposited to tissue. The automated program can be implemented into the clinical workflow to provide estimates of regional imparted energy and dose across a diverse set of clinical protocols.

  1. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    Full Text Available The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.

  2. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...

  3. Estimating dermal transfer from PCB-contaminated porous surfaces.

    Science.gov (United States)

    Slayton, T M; Valberg, P A; Wait, A D

    1998-06-01

    Health risks posed by dermal contact with PCB-contaminated porous surfaces have not been directly demonstrated and are difficult to estimate indirectly. Surface contamination by organic compounds is commonly assessed by collecting wipe samples with hexane as the solvent. However, for porous surfaces, hexane wipe characterization is of limited direct use when estimating potential human exposure. Particularly for porous surfaces, the relationship between the amount of organic material collected by hexane and the amount actually picked up by, for example, a person's hand touch is unknown. To better mimic PCB pickup by casual hand contact with contaminated concrete surfaces, we used alternate solvents and wipe application methods that more closely mimic casual dermal contact. Our sampling results were compared to PCB pickup using hexane-wetted wipes and the standard rubbing protocol. Dry and oil-wetted samples, applied without rubbing, picked up less than 1% of the PCBs picked up by the standard hexane procedure; with rubbing, they picked up about 2%. Without rubbing, saline-wetted wipes picked up 2.5%; with rubbing, they picked up about 12%. While the nature of dermal contact with a contaminated surface cannot be perfectly reproduced with a wipe sample, our results with alternate wiping solvents and rubbing methods more closely mimic hand contact than the standard hexane wipe protocol. The relative pickup estimates presented in this paper can be used in conjunction with site-specific PCB hexane wipe results to estimate dermal pickup rates at sites with PCB-contaminated concrete.

  4. Basin Visual Estimation Technique (BVET) and Representative Reach Approaches to Wadeable Stream Surveys: Methodological Limitations and Future Directions

    Science.gov (United States)

    Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor

    2004-01-01

    Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...

  5. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.

  6. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival.

    Directory of Open Access Journals (Sweden)

    Ziya Kordjazi

    Full Text Available Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery.

  7. The Influence of Mark-Recapture Sampling Effort on Estimates of Rock Lobster Survival

    Science.gov (United States)

    Kordjazi, Ziya; Frusher, Stewart; Buxton, Colin; Gardner, Caleb; Bird, Tomas

    2016-01-01

    Five annual capture-mark-recapture surveys on Jasus edwardsii were used to evaluate the effect of sample size and fishing effort on the precision of estimated survival probability. Datasets of different numbers of individual lobsters (ranging from 200 to 1,000 lobsters) were created by random subsampling from each annual survey. This process of random subsampling was also used to create 12 datasets of different levels of effort based on three levels of the number of traps (15, 30 and 50 traps per day) and four levels of the number of sampling-days (2, 4, 6 and 7 days). The most parsimonious Cormack-Jolly-Seber (CJS) model for estimating survival probability shifted from a constant model towards sex-dependent models with increasing sample size and effort. A sample of 500 lobsters or 50 traps used on four consecutive sampling-days was required for obtaining precise survival estimations for males and females, separately. Reduced sampling effort of 30 traps over four sampling days was sufficient if a survival estimate for both sexes combined was sufficient for management of the fishery. PMID:26990561

  8. Inter-comparison of NIOSH and IMPROVE protocols for OC and EC determination: implications for inter-protocol data conversion

    Science.gov (United States)

    Wu, Cheng; Huang, X. H. Hilda; Ng, Wai Man; Griffith, Stephen M.; Zhen Yu, Jian

    2016-09-01

    Organic carbon (OC) and elemental carbon (EC) are operationally defined by analytical methods. As a result, OC and EC measurements are protocol dependent, leading to uncertainties in their quantification. In this study, more than 1300 Hong Kong samples were analyzed using both the National Institute for Occupational Safety and Health (NIOSH) thermal optical transmittance (TOT) and Interagency Monitoring of Protected Visual Environments (IMPROVE) thermal optical reflectance (TOR) protocols to explore the cause of EC disagreement between the two protocols. The EC discrepancy mainly (83 %) arises from a difference in peak inert mode temperature, which determines the allocation of OC4_NSH, while the rest (17 %) is attributed to a difference in the optical method (transmittance vs. reflectance) applied for the charring correction. Evidence shows that the magnitude of the EC discrepancy is positively correlated with the intensity of the biomass burning signal, whereby biomass burning increases the fraction of OC4_NSH and widens the disagreement in the inter-protocol EC determination. It is also found that the EC discrepancy is positively correlated with the abundance of metal oxide in the samples. Two approaches (M1 and M2) that translate NIOSH TOT OC and EC data into IMPROVE TOR OC and EC data are proposed. M1 uses a direct relationship between EC_NSH_TOT and EC_IMP_TOR for reconstruction: M1: EC_IMP_TOR = a × EC_NSH_TOT + b; while M2 deconstructs EC_IMP_TOR into several terms based on analysis principles and applies regression only to the unknown terms: M2: EC_IMP_TOR = AEC_NSH + OC4_NSH - (a × PC_NSH_TOR + b), where AEC_NSH, the apparent EC by the NIOSH protocol, is the carbon that evolves in the He-O2 analysis stage, OC4_NSH is the carbon that evolves at the fourth temperature step of the pure helium analysis stage of NIOSH, and PC_NSH_TOR is the pyrolyzed carbon as determined by the NIOSH protocol. The implementation of M1 to all urban site data (without considering seasonal specificity [...]
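
    Once the regression coefficients a and b have been fitted to co-located data, both conversion forms reduce to simple arithmetic. A minimal sketch with purely illustrative placeholder coefficients, not values from the study:

```python
# Minimal sketch of the two inter-protocol conversion forms described above.
# The regression coefficients a and b must be fitted to co-located data;
# the values used here are illustrative placeholders only.
def m1_ec_improve(ec_nsh_tot: float, a: float, b: float) -> float:
    # M1: direct regression of IMPROVE-TOR EC on NIOSH-TOT EC
    return a * ec_nsh_tot + b

def m2_ec_improve(aec_nsh: float, oc4_nsh: float, pc_nsh_tor: float,
                  a: float, b: float) -> float:
    # M2: rebuild IMPROVE-TOR EC from NIOSH analysis terms, regressing
    #     only the unknown pyrolyzed-carbon part
    return aec_nsh + oc4_nsh - (a * pc_nsh_tor + b)

print(m1_ec_improve(ec_nsh_tot=2.1, a=1.3, b=0.05))
print(m2_ec_improve(aec_nsh=2.1, oc4_nsh=0.6, pc_nsh_tor=0.4, a=0.9, b=0.02))
```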

  9. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Full Text Available Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean have been proposed in two-phase stratified sampling for different situations of non-response at the first phase and second phase. The expressions for the variances of these estimators have been derived. Furthermore, replication-based jackknife variance estimators of these variances have also been derived. A simulation study has been conducted to investigate the performance of the suggested estimators.

  10. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
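
    A minimal sketch of two of the reference-limit estimators discussed above, applied to a small subsample (n = 27) drawn from a skewed "population"; the lognormal data stand in for the skewed creatinine distribution and all numbers are illustrative, not the study's values.

```python
# Minimal sketch: nonparametric percentiles vs. mean +/- 2 SD after a log
# transform, each computed on a small (n = 27) subsample of a skewed
# population.  The lognormal population is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(1)
population = rng.lognormal(mean=4.4, sigma=0.25, size=1439)   # hypothetical
subsample = rng.choice(population, size=27, replace=False)

# Nonparametric 2.5th / 97.5th percentiles of the subsample
np_low, np_high = np.percentile(subsample, [2.5, 97.5])

# Parametric mean +/- 2 SD after a log transform (Box-Cox with lambda -> 0)
logs = np.log(subsample)
par_low = np.exp(logs.mean() - 2 * logs.std(ddof=1))
par_high = np.exp(logs.mean() + 2 * logs.std(ddof=1))

ref_low, ref_high = np.percentile(population, [2.5, 97.5])    # "true" limits
print(f"population     : {ref_low:.1f} - {ref_high:.1f}")
print(f"nonparametric  : {np_low:.1f} - {np_high:.1f}")
print(f"log-parametric : {par_low:.1f} - {par_high:.1f}")
```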

  11. Inverse sampled Bernoulli (ISB) procedure for estimating a population proportion, with nuclear material applications

    International Nuclear Information System (INIS)

    Wright, T.

    1982-01-01

    A new sampling procedure is introduced for estimating a population proportion. The procedure combines the ideas of inverse binomial sampling and Bernoulli sampling. An unbiased estimator is given with its variance. The procedure can be viewed as a generalization of inverse binomial sampling

  12. Limited-sampling strategy models for estimating the pharmacokinetic parameters of 4-methylaminoantipyrine, an active metabolite of dipyrone

    Directory of Open Access Journals (Sweden)

    Suarez-Kurtz G.

    2001-01-01

    Full Text Available Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R² > 0.95, bias [...] 0.85 of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R² > 0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R² > 0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.

  13. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293). takes advantage of additional sample...

  14. Comparison of three sampling protocols for water quality assessment using macroinvertebrates; Comparacion de tres protocolos de muestreo de macroinvertebrados para determinar la calidad del agua

    Energy Technology Data Exchange (ETDEWEB)

    Puertolas Domenech, L.; Rieradevall Sant, M.; Prat Fornells, N.

    2007-07-01

    The implementation of the Water Framework Directive (WFD, Directive 2000/60/CE) requires the establishment of standardized sampling protocols for the assessment of benthic fauna. In this paper, a comparative study of several sampling protocols currently used in Spain and Europe (AQEM, EPA and Guadalmed) has been carried out. Evaluating the three protocols against a list of 12 criteria, Guadalmed fits most of them best; it therefore appears to be an efficient tool for determining Ecological Status. (Author)

  15. Turbidity-controlled suspended sediment sampling for runoff-event load estimation

    Science.gov (United States)

    Jack Lewis

    1996-01-01

    Abstract - For estimating suspended sediment concentration (SSC) in rivers, turbidity is generally a much better predictor than water discharge. Although it is now possible to collect continuous turbidity data even at remote sites, sediment sampling and load estimation are still conventionally based on discharge. With frequent calibration the relation of turbidity to...

  16. Performance of sampling methods to estimate log characteristics for wildlife.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2004-01-01

    Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...

  17. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  18. Integration of GC-MSD and ER-Calux® assay into a single protocol for determining steroid estrogens in environmental samples.

    Science.gov (United States)

    Avberšek, Miha; Žegura, Bojana; Filipič, Metka; Heath, Ester

    2011-11-01

    There are many published studies that use either chemical or biological methods to investigate steroid estrogens in the aquatic environment, but rarer are those that combine both. In this study, gas chromatography with mass selective detection (GC-MSD) and the ER-Calux(®) estrogenicity assay were integrated into a single protocol for simultaneous determination of natural (estrone--E1, 17β-estradiol--E2, estriol--E3) and synthetic (17α-ethinylestradiol--EE2) steroid estrogens concentrations and the total estrogenic potential of environmental samples. For integration purposes, several solvents were investigated and the commonly used dimethyl sulphoxide (DMSO) in the ER-Calux(®) assay was replaced by ethyl acetate, which is more compatible with gas chromatography and enables the same sample to be analysed by both GC-MSD and the ER-Calux(®) assay. The integrated protocol was initially tested using a standard mixture of estrogens. The results for pure standards showed that the estrogenicity calculated on the basis of GC-MSD and the ER-Calux(®) assay exhibited good correlation (r²=0.96; α=0.94). The result remained the same when spiked waste water extracts were tested (r²=0.92, α=1.02). When applied to real waste water influent and effluent samples the results proved (r²=0.93; α=0.99) the applicability of the protocol. The main advantages of this newly developed protocol are simple sample handling for both methods, and reduced material consumption and labour. In addition, it can be applied as either a complete or sequential analysis where the ER-Calux(®) assay is used as a pre-screening method prior to the chemical analysis. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling

    Directory of Open Access Journals (Sweden)

    Bo Yu

    2015-01-01

    Full Text Available This paper considers the problem of estimation for binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply successive sampling scheme to improve the estimation of the sensitive proportion on current occasion.

  20. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
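
    For a scalar trace, the standard autocorrelation-based ESS is n divided by (1 + 2 × the sum of positive-lag autocorrelations); the record's contribution is extending the idea to tree topologies, which the sketch below does not attempt. The AR(1)-style chain used for illustration is hypothetical.

```python
# Minimal sketch of the standard autocorrelation-based ESS for a *scalar*
# MCMC trace: ESS = n / (1 + 2 * sum of positive-lag autocorrelations).
# Extending ESS to tree topologies (the record's contribution) is not
# attempted here.
import numpy as np

def effective_sample_size(trace) -> float:
    x = np.asarray(trace, dtype=float)
    n = x.size
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    s = 0.0
    for rho in acf[1:]:            # simple positive-autocorrelation cutoff
        if rho <= 0.0:
            break
        s += rho
    return n / (1.0 + 2.0 * s)

# Usage: an autocorrelated chain has far fewer effective samples than draws
rng = np.random.default_rng(0)
chain = np.zeros(10_000)
for i in range(1, chain.size):
    chain[i] = 0.9 * chain[i - 1] + rng.normal()
print(round(effective_sample_size(chain)))   # much less than 10,000
```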

  1. An empirical analysis of the precision of estimating the numbers of neurons and glia in human neocortex using a fractionator-design with sub-sampling

    DEFF Research Database (Denmark)

    Lyck, L.; Santamaria, I.D.; Pakkenberg, B.

    2009-01-01

    Improving histomorphometric analysis of the human neocortex by combining stereological cell counting with immunohistochemical visualisation of specific neuronal and glial cell populations is a methodological challenge. To enable standardized immunohistochemical staining, the amount of brain tissue to be stained and analysed by cell counting was efficiently reduced using a fractionator protocol involving several steps of sub-sampling. Since no mathematical or statistical tools exist to predict the variance originating from repeated sampling in complex structures like the human neocortex, the variance [...]. The results showed that it was possible, but not straightforward, to combine immunohistochemistry and the optical fractionator for estimation of specific subpopulations of brain cells in human neocortex. (C) 2009 Elsevier B.V. All rights reserved. Publication date: 2009/9/15

  2. Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-01-01

    Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the applications of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes

  3. Lactate minimum in a ramp protocol and its validity to estimate the maximal lactate steady state

    Directory of Open Access Journals (Sweden)

    Emerson Pardono

    2009-01-01

    Full Text Available http://dx.doi.org/10.5007/1980-0037.2009v11n2p174 The objectives of this study were to evaluate the validity of the lactate minimum (LM) test using a ramp protocol for the determination of LM intensity (LMI), and to estimate the exercise intensity corresponding to the maximal blood lactate steady state (MLSS). In addition, the possibility of determining aerobic and anaerobic fitness was investigated. Fourteen male cyclists of regional level performed one LM protocol on a cycle ergometer (Excalibur–Lode) consisting of an incremental test at an initial workload of 75 Watts, with increments of 1 Watt every 6 seconds. Hyperlactatemia was induced by a 30-second Wingate anaerobic test (WAT) (Monark–834E) at a workload corresponding to 8.57% of the volunteer's body weight. Peak power (11.5±2 Watts/kg), mean power output (9.8±1.7 Watts/kg), fatigue index (33.7±2.3%) and lactate 7 min after the WAT (10.5±2.3 mmol/L) were determined. The incremental test identified the LMI (207.8±17.7 Watts) and its respective blood lactate concentration (2.9±0.7 mmol/L), heart rate (153.6±10.6 bpm), and also maximal aerobic power (305.2±31.0 Watts). MLSS intensity was identified by 2 to 4 constant exercise tests (207.8±17.7 Watts), with no difference compared to the LMI and good agreement between the two parameters. The LM test using a ramp protocol seems to be a valid method for the identification of LMI and estimation of MLSS intensity in regional cyclists. In addition, both anaerobic and aerobic fitness parameters were identified during a single session.

  4. A 172 $\\mu$W Compressively Sampled Photoplethysmographic (PPG) Readout ASIC With Heart Rate Estimation Directly From Compressively Sampled Data.

    Science.gov (United States)

    Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian

    2017-06-01

    A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end together with a digital back end to perform feature extraction to estimate the average HR over a 4 s interval directly from compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed through nonuniformly subsampling the PPG signal, while feature extraction is performed using least square spectral fitting through the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of relevant information for accurate HR estimation.
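
    The back-end idea described above can be sketched in a few lines: take the non-uniform (compressively sampled) PPG timestamps and values, compute a Lomb-Scargle periodogram over the physiological band, and report the peak as the average HR. The synthetic 1.2 Hz (72 bpm) signal, the sampling pattern, and the band limits below are hypothetical.

```python
# Minimal sketch: average heart rate from non-uniformly (compressively)
# sampled PPG values via a Lomb-Scargle periodogram over the HR band.
# The synthetic signal and sampling pattern are hypothetical.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
fs, window = 125.0, 4.0                      # nominal rate (Hz), 4 s window
t_full = np.arange(0, window, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t_full) + 0.1 * rng.normal(size=t_full.size)

keep = np.sort(rng.choice(t_full.size, size=t_full.size // 10, replace=False))
t, y = t_full[keep], ppg[keep]               # roughly 10x "compressed" samples

f_grid = np.linspace(0.5, 3.5, 1000)         # 30-210 bpm search band
pgram = lombscargle(t, y - y.mean(), 2 * np.pi * f_grid)   # angular freqs
hr_bpm = 60.0 * f_grid[np.argmax(pgram)]
print(f"estimated HR ~ {hr_bpm:.0f} bpm")
```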

  5. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for the other, with an initial sample size of 10 plants; the others were obtained by adding five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined when the 95% confidence interval amplitude value was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation. Accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato with a 95% confidence interval amplitude equal to 0.4, it is necessary to sample 275 plants in a 250m² greenhouse, and 200 plants in a 200m² greenhouse.
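
    A minimal sketch of the planning rule described above: for each candidate sample size, bootstrap the Pearson correlation and accept the smallest n whose 95% percentile interval is no wider than 0.4. The bivariate data are simulated stand-ins, not the trial's measurements; the 3,000 resamples mirror the record's choice.

```python
# Minimal sketch: smallest sample size whose bootstrapped 95% percentile
# interval for the Pearson correlation is no wider than 0.4.
# The bivariate "plant" data below are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_pop, rho = 500, 0.45                          # hypothetical population
x = rng.normal(size=n_pop)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n_pop)

def ci_width(xs, ys, n, boots=3000):
    rs = np.empty(boots)
    for b in range(boots):
        idx = rng.integers(0, n, size=n)        # resample with replacement
        rs[b] = np.corrcoef(xs[idx], ys[idx])[0, 1]
    lo, hi = np.percentile(rs, [2.5, 97.5])
    return hi - lo

for n in range(50, n_pop + 1, 50):
    w = ci_width(x[:n], y[:n], n)
    print(f"n={n:3d}  95% CI width={w:.2f}")
    if w <= 0.4:
        break
```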

  6. Fixed-location hydroacoustic monitoring designs for estimating fish passage using stratified random and systematic sampling

    International Nuclear Information System (INIS)

    Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.

    1993-01-01

    Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither magnitude nor direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is further shown to provide unbiased point and variance estimates as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimates are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs

  7. A single-aliquot OSL protocol using bracketing regenerative doses to accurately determine equivalent doses in quartz

    CERN Document Server

    Folz, E

    1999-01-01

    In most cases, sediments show inherent heterogeneity in their luminescence behaviours and bleaching histories, and identical aliquots are not available: single-aliquot determination of the equivalent dose (ED) is then the approach of choice and the advantages of using regenerative protocols are outlined. Experiments on five laboratory bleached and dosed quartz samples, following the protocol described by Murray and Roberts (1998. Measurement of the equivalent dose in quartz using a regenerative-dose single aliquot protocol. Radiation Measurements 27, 171-184), showed the hazards of using a single regeneration dose: a 10% variation in the regenerative dose yielded some equivalent dose estimates that differed from the expected value by more than 5%. A protocol is proposed that allows the use of different regenerative doses to bracket the estimated equivalent dose. The measured ED is found to be in excellent agreement with the known value when the main regeneration dose is within 10% of the true equivalent dose.

  8. A single-aliquot OSL protocol using bracketing regenerative doses to accurately determine equivalent doses in quartz

    International Nuclear Information System (INIS)

    Folz, Elise; Mercier, Norbert

    1999-01-01

    In most cases, sediments show inherent heterogeneity in their luminescence behaviours and bleaching histories, and identical aliquots are not available: single-aliquot determination of the equivalent dose (ED) is then the approach of choice and the advantages of using regenerative protocols are outlined. Experiments on five laboratory bleached and dosed quartz samples, following the protocol described by Murray and Roberts (1998. Measurement of the equivalent dose in quartz using a regenerative-dose single aliquot protocol. Radiation Measurements 27, 171-184), showed the hazards of using a single regeneration dose: a 10% variation in the regenerative dose yielded some equivalent dose estimates that differed from the expected value by more than 5%. A protocol is proposed that allows the use of different regenerative doses to bracket the estimated equivalent dose. The measured ED is found to be in excellent agreement with the known value when the main regeneration dose is within 10% of the true equivalent dose

  9. Protocol converter for serial communication between digital rectifier controllers and a power plant SCADA system

    Directory of Open Access Journals (Sweden)

    Vukić Vladimir Đ.

    2016-01-01

    The paper describes the protocol converter INT-485-MBRTU, developed for serial communication between the thyristor rectifier (based on the proprietary protocol "INT-CPD-05", according to standard RS-485) and the SCADA system (based on the protocol "Modbus RTU" of the same standard) in the thermal power plant "Nikola Tesla B1". Elementary data on industrial communication protocols and communication gateways are provided. The basic technical characteristics of the "Omron" CJ-series programmable logic controller are described, as well as the developed device INT-485-MBRTU. Protocol converters with two versions of communication software were tested, differing only in one control word intended for a forced successive change of communication sequences, as opposed to automatic sequence release. The device INT-485-MBRTU, with the program for forced successive change of communication sequences, demonstrated a data-transfer reliability of 100% in a sample of approximately 480 messages. For nearly the same sample, the same protocol converter, with a version of the program without any type of message identifiers, transferred less than 60% of the foreseen data. During multiple sixty-hour tests, a data-transfer reliability of at least 99.9979% was recorded, in 100% of the analysed cases, for a sample of nearly 96,000 pairs of sent and received messages. We analysed the results and estimated the additional possibilities for application of the INT-485-MBRTU protocol converter.

  10. Toxoplasma gondii and pre-treatment protocols for polymerase chain reaction analysis of milk samples: a field trial in sheep from Southern Italy

    Directory of Open Access Journals (Sweden)

    Alice Vismarra

    2017-02-01

    Toxoplasmosis is a zoonotic disease caused by the protozoan Toxoplasma gondii. Ingestion of raw milk has been suggested as a risk for transmission to humans. Here the authors evaluated pre-treatment protocols for DNA extraction on T. gondii tachyzoite-spiked sheep milk, with the aim of identifying the method that yields the most rapid and reliable polymerase chain reaction (PCR) positivity. This protocol was then used to analyse milk samples from sheep of three different farms in Southern Italy, including real-time PCR for DNA quantification and PCR-restriction fragment length polymorphism for genotyping. The pre-treatment protocol using ethylenediaminetetraacetic acid and Tris-HCl to remove casein gave the best results in the least amount of time compared to the others on spiked milk samples. One of the 21 samples collected from the sheep farms was positive on one-step PCR and real-time PCR, and was assigned a Type I genotype at one locus (SAG3). Milk usually contains a low number of tachyzoites, which can be a limiting factor for molecular identification. These preliminary data establish a rapid, cost-effective and sensitive protocol for treating milk before DNA extraction. The results of the present study also confirm the possibility of T. gondii transmission through consumption of raw milk and its unpasteurised derivatives.

  11. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and the Discrete Fourier Transform (DFT) inevitably cause spectral leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and dc component are obtained by minimizing the least squares (LS) fitting error of a three-parameter sine fit. By setting reasonable stop conditions or a fixed number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated against the unbiased Cramer-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimate follows the trend of the CRLB as SNR increases, even for a small number of samples. The average RMSE of the frequency estimate is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
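
    The core of the method can be sketched in a few lines: three FFT samples around the spectral peak bound the search interval, and a golden-section search refines the frequency by minimizing the least-squares residual of a three-parameter sine fit. The sketch below illustrates this idea; it is not the authors' implementation, and the signal parameters and function names are assumptions.

```python
# Minimal sketch: FFT-bounded golden-section search over a three-parameter sine fit.
import numpy as np

def sine_fit_residual(x, t, f):
    """Least-squares residual of x ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C."""
    H = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(H, x, rcond=None)
    return np.sum((x - H @ coef) ** 2), coef

def estimate_frequency(x, fs, tol=1e-6):
    N = len(x)
    t = np.arange(N) / fs
    k = np.argmax(np.abs(np.fft.rfft(x)[1:N // 2])) + 1    # peak bin (skip DC)
    a, b = (k - 1) * fs / N, (k + 1) * fs / N               # scope from 3 FFT samples
    g = (np.sqrt(5) - 1) / 2                                 # golden-section ratio
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if sine_fit_residual(x, t, c)[0] < sine_fit_residual(x, t, d)[0]:
            b = d
        else:
            a = c
    f_hat = 0.5 * (a + b)
    _, (A, B, C) = sine_fit_residual(x, t, f_hat)
    return f_hat, np.hypot(A, B), np.arctan2(-B, A), C       # freq, amplitude, phase, dc

# Example: noisy single tone at 123.4 Hz, fs = 1 kHz, N = 512
rng = np.random.default_rng(0)
fs, N, f0 = 1000.0, 512, 123.4
x = np.cos(2 * np.pi * f0 * np.arange(N) / fs + 0.3) + 0.1 * rng.standard_normal(N)
print(estimate_frequency(x, fs))
```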

  12. Estimates and sampling schemes for the instrumentation of accountability systems

    International Nuclear Information System (INIS)

    Jewell, W.S.; Kwiatkowski, J.W.

    1976-10-01

    The problem of estimation of a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-groups error variances may vary from group to group at each level of the structure; minimum mean squared-error estimators are developed, and the case where the physical quantity is a random variable with known prior mean and variance is included. Estimators for the error variances are also given, and optimization of experimental design is considered

  13. Increasing fMRI sampling rate improves Granger causality estimates.

    Directory of Open Access Journals (Sweden)

    Fa-Hsuan Lin

    Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC-interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
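
    As a rough illustration of the downsampling effect described above, the sketch below simulates two coupled signals at a 100-ms step and tests Granger causality before and after decimation to a 2-s step. It uses the statsmodels Granger test on synthetic data; the coupling strength, lag, and noise levels are assumptions, not the study's pipeline or data.

```python
# Toy demonstration that coarse temporal sampling can hide a Granger-causal link.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n, lag_ms, dt_ms = 20000, 300, 100          # 100-ms sampling, ~300-ms assumed delay
lag = lag_ms // dt_ms
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(lag, n):
    y[t] = 0.6 * x[t - lag] + 0.3 * y[t - 1] + rng.standard_normal()

def min_p(data, maxlag):
    """Smallest F-test p-value over lags: does column 1 Granger-cause column 0?"""
    res = grangercausalitytests(data, maxlag, verbose=False)
    return min(r[0]["ssr_ftest"][1] for r in res.values())

fast = np.column_stack([y, x])              # 100-ms resolution
slow = fast[::20]                           # downsampled to 2-s resolution
print("p (100 ms):", min_p(fast, maxlag=5))
print("p (2 s)   :", min_p(slow, maxlag=5))
```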

  14. Zinc estimates in ore and slag samples and analysis of ash in coal samples

    International Nuclear Information System (INIS)

    Umamaheswara Rao, K.; Narayana, D.G.S.; Subrahmanyam, Y.

    1984-01-01

    Zinc estimates in ore and slag samples were made using the radioisotope X-ray fluorescence method. A 10 mCi ²³⁸Pu source was employed as the primary source of radiation, and a thin-crystal NaI(Tl) spectrometer was used to detect the 8.64 keV zinc K-characteristic X-ray line. The results are reported. The ash content of about 100 coal samples from the Ravindra Khani VI and VII mines in Andhra Pradesh was measured using the X-ray backscattering method, with compensation for the varying iron concentration in different coal samples through iron X-ray fluorescence intensity measurements. The ash percentage is found to range from 10 to 40. (author)

  15. Low incidence of clonality in cold water corals revealed through the novel use of a standardized protocol adapted to deep sea sampling

    Science.gov (United States)

    Becheler, Ronan; Cassone, Anne-Laure; Noël, Philippe; Mouchel, Olivier; Morrison, Cheryl L.; Arnaud-Haond, Sophie

    2017-11-01

    Sampling in the deep sea is a technical challenge, which has hindered the acquisition of robust datasets that are necessary to determine the fine-grained biological patterns and processes that may shape genetic diversity. Estimates of the extent of clonality in deep-sea species, despite the importance of clonality in shaping the local dynamics and evolutionary trajectories, have been largely obscured by such limitations. Cold-water coral reefs along European margins are formed mainly by two reef-building species, Lophelia pertusa and Madrepora oculata. Here we present a fine-grained analysis of the genotypic and genetic composition of reefs occurring in the Bay of Biscay, based on an innovative deep-sea sampling protocol. This strategy was designed to be standardized, random, and allowed the georeferencing of all sampled colonies. Clonal lineages discriminated through their Multi-Locus Genotypes (MLG) at 6-7 microsatellite markers could thus be mapped to assess the level of clonality and the spatial spread of clonal lineages. High values of clonal richness were observed for both species across all sites suggesting a limited occurrence of clonality, which likely originated through fragmentation. Additionally, spatial autocorrelation analysis underlined the possible occurrence of fine-grained genetic structure in several populations of both L. pertusa and M. oculata. The two cold-water coral species examined had contrasting patterns of connectivity among canyons, with among-canyon genetic structuring detected in M. oculata, whereas L. pertusa was panmictic at the canyon scale. This study exemplifies that a standardized, random and georeferenced sampling strategy, while challenging, can be applied in the deep sea, and associated benefits outlined here include improved estimates of fine grained patterns of clonality and dispersal that are comparable across sites and among species.

  16. Estimation of Listeria monocytogenes and Escherichia coli O157:H7 prevalence and levels in naturally contaminated rocket and cucumber samples by deterministic and stochastic approaches.

    Science.gov (United States)

    Hadjilouka, Agni; Mantzourani, Kyriaki-Sofia; Katsarou, Anastasia; Cavaiuolo, Marina; Ferrante, Antonio; Paramithiotis, Spiros; Mataragas, Marios; Drosinos, Eleftherios H

    2015-02-01

    The aims of the present study were to determine the prevalence and levels of Listeria monocytogenes and Escherichia coli O157:H7 in rocket and cucumber samples by deterministic (estimation of a single value) and stochastic (estimation of a range of values) approaches. In parallel, the chromogenic media commonly used for the recovery of these microorganisms were evaluated and compared, and the efficiency of an enzyme-linked immunosorbent assay (ELISA)-based protocol was validated. L. monocytogenes and E. coli O157:H7 were detected and enumerated using agar Listeria according to Ottaviani and Agosti plus RAPID' L. mono medium and Fluorocult plus sorbitol MacConkey medium with cefixime and tellurite in parallel, respectively. Identity was confirmed with biochemical and molecular tests and the ELISA. Performance indices of the media and the prevalence of both pathogens were estimated using Bayesian inference. In rocket, prevalence of both L. monocytogenes and E. coli O157:H7 was estimated at 7% (7 of 100 samples). In cucumber, prevalence was 6% (6 of 100 samples) and 3% (3 of 100 samples) for L. monocytogenes and E. coli O157:H7, respectively. The levels derived from the presence-absence data using Bayesian modeling were estimated at 0.12 CFU/25 g (0.06 to 0.20) and 0.09 CFU/25 g (0.04 to 0.170) for L. monocytogenes in rocket and cucumber samples, respectively. The corresponding values for E. coli O157:H7 were 0.59 CFU/25 g (0.43 to 0.78) and 1.78 CFU/25 g (1.38 to 2.24), respectively. The sensitivity and specificity of the culture media differed for rocket and cucumber samples. The ELISA technique had a high level of cross-reactivity. Parallel testing with at least two culture media was required to achieve a reliable result for L. monocytogenes or E. coli O157:H7 prevalence in rocket and cucumber samples.

  17. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
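
    For readers who want a working calculation, the sketch below implements the common normal-approximation per-group sample-size formulas for two independent proportions under non-inferiority, equivalence, and superiority hypotheses. These follow the standard textbook forms (e.g., Chow, Shao and Wang) and may differ in detail from the formulas and SAS code presented in the article.

```python
# Hedged sketch of sample-size formulas for two independent binary-response groups.
from math import ceil
from scipy.stats import norm

def n_noninferiority(p_t, p_c, margin, alpha=0.025, power=0.8):
    """margin > 0: treatment may be worse than control by less than `margin`."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    return ceil(z**2 * var / ((p_t - p_c) + margin) ** 2)

def n_equivalence(p_t, p_c, margin, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha) + norm.ppf((1 + power) / 2)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    return ceil(z**2 * var / (margin - abs(p_t - p_c)) ** 2)

def n_superiority(p_t, p_c, margin, alpha=0.025, power=0.8):
    """margin > 0: treatment must beat control by more than `margin`."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    return ceil(z**2 * var / ((p_t - p_c) - margin) ** 2)

# Example: 80% response in both arms, 10% non-inferiority margin -> ~252 per group
print(n_noninferiority(0.80, 0.80, 0.10))
```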

  18. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.
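
    The central reconstruction idea, recovering the graph power spectrum by least squares from the covariance of a subsampled vertex set, can be illustrated compactly. The toy sketch below is not the paper's algorithm or greedy sampler; the random graph, the vertex subset, the assumed power spectrum, and the use of the exact (noise-free) covariance are all simplifying assumptions, and recovery holds only when the printed design rank is full.

```python
# Toy sketch: least-squares graph power-spectrum recovery from a vertex subsample.
import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 12                                   # total vertices, observed vertices
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                     # random undirected graph
L = np.diag(A.sum(axis=1)) - A                  # combinatorial Laplacian
lam, U = np.linalg.eigh(L)                      # graph Fourier basis

p_true = np.exp(-lam)                           # assumed smooth power spectrum
C_full = U @ np.diag(p_true) @ U.T              # exact covariance on all nodes

S = rng.choice(N, size=M, replace=False)        # subsampled vertex set
C_sub = C_full[np.ix_(S, S)]                    # covariance we actually observe

# vec(C_sub) = sum_k p_k * vec( u_k[S] u_k[S]^T ), i.e. linear in the spectrum p
US = U[S, :]
G = np.stack([np.outer(US[:, k], US[:, k]).ravel() for k in range(N)], axis=1)
p_hat, *_ = np.linalg.lstsq(G, C_sub.ravel(), rcond=None)

print("design rank (need", N, "):", np.linalg.matrix_rank(G))
print("max spectrum error:", np.max(np.abs(p_hat - p_true)))
```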

  19. Infusion and sampling site effects on two-pool model estimates of leucine metabolism

    International Nuclear Information System (INIS)

    Helland, S.J.; Grisdale-Helland, B.; Nissen, S.

    1988-01-01

    To assess the effect of the site of isotope infusion on estimates of leucine metabolism, infusions of alpha-[4,5-³H]ketoisocaproate (KIC) and [U-¹⁴C]leucine were made into the left or right ventricles of sheep and pigs. Blood was sampled from the opposite ventricle. In both species, left ventricular infusions resulted in significantly lower specific radioactivities (SA) of [¹⁴C]leucine and [³H]KIC. [¹⁴C]KIC SA was found to be insensitive to infusion and sampling sites. In addition, [¹⁴C]KIC SA was equal to the SA of [¹⁴C]leucine only during the left heart infusions. Therefore, [¹⁴C]KIC SA was used as the only estimate of [¹⁴C] SA in the equations for the two-pool model. This model eliminated the influence of the site of infusion and blood sampling on the estimates of leucine entry and reduced the impact on the estimates of proteolysis and oxidation. This two-pool model could not compensate for the underestimation of transamination reactions occurring during the traditional venous isotope infusion and arterial blood sampling.

  20. Protocols of radiocontaminant air monitoring for inhalation exposure estimates

    International Nuclear Information System (INIS)

    Shinn, J.H.

    1995-09-01

    Monitoring the plutonium and americium particle emissions from soils contaminated during atmospheric nuclear testing or due to accidental releases is important for several reasons. First, it is important to quantify the extent of potential human exposure from inhalation of alpha-emitting particles, which is the major exposure pathway from transuranic radionuclides. Second, the information provided by resuspension monitoring is the basis of criteria that determine the target soil concentrations for management and cleanup of contaminated soil sites. There are other radioactive aerosols, such as the fission products (cesium and strontium) and neutron-activation products (europium isotopes), which may be resuspended and are therefore necessary to monitor as well. This Standard Protocol (SP) provides the method used for radiocontaminant air monitoring by the Health and Ecological Assessment Division (formerly Environmental Sciences Division), Lawrence Livermore National Laboratory, as developed and tested at the Nevada Test Site (NTS) and in the Marshall Islands. The objective of this SP is to document the applications and methods of monitoring all the relevant variables. This protocol deals only with measuring air concentrations of radionuclides and total suspended particulates (TSP, or "dust"). A separate protocol presents the more difficult measurements required to determine transuranic aerosol emission rates, or the "resuspension rate".

  1. A simple approach to estimate soil organic carbon and soil CO₂ emission

    International Nuclear Information System (INIS)

    Abbas, F.

    2013-01-01

    SOC (soil organic carbon) and soil CO₂ (carbon dioxide) emission are among the indicators of carbon sequestration and hence of global climate change. Researchers in developed countries benefit from advanced technologies to estimate C (carbon) sequestration. However, access to the latest technologies has always been challenging in developing countries when conducting such estimates. This paper presents a simple and comprehensive approach for estimating SOC and soil CO₂ emission from arable and forest soils. The approach includes various protocols that can be followed in laboratories of research organizations or academic institutions equipped with basic research instruments and technology. The protocols involve soil sampling, sample analysis for selected properties, and the use of the worldwide-tested Rothamsted carbon turnover model. With this approach, it is possible to quantify SOC and soil CO₂ emission on a short- and long-term basis for global climate change assessment studies. (author)

  2. Effective dose comparison between protocols stitched and usual protocols in dental cone beam CT for complete arcade

    International Nuclear Information System (INIS)

    Soares, M. R.; Maia, A. F.; Batista, W. O. G.; Lara, P. A.

    2014-08-01

    For visualizing the complete dental arcade, dental radiology currently offers two alternative approaches: [1] protocols with a single field of view (Fov) whose diameter encompasses the entire arch, or [2] protocols with multiple Fovs which together encompass the entire arch (stitched Fovs). The objective of this study is to evaluate effective dose values for whole-arcade examination protocols available on different units with these two options. For this, a female anthropomorphic phantom manufactured by Radiology Support Devices was used, with twenty-six thermoluminescent dosimeters inserted in relevant organs and positions, and the phantom was irradiated under clinical conditions. The protocols evaluated and compared were: [a] 14.0 cm x 8.5 cm and [b] 8.5 cm x 8.5 cm (Gendex GXCB 500 tomograph), [c] a stitched protocol for the jaw combining three volumes of 5.0 cm x 3.7 cm (Kodak 9000 3D scanner), [d] a stitched-Fov protocol of 5.0 cm x 8.0 cm (Planmeca Pro Max 3D) and [e] a single Fov of 14 cm x 8 cm (i-CAT Classical). The effective dose ranged between 43.1 and 111.1 µSv for the single-Fov protocols and between 44.5 and 236.2 µSv for the stitched-Fov protocols. The protocol with the highest estimated effective dose was [d], and the lowest value was registered for [a]. These results demonstrate that the stitched-Fov protocol of the Kodak 9000 3D machine, applied to the upper dental arch, yields an effective dose practically equal to that obtained with the extended-diameter protocol [a], which evaluates the upper and lower arcades in a single image. They also show that protocol [d] gives an estimate five times higher than protocol [a]. Thus, we conclude that, in practical terms, the stitched-Fov protocol [c] presents no dosimetric advantages over the other protocols. (Author)

  3. Effective dose comparison between protocols stitched and usual protocols in dental cone beam CT for complete arcade

    Energy Technology Data Exchange (ETDEWEB)

    Soares, M. R.; Maia, A. F. [Universidade Federal de Sergipe, Departamento de Fisica, Cidade Universitaria Prof. Jose Aloisio de Campos, Marechal Rondon s/n, Jardim Rosa Elze, 49-100000 Sao Cristovao, Sergipe (Brazil); Batista, W. O. G. [Instituto Federal da Bahia, Rua Emidio dos Santos s/n, Barbalho, Salvador, 40301015 Bahia (Brazil); Lara, P. A., E-mail: wilsonottobatista@gmail.com [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    For visualizing the complete dental arcade, dental radiology currently offers two alternative approaches: [1] protocols with a single field of view (Fov) whose diameter encompasses the entire arch, or [2] protocols with multiple Fovs which together encompass the entire arch (stitched Fovs). The objective of this study is to evaluate effective dose values for whole-arcade examination protocols available on different units with these two options. For this, a female anthropomorphic phantom manufactured by Radiology Support Devices was used, with twenty-six thermoluminescent dosimeters inserted in relevant organs and positions, and the phantom was irradiated under clinical conditions. The protocols evaluated and compared were: [a] 14.0 cm x 8.5 cm and [b] 8.5 cm x 8.5 cm (Gendex GXCB 500 tomograph), [c] a stitched protocol for the jaw combining three volumes of 5.0 cm x 3.7 cm (Kodak 9000 3D scanner), [d] a stitched-Fov protocol of 5.0 cm x 8.0 cm (Planmeca Pro Max 3D) and [e] a single Fov of 14 cm x 8 cm (i-CAT Classical). The effective dose ranged between 43.1 and 111.1 µSv for the single-Fov protocols and between 44.5 and 236.2 µSv for the stitched-Fov protocols. The protocol with the highest estimated effective dose was [d], and the lowest value was registered for [a]. These results demonstrate that the stitched-Fov protocol of the Kodak 9000 3D machine, applied to the upper dental arch, yields an effective dose practically equal to that obtained with the extended-diameter protocol [a], which evaluates the upper and lower arcades in a single image. They also show that protocol [d] gives an estimate five times higher than protocol [a]. Thus, we conclude that, in practical terms, the stitched-Fov protocol [c] presents no dosimetric advantages over the other protocols. (Author)

  4. Conditional estimation of exponential random graph models from snowball sampling designs

    NARCIS (Netherlands)

    Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng

    2013-01-01

    A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members

  5. Estimates of laboratory accuracy and precision on Hanford waste tank samples

    International Nuclear Information System (INIS)

    Dodd, D.A.

    1995-01-01

    A review was performed on three sets of analyses generated by Battelle, Pacific Northwest Laboratories and three sets generated by the Westinghouse Hanford Company 222-S Analytical Laboratory. Laboratory accuracy and precision were estimated by analyte and are reported in tables. The data set used to generate these estimates is of limited size, but it does include the physical forms, liquid and solid, that are representative of samples from the tanks to be characterized. The estimates were published as an aid to programs developing data quality objectives in which specified limits are established. Data resulting from routine analyses of waste matrices can be expected to be bounded by the precision and accuracy estimates in the tables. These tables do not preclude or discourage direct negotiations between program and laboratory personnel when establishing bounding conditions. Programmatic requirements different from those listed may be reliably met on specific measurements and matrices. It should be recognized, however, that these estimates are specific to waste tank matrices and may not be indicative of performance on samples from other sources.

  6. A Probabilistic Mass Estimation Algorithm for a Novel 7- Channel Capacitive Sample Verification Sensor

    Science.gov (United States)

    Wolf, Michael

    2012-01-01

    A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of the channel's variance.
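
    The fusion step described above, an inverse-variance-weighted combination of per-channel Gaussian estimates, can be written in a few lines. The sketch below is illustrative only; the per-channel mass estimates and variances are made-up placeholders rather than SVS calibration data.

```python
# Inverse-variance fusion of several Gaussian estimates into one mean/variance pair.
import numpy as np

def fuse_estimates(means, variances):
    """Combine independent Gaussian estimates, weighting each by 1/variance."""
    w = 1.0 / np.asarray(variances)
    var = 1.0 / w.sum()
    mean = var * np.sum(w * np.asarray(means))
    return mean, var

# Seven hypothetical per-channel mass estimates (g) and their calibration variances
channel_means = np.array([101.0, 98.5, 103.2, 99.0, 100.4, 97.8, 102.1])
channel_vars  = np.array([4.0,   9.0,  6.0,   5.0,  4.5,   8.0,  7.0])

mass, var = fuse_estimates(channel_means, channel_vars)
print(f"fused mass = {mass:.1f} g, 1-sigma = {var**0.5:.2f} g")
```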

  7. Estimation of tritium activity in bioassay samples having chemiluminescence

    International Nuclear Information System (INIS)

    Dwivedi, R.K.; Manu, Kumar; Kumar, Vinay; Soni, Ashish; Kaushik, A.K.; Tiwari, S.K.; Gupta, Ashok

    2008-01-01

    Tritium is recognized as a major internal dose contributor in PHWR-type reactors. Estimation of the internal dose due to tritium is carried out by analyzing urine samples in a liquid scintillation analyzer (LSA). The presence of residual biochemical species in the urine samples of some individuals under medical administration produces a significant amount of chemiluminescence. If appropriate care is not taken, the results obtained by the liquid scintillation counter may be mistaken for a genuine uptake of tritium. The distillation method described in this paper is used at RAPS-3 and 4 to assess the correct tritium uptake. (author)

  8. Thermally assisted OSL application for equivalent dose estimation; comparison of multiple equivalent dose values as well as saturation levels determined by luminescence and ESR techniques for a sedimentary sample collected from a fault gouge

    Energy Technology Data Exchange (ETDEWEB)

    Şahiner, Eren, E-mail: sahiner@ankara.edu.tr; Meriç, Niyazi, E-mail: meric@ankara.edu.tr; Polymeris, George S., E-mail: gspolymeris@ankara.edu.tr

    2017-02-01

    Highlights: • Multiple equivalent dose estimations were carried out. • Additive ESR and regenerative luminescence were applied. • Preliminary SAR results employing the TA-OSL signal were discussed. • Saturation levels of ESR and luminescence were investigated. • IRSL₁₇₅ and SAR TA-OSL stand out as very promising for large doses. - Abstract: Equivalent dose (De) estimation constitutes the most important part of either trap-charge dating techniques or dosimetry applications. In the present work, multiple, independent equivalent dose estimation approaches were adopted, using both luminescence and ESR techniques; two different minerals were studied, namely quartz as well as feldspathic polymineral samples. The work is divided into three independent parts, depending on the type of signal employed. Firstly, different De estimation approaches were carried out on both polymineral and contaminated quartz, using single aliquot regenerative dose protocols employing conventional OSL and IRSL signals, acquired at different temperatures. Secondly, ESR equivalent dose estimations using the additive dose procedure both at room temperature and at 90 K were discussed. Lastly, for the first time in the literature, a single aliquot regenerative protocol employing a thermally assisted OSL signal originating from Very Deep Traps was applied to natural minerals. Rejection criteria such as recycling and recovery ratios are also presented. The SAR protocol, whenever applied, provided compatible De estimates with great accuracy, independent of either the type of mineral or the stimulation temperature. Low-temperature ESR signals resulting from Al and Ti centers indicate very large De values due to bleaching inability, associated with large uncertainty values. Additionally, dose saturation of the different approaches was investigated. For the signal arising from Very Deep Traps in quartz, saturation is extended almost by one order of magnitude. It is

  9. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
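
    The comparison described above can be reproduced in miniature: for a toy model with one shared systematic error, the likelihood computed with the full covariance (matrix inversion) can be compared against a Monte Carlo average over sampled systematic errors. The model, sigmas, and data below are assumptions for illustration, not the nuclear-data setup of the study.

```python
# Toy comparison: exact multivariate-Gaussian likelihood vs sampling of systematic errors.
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)
n, sig_r, sig_s = 10, 0.05, 0.10            # points, random and systematic sigmas
t = np.linspace(1.0, 2.0, n)                # model prediction for the observable
y = t + rng.normal(0, sig_r, n) + rng.normal(0, sig_s)   # one simulated "experiment"

# Conventional route: multivariate Gaussian with the full (inverted) covariance
cov = sig_r**2 * np.eye(n) + sig_s**2 * np.ones((n, n))
L_exact = multivariate_normal(mean=t, cov=cov).pdf(y)

# Sampling route: average the independent-error likelihood over systematic draws
def L_sampled(n_sys):
    s = rng.normal(0, sig_s, n_sys)                       # systematic error samples
    per_draw = norm.pdf(y[None, :], loc=t[None, :] + s[:, None], scale=sig_r)
    return per_draw.prod(axis=1).mean()

for k in (10, 1000, 100000):
    print(k, L_sampled(k) / L_exact)        # ratio approaches 1, but slowly
```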

  10. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  11. Bioinspired Security Analysis of Wireless Protocols

    DEFF Research Database (Denmark)

    Petrocchi, Marinella; Spognardi, Angelo; Santi, Paolo

    2016-01-01

    work, this paper investigates feasibility of adopting fraglets as model for specifying security protocols and analysing their properties. In particular, we give concrete sample analyses over a secure RFID protocol, showing evolution of the protocol run as chemical dynamics and simulating an adversary...

  12. Assessment of cerebrospinal fluid system dynamics : novel infusion protocol, mathematical modelling and parameter estimation for hydrocephalus investigations

    OpenAIRE

    Andersson, Kennet

    2011-01-01

    Patients with idiopathic normal pressure hydrocephalus (INPH) have a disturbance in the cerebrospinal fluid (CSF) system. The treatment is neurosurgical – a shunt is placed in the CSF system. The infusion test is used to assess CSF system dynamics and to aid in the selection of patients that will benefit from shunt surgery. The infusion test can be divided into three parts: a mathematical model, an infusion protocol and a parameter estimation method. A non-linear differential equation is used...

  13. Measurement of the equivalent dose in quartz using a regenerative-dose single-aliquot protocol

    International Nuclear Information System (INIS)

    Murray, A.S.; Roberts, R.G.

    1998-01-01

    The principles behind a regenerative-dose single-aliquot protocol are outlined. It is shown for three laboratory-bleached Australian sedimentary quartz samples that the relative change in sensitivity of the optically stimulated luminescence (OSL) during a repeated measurement cycle (consisting of a dose followed by a 10 s preheat at a given temperature and then a 100 s exposure to blue/green light at 125 °C) is very similar to that of the 110 °C thermoluminescence (TL) peak measured during the preheat cycle. The absolute change in the TL sensitivity with preheat temperature is different for samples containing a natural or a regenerative dose. Furthermore, the absolute change in sensitivity in both the OSL and TL signals is non-linear with regeneration cycle, but the relative change in the OSL signal compared to the following 110 °C TL measurement is well approximated by a straight line. Both signals are thought to use the same luminescence centres, and so some common behaviour is not unexpected. A new regenerative-dose protocol is presented which makes use of this linear relationship to correct for sensitivity changes with regeneration cycle, and requires only one aliquot for the estimation of the equivalent dose (De). The protocol has been applied to quartz from nine Australian sites. To illustrate the value of the regenerative-dose single-aliquot approach, the apparent values of De for 13 samples, containing doses of between 0.01 and 100 Gy, have been measured at various preheat temperatures of between 160 and 300 °C, using a single aliquot for each De measurement. Excellent agreement is found between these single-aliquot estimates of De and those obtained from additive-dose multiple-aliquot and single-aliquot protocols, over the entire dose range.

  14. Reliability estimation system: its application to the nuclear geophysical sampling of ore deposits

    International Nuclear Information System (INIS)

    Khaykovich, I.M.; Savosin, S.I.

    1992-01-01

    The reliability estimation system accepted in the Soviet Union for sampling data in nuclear geophysics is based on unique requirements in metrology and methodology. It involves estimating characteristic errors in calibration, as well as errors in measurement and interpretation. This paper describes the methods of estimating the levels of systematic and random errors at each stage of the problem. The data of nuclear geophysics sampling are considered to be reliable if there are no statistically significant, systematic differences between ore intervals determined by this method and by geological control, or by other methods of sampling; the reliability of the latter having been verified. The difference between the random errors is statistically insignificant. The system allows one to obtain information on the parameters of ore intervals with a guaranteed random error and without systematic errors. (Author)

  15. Estimating time to pregnancy from current durations in a cross-sectional sample

    DEFF Research Database (Denmark)

    Keiding, Niels; Kvist, Kajsa; Hartvig, Helle

    2002-01-01

    A new design for estimating the distribution of time to pregnancy is proposed and investigated. The design is based on recording current durations in a cross-sectional sample of women, leading to statistical problems similar to estimating renewal time distributions from backward recurrence times....

  16. Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality

    Science.gov (United States)

    Hirsch, Robert M.

    1988-01-01

    This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
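
    A seasonal Hodges-Lehmann step-trend estimate can be computed as the median of all between-period differences formed within matching seasons. The sketch below follows that idea on synthetic monthly data; it is not the paper's code, and the multiplicative data model and season definition are assumptions.

```python
# Sketch of a seasonal Hodges-Lehmann estimator of a step change between two periods.
import numpy as np

def seasonal_hodges_lehmann(before, after, season_before, season_after):
    """Median of pairwise (after - before) differences restricted to matching seasons."""
    diffs = []
    for s in np.unique(season_before):
        a = after[season_after == s]
        b = before[season_before == s]
        if a.size and b.size:
            diffs.append((a[:, None] - b[None, :]).ravel())
    return np.median(np.concatenate(diffs))

rng = np.random.default_rng(0)
months = np.tile(np.arange(12), 4)                            # 4 years of monthly data
seasonal = 0.5 * np.sin(2 * np.pi * months / 12)
conc_before = np.exp(seasonal + rng.normal(0, 0.3, months.size))               # pre-change
conc_after = np.exp(seasonal + np.log(0.7) + rng.normal(0, 0.3, months.size))  # ~30% drop

print("estimated step change:",
      seasonal_hodges_lehmann(conc_before, conc_after, months, months))
```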

  17. Is a 'convenience' sample useful for estimating immunization coverage in a small population?

    Science.gov (United States)

    Weir, Jean E; Jones, Carrie

    2008-01-01

    Rapid survey methodologies are widely used for assessing immunization coverage in developing countries, approximating true stratified random sampling. Non-random ('convenience') sampling is not considered appropriate for estimating immunization coverage rates but has the advantages of low cost and expediency. We assessed the validity of a convenience sample of children presenting to a travelling clinic by comparing the coverage rate in the convenience sample to the true coverage established by surveying each child in three villages in rural Papua New Guinea. The rate of DTF immunization coverage as estimated by the convenience sample was within 10% of the true coverage when the proportion of children in the sample was two-thirds or when only children over the age of one year were counted, but differed by 11% when the sample included only 53% of the children and when all eligible children were included. The convenience sample may be sufficiently accurate for reporting purposes and is useful for identifying areas of low coverage.

  18. Testing the accuracy of a Bayesian central-dose model for single-grain OSL, using known-age samples

    DEFF Research Database (Denmark)

    Guerin, Guillaume; Combès, Benoit; Lahaye, Christelle

    2015-01-01

    on multi-grain OSL age estimates, these samples are presumed to have been both well-bleached at burial, and unaffected by mixing after deposition. Two ways of estimating single-grain ages are then compared: the standard approach on the one hand, consisting of applying the Central Age Model to De values...... for well-bleached samples; (ii) dose recovery experiments do not seem to be a very reliable tool to estimate the accuracy of a SAR measurement protocol for age determination....

  19. Evaluation of a lateral flow-based technology card for blood typing using a simplified protocol in a model of extreme blood sampling conditions.

    Science.gov (United States)

    Clavier, Benoît; Pouget, Thomas; Sailliol, Anne

    2018-02-01

    Life-threatening situations requiring blood transfusion under extreme conditions or in remote and austere locations, such as the battlefield or in traffic accidents, would benefit from reliable blood typing practices that are easily understood by a nonscientist or nonlaboratory technician and provide quick results. A simplified protocol was developed for the lateral flow-based device MDmulticard ABO-D-Rh subgroups-K. Its performance was compared to a reference method (PK7300, Beckman Coulter) in native blood samples from donors. The method was tested on blood samples stressed in vitro as a model of hemorrhage cases (through hemodilution using physiologic serum) and dehydration (through hemoconcentration by removing an aliquot of plasma after centrifugation), respectively. A total of 146 tests were performed on 52 samples; 126 in the hemodilution group (42 for each native, diluted 1/2, and diluted 1/4 samples) and 20 in the hemoconcentration group (10 for each native and 10% concentrated samples). Hematocrit in the tested samples ranged from 9.8% to 57.6% while hemoglobin levels ranged from 3.2 to 20.1 g/dL. The phenotype profile detected with the MDmulticard using the simplified protocol resulted in 22 A, seven B, 20 O, and three AB, of which nine were D- and five were Kell positive. No discrepancies were found with respect to the results obtained with the reference method. The simplified protocol for MDmulticard use could be considered a reliable method for blood typing in extreme environment or emergency situations, worsened by red blood cell dilution or concentration. © 2017 AABB.

  20. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874), and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3.  Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar....... Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals to assess the effect of using...

  1. Bayesian Estimation of Fish Disease Prevalence from Pooled Samples Incorporating Sensitivity and Specificity

    Science.gov (United States)

    Williams, Christopher J.; Moffitt, Christine M.

    2003-03-01

    An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
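
    The essence of the estimation problem, inferring individual-fish prevalence from imperfect tests on pooled samples, can be illustrated with a simple grid approximation to the posterior rather than a full Gibbs sampler. The pool size, counts, sensitivity, and specificity below are assumptions for illustration, not values from the study.

```python
# Grid-approximation posterior for prevalence from pooled tests with imperfect assay.
import numpy as np
from scipy.stats import binom

pool_size, n_pools, n_positive_pools = 5, 60, 9
sens, spec = 0.95, 0.98

p_grid = np.linspace(0, 1, 2001)                    # candidate fish-level prevalences
pi_pool = 1 - (1 - p_grid) ** pool_size             # P(pool truly contains a positive)
theta = sens * pi_pool + (1 - spec) * (1 - pi_pool) # P(pool tests positive)

prior = np.ones_like(p_grid)                        # uniform Beta(1,1) prior
posterior = prior * binom.pmf(n_positive_pools, n_pools, theta)
posterior /= np.trapz(posterior, p_grid)

mean = np.trapz(p_grid * posterior, p_grid)
cdf = np.cumsum(posterior) / np.sum(posterior)
lo, hi = p_grid[np.searchsorted(cdf, 0.025)], p_grid[np.searchsorted(cdf, 0.975)]
print(f"posterior mean prevalence = {mean:.3f}, 95% credible interval = ({lo:.3f}, {hi:.3f})")
```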

  2. Porosity estimation by semi-supervised learning with sparsely available labeled samples

    Science.gov (United States)

    Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi

    2017-09-01

    This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To optimally make use of the valuable porosity data, a semi-supervised machine learning method was proposed, Transductive Conditional Random Field Regression (TCRFR), showing good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than those usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR for extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a comparable result to the manual labored, time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.

  3. Comparison of chlorzoxazone one-sample methods to estimate CYP2E1 activity in humans

    DEFF Research Database (Denmark)

    Kramer, Iza; Dalhoff, Kim; Clemmesen, Jens O

    2003-01-01

    OBJECTIVE: Comparison of a one-sample with a multi-sample method (the metabolic fractional clearance) to estimate CYP2E1 activity in humans. METHODS: Healthy, male Caucasians (n=19) were included. The multi-sample fractional clearance (Cl(fe)) of chlorzoxazone was compared with one-time-point clearance estimation (Cl(est)) at 3, 4, 5 and 6 h. Furthermore, the metabolite/drug ratios (MRs) estimated from one-time-point samples at 1, 2, 3, 4, 5 and 6 h were compared with Cl(fe). RESULTS: The concordance between Cl(est) and Cl(fe) was highest at 6 h. The minimal mean prediction error (MPE) of Cl... estimates, Cl(est) at 3 h or 6 h, and MR at 3 h, can serve as reliable markers of CYP2E1 activity. The one-sample clearance method is an accurate, renal function-independent measure of the intrinsic activity; it is simple to use and easily applicable to humans.

  4. Estimation of Uncertainty in Aerosol Concentration Measured by Aerosol Sampling System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Chan; Song, Yong Jae; Jung, Woo Young; Lee, Hyun Chul; Kim, Gyu Tae; Lee, Doo Yong [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    FNC Technology Co., Ltd. has developed test facilities for aerosol generation, mixing, sampling and measurement under high-pressure and high-temperature conditions. The aerosol generation system is connected to the aerosol mixing system, which injects a SiO₂/ethanol mixture. In the sampling system, a glass fiber membrane filter has been used to measure the average mass concentration. Based on the experimental results using a steam-air mixture as the main carrier gas, the uncertainty of the sampled aerosol concentration was estimated by applying the Gaussian error propagation law. FNC Technology Co., Ltd. has developed the experimental facilities for aerosol measurement under high pressure and high temperature. The purpose of the tests is to develop a commercial test module for an aerosol generation, mixing and sampling system applicable to the environmental industry and to safety-related systems in nuclear power plants. For the uncertainty calculation, the sampled aerosol concentration is not measured directly but must be calculated from other quantities. Its uncertainty is a function of the flow rates of air and steam, the sampled mass, the sampling time, the condensed steam mass and their absolute errors, which propagate through the function combining these variables. Using the operating parameters and their individual errors from the aerosol test cases performed at FNC, the uncertainty of the aerosol concentration evaluated by the Gaussian error propagation law is less than 1%. The results of the uncertainty estimation in the aerosol sampling system will be utilized for the system performance data.
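
    A hedged sketch of the propagation step is given below: the sampled aerosol concentration is written as a function of a few measured quantities, and the first-order (Gaussian) error propagation law combines their uncertainties. The simplified relation C = m/(Q·t) and all numerical values are placeholders, not the facility's actual function or data.

```python
# First-order (Gaussian) error propagation for a sampled aerosol concentration.
import numpy as np

def propagate(sigmas, partials):
    """sigma_C = sqrt( sum_i (dC/dx_i * sigma_i)^2 ) for independent inputs."""
    return np.sqrt(sum((d * s) ** 2 for d, s in zip(partials, sigmas)))

m, sig_m = 2.5e-3, 5e-6        # sampled mass on the filter [g] and its 1-sigma error
Q, sig_Q = 30.0, 0.15          # total carrier-gas flow rate [L/min]
t, sig_t = 10.0, 0.02          # sampling time [min]

C = m / (Q * t)                                              # concentration [g/L]
partials = [1 / (Q * t), -m / (Q**2 * t), -m / (Q * t**2)]   # dC/dm, dC/dQ, dC/dt
sig_C = propagate([sig_m, sig_Q, sig_t], partials)

print(f"C = {C:.3e} g/L, relative uncertainty = {100 * sig_C / C:.2f}%")
```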

  5. Protocol Fuel Mix reporting

    International Nuclear Information System (INIS)

    2002-07-01

    The protocol in this document describes a method for an Electricity Distribution Company (EDC) to account for the fuel mix of electricity that it delivers to its customers, based on the best available information. Own production, purchase and sale of electricity, and certificates trading are taken into account. In chapter 2 the actual protocol is outlined. In the appendixes additional (supporting) information is given: (A) Dutch Standard Fuel Mix, 2000; (B) Calculation of the Dutch Standard fuel mix; (C) Procedures to estimate and benchmark the fuel mix; (D) Quality management; (E) External verification; (F) Recommendation for further development of the protocol; (G) Reporting examples

  6. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    For example, MacGibbon and Tomberlin (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...

  7. Limited sampling hampers "big data" estimation of species richness in a tropical biodiversity hotspot.

    Science.gov (United States)

    Engemann, Kristine; Enquist, Brian J; Sandel, Brody; Boyle, Brad; Jørgensen, Peter M; Morueta-Holme, Naia; Peet, Robert K; Violle, Cyrille; Svenning, Jens-Christian

    2015-02-01

    Macro-scale species richness studies often use museum specimens as their main source of information. However, such datasets are often strongly biased due to variation in sampling effort in space and time. These biases may strongly affect diversity estimates and may, thereby, obstruct solid inference on the underlying diversity drivers, as well as mislead conservation prioritization. In recent years, this has resulted in an increased focus on developing methods to correct for sampling bias. In this study, we use sample-size-correcting methods to examine patterns of tropical plant diversity in Ecuador, one of the most species-rich and climatically heterogeneous biodiversity hotspots. Species richness estimates were calculated based on 205,735 georeferenced specimens of 15,788 species using the Margalef diversity index, the Chao estimator, the second-order Jackknife and Bootstrapping resampling methods, and Hill numbers and rarefaction. Species richness was heavily correlated with sampling effort, and only rarefaction was able to remove this effect, and we recommend this method for estimation of species richness with "big data" collections.
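
    As a concrete illustration of two of the measures mentioned above, the sketch below computes a bias-corrected Chao1 estimate and an individual-based rarefaction value from a made-up species abundance vector; it is not the study's workflow or data.

```python
# Chao1 richness estimate and individual-based (Hurlbert) rarefaction for one sample.
import numpy as np
from math import comb

abundances = np.array([120, 55, 31, 14, 9, 6, 4, 2, 2, 1, 1, 1, 1, 1])  # per species

def chao1(abund):
    s_obs = np.count_nonzero(abund)
    f1, f2 = np.sum(abund == 1), np.sum(abund == 2)
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))     # bias-corrected form

def rarefied_richness(abund, m):
    """Expected species count in a random subsample of m individuals."""
    n = int(abund.sum())
    return sum(1 - comb(n - int(x), m) / comb(n, m) for x in abund)

print("observed:", np.count_nonzero(abundances))
print("Chao1   :", round(chao1(abundances), 1))
print("rarefied to 50 individuals:", round(rarefied_richness(abundances, 50), 1))
```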

  8. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
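
    The diagonal-averaging (Toeplitz-constraining) step can be sketched directly: each subdiagonal of the few-snapshot sample covariance is replaced by its average, yielding a Hermitian Toeplitz estimate. The sketch below illustrates only this step on a simulated array; the maximum-entropy extrapolation and the subspace beamformer are not shown, and the array and source parameters are assumptions.

```python
# Toeplitz-constrained covariance estimate by averaging along subdiagonals.
import numpy as np

def toeplitz_average(R):
    """Average a Hermitian sample covariance along its subdiagonals."""
    n = R.shape[0]
    first_col = np.array([np.mean(np.diagonal(R, offset=-k)) for k in range(n)])
    out = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            out[i, j] = first_col[i - j] if i >= j else np.conj(first_col[j - i])
    return out

# Example: few-snapshot sample covariance for a 16-element array, one far-field tone
rng = np.random.default_rng(0)
n_sensors, n_snapshots = 16, 8
steer = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(np.deg2rad(20)))
signal = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
noise = 0.5 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
snaps = steer[:, None] * signal + noise
R_sample = snaps @ snaps.conj().T / n_snapshots
R_toep = toeplitz_average(R_sample)

print("sample covariance rank:", np.linalg.matrix_rank(R_sample))
print("Toeplitz estimate Hermitian:", np.allclose(R_toep, R_toep.conj().T))
```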

  9. The effects of parameter estimation on minimizing the in-control average sample size for the double sampling X bar chart

    Directory of Open Access Journals (Sweden)

    Michael B.C. Khoo

    2013-11-01

    The double sampling (DS) X bar chart, one of the most widely used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right-skewed run length distribution, the median run length (MRL) provides a more credible representation of the central tendency than the average run length (ARL), as the mean is greater than the median. In this paper, therefore, MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X bar chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X bar chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X bar chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X bar chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.

  10. SU-F-207-16: CT Protocols Optimization Using Model Observer

    International Nuclear Information System (INIS)

    Tseng, H; Fan, J; Kupinski, M

    2015-01-01

    Purpose: To quantitatively evaluate the performance of different CT protocols using task-based measures of image quality. This work studies the task of size and the contrast estimation of different iodine concentration rods inserted in head- and body-sized phantoms using different imaging protocols. These protocols are designed to have the same dose level (CTDIvol) but use different X-ray tube voltage settings (kVp). Methods: Different concentrations of iodine objects inserted in a head size phantom and a body size phantom are imaged on a 64-slice commercial CT scanner. Scanning protocols with various tube voltages (80, 100, and 120 kVp) and current settings are selected, which output the same absorbed dose level (CTDIvol). Because the phantom design (size of the iodine objects, the air gap between the inserted objects and the phantom) is not ideal for a model observer study, the acquired CT images are used to generate simulation images with four different sizes and five different contrasts of iodine objects. For each type of the objects, 500 images (100 x 100 pixels) are generated for the observer study. The observer selected in this study is the channelized scanning linear observer, which could be applied to estimate the size and the contrast. The figure of merit used is the correct estimation ratio. The mean and the variance are estimated by the shuffle method. Results: The results indicate that the protocols with 100 kVp tube voltage setting provide the best performance for iodine insert size and contrast estimation for both head and body phantom cases. Conclusion: This work presents a practical and robust quantitative approach using the channelized scanning linear observer to study contrast and size estimation performance from different CT protocols. Different protocols at the same CTDIvol setting could result in different image quality performance. The relationship between the absorbed dose and the diagnostic image quality is not linear.

  11. SU-F-207-16: CT Protocols Optimization Using Model Observer

    Energy Technology Data Exchange (ETDEWEB)

    Tseng, H [University of Arizona, Tucson, AZ (United States); Fan, J [CT Systems Engineering, GE Healthcare, Waukesha, Wisconsin (United States); Kupinski, M [Univ Arizona, Tucson, AZ (United States)

    2015-06-15

    Purpose: To quantitatively evaluate the performance of different CT protocols using task-based measures of image quality. This work studies the task of size and the contrast estimation of different iodine concentration rods inserted in head- and body-sized phantoms using different imaging protocols. These protocols are designed to have the same dose level (CTDIvol) but use different X-ray tube voltage settings (kVp). Methods: Different concentrations of iodine objects inserted in a head size phantom and a body size phantom are imaged on a 64-slice commercial CT scanner. Scanning protocols with various tube voltages (80, 100, and 120 kVp) and current settings are selected, which output the same absorbed dose level (CTDIvol). Because the phantom design (size of the iodine objects, the air gap between the inserted objects and the phantom) is not ideal for a model observer study, the acquired CT images are used to generate simulation images with four different sizes and five different contrasts of iodine objects. For each type of the objects, 500 images (100 x 100 pixels) are generated for the observer study. The observer selected in this study is the channelized scanning linear observer, which could be applied to estimate the size and the contrast. The figure of merit used is the correct estimation ratio. The mean and the variance are estimated by the shuffle method. Results: The results indicate that the protocols with 100 kVp tube voltage setting provide the best performance for iodine insert size and contrast estimation for both head and body phantom cases. Conclusion: This work presents a practical and robust quantitative approach using the channelized scanning linear observer to study contrast and size estimation performance from different CT protocols. Different protocols at the same CTDIvol setting could result in different image quality performance. The relationship between the absorbed dose and the diagnostic image quality is not linear.

  12. Effects of sampling conditions on DNA-based estimates of American black bear abundance

    Science.gov (United States)

    Laufenberg, Jared S.; Van Manen, Frank T.; Clark, Joseph D.

    2013-01-01

    DNA-based capture-mark-recapture techniques are commonly used to estimate American black bear (Ursus americanus) population abundance (N). Although the technique is well established, many questions remain regarding study design. In particular, relationships among N, capture probability of heterogeneity mixtures A and B (pA and pB, respectively, or p, collectively), the proportion of each mixture (π), number of capture occasions (k), and probability of obtaining reliable estimates of N are not fully understood. We investigated these relationships using 1) an empirical dataset of DNA samples for which true N was unknown and 2) simulated datasets with known properties that represented a broader array of sampling conditions. For the empirical data analysis, we used the full closed population with heterogeneity data type in Program MARK to estimate N for a black bear population in Great Smoky Mountains National Park, Tennessee. We systematically reduced the number of those samples used in the analysis to evaluate the effect that changes in capture probabilities may have on parameter estimates. Model-averaged N for females and males were 161 (95% CI = 114–272) and 100 (95% CI = 74–167), respectively (pooled N = 261, 95% CI = 192–419), and the average weekly p was 0.09 for females and 0.12 for males. When we reduced the number of samples of the empirical data, support for heterogeneity models decreased. For the simulation analysis, we generated capture data with individual heterogeneity covering a range of sampling conditions commonly encountered in DNA-based capture-mark-recapture studies and examined the relationships between those conditions and accuracy (i.e., probability of obtaining an estimated N that is within 20% of true N), coverage (i.e., probability that 95% confidence interval includes true N), and precision (i.e., probability of obtaining a coefficient of variation ≤20%) of estimates using logistic regression. The capture probability

  13. Validation of a standard forensic anthropology examination protocol by measurement of applicability and reliability on exhumed and archive samples of known biological attribution.

    Science.gov (United States)

    Francisco, Raffaela Arrabaça; Evison, Martin Paul; Costa Junior, Moacyr Lobo da; Silveira, Teresa Cristina Pantozzi; Secchieri, José Marcelo; Guimarães, Marco Aurelio

    2017-10-01

    Forensic anthropology makes an important contribution to human identification and assessment of the causes and mechanisms of death and body disposal in criminal and civil investigations, including those related to atrocity, disaster and trafficking victim identification. The methods used are comparative, relying on assignment of questioned material to categories observed in standard reference material of known attribution. Reference collections typically originate in Europe and North America, and are not necessarily representative of contemporary global populations. Methods based on them must be validated when applied to novel populations. This study describes the validation of a standardized forensic anthropology examination protocol by application to two contemporary Brazilian skeletal samples of known attribution. One sample (n=90) was collected from exhumations following 7-35 years of burial and the second (n=30) was collected following successful investigations in routine case work. The study presents measurement of (1) the applicability of each of the methods used and (2) the reliability with which the biographic parameters were assigned in each case. The results are discussed with reference to published assessments of methodological reliability regarding sex, age and, in particular, ancestry estimation. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Sampling point selection for energy estimation in the quasicontinuum method

    NARCIS (Netherlands)

    Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.

    2010-01-01

    The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of

  15. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
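
    A generic, hedged sketch of the bootstrap idea applied to this problem is shown below: single plasma and tissue concentrations per subject are resampled at each time point to form pseudoprofiles, AUCs are computed by the trapezoidal rule, and the spread of the resulting ratios provides a measure of uncertainty. This follows the pseudoprofile-style bootstrap in spirit and does not reproduce the paper's 2-phase random sampling algorithm; the data are simulated.

```python
# Sketch: pseudoprofile-style bootstrap of the tissue-to-plasma AUC ratio from
# sparse data (one plasma and one tissue concentration per subject per time point).
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_ratio(times, plasma, tissue, n_boot=2000):
    """plasma/tissue: dicts mapping time -> array of single-subject concentrations."""
    ratios = []
    for _ in range(n_boot):
        p = [rng.choice(plasma[t]) for t in times]   # one resampled value per time point
        q = [rng.choice(tissue[t]) for t in times]
        ratios.append(np.trapz(q, times) / np.trapz(p, times))
    ratios = np.array(ratios)
    return ratios.mean(), ratios.std(ddof=1)

times = [0.5, 1, 2, 4, 8]                            # hours; toy design with 6 subjects/time
plasma = {t: rng.lognormal(2.0 - 0.2 * t, 0.3, size=6) for t in times}
tissue = {t: rng.lognormal(2.5 - 0.2 * t, 0.3, size=6) for t in times}
print(bootstrap_ratio(times, plasma, tissue))
```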

  16. Biological Sampling Variability Study

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hutchison, Janine R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-11-08

    There are many sources of variability that exist in the sample collection and analysis process. This paper addresses many, but not all, sources of variability. The main focus of this paper was to better understand and estimate variability due to differences between samplers. Variability between days was also studied, as well as random variability within each sampler. Experiments were performed using multiple surface materials (ceramic and stainless steel), multiple contaminant concentrations (10 spores and 100 spores), and with and without the presence of interfering material. All testing was done with sponge sticks using 10-inch by 10-inch coupons. Bacillus atrophaeus was used as the BA surrogate. Spores were deposited using wet deposition. Grime was coated on the coupons which were planned to include the interfering material (Section 3.3). Samples were prepared and analyzed at PNNL using CDC protocol (Section 3.4) and then cultured and counted. Five samplers were trained so that samples were taken using the same protocol. Each sampler randomly sampled eight coupons each day, four coupons with 10 spores deposited and four coupons with 100 spores deposited. Each day consisted of one material being tested. The clean samples (no interfering materials) were run first, followed by the dirty samples (coated with interfering material). There was a significant difference in recovery efficiency between the coupons with 10 spores deposited (mean of 48.9%) and those with 100 spores deposited (mean of 59.8%). There was no general significant difference between the clean and dirty (containing interfering material) coupons or between the two surface materials; however, there was a significant interaction between concentration amount and presence of interfering material. The recovery efficiency was close to the same for coupons with 10 spores deposited, but for the coupons with 100 spores deposited, the recovery efficiency for the dirty samples was significantly larger (65

  17. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    Science.gov (United States)

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than those required to estimate ODBA, swimming speed and metabolic rate. While optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.

  18. Estimation of plant sampling uncertainty: an example based on chemical analysis of moss samples.

    Science.gov (United States)

    Dołęgowska, Sabina

    2016-11-01

    In order to estimate the level of uncertainty arising from sampling, 54 samples (primary and duplicate) of the moss species Pleurozium schreberi (Brid.) Mitt. were collected within three forested areas (Wierna Rzeka, Piaski, Posłowice Range) in the Holy Cross Mountains (south-central Poland). During the fieldwork, each primary sample composed of 8 to 10 increments (subsamples) was taken over an area of 10 m² whereas duplicate samples were collected in the same way at a distance of 1-2 m. Subsequently, all samples were triple rinsed with deionized water, dried, milled, and digested (8 mL HNO3 (1:1) + 1 mL 30% H2O2) in a closed microwave system Multiwave 3000. The prepared solutions were analyzed twice for Cu, Fe, Mn, and Zn using FAAS and GFAAS techniques. All datasets were checked for normality. For the normally distributed elements (Cu from Piaski, Zn from Posłowice, and Fe and Zn from Wierna Rzeka), the sampling uncertainty was computed with (i) classical ANOVA, (ii) classical RANOVA, (iii) modified RANOVA, and (iv) range statistics. For the remaining elements, the sampling uncertainty was calculated with traditional and/or modified RANOVA (if the amount of outliers did not exceed 10 %) or classical ANOVA after Box-Cox transformation (if the amount of outliers exceeded 10 %). The highest concentrations of all elements were found in moss samples from Piaski, whereas the sampling uncertainty calculated with different statistical methods ranged from 4.1 to 22 %.
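
    The classical-ANOVA branch of such a duplicate design can be sketched with standard variance-component algebra: analytical variance from repeated analyses of the same sample, sampling variance from the difference between primary and duplicate samples at the same site. The code below is a generic illustration with simulated concentrations; the robust (RANOVA) and Box-Cox variants used in the study are not shown.

```python
# Sketch: classical ANOVA ("duplicate method") estimate of relative sampling
# uncertainty from primary/duplicate field samples, each analysed twice.
import numpy as np

def sampling_uncertainty_percent(a11, a12, a21, a22):
    """a11/a12: duplicate analyses of the primary sample per site;
    a21/a22: duplicate analyses of the duplicate sample per site."""
    a11, a12, a21, a22 = map(np.asarray, (a11, a12, a21, a22))
    s2_anal = np.mean(np.concatenate([(a11 - a12) ** 2, (a21 - a22) ** 2])) / 2
    m1, m2 = (a11 + a12) / 2, (a21 + a22) / 2
    s2_samp = max(np.mean((m1 - m2) ** 2) / 2 - s2_anal / 2, 0.0)
    return 100 * np.sqrt(s2_samp) / np.mean([m1, m2])   # relative standard uncertainty, %

rng = np.random.default_rng(3)
site = rng.normal(25, 4, size=27)                            # toy "true" site concentrations
def sample(x): return x + rng.normal(0, 2.0, size=x.shape)   # field sampling error
def analyse(x): return x + rng.normal(0, 0.8, size=x.shape)  # analytical error
s1, s2 = sample(site), sample(site)
print(sampling_uncertainty_percent(analyse(s1), analyse(s1), analyse(s2), analyse(s2)))
```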

  19. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  20. Critical length sampling: a method to estimate the volume of downed coarse woody debris

    Science.gov (United States)

    Göran Ståhl; Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey

    2010-01-01

    In this paper, critical length sampling for estimating the volume of downed coarse woody debris is presented. Using this method, the volume of downed wood in a stand can be estimated by summing the critical lengths of down logs included in a sample obtained using a relascope or wedge prism; typically, the instrument should be tilted 90° from its usual...

  1. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    Science.gov (United States)

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

    Accurately estimating various solute loads in streams during storms is critical to accurately determine maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so error on solute load calculations can be taken into account by landscape managers, and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.

  2. Comparison of prevalence estimation of Mycobacterium avium subsp. paratuberculosis infection by sampling slaughtered cattle with macroscopic lesions vs. systematic sampling.

    Science.gov (United States)

    Elze, J; Liebler-Tenorio, E; Ziller, M; Köhler, H

    2013-07-01

    The objective of this study was to identify the most reliable approach for prevalence estimation of Mycobacterium avium ssp. paratuberculosis (MAP) infection in clinically healthy slaughtered cattle. Sampling of macroscopically suspect tissue was compared to systematic sampling. Specimens of ileum, jejunum, mesenteric and caecal lymph nodes were examined for MAP infection using bacterial microscopy, culture, histopathology and immunohistochemistry. MAP was found most frequently in caecal lymph nodes, but sampling more tissues optimized the detection rate. Examination by culture was most efficient while combination with histopathology increased the detection rate slightly. MAP was detected in 49/50 animals with macroscopic lesions representing 1.35% of the slaughtered cattle examined. Of 150 systematically sampled macroscopically non-suspect cows, 28.7% were infected with MAP. This indicates that the majority of MAP-positive cattle are slaughtered without evidence of macroscopic lesions and before clinical signs occur. For reliable prevalence estimation of MAP infection in slaughtered cattle, systematic random sampling is essential.

  3. Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators

    International Nuclear Information System (INIS)

    Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens

    2012-01-01

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations

  4. The use of importance sampling in a trial assessment to obtain converged estimates of radiological risk

    International Nuclear Information System (INIS)

    Johnson, K.; Lucas, R.

    1986-12-01

    In developing a methodology for assessing potential sites for the disposal of radioactive wastes, the Department of the Environment has conducted a series of trial assessment exercises. In order to produce converged estimates of radiological risk using the SYVAC A/C simulation system an efficient sampling procedure is required. Previous work has demonstrated that importance sampling can substantially increase sampling efficiency. This study used importance sampling to produce converged estimates of risk for the first DoE trial assessment. Four major nuclide chains were analysed. In each case importance sampling produced converged risk estimates with between 10 and 170 times fewer runs of the SYVAC A/C model. This increase in sampling efficiency can reduce the total elapsed time required to obtain a converged estimate of risk from one nuclide chain by a factor of 20. The results of this study suggests that the use of importance sampling could reduce the elapsed time required to perform a risk assessment of a potential site by a factor of ten. (author)
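
    The mechanism by which importance sampling reduces the number of model runs can be illustrated on a toy tail-probability problem: sampling from a shifted proposal and reweighting by the likelihood ratio concentrates samples in the "failure" region. The sketch below uses a simple analytic example in place of the SYVAC A/C model; the threshold and shift are arbitrary.

```python
# Sketch: estimating a small exceedance probability with plain Monte Carlo versus
# importance sampling from a shifted proposal (a toy stand-in for the risk model).
import numpy as np

rng = np.random.default_rng(4)
threshold = 3.5                                    # "risk" event: X > threshold, X ~ N(0, 1)

def plain_monte_carlo(n):
    return np.mean(rng.standard_normal(n) > threshold)

def importance_sampling(n, shift=threshold):
    y = rng.standard_normal(n) + shift             # sample from N(shift, 1) instead
    w = np.exp(-shift * y + 0.5 * shift ** 2)      # likelihood ratio N(0,1) / N(shift,1)
    return np.mean((y > threshold) * w)

# Same budget, far lower variance for the importance-sampling estimate
# (true value is about 2.3e-4).
print(plain_monte_carlo(10_000), importance_sampling(10_000))
```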

  5. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable to compute the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method contributes to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimate the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handle both uncorrelated and correlated model inputs.
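
    A simple given-data estimator in the same spirit can be sketched by binning each input and taking the variance of the per-bin conditional means of the output, which approximates Var(E[Y|Xi])/Var(Y). This is a generic binning estimator, not the modularized formulation of the paper, and the toy model below is invented for illustration.

```python
# Sketch: a given-data (binning) estimate of the first-order Sobol' index,
# S1 = Var(E[Y|Xi]) / Var(Y), computed directly from input-output samples.
import numpy as np

def first_order_sobol(x, y, n_bins=20):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    bins = [b for b in range(n_bins) if np.any(idx == b)]
    means = np.array([y[idx == b].mean() for b in bins])
    sizes = np.array([np.sum(idx == b) for b in bins])
    var_cond_mean = np.average((means - y.mean()) ** 2, weights=sizes)
    return var_cond_mean / y.var()

rng = np.random.default_rng(5)
x1, x2 = rng.uniform(-np.pi, np.pi, size=(2, 50_000))
y = np.sin(x1) + 0.2 * np.sin(x2) ** 2          # toy model; only its samples are used
print(first_order_sobol(x1, y), first_order_sobol(x2, y))
```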

  6. A new unbiased stochastic derivative estimator for discontinuous sample performances with structural parameters

    NARCIS (Netherlands)

    Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd

    In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)

  7. Estimating species – area relationships by modeling abundance and frequency subject to incomplete sampling

    Science.gov (United States)

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied

  8. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters.

    Science.gov (United States)

    Xu, Huijun; Gordon, J James; Siebers, Jeffrey V

    2011-02-01

    A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D, exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals w (e.g., w = 1 degree, 2 degrees, 5 degrees, 10 degrees, 20 degrees). Isotropic samples were uniformly distributed on the unit sphere resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs, and accordingly characterized by the effective angular increment omega eff. In each direction, the DM was calculated by moving the structure in radial steps of size delta (=0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy deltaQ was quantified as a function of the sampling parameters omega or omega eff and delta. The

  9. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...

  10. Bridging the gaps between non-invasive genetic sampling and population parameter estimation

    Science.gov (United States)

    Francesca Marucco; Luigi Boitani; Daniel H. Pletscher; Michael K. Schwartz

    2011-01-01

    Reliable estimates of population parameters are necessary for effective management and conservation actions. The use of genetic data for capture-recapture (CR) analyses has become an important tool to estimate population parameters for elusive species. Strong emphasis has been placed on the genetic analysis of non-invasive samples, or on the CR analysis; however,...

  11. Estimating an appropriate sampling frequency for monitoring ground water well contamination

    International Nuclear Information System (INIS)

    Tuckfield, R.C.

    1994-01-01

    Nearly 1,500 ground water wells at the Savannah River Site (SRS) are sampled quarterly to monitor contamination by radionuclides and other hazardous constituents from nearby waste sites. Some 10,000 water samples were collected in 1993 at a laboratory analysis cost of $10,000,000. No widely accepted statistical method has been developed, to date, for estimating a technically defensible ground water sampling frequency consistent and compliant with federal regulations. Such a method is presented here based on the concept of statistical independence among successively measured contaminant concentrations in time
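
    One way to operationalize "statistical independence among successively measured concentrations" is to look at the sample autocorrelation of a well's time series and choose the smallest lag at which it is no longer significant. The sketch below does this for a simulated quarterly series; the significance bound and the series itself are illustrative assumptions, not the method actually adopted at SRS.

```python
# Sketch: choosing a sampling interval as the smallest lag at which the sample
# autocorrelation of a well's concentration series is no longer significant.
import numpy as np

def autocorr(x, lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

def first_independent_lag(series, max_lag=8):
    """Smallest lag (in sampling periods) with |r| under the ~95% bound 2/sqrt(n)."""
    bound = 2 / np.sqrt(len(series))
    for lag in range(1, max_lag + 1):
        if abs(autocorr(series, lag)) < bound:
            return lag
    return max_lag

rng = np.random.default_rng(6)
quarterly = 0.3 * np.cumsum(rng.normal(size=40)) + rng.normal(size=40)  # toy quarterly series
print("sample every", first_independent_lag(quarterly), "quarters")
```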

  12. Dried blood spot measurement: application in tacrolimus monitoring using limited sampling strategy and abbreviated AUC estimation.

    Science.gov (United States)

    Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo

    2008-02-01

    Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem-mass spectrometry have been developed in monitoring tacrolimus levels. Our center favors the use of limited sampling strategy and abbreviated formula to estimate the area under concentration-time curve (AUC(0-12)). However, it is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of DBS method in tacrolimus level monitoring using limited sampling strategy and abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty-six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12) estimated from venous blood samples and from fingerprick blood samples (r(2) = 0.96, P < 0.001), supporting the use of the DBS method with the limited sampling strategy and abbreviated AUC(0-12) estimation approach for drug monitoring.

  13. How many tigers Panthera tigris are there in Huai Kha Khaeng Wildlife Sanctuary, Thailand? An estimate using photographic capture-recapture sampling

    Science.gov (United States)

    Simcharoen, S.; Pattanavibool, A.; Karanth, K.U.; Nichols, J.D.; Kumar, N.S.

    2007-01-01

    We used capture-recapture analyses to estimate the density of a tiger Panthera tigris population in the tropical forests of Huai Kha Khaeng Wildlife Sanctuary, Thailand, from photographic capture histories of 15 distinct individuals. The closure test results (z = 0.39, P = 0.65) provided some evidence in support of the demographic closure assumption. Fit of eight plausible closed models to the data indicated more support for model Mh, which incorporates individual heterogeneity in capture probabilities. This model generated an average capture probability p̂ = 0.42 and an abundance estimate N̂ (SE[N̂]) = 19 (9.65) tigers. The sampled area Â(W) (SE[Â(W)]) = 477.2 (58.24) km2 yielded a density estimate D̂ (SE[D̂]) = 3.98 (0.51) tigers per 100 km2. Huai Kha Khaeng Wildlife Sanctuary could therefore hold 113 tigers and the entire Western Forest Complex c. 720 tigers. Although based on field protocols that constrained us to use sub-optimal analyses, this estimated tiger density is comparable to tiger densities in Indian reserves that support moderate prey abundances. However, tiger densities in well-protected Indian reserves with high prey abundances are three times higher. If given adequate protection we believe that the Western Forest Complex of Thailand could potentially harbour >2,000 wild tigers, highlighting its importance for global tiger conservation. The monitoring approaches we recommend here would be useful for managing this tiger population.

  14. Bayesian estimation of P(X > x) from a small sample of Gaussian data

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2017-01-01

    The classical statistical uncertainty problem of estimation of upper tail probabilities on the basis of a small sample of observations of a Gaussian random variable is considered. Predictive posterior estimation is discussed, adopting the standard statistical model with diffuse priors of the two...
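
    For the standard diffuse-prior normal model, the predictive-posterior exceedance probability reduces to a Student-t survival function, which can be sketched as below. This shows the type of calculation discussed; the paper's specific prior choices and any refinements are not reproduced, and the sample values are invented.

```python
# Sketch: predictive-posterior P(X > x) for a normal model with diffuse priors,
# which reduces to a Student-t survival function.
import numpy as np
from scipy import stats

def predictive_exceedance(sample, x):
    """P(X_new > x | data) under the diffuse-prior normal model."""
    sample = np.asarray(sample, dtype=float)
    n, xbar, s = len(sample), sample.mean(), sample.std(ddof=1)
    t = (x - xbar) / (s * np.sqrt(1 + 1 / n))
    return stats.t.sf(t, df=n - 1)

small_sample = [4.1, 4.6, 3.8, 4.9, 4.3, 4.4, 4.0]       # toy data, n = 7
print(predictive_exceedance(small_sample, 6.0))
print(stats.norm.sf(6.0, loc=np.mean(small_sample),       # naive plug-in estimate,
                    scale=np.std(small_sample, ddof=1)))  # which understates the tail
```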

  15. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  16. A Convenient Method for Estimation of the Isotopic Abundance in Uranium Bearing Samples

    International Nuclear Information System (INIS)

    Al-Saleh, F.S.; Al-Mukren, A.H.; Farouk, M.A.

    2008-01-01

    A convenient and simple method for estimation of the isotopic abundance in some uranium bearing samples using gamma-ray spectrometry is developed using a hyper pure germanium spectrometer and a standard uranium sample with known isotopic abundance

  17. A rapid method for estimation of Pu-isotopes in urine samples using high volume centrifuge.

    Science.gov (United States)

    Kumar, Ranjeet; Rao, D D; Dubla, Rupali; Yadav, J R

    2017-07-01

    The conventional radio-analytical technique used for estimation of Pu-isotopes in urine samples involves anion exchange/TEVA column separation followed by alpha spectrometry. This sequence of analysis consumes nearly 3-4 days for completion. Excreta analysis results are often required urgently, particularly under repeat and incidental/emergency situations. Therefore, there is a need to reduce the analysis time for the estimation of Pu-isotopes in bioassay samples. This paper gives the details of standardization of a rapid method for estimation of Pu-isotopes in urine samples using a multi-purpose centrifuge and TEVA resin followed by alpha spectrometry. The rapid method involves oxidation of urine samples, co-precipitation of plutonium along with calcium phosphate followed by sample preparation using a high volume centrifuge and separation of Pu using TEVA resin. The Pu-fraction was electrodeposited and activity estimated using 236Pu tracer recovery by alpha spectrometry. Ten routine urine samples of radiation workers were analyzed and consistent radiochemical tracer recovery was obtained in the range 47-88% with a mean and standard deviation of 64.4% and 11.3% respectively. With this newly standardized technique, the whole analytical procedure is completed within 9 h (one working day). Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    Science.gov (United States)

    2013-01-01

    Background The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life), therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment have been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate “reference” half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules and half-life estimates generated by each of the schedules were compared to the “true” half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results The median (range) parasite half-life for all clinical studies combined was 3.1 (0
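
    The quantity being compared across sampling schedules is the parasite clearance half-life, which in its simplest form comes from a log-linear fit to the measured densities. The sketch below computes that basic half-life for a full six-hourly schedule and for a thinned schedule; it omits the lag- and tail-handling of the WWARN Parasite Clearance Estimator, and the density profile is a toy example.

```python
# Sketch: parasite clearance half-life from a log-linear fit to the densities
# actually sampled, compared between a full and a thinned sampling schedule.
import numpy as np

def clearance_half_life(times_h, densities):
    t, d = np.asarray(times_h, float), np.asarray(densities, float)
    keep = d > 0
    slope, _ = np.polyfit(t[keep], np.log(d[keep]), 1)
    return np.log(2) / -slope                       # hours

full_times = [0, 6, 12, 18, 24, 30, 36]                         # six-hourly schedule
dens       = [80_000, 30_000, 11_000, 4_000, 1_500, 550, 200]   # toy profile (parasites/uL)
sparse = [0, 2, 4, 6]                                           # keep 0, 12, 24, 36 h only
print(clearance_half_life(full_times, dens),
      clearance_half_life([full_times[i] for i in sparse], [dens[i] for i in sparse]))
```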

  19. A structured sparse regression method for estimating isoform expression level from multi-sample RNA-seq data.

    Science.gov (United States)

    Zhang, L; Liu, X J

    2016-06-03

    With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, the existing expression estimation methods usually deal with each single-RNA-seq sample, and ignore that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parameter model to capture the general tendency of non-uniformity read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples, and produced more accurate isoform expression estimations, and thus more meaningful biological interpretations.

  20. Network protocols and sockets

    OpenAIRE

    BALEJ, Marek

    2010-01-01

    My work deals with network protocols and sockets and their use in the programming language C#. It therefore addresses the programming of network applications on Microsoft's .NET platform and the tools that C# provides for this purpose. The thesis describes the tools and methods for programming network applications and presents sample applications that work with sockets and application protocols.

  1. Estimation of uranium isotope in urine samples using extraction chromatography resin

    International Nuclear Information System (INIS)

    Thakur, Smita S.; Yadav, J.R.; Rao, D.D.

    2012-01-01

    Internal exposure monitoring for alpha-emitting radionuclides is carried out by analysis of bioassay samples. For occupational radiation workers handling uranium in reprocessing or fuel fabrication facilities, there exists a possibility of internal exposure, and urine assay is the preferred method for monitoring such exposure. Estimation of low concentrations of uranium at the mBq level by alpha spectrometry requires preconcentration and separation from a large volume of urine. For this purpose, urine samples collected from non-radiation workers were spiked with 232U tracer at the mBq level to estimate the chemical yield. Uranium in the urine sample was pre-concentrated by calcium phosphate coprecipitation and separated by the extraction chromatography resin U/TEVA, in which the extractant is DAAP (diamylamylphosphonate) supported on an inert Amberlite XAD-7 support material. After co-precipitation, the precipitate was centrifuged and dissolved in 10 ml of 1M Al(NO3)3 prepared in 3M HNO3. The sample thus prepared was loaded on the extraction chromatography resin, pre-conditioned with 10 ml of 3M HNO3. The column was washed with 10 ml of 3M HNO3, then rinsed with 5 ml of 9M HCl followed by 20 ml of 0.05M oxalic acid prepared in 5M HCl to remove interference from Th and Np if present in the sample. Uranium was eluted from the U/TEVA column with 15 ml of 0.01M HCl. The eluted uranium fraction was electrodeposited on a stainless steel planchet and counted by alpha spectrometry for 360,000 s. The approximate analysis time from sample loading to stripping is 2 hours, compared with 3.5 hours for the conventional ion exchange method. Seven urine samples from non-radiation workers were radiochemically analyzed by this technique, and the radiochemical yield was found to be in the range of 69-91%. The efficacy of this method relative to the conventional anion exchange technique standardized earlier at this laboratory is also highlighted. Minimum detectable activity

  2. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...
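
    To make the setting concrete, the sketch below applies commonly cited approximations for recovering a mean and standard deviation from a reported sample size, minimum, median and maximum. These formulas are illustrative stand-ins, not necessarily the improved estimators proposed in the paper, and the input values are invented.

```python
# Sketch: recovering an approximate mean and SD from n, minimum, median and maximum
# (commonly cited approximations; treat the constants as illustrative).
from scipy.stats import norm

def mean_sd_from_median_range(n, minimum, median, maximum):
    mean = (minimum + 2 * median + maximum) / 4
    xi = 2 * norm.ppf((n - 0.375) / (n + 0.25))   # expected standardized range for size n
    sd = (maximum - minimum) / xi
    return mean, sd

print(mean_sd_from_median_range(n=40, minimum=2.1, median=5.0, maximum=9.4))
```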

  3. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Science.gov (United States)

    Joo, Hyun; Chavan, Archana G; Day, Ryan; Lennox, Kristin P; Sukhanov, Paul; Dahl, David B; Vannucci, Marina; Tsai, Jerry

    2011-10-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  4. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Directory of Open Access Journals (Sweden)

    Hyun Joo

    2011-10-01

    Full Text Available Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  5. Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity

    Science.gov (United States)

    Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry

    2011-01-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638

  6. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which could cause severe yield losses and extensive damage. Since there is still very little information about error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot, if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of data to probability distribution tested with the χ² test confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17% expressed through the coefficient of variation (cv) was achieved if the plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave cv higher than 23%. This study indicates that more attention should be paid to estimation of sampling error in experimental field plots to ensure more reliable estimation of population density of cyst nematodes.

  7. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information of variables is extracted with some pre-sampling of points in the failure region. An important sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. And then the probability of functional failure is estimated with the combination of the response surface method and adaptive importance sampling method. The numerical results demonstrate the high computed efficiency and excellent computed accuracy of the methodology compared with traditional probability analysis methods. (authors)

  8. Evaluation of design flood estimates with respect to sample size

    Science.gov (United States)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

    Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data is available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2 parameter distribution is recommended, and for more than 50 years of data, a 3 parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log Pearson II, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
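
    A minimal example of the three-parameter case is sketched below: fit a generalized extreme value distribution to a series of annual maxima by maximum likelihood and read off the return level for a chosen return period. The distribution, fitting method, return period and synthetic record are illustrative assumptions, not the recommendations of the Norwegian guidelines.

```python
# Sketch: a three-parameter design-flood estimate via a maximum-likelihood GEV fit
# to annual maxima, returning the T-year return level.
from scipy import stats

def design_flood(annual_maxima, return_period=200):
    shape, loc, scale = stats.genextreme.fit(annual_maxima)
    return stats.genextreme.ppf(1 - 1 / return_period, shape, loc=loc, scale=scale)

# Toy 60-year record drawn from a known GEV, then re-fitted.
amax = stats.genextreme.rvs(-0.1, loc=300, scale=80, size=60, random_state=7)
print(design_flood(amax, return_period=200))
```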

  9. Evaluation protocol for amusia: Portuguese sample.

    Science.gov (United States)

    Peixoto, Maria Conceição; Martins, Jorge; Teixeira, Pedro; Alves, Marisa; Bastos, José; Ribeiro, Carlos

    2012-12-01

    Amusia is a disorder that affects the processing of music. Part of this processing happens in the primary auditory cortex. The study of this condition allows us to evaluate the central auditory pathways. To explore the diagnostic evaluation tests of amusia. The authors propose an evaluation protocol for patients with suspected amusia (after brain injury or complaints of poor musical perception), in parallel with the assessment of central auditory processing already implemented in the department. The Montreal Battery of Evaluation of Amusia was the basis for the selection of the tests. From this comprehensive battery we selected some of the musical examples to evaluate different musical aspects, including memory and perception of music and the ability to recognize and discriminate music. For memory, there is a test assessing delayed memory, adapted to the Portuguese culture. Prospective study. Although still experimental, with the possibility of adjustments in the assessment, we believe that this assessment, combined with the study of central auditory processing, will allow us to understand some central lesions and congenital or acquired limitations of auditory perception.

  10. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to research that involves only one experimental factor with k levels (k ≥ 3), with no arrangement of other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data, and for qualitative data with a binary response variable, under the design of one factor with k levels (k ≥ 3).
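
    A hedged sketch of the kind of calculation the paper addresses, here for quantitative data via a one-way ANOVA power analysis with k = 4 levels; the effect size, alpha and power are illustrative choices rather than values from the paper:

        from statsmodels.stats.power import FTestAnovaPower

        analysis = FTestAnovaPower()
        n_total = analysis.solve_power(effect_size=0.25,  # Cohen's f (medium effect)
                                       k_groups=4,        # one factor with k = 4 levels
                                       alpha=0.05,
                                       power=0.80)
        print(f"total sample size ~ {n_total:.0f} (~ {n_total / 4:.0f} per level)")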

  11. A two-hypothesis approach to establishing a life detection/biohazard protocol for planetary samples

    Science.gov (United States)

    Conley, Catharine; Steele, Andrew

    2016-07-01

    The COSPAR policy on performing a biohazard assessment on samples brought from Mars to Earth is framed in the context of a concern for false-positive results. However, as noted during the 2012 Workshop for Life Detection in Samples from Mars (ref. Kminek et al., 2014), a more significant concern for planetary samples brought to Earth is false-negative results, because an undetected biohazard could increase risk to the Earth. This is the reason that stringent contamination control must be a high priority for all Category V Restricted Earth Return missions. A useful conceptual framework for addressing these concerns involves two complementary 'null' hypotheses: testing both of them, together, would allow statistical and community confidence to be developed regarding one or the other conclusion. As noted above, false negatives are of primary concern for the safety of the Earth, so the 'Earth Safety null hypothesis' -- which must be disproved to assure low risk to the Earth from samples introduced by Category V Restricted Earth Return missions -- is 'There is native life in these samples.' False positives are of primary concern for Astrobiology, so the 'Astrobiology null hypothesis' -- which must be disproved in order to demonstrate the existence of extraterrestrial life -- is 'There is no life in these samples.' The presence of Earth contamination would render both of these hypotheses more difficult to disprove. Both hypotheses can be tested following a strict science protocol: analyse, interpret, test the hypotheses and repeat. The science measurements are then undertaken in an iterative fashion that responds to discovery, with both hypotheses testable from interpretation of the scientific data. This is a robust, community-involved activity that ensures maximum science return with minimal sample use.

  12. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are true matches), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into the millions.
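
    A hedged sketch of the sampling idea: record-pairs are binned by comparison score (including bins below the acceptance cut-off), a subset in each bin is clerically reviewed, and the reviewed match rates are scaled back up to estimate true/false positives and false negatives. All counts and match rates below are invented:

        # per score bin: total record-pairs, whether the bin is accepted as links,
        # and the fraction of the clerically reviewed sample judged true matches
        bins = [
            {"n_pairs": 120_000, "accepted": True,  "review_match_rate": 0.995},
            {"n_pairs":  40_000, "accepted": True,  "review_match_rate": 0.90},
            {"n_pairs":  60_000, "accepted": False, "review_match_rate": 0.15},
            {"n_pairs": 500_000, "accepted": False, "review_match_rate": 0.001},
        ]

        tp = fp = fn = 0.0
        for b in bins:
            est_matches = b["n_pairs"] * b["review_match_rate"]  # scale sample up to bin
            if b["accepted"]:
                tp += est_matches
                fp += b["n_pairs"] - est_matches
            else:
                fn += est_matches

        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        print(f"estimated precision ~ {precision:.3f}, recall ~ {recall:.3f}")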

  13. Ambient organic carbon to elemental carbon ratios: Influence of the thermal–optical temperature protocol and implications

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Yuan, E-mail: ycheng@mail.tsinghua.edu.cn [State Key Joint Laboratory of Environment Simulation and Pollution Control, School of Environment, Tsinghua University, Beijing (China); He, Ke-bin, E-mail: hekb@tsinghua.edu.cn [State Key Joint Laboratory of Environment Simulation and Pollution Control, School of Environment, Tsinghua University, Beijing (China); State Environmental Protection Key Laboratory of Sources and Control of Air Pollution Complex, Beijing (China); Duan, Feng-kui; Du, Zhen-yu [State Key Joint Laboratory of Environment Simulation and Pollution Control, School of Environment, Tsinghua University, Beijing (China); Zheng, Mei [College of Environmental Sciences and Engineering, Peking University, Beijing (China); Ma, Yong-liang [State Key Joint Laboratory of Environment Simulation and Pollution Control, School of Environment, Tsinghua University, Beijing (China)

    2014-01-01

    Ambient organic carbon (OC) to elemental carbon (EC) ratios are strongly associated with not only the radiative forcing due to aerosols but also the extent of secondary organic aerosol (SOA) formation. An inter-comparison study was conducted based on fine particulate matter samples collected during summer in Beijing to investigate the influence of the thermal–optical temperature protocol on the OC to EC ratio. Five temperature protocols were used such that the NIOSH (National Institute for Occupational Safety and Health) and EUSAAR (European Supersites for Atmospheric Aerosol Research) protocols were run by the Sunset carbon analyzer while the IMPROVE (the Interagency Monitoring of Protected Visual Environments network)-A protocol and two alternative protocols designed based on NIOSH and EUSAAR were run by the DRI analyzer. The optical attenuation measured by the Sunset carbon analyzer was more easily biased by the shadowing effect, whereas total carbon agreed well between the Sunset and DRI analyzers. The EC_IMPROVE-A (EC measured by the IMPROVE-A protocol; similar hereinafter) to EC_NIOSH ratio and the EC_IMPROVE-A to EC_EUSAAR ratio averaged 1.36 ± 0.21 and 0.91 ± 0.10, respectively, both of which exhibited little dependence on the biomass burning contribution. Though the temperature protocol had substantial influence on the OC to EC ratio, the contributions of secondary organic carbon (SOC) to OC, which were predicted by the EC-tracer method, did not differ significantly among the five protocols. Moreover, the SOC contributions obtained in this study were comparable with previous results based on field observation (typically between 45 and 65%), but were substantially higher than the estimation provided by an air quality model (only 18%). The comparison of SOC and WSOC suggests that when using the transmittance charring correction, all of the three common protocols (i.e., IMPROVE-A, NIOSH and EUSAAR) could be reliable for the estimation
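
    A minimal sketch of the EC-tracer apportionment mentioned above for estimating the SOC contribution, assuming a known (or separately estimated) primary OC/EC ratio; all concentrations and the ratio below are invented:

        import numpy as np

        oc = np.array([12.4, 15.1, 9.8, 20.3])   # measured OC, ug/m3
        ec = np.array([3.1, 3.8, 2.5, 4.0])      # measured EC, ug/m3
        oc_ec_primary = 2.0                      # assumed primary OC/EC ratio

        soc = np.clip(oc - ec * oc_ec_primary, 0.0, None)   # secondary OC by the EC-tracer method
        print("SOC fraction of OC:", np.round(soc / oc, 2))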

  14. Ambient organic carbon to elemental carbon ratios: Influence of the thermal–optical temperature protocol and implications

    International Nuclear Information System (INIS)

    Cheng, Yuan; He, Ke-bin; Duan, Feng-kui; Du, Zhen-yu; Zheng, Mei; Ma, Yong-liang

    2014-01-01

    Ambient organic carbon (OC) to elemental carbon (EC) ratios are strongly associated with not only the radiative forcing due to aerosols but also the extent of secondary organic aerosol (SOA) formation. An inter-comparison study was conducted based on fine particulate matter samples collected during summer in Beijing to investigate the influence of the thermal–optical temperature protocol on the OC to EC ratio. Five temperature protocols were used such that the NIOSH (National Institute for Occupational Safety and Health) and EUSAAR (European Supersites for Atmospheric Aerosol Research) protocols were run by the Sunset carbon analyzer while the IMPROVE (the Interagency Monitoring of Protected Visual Environments network)-A protocol and two alternative protocols designed based on NIOSH and EUSAAR were run by the DRI analyzer. The optical attenuation measured by the Sunset carbon analyzer was more easily biased by the shadowing effect, whereas total carbon agreed well between the Sunset and DRI analyzers. The EC IMPROVE-A (EC measured by the IMPROVE-A protocol; similar hereinafter) to EC NIOSH ratio and the EC IMPROVE-A to EC EUSAAR ratio averaged 1.36 ± 0.21 and 0.91 ± 0.10, respectively, both of which exhibited little dependence on the biomass burning contribution. Though the temperature protocol had substantial influence on the OC to EC ratio, the contributions of secondary organic carbon (SOC) to OC, which were predicted by the EC-tracer method, did not differ significantly among the five protocols. Moreover, the SOC contributions obtained in this study were comparable with previous results based on field observation (typically between 45 and 65%), but were substantially higher than the estimation provided by an air quality model (only 18%). The comparison of SOC and WSOC suggests that when using the transmittance charring correction, all of the three common protocols (i.e., IMPROVE-A, NIOSH and EUSAAR) could be reliable for the estimation of SOC by the EC

  15. Estimation of uranium in different types of water and sand samples by adsorptive stripping voltammetry

    International Nuclear Information System (INIS)

    Bhalke, Sunil; Raghunath, Radha; Mishra, Suchismita; Suseela, B.; Tripathi, R.M.; Pandit, G.G.; Shukla, V.K.; Puranik, V.D.

    2005-01-01

    A method is standardized for the estimation of uranium by adsorptive stripping voltammetry using chloranilic acid (CAA) as the complexing agent. The optimum parameters to obtain the best sensitivity and good reproducibility for uranium were a 60 s adsorption time, pH 1.8, chloranilic acid (2×10⁻⁴ M) and 0.002 M EDTA. The peak potential under these conditions was found to be -0.03 V. With these optimum parameters a sensitivity of 1.19 nA/nM uranium was observed, and the detection limit was found to be 0.55 nM. This can be further improved by increasing the adsorption time. Using this method, uranium was estimated in different types of water samples such as seawater, synthetic seawater, stream water, tap water, well water, bore well water and process water. The method has also been used for estimation of uranium in sand, in the organic solvent used for extraction of uranium from phosphoric acid, and in its raffinate. Sample digestion procedures used for estimation of uranium in the various matrices are discussed. It was observed from the analysis that the uranium peak potential changes with the matrix of the sample; hence, the standard addition method is the best way to obtain reliable and accurate results. Quality assurance of the standardized method was verified by analyzing a certified reference water sample from USDOE, by participating in intercomparison exercises, and by estimating the uranium content in water samples by both differential pulse adsorptive stripping voltammetric and laser fluorimetric techniques. (author)
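
    A small sketch of the standard addition evaluation recommended above: the peak current is measured for the sample and for known uranium additions, a straight line is fitted, and the unknown concentration is read from the magnitude of the x-intercept. The currents below are invented:

        import numpy as np

        added = np.array([0.0, 5.0, 10.0, 15.0])     # added uranium, nM
        signal = np.array([8.2, 14.1, 20.3, 26.0])   # peak current, nA

        slope, intercept = np.polyfit(added, signal, 1)
        c_unknown = intercept / slope                # |x-intercept|, nM
        print(f"uranium in sample ~ {c_unknown:.1f} nM")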

  16. Per tree estimates with n-tree distance sampling: an application to increment core data

    Science.gov (United States)

    Thomas B. Lynch; Robert F. Wittwer

    2002-01-01

    Per tree estimates using the n trees nearest a point can be obtained by using a ratio of per unit area estimates from n-tree distance sampling. This ratio was used to estimate average age by d.b.h. classes for cottonwood trees (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland. Increment...

  17. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    Directory of Open Access Journals (Sweden)

    Femke Broekhuis

    Full Text Available Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.

  18. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    Science.gov (United States)

    Broekhuis, Femke; Gopalaswamy, Arjun M

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.

  19. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
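
    A hedged sketch of two incidence-based richness estimators of the kind evaluated above (first-order jackknife and Chao2, computed from a plots-by-species presence/absence matrix); ICE is more involved and is omitted, and the random matrix only stands in for real survey data:

        import numpy as np

        rng = np.random.default_rng(7)
        n_plots, n_species = 500, 120
        detect_p = rng.gamma(shape=0.3, scale=0.02, size=n_species)  # many rare species
        incidence = (rng.random((n_plots, n_species)) < detect_p).astype(int)

        q = incidence.sum(axis=0)        # number of plots holding each species
        q = q[q > 0]
        s_obs = q.size                   # observed richness
        q1 = (q == 1).sum()              # uniques (species found in exactly one plot)
        q2 = (q == 2).sum()              # duplicates
        m = n_plots

        jack1 = s_obs + q1 * (m - 1) / m
        chao2 = s_obs + ((m - 1) / m) * q1 * (q1 - 1) / (2 * (q2 + 1))  # bias-corrected form
        print(f"S_obs = {s_obs}, Jack1 ~ {jack1:.1f}, Chao2 ~ {chao2:.1f}")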

  20. Estimation of photosynthesis in cyanobacteria by pulse-amplitude modulation chlorophyll fluorescence: problems and solutions.

    Science.gov (United States)

    Ogawa, Takako; Misumi, Masahiro; Sonoike, Kintake

    2017-09-01

    Cyanobacteria are photosynthetic prokaryotes and are widely used as model organisms in photosynthesis research. Partly due to their prokaryotic nature, however, estimation of photosynthesis by chlorophyll fluorescence measurements is sometimes problematic in cyanobacteria. For example, the plastoquinone pool is reduced in dark-acclimated samples of many cyanobacterial species, so the conventional protocol developed for land plants cannot be applied directly to cyanobacteria. Even for estimation of the simplest chlorophyll fluorescence parameter, Fv/Fm, some additional protocol, such as the addition of DCMU or illumination with weak blue light, is necessary. In this review, these problems in the measurement of chlorophyll fluorescence in cyanobacteria are introduced, and solutions to those problems are given.

  1. Estimating dead wood during national forest inventories: a review of inventory methodologies and suggestions for harmonization.

    Science.gov (United States)

    Woodall, Christopher W; Rondeux, Jacques; Verkerk, Pieter J; Ståhl, Göran

    2009-10-01

    Efforts to assess forest ecosystem carbon stocks, biodiversity, and fire hazards have spurred the need for comprehensive assessments of forest ecosystem dead wood (DW) components around the world. Currently, information regarding the prevalence, status, and methods of DW inventories occurring in the world's forested landscapes is scattered. The goal of this study is to describe the status, DW components measured, sample methods employed, and DW component thresholds used by national forest inventories that currently inventory DW around the world. Study results indicate that most countries do not inventory forest DW. Globally, we estimate that about 13% of countries inventory DW using a diversity of sample methods and DW component definitions. A common feature among DW inventories was that most countries had only recently begun DW inventories and employed very low sampling intensities. There are major hurdles to harmonizing national forest inventories of DW: differences in population definitions, lack of clarity on sample protocols/estimation procedures, and sparse availability of inventory data/reports. Increasing database/estimation flexibility, developing common dimensional thresholds of DW components, publishing inventory procedures/protocols, releasing inventory data/reports to international peer review, and increasing communication (e.g., workshops) among countries inventorying DW are suggestions forwarded by this study to increase DW inventory harmonization.

  2. Estimation of the sugar cane cultivated area from LANDSAT images using the two phase sampling method

    Science.gov (United States)

    Parada, N. D. J. (Principal Investigator); Cappelletti, C. A.; Mendonca, F. J.; Lee, D. C. L.; Shimabukuro, Y. E.

    1982-01-01

    A two-phase sampling method and the optimal sampling segment dimensions for the estimation of the sugar cane cultivated area were developed. This technique employs visual interpretation of LANDSAT images and panchromatic aerial photographs considered as the ground truth. The estimates, as a mean value of 100 simulated samples, represent 99.3% of the true value with a CV of approximately 1%; the relative efficiency of the two-phase design was 157% when compared with a one-phase aerial photograph sample.
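
    A hedged sketch of the two-phase (double sampling) regression estimator behind such a design: a large first-phase sample of segments is interpreted on LANDSAT imagery (x), a smaller second-phase subsample also receives the aerial-photograph "ground truth" (y), and the phase-2 regression corrects the phase-1 mean. All data below are simulated:

        import numpy as np

        rng = np.random.default_rng(11)

        n1 = 1000                                       # phase-1 segments (imagery only)
        x1 = rng.normal(40.0, 10.0, n1)                 # % cane from LANDSAT interpretation

        idx2 = rng.choice(n1, size=100, replace=False)  # phase-2 subsample
        x2 = x1[idx2]
        y2 = 0.95 * x2 + rng.normal(0.0, 3.0, x2.size)  # % cane from aerial photographs

        b = np.polyfit(x2, y2, 1)[0]                    # slope from the phase-2 pairs
        y_reg = y2.mean() + b * (x1.mean() - x2.mean()) # double-sampling regression estimate
        print(f"estimated mean cane fraction ~ {y_reg:.1f} %")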

  3. Cow-specific diet digestibility predictions based on near-infrared reflectance spectroscopy scans of faecal samples.

    Science.gov (United States)

    Mehtiö, T; Rinne, M; Nyholm, L; Mäntysaari, P; Sairanen, A; Mäntysaari, E A; Pitkänen, T; Lidauer, M H

    2016-04-01

    This study was designed to obtain information on prediction of diet digestibility from near-infrared reflectance spectroscopy (NIRS) scans of faecal spot samples from dairy cows at different stages of lactation and to develop a faecal sampling protocol. NIRS was used to predict diet organic matter digestibility (OMD) and indigestible neutral detergent fibre content (iNDF) from faecal samples, and dry matter digestibility (DMD) using iNDF in feed and faecal samples as an internal marker. Acid-insoluble ash (AIA) as an internal digestibility marker was used as a reference method to evaluate the reliability of NIRS predictions. Feed and composite faecal samples were collected from 44 cows at approximately 50, 150 and 250 days in milk (DIM). The estimated standard deviation for cow-specific organic matter digestibility analysed by AIA was 12.3 g/kg, which is small considering that the average was 724 g/kg. The phenotypic correlation between direct faecal OMD prediction by NIRS and OMD by AIA over the lactation was 0.51. The low repeatability and small variability estimates for direct OMD predictions by NIRS were not accurate enough to quantify small differences in OMD between cows. In contrast to OMD, the repeatability estimates for DMD by iNDF and especially for direct faecal iNDF predictions were 0.32 and 0.46, respectively, indicating that the development of NIRS predictions of cow-specific digestibility is possible. A data subset of 20 cows with daily individual faecal samples was used to develop an on-farm sampling protocol. Based on the assessment of correlations between individual sample combinations and composite samples, as well as repeatability estimates for individual sample combinations, we found that collecting up to three individual samples yields a representative composite sample. Collection of samples from all the cows of a herd every third month might be a good choice, because it would yield better accuracy. © 2015 Blackwell Verlag GmbH.

  4. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  5. Development and validation of a protocol for field validation of passive dosimeters for ethylene oxide excursion limit monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Puskar, M.A.; Szopinski, F.G.; Hecker, L.H. (Corporate Industrial Hygiene Laboratory, Abbott Laboratories, North Chicago, IL (USA))

    1991-04-01

    An exposure and analysis protocol is described for the field validation of passive dosimeters for ethylene oxide (EtO) excursion limit monitoring. The protocol calls for the use of a field exposure chamber with concurrent sampling using Tedlar air-sampling bags. The bags are analyzed immediately after sampling by gas chromatography with flame ionization detection (GC-FID). The chamber design allows all monitors to be exposed for exactly the same time in the field. The sampling and analysis procedure not only determines the actual concentration of EtO present during the monitors' exposure but also estimates whether concentrations of EtO vary from point to point in the monitor array during the exposure. In chamber operation, the accuracy of the standard generator used to calibrate the GC-FID was independently verified in the field by the standard additions method. The sampling bias of the sampling train was determined to be -3.5% in the 2.4 ppm to 14.3 ppm concentration range. To estimate the stability of collected EtO samples in Tedlar bags, the rate of EtO loss in the bags was determined to be 0.011 ppm/hr at 2.57 ppm and 0.066 ppm/hr at 8.07 ppm. The sampling bias of the passive method caused by additional EtO exposure of the monitors in the closed chamber after sampling and during purging was determined to be +1.5%. The Tedlar bag sampling method with subsequent GC-FID determination demonstrated a coefficient of variation of 1.8% at 2.43 ppm.

  6. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
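
    A hedged numerical sketch of the procedure described above: Fourier-transform a sample image, fit the logarithm of the squared norm against squared spatial frequency, read off a Gaussian PSF width, and convert it to an MTF. A blurred random image stands in for the micrograph or tomographic section:

        import numpy as np

        rng = np.random.default_rng(5)
        img = rng.random((256, 256))

        # blur with a known Gaussian PSF to create the "sample image"
        sigma_true = 2.0
        fy = np.fft.fftfreq(img.shape[0])
        fx = np.fft.fftfreq(img.shape[1])
        FY, FX = np.meshgrid(fy, fx, indexing="ij")
        f2 = FX**2 + FY**2
        otf_true = np.exp(-2 * np.pi**2 * sigma_true**2 * f2)
        img_blur = np.fft.ifft2(np.fft.fft2(img) * otf_true).real

        # estimation: linear fit of log|F|^2 versus squared frequency
        F = np.fft.fft2(img_blur)
        mask = (f2 > 0) & (f2 < 0.05)                  # skip DC and the noisy tail
        slope, _ = np.polyfit(f2[mask], np.log(np.abs(F[mask])**2), 1)
        sigma_est = np.sqrt(-slope / (4 * np.pi**2))   # log|F|^2 = const - 4*pi^2*sigma^2*f^2
        mtf = np.exp(-2 * np.pi**2 * sigma_est**2 * f2)
        print(f"recovered PSF sigma ~ {sigma_est:.2f} px (true {sigma_true})")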

  7. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    Science.gov (United States)

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than Nlog(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
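
    For context, a brute-force sketch of the quantity the recursive transform maintains: a discrete Fourier sum evaluated directly on nonuniform heart-beat times (O(N*M) here overall, as opposed to the paper's O(N) per-update recursion, which is not reproduced). The synthetic tachogram embeds a 0.1 Hz rhythm:

        import numpy as np

        rng = np.random.default_rng(2)

        beat_times = np.cumsum(rng.normal(0.8, 0.05, 300))              # irregular beat times, s
        t = beat_times[1:]                                               # nonuniform sample times
        rr = np.diff(beat_times) + 0.03 * np.sin(2 * np.pi * 0.1 * t)   # R-R interval series

        freqs = np.linspace(0.01, 0.5, 200)                              # Hz, HRV band of interest
        x = rr - rr.mean()
        spectrum = np.array([np.abs(np.sum(x * np.exp(-2j * np.pi * f * t)))**2
                             for f in freqs])
        print("peak frequency ~", round(freqs[spectrum.argmax()], 3), "Hz")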

  8. PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS

    Energy Technology Data Exchange (ETDEWEB)

    He, Shiyuan; Huang, Jianhua Z.; Long, James [Department of Statistics, Texas A and M University, College Station, TX (United States); Yuan, Wenlong; Macri, Lucas M., E-mail: lmacri@tamu.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX (United States)

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.

  9. Estimation of salt intake from spot urine samples in patients with chronic kidney disease

    Directory of Open Access Journals (Sweden)

    Ogura Makoto

    2012-06-01

    Full Text Available Abstract Background High salt intake in patients with chronic kidney disease (CKD) may cause high blood pressure and increased albuminuria. Although the estimation of salt intake is essential, there are no easy methods to estimate real salt intake. Methods Salt intake was assessed by determining urinary sodium excretion from the collected urine samples. Estimation of salt intake from spot urine was calculated using Tanaka's formula. The correlation between estimated and measured sodium excretion was evaluated by Pearson's correlation coefficients. Performance of the equation was assessed by median bias, interquartile range (IQR), proportion of estimates within 30% deviation of measured sodium excretion (P30) and root mean square error (RMSE). The sensitivity and specificity of estimated against measured sodium excretion were separately assessed by receiver-operating characteristic (ROC) curves. Results A total of 334 urine samples from 96 patients were examined. Mean age was 58 ± 16 years, and estimated glomerular filtration rate (eGFR) was 53 ± 27 mL/min. Among these patients, 35 had CKD stage 1 or 2, 39 had stage 3, and 22 had stage 4 or 5. Estimated sodium excretion significantly correlated with measured sodium excretion (R = 0.52, P 170 mEq/day (AUC 0.835). Conclusions The present study demonstrated that spot urine can be used to estimate sodium excretion, especially in patients with low eGFR.
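
    A hedged sketch of the Tanaka spot-urine equation used in the study, in its commonly cited form (the coefficients should be verified against the original reference before any use); the example inputs are invented:

        def tanaka_24h_sodium(spot_na_meq_l, spot_cr_mg_dl, age_yr, weight_kg, height_cm):
            """Estimated 24-h urinary Na excretion (mEq/day) from a spot urine sample."""
            pred_cr_mg_day = (-2.04 * age_yr + 14.89 * weight_kg
                              + 16.14 * height_cm - 2244.45)  # predicted 24-h creatinine
            x_na = spot_na_meq_l / (spot_cr_mg_dl * 10.0) * pred_cr_mg_day
            return 21.98 * x_na ** 0.392

        est_na = tanaka_24h_sodium(spot_na_meq_l=120, spot_cr_mg_dl=80,
                                   age_yr=58, weight_kg=60, height_cm=160)
        print(f"estimated Na ~ {est_na:.0f} mEq/day (~ {est_na * 0.0585:.1f} g salt/day)")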

  10. Evaluating sampling strategies for larval cisco (Coregonus artedi)

    Science.gov (United States)

    Myers, J.T.; Stockwell, J.D.; Yule, D.L.; Black, J.A.

    2008-01-01

    To improve our ability to assess larval cisco (Coregonus artedi) populations in Lake Superior, we conducted a study to compare several sampling strategies. First, we compared density estimates of larval cisco concurrently captured in surface waters with a 2 x 1-m paired neuston net and a 0.5-m (diameter) conical net. Density estimates obtained from the two gear types were not significantly different, suggesting that the conical net is a reasonable alternative to the more cumbersome and costly neuston net. Next, we assessed the effect of tow pattern (sinusoidal versus straight tows) to examine if propeller wash affected larval density. We found no effect of propeller wash on the catchability of larval cisco. Given the availability of global positioning systems, we recommend sampling larval cisco using straight tows to simplify protocols and facilitate straightforward measurements of volume filtered. Finally, we investigated potential trends in larval cisco density estimates by sampling four time periods during the light period of a day at individual sites. Our results indicate no significant trends in larval density estimates during the day. We conclude estimates of larval cisco density across space are not confounded by time at a daily timescale. Well-designed, cost effective surveys of larval cisco abundance will help to further our understanding of this important Great Lakes forage species.

  11. Automated CBED processing: Sample thickness estimation based on analysis of zone-axis CBED pattern

    Energy Technology Data Exchange (ETDEWEB)

    Klinger, M., E-mail: klinger@post.cz; Němec, M.; Polívka, L.; Gärtnerová, V.; Jäger, A.

    2015-03-15

    An automated processing of convergent beam electron diffraction (CBED) patterns is presented. The proposed methods are used in an automated tool for estimating the thickness of transmission electron microscopy (TEM) samples by matching an experimental zone-axis CBED pattern with a series of patterns simulated for known thicknesses. The proposed tool detects CBED disks, localizes a pattern in detected disks and unifies the coordinate system of the experimental pattern with the simulated one. The experimental pattern is then compared disk-by-disk with a series of simulated patterns each corresponding to different known thicknesses. The thickness of the most similar simulated pattern is then taken as the thickness estimate. The tool was tested on [0 1 1] Si, [0 1 0] α-Ti and [0 1 1] α-Ti samples prepared using different techniques. Results of the presented approach were compared with thickness estimates based on analysis of CBED patterns in two beam conditions. The mean difference between these two methods was 4.1% for the FIB-prepared silicon samples, 5.2% for the electro-chemically polished titanium and 7.9% for Ar+ ion-polished titanium. The proposed techniques can also be employed in other established CBED analyses. Apart from the thickness estimation, it can potentially be used to quantify lattice deformation, structure factors, symmetry, defects or extinction distance. - Highlights: • Automated TEM sample thickness estimation using zone-axis CBED is presented. • Computer vision and artificial intelligence are employed in CBED processing. • This approach reduces operator effort, analysis time and increases repeatability. • Individual parts can be employed in other analyses of CBED/diffraction pattern.

  12. Temporally stratified sampling programs for estimation of fish impingement

    International Nuclear Information System (INIS)

    Kumar, K.D.; Griffith, J.S.

    1977-01-01

    Impingement monitoring programs often expend valuable and limited resources and fail to provide a dependable estimate of either total annual impingement or those biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified "problem" fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques of designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are utilized as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species.
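
    A minimal sketch of the temporally stratified estimator implied above: sampling effort is concentrated in the season of high impingement, and the annual total is the sum of per-stratum daily means scaled by stratum length. All counts are invented:

        import numpy as np

        strata = {
            # season: (days in stratum, impinged-fish counts on sampled days)
            "winter": (90, np.array([420, 515, 380, 610, 455, 500, 390, 470])),
            "spring": (92, np.array([60, 45, 80, 55])),
            "summer": (92, np.array([12, 8, 15, 10])),
            "autumn": (91, np.array([95, 120, 85, 140])),
        }

        total, var = 0.0, 0.0
        for days, counts in strata.values():
            total += days * counts.mean()
            var += days**2 * counts.var(ddof=1) / counts.size  # finite-population correction ignored
        print(f"estimated annual impingement ~ {total:,.0f} +/- {np.sqrt(var):,.0f} (SE)")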

  13. Reliable Quantification of the Potential for Equations Based on Spot Urine Samples to Estimate Population Salt Intake

    DEFF Research Database (Denmark)

    Huang, Liping; Crino, Michelle; Wu, Jason Hy

    2016-01-01

    BACKGROUND: Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. OBJECTIVE: The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one another and against estimates derived from 24-hour urine samples. The effects of factors such as ethnicity, sex, age, body mass index, antihypertensive drug use, health status ... to a standard format. Individual participant records will be compiled and a series of analyses will be completed to: (1) compare existing equations for estimating 24-hour salt intake from spot urine samples with 24-hour urine samples, and assess the degree of bias according to key demographic and clinical ...

  14. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space in two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  15. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  16. Agricultural Soil Spectral Response and Properties Assessment: Effects of Measurement Protocol and Data Mining Technique

    Directory of Open Access Journals (Sweden)

    Asa Gholizadeh

    2017-10-01

    Full Text Available Soil spectroscopy has been shown to be a fast, cost-effective, environmentally friendly, non-destructive, reproducible and repeatable analytical technique. Soil components, as well as types of instruments, protocols, sampling methods, sample preparation, spectral acquisition techniques and analytical algorithms, have a combined influence on the final performance. Therefore, it is important to characterize these differences and to introduce an effective approach in order to minimize the technical factors that alter reflectance spectra and the consequent prediction. To quantify this alteration, a joint project between the Czech University of Life Sciences Prague (CULS) and Tel-Aviv University (TAU) was conducted to estimate Cox, pH-H2O, pH-KCl and selected forms of Fe and Mn. Two different soil spectral measurement protocols and two data mining techniques were used to examine seventy-eight soil samples from five agricultural areas in different parts of the Czech Republic. Spectral measurements at both laboratories were made using different ASD spectroradiometers. The CULS protocol was based on a contact probe (CP) spectral measurement scheme, while the TAU protocol was carried out using a CP measurement method accompanied by the internal soil standard (ISS) procedure. The two spectral datasets, acquired with the different protocols, were both analyzed using the partial least squares regression (PLSR) technique as well as PARACUDA II®, a new data mining engine for optimizing PLSR models. The results showed that spectra based on the CULS setup (non-ISS) demonstrated significantly higher albedo intensity and reflectance values relative to the TAU setup with ISS. However, the majority of statistics using the TAU protocol were not noticeably better than those from the CULS spectra. The paper also highlighted that under both measurement protocols, the PARACUDA II® engine proved to be a powerful tool for providing better results than PLSR. Such an initiative is not only a way to
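
    A hedged sketch of the PLSR step only (predicting a soil property from reflectance spectra with cross-validation); synthetic spectra stand in for the ASD measurements, the number of latent variables is an arbitrary choice, and the PARACUDA II® optimisation is not reproduced:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(4)
        n_samples, n_bands = 78, 500
        spectra = rng.random((n_samples, n_bands))
        cox = 3.0 * spectra[:, 120] - 1.5 * spectra[:, 300] + rng.normal(0, 0.1, n_samples)

        pls = PLSRegression(n_components=8)
        pred = cross_val_predict(pls, spectra, cox, cv=10).ravel()
        r2 = 1 - np.sum((cox - pred) ** 2) / np.sum((cox - cox.mean()) ** 2)
        rmse = np.sqrt(np.mean((cox - pred) ** 2))
        print(f"cross-validated R2 ~ {r2:.2f}, RMSE ~ {rmse:.3f}")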

  17. Centrifugation protocols: tests to determine optimal lithium heparin and citrate plasma sample quality.

    Science.gov (United States)

    Dimeski, Goce; Solano, Connie; Petroff, Mark K; Hynd, Matthew

    2011-05-01

    Currently, no clear guidelines exist for the most appropriate tests to determine sample quality from centrifugation protocols for plasma sample types with both lithium heparin in gel barrier tubes for biochemistry testing and citrate tubes for coagulation testing. Blood was collected from 14 participants in four lithium heparin and one serum tube with gel barrier. The plasma tubes were centrifuged at four different centrifuge settings and analysed for potassium (K(+)), lactate dehydrogenase (LD), glucose and phosphorus (Pi) at zero time, poststorage at six hours at 21 °C and six days at 2-8°C. At the same time, three citrate tubes were collected and centrifuged at three different centrifuge settings and analysed immediately for prothrombin time/international normalized ratio, activated partial thromboplastin time, derived fibrinogen and surface-activated clotting time (SACT). The biochemistry analytes indicate plasma is less stable than serum. Plasma sample quality is higher with longer centrifugation time, and much higher g force. Blood cells present in the plasma lyse with time or are damaged when transferred in the reaction vessels, causing an increase in the K(+), LD and Pi above outlined limits. The cells remain active and consume glucose even in cold storage. The SACT is the only coagulation parameter that was affected by platelets >10 × 10(9)/L in the citrate plasma. In addition to the platelet count, a limited but sensitive number of assays (K(+), LD, glucose and Pi for biochemistry, and SACT for coagulation) can be used to determine appropriate centrifuge settings to consistently obtain the highest quality lithium heparin and citrate plasma samples. The findings will aid laboratories to balance the need to provide the most accurate results in the best turnaround time.

  18. Radon in large buildings: The development of a protocol

    International Nuclear Information System (INIS)

    Wilson, D.L.; Dudney, C.S.; Gammage, R.B.

    1993-01-01

    Over the past several years, considerable research has been devoted by the US Environmental Protection Agency (USEPA) and others to developing radon sampling protocols for single-family residences and schools. However, very little research has been performed on measuring radon in the workplace. To evaluate possible sampling protocols, 833 buildings throughout the United States were selected for extensive radon testing. The buildings tested (warehouses, production plants and office buildings) were representative of commercial buildings across the country in design, size and use. Based on the results, preliminary radon sampling protocols for the workplace have been developed.

  19. Automatic sampling for unbiased and efficient stereological estimation using the proportionator in biological studies

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field...... of view is given a weight (the size) proportional to the total amount of requested image analysis features in it. The fields of view sampled with known probabilities proportional to individual weight are the only ones seen by the observer who provides the correct count. Even though the image analysis...... cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas.  The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling....
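
    A toy sketch of the proportionator idea: fields of view are sampled with probability proportional to an automatic image-analysis weight, only the sampled fields are counted by the observer, and the counts are reweighted into an unbiased estimate of the total. Synthetic weights and counts replace real image data:

        import numpy as np

        rng = np.random.default_rng(9)

        n_fields = 2000
        true_counts = rng.poisson(0.5, n_fields)                 # cells per field (unknown in practice)
        weights = true_counts + rng.poisson(0.3, n_fields) + 1   # imperfect image-analysis proxy

        p = weights / weights.sum()                              # PPS sampling probabilities
        idx = rng.choice(n_fields, size=50, replace=True, p=p)   # sampled fields of view

        # Hansen-Hurwitz style estimator for PPS sampling with replacement
        estimate = np.mean(true_counts[idx] / p[idx])
        print(f"estimated total ~ {estimate:.0f} (true total = {true_counts.sum()})")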

  20. Assessing Exhaustiveness of Stochastic Sampling for Integrative Modeling of Macromolecular Structures.

    Science.gov (United States)

    Viswanath, Shruthi; Chemmama, Ilan E; Cimermancic, Peter; Sali, Andrej

    2017-12-05

    Modeling of macromolecular structures involves structural sampling guided by a scoring function, resulting in an ensemble of good-scoring models. By necessity, the sampling is often stochastic, and must be exhaustive at a precision sufficient for accurate modeling and assessment of model uncertainty. Therefore, the very first step in analyzing the ensemble is an estimation of the highest precision at which the sampling is exhaustive. Here, we present an objective and automated method for this task. As a proxy for sampling exhaustiveness, we evaluate whether two independently and stochastically generated sets of models are sufficiently similar. The protocol includes testing 1) convergence of the model score, 2) whether model scores for the two samples were drawn from the same parent distribution, 3) whether each structural cluster includes models from each sample proportionally to its size, and 4) whether there is sufficient structural similarity between the two model samples in each cluster. The evaluation also provides the sampling precision, defined as the smallest clustering threshold that satisfies the third, most stringent test. We validate the protocol with the aid of enumerated good-scoring models for five illustrative cases of binary protein complexes. Passing the proposed four tests is necessary, but not sufficient for thorough sampling. The protocol is general in nature and can be applied to the stochastic sampling of any set of models, not just structural models. In addition, the tests can be used to stop stochastic sampling as soon as exhaustiveness at desired precision is reached, thereby improving sampling efficiency; they may also help in selecting a model representation that is sufficiently detailed to be informative, yet also sufficiently coarse for sampling to be exhaustive. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
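
    A hedged sketch of one ingredient of the protocol, test (2): checking whether model scores from two independent stochastic sampling runs could have been drawn from the same parent distribution, here with a two-sample Kolmogorov-Smirnov test on invented score samples (the full four-test procedure and the clustering steps are not reproduced):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        scores_run1 = rng.normal(loc=-120.0, scale=5.0, size=400)  # stand-in model scores
        scores_run2 = rng.normal(loc=-119.5, scale=5.0, size=400)

        stat, p_value = stats.ks_2samp(scores_run1, scores_run2)
        print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
        if p_value > 0.05:
            print("no evidence that the two runs sample different score distributions")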

  1. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.

  2. Uncertainties of estimating average radon and radon decay product concentrations in occupied houses

    International Nuclear Information System (INIS)

    Ronca-Battista, M.; Magno, P.; Windham, S.

    1986-01-01

    Radon and radon decay product measurements made in up to 68 Butte, Montana homes over a period of 18 months were used to estimate the uncertainty in estimating long-term average radon and radon decay product concentrations from a short-term measurement. This analysis was performed in support of the development of radon and radon decay product measurement protocols by the Environmental Protection Agency (EPA). The results of six measurement methods were analyzed: continuous radon and working level monitors, radon progeny integrating sampling units, alpha-track detectors, and grab radon and radon decay product techniques. Uncertainties were found to decrease with increasing sampling time and to be smaller when measurements were conducted during the winter months. In general, radon measurements had a smaller uncertainty than radon decay product measurements. As a result of this analysis, the EPA measurements protocols specify that all measurements be made under closed-house (winter) conditions, and that sampling times of at least a 24 hour period be used when the measurement will be the basis for a decision about remedial action or long-term health risks. 13 references, 3 tables

  3. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.

  4. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
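
    A hedged sketch of the small-sample effect discussed above: in a warm temperature bin holding only a few rainfall events, the empirical 99.9% quantile is capped by the sample, whereas a parametric GPD quantile keeps rising. scipy's fit here is maximum likelihood rather than the L-moment fit used in the paper, and the rainfall distribution is invented:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        true_dist = stats.genpareto(c=0.1, scale=5.0)   # stand-in rainfall intensities
        true_q = true_dist.ppf(0.999)

        for n_events in (30, 300, 3000):
            sample = true_dist.rvs(size=n_events, random_state=rng)
            empirical = np.quantile(sample, 0.999)
            c, loc, scale = stats.genpareto.fit(sample, floc=0.0)
            parametric = stats.genpareto.ppf(0.999, c, loc=loc, scale=scale)
            print(f"n={n_events:5d}: empirical {empirical:6.1f}, "
                  f"GPD {parametric:6.1f}, true {true_q:5.1f}")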

  5. Effects of sample size on estimation of rainfall extremes at high temperatures

    Directory of Open Access Journals (Sweden)

    B. Boessenkool

    2017-09-01

    Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
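
    The contrast drawn above — plotting-position (empirical) quantiles versus a GPD fitted by L-moments — can be made concrete with a short sketch. The Python code below uses synthetic intensities and the standard L-moment formulas for a GPD with its lower bound fixed at zero; it illustrates the estimator only and is not the authors' analysis of the German station data.

        # Sketch: high quantiles from a generalized Pareto distribution (GPD,
        # location fixed at 0) fitted by L-moments, compared with the empirical
        # quantile from the same small synthetic sample.
        import numpy as np

        def sample_l_moments(x):
            """First two sample L-moments (unbiased estimators)."""
            x = np.sort(np.asarray(x, dtype=float))
            n = len(x)
            b0 = x.mean()
            b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
            return b0, 2.0 * b1 - b0                 # l1, l2

        def gpd_lmom_fit(x):
            """GPD shape (xi) and scale (sigma) from L-moments, lower bound at 0."""
            l1, l2 = sample_l_moments(x)
            xi = 2.0 - l1 / l2                        # shape
            sigma = l1 * (1.0 - xi)                   # scale
            return xi, sigma

        def gpd_quantile(p, xi, sigma):
            """Quantile function of the fitted GPD (xi != 0)."""
            return sigma / xi * ((1.0 - p) ** (-xi) - 1.0)

        rng = np.random.default_rng(1)
        intensities = rng.gamma(shape=0.8, scale=3.0, size=60)   # small sample

        xi, sigma = gpd_lmom_fit(intensities)
        p = 0.999
        print("empirical 99.9% quantile  :", np.quantile(intensities, p))
        print("GPD/L-moment 99.9% quantile:", gpd_quantile(p, xi, sigma))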

  6. Evaluation of sampling, cookery, and shear force protocols for objective evaluation of lamb longissimus tenderness.

    Science.gov (United States)

    Shackelford, S D; Wheeler, T L; Koohmaraie, M

    2004-03-01

    Experiments were conducted to compare the effects of two cookery methods, two shear force procedures, and sampling location within non-callipyge and callipyge lamb LM on the magnitude, variance, and repeatability of LM shear force data. In Exp. 1, 15 non-callipyge and 15 callipyge carcasses were sampled, and Warner-Bratzler shear force (WBSF) was determined for both sides of each carcass at three locations along the length (anterior to posterior) of the LM, whereas slice shear force (SSF) was determined for both sides of each carcass at only one location. For approximately half the carcasses within each genotype, LM chops were cooked for a constant amount of time using a belt grill, and chops of the remaining carcasses were cooked to a constant endpoint temperature using open-hearth electric broilers. Regardless of cooking method and sampling location, repeatability estimates were at least 0.8 for LM WBSF and SSF. For WBSF, repeatability estimates were slightly higher at the anterior location (0.93 to 0.98) than the posterior location (0.88 to 0.90). The difference in repeatability between locations was probably a function of a greater level of variation in shear force at the anterior location. For callipyge LM, WBSF was higher (P lamb LM chops cooked with the belt grill using a larger number of animals (n = 87). In Exp. 2, LM chops were obtained from matching locations of both sides of 44 non-callipyge and 43 callipyge carcasses. Chops were cooked with a belt grill and SSF was measured, and repeatability was estimated to be 0.95. Repeatable estimates of lamb LM tenderness can be achieved either by cooking to a constant endpoint temperature with electric broilers or cooking for a constant amount of time with a belt grill. Likewise, repeatable estimates of lamb LM tenderness can be achieved with WBSF or SSF. However, use of belt grill cookery and the SSF technique could decrease time requirements which would decrease research costs.

  7. Evaluating the accuracy of sampling to estimate central line-days: simplification of the National Healthcare Safety Network surveillance methods.

    Science.gov (United States)

    Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S

    2013-03-01

    To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data to estimate central line-days (CLDs). We obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants, together with CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN), for a convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had a CLD percentage error of less than or equal to ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
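
    The estimation step being evaluated — scaling the CLD observed on sampled days up to a monthly total and computing the percentage error — is simple enough to sketch. The Poisson daily counts and the fixed Tuesday/Wednesday day-pair rule below are assumptions for illustration, not the study's data or its sampling scheme.

        # Sketch: estimate monthly central line-days (CLD) from a day-pair sample
        # and compare with the true monthly total. Daily counts are simulated.
        import numpy as np

        rng = np.random.default_rng(2)
        daily_cld = rng.poisson(lam=12, size=30)                  # 30 days of CLD counts

        sample_days = [d for d in range(30) if d % 7 in (1, 2)]   # one day-pair per week
        sampled_total = daily_cld[sample_days].sum()
        estimated_cld = sampled_total * len(daily_cld) / len(sample_days)

        actual_cld = daily_cld.sum()
        pct_error = 100.0 * (estimated_cld - actual_cld) / actual_cld
        print(f"actual={actual_cld}, estimated={estimated_cld:.0f}, error={pct_error:+.1f}%")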

  8. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell

  9. Reliability of different sampling densities for estimating and mapping lichen diversity in biomonitoring studies

    International Nuclear Information System (INIS)

    Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.

    2004-01-01

    Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design

  10. Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling

    OpenAIRE

    Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.

    2013-01-01

    Gini index, Bonferroni index, and Absolute Lorenz index are some popular indices of inequality showing different features of inequality measurement. In general simple random sampling procedure is commonly used to estimate the inequality indices and their related inference. The key condition that the samples must be drawn via simple random sampling procedure though makes calculations much simpler but this assumption is often violated in practice as the data does not always yield simple random ...

  11. Magnetic resonance imaging of third molars: developing a protocol suitable for forensic age estimation.

    Science.gov (United States)

    De Tobel, Jannick; Hillewig, Elke; Bogaert, Stephanie; Deblaere, Karel; Verstraete, Koenraad

    2017-03-01

    Established dental age estimation methods in sub-adults study the development of third molar root apices on radiographs. In living individuals, however, avoiding ionising radiation is expedient. Studying dental development with magnetic resonance imaging complies with this requirement, adding the advantage of imaging in three dimensions. The aim was to develop an MRI protocol to visualise all third molars for forensic age estimation, with particular attention to the development of the root apex. Ex vivo scans of porcine jaws and in vivo scans of 10 volunteers aged 17-25 years were performed to select adequate sequences. Studied parameters were T1 vs T2 weighting, ultrashort echo time (UTE), fat suppression, in-plane resolution, slice thickness, 3D imaging, signal-to-noise ratio, and acquisition time. A bilateral four-channel flexible surface coil was used. Two observers evaluated the suitability of the images. T2-weighted images were preferred to T1-weighted images. To clearly distinguish root apices in (almost) fully developed third molars, an in-plane resolution of 0.33 × 0.33 mm² was deemed necessary. Taking acquisition time limits into account, only a T2 FSE sequence with slice thickness of 2 mm generated images with sufficient resolution and contrast. UTE, thinner-slice T2 FSE and T2 3D FSE sequences could not generate the desired resolution within 6.5 minutes. Three Tesla MRI of the third molars is a feasible technique for forensic age estimation, in which a T2 FSE sequence can provide the desired in-plane resolution within a clinically acceptable acquisition time.

  12. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.

  13. Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.

    Science.gov (United States)

    Niioka, Takenori

    2011-03-01

    Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotypes had no influence on the AUC of any of the PPIs during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples for the sake of patient acceptance. This article reviewed the usefulness of LSS for estimating AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 × C(6h) + 2638.03, AUC(LPZ) = 12.32 × C(6h) + 3276.09 and AUC(RPZ) = 1.39 × C(3h) + 7.17 × C(6h) + 344.14, respectively. In order to optimize the sampling strategy of LPZ, we tried to establish LSS for LPZ using a time point within 3 hours by exploiting the pharmacokinetic properties of its enantiomers. The best prediction formula using the fewest sampling points (one point) was AUC(racemic LPZ) = 6.5 × C(3h) of (R)-LPZ + 13.7 × C(3h) of (S)-LPZ - 9917.3 × G1 - 14387.2 × G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). Those strategies, plasma concentration monitoring at one or two time-points, might be more suitable for AUC estimation than reference to CYP2C19 genotypes, particularly in the case of coadministration of CYP mediators.
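
    For convenience, the limited-sampling formulas quoted in the abstract can be written directly as small helper functions. The sketch below assumes concentrations are supplied in the same units used when the formulas were derived; the argument names are ours, and the genotype indicators follow the coding given in the text.

        # The limited-sampling prediction formulas quoted above, as plain functions.

        def auc_opz(c6h):                # omeprazole, single 6 h sample
            return 9.24 * c6h + 2638.03

        def auc_lpz(c6h):                # lansoprazole, single 6 h sample
            return 12.32 * c6h + 3276.09

        def auc_rpz(c3h, c6h):           # rabeprazole, 3 h and 6 h samples
            return 1.39 * c3h + 7.17 * c6h + 344.14

        def auc_racemic_lpz(c3h_r, c3h_s, hom_em, het_em):
            # 3 h concentrations of the (R)- and (S)-enantiomers plus CYP2C19
            # genotype dummies: hom_em = 1 for homozygous extensive metabolizers,
            # het_em = 1 for heterozygous extensive metabolizers, otherwise 0.
            return (6.5 * c3h_r + 13.7 * c3h_s
                    - 9917.3 * hom_em - 14387.2 * het_em + 7103.6)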

  14. A model for estimating the minimum number of offspring to sample in studies of reproductive success.

    Science.gov (United States)

    Anderson, Joseph H; Ward, Eric J; Carlson, Stephanie M

    2011-01-01

    Molecular parentage permits studies of selection and evolution in fecund species with cryptic mating systems, such as fish, amphibians, and insects. However, there exists no method for estimating the number of offspring that must be assigned parentage to achieve robust estimates of reproductive success when only a fraction of offspring can be sampled. We constructed a 2-stage model that first estimated the mean (μ) and variance (v) in reproductive success from published studies on salmonid fishes and then sampled offspring from reproductive success distributions simulated from the μ and v estimates. Results provided strong support for modeling salmonid reproductive success via the negative binomial distribution and suggested that few offspring samples are needed to reject the null hypothesis of uniform offspring production. However, the sampled reproductive success distributions deviated significantly (χ² goodness-of-fit test, p < 0.05) from the true reproductive success distribution at rates often >0.05 and as high as 0.24, even when hundreds of offspring were assigned parentage. In general, reproductive success patterns were less accurate when offspring were sampled from cohorts with larger numbers of parents and greater variance in reproductive success. Our model can be reparameterized with data from other species and will aid researchers in planning reproductive success studies by providing explicit sampling targets required to accurately assess reproductive success.
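
    A toy version of the simulation logic described above might look as follows: per-parent offspring numbers are drawn from a negative binomial parameterized by a mean and variance, a fixed number of offspring is sampled and assigned to parents, and a chi-square test of uniform offspring production is applied. All parameter values are invented and the code is only a sketch of the approach, not the authors' model.

        # Toy sketch: power to reject uniform offspring production when only a
        # subsample of offspring is assigned parentage. Parameters are illustrative.
        import numpy as np
        from scipy.stats import chisquare

        rng = np.random.default_rng(3)

        def simulate_power(n_parents=50, mu=10.0, v=60.0, n_sampled=200, n_sims=500):
            # negative binomial with mean mu and variance v:
            # v = mu + mu**2 / nb_n  =>  nb_n = mu**2 / (v - mu), nb_p = mu / v
            nb_n = mu**2 / (v - mu)
            nb_p = nb_n / (nb_n + mu)
            rejections = 0
            for _ in range(n_sims):
                offspring = rng.negative_binomial(nb_n, nb_p, n_parents)  # true success
                if offspring.sum() < n_sampled:
                    continue                        # skip rare under-productive cohorts
                # sample offspring without replacement and attribute each to its parent
                parents = np.repeat(np.arange(n_parents), offspring)
                sampled = rng.choice(parents, size=n_sampled, replace=False)
                counts = np.bincount(sampled, minlength=n_parents)
                # chi-square test against uniform (equal) offspring production
                stat, pval = chisquare(counts)
                rejections += pval < 0.05
            return rejections / n_sims

        print("power to reject uniform production:", simulate_power())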

  15. Tissue Sampling Guides for Porcine Biomedical Models.

    Science.gov (United States)

    Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas

    2016-04-01

    This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin, and plastic histology; immunohistochemistry; in situ hybridization; electron microscopy; and quantitative stereology as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and dependent upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue as well as the intra-/interstudy comparability and reproducibility of results. © The Author(s) 2016.

  16. Non-parametric adaptive importance sampling for the probability estimation of a launcher impact position

    International Nuclear Information System (INIS)

    Morio, Jerome

    2011-01-01

    Importance sampling (IS) is a useful simulation technique for estimating critical probabilities with better accuracy than Monte Carlo methods. It consists in generating random weighted samples from an auxiliary distribution rather than from the distribution of interest. The crucial part of this algorithm is the choice of an efficient auxiliary PDF, one able to generate the rare random events of interest more frequently. In practice, optimising this auxiliary distribution is often very difficult. In this article, we propose to approximate the optimal IS auxiliary density with non-parametric adaptive importance sampling (NAIS). We apply this technique to the probability estimation of a launcher's spatial impact position, which has become an increasingly important issue in the field of aeronautics.
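
    As a minimal illustration of plain importance sampling (the starting point that NAIS refines), the sketch below estimates a small Gaussian exceedance probability with an auxiliary density shifted toward the failure region; the threshold and sample sizes are arbitrary, and the non-parametric adaptive step itself is not shown.

        # Minimal importance sampling sketch: estimate P(X > t) for standard normal X
        # with a Gaussian auxiliary density centred on the failure threshold.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)
        t = 4.5                                    # failure threshold (rare event)
        n = 20_000

        # Crude Monte Carlo (usually returns 0 at this sample size)
        x_mc = rng.standard_normal(n)
        p_mc = np.mean(x_mc > t)

        # Importance sampling with auxiliary density q = N(t, 1)
        x_is = rng.normal(loc=t, scale=1.0, size=n)
        weights = norm.pdf(x_is) / norm.pdf(x_is, loc=t, scale=1.0)
        p_is = np.mean((x_is > t) * weights)

        print(f"exact      : {norm.sf(t):.3e}")
        print(f"crude MC   : {p_mc:.3e}")
        print(f"IS (shifted): {p_is:.3e}")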

  17. Reef-associated crustacean fauna: biodiversity estimates using semi-quantitative sampling and DNA barcoding

    Science.gov (United States)

    Plaisance, L.; Knowlton, N.; Paulay, G.; Meyer, C.

    2009-12-01

    The cryptofauna associated with coral reefs accounts for a major part of the biodiversity in these ecosystems but has been largely overlooked in biodiversity estimates because the organisms are hard to collect and identify. We combine a semi-quantitative sampling design and a DNA barcoding approach to provide metrics for the diversity of reef-associated crustacean. Twenty-two similar-sized dead heads of Pocillopora were sampled at 10 m depth from five central Pacific Ocean localities (four atolls in the Northern Line Islands and in Moorea, French Polynesia). All crustaceans were removed, and partial cytochrome oxidase subunit I was sequenced from 403 individuals, yielding 135 distinct taxa using a species-level criterion of 5% similarity. Most crustacean species were rare; 44% of the OTUs were represented by a single individual, and an additional 33% were represented by several specimens found only in one of the five localities. The Northern Line Islands and Moorea shared only 11 OTUs. Total numbers estimated by species richness statistics (Chao1 and ACE) suggest at least 90 species of crustaceans in Moorea and 150 in the Northern Line Islands for this habitat type. However, rarefaction curves for each region failed to approach an asymptote, and Chao1 and ACE estimators did not stabilize after sampling eight heads in Moorea, so even these diversity figures are underestimates. Nevertheless, even this modest sampling effort from a very limited habitat resulted in surprisingly high species numbers.

  18. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, were not studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Is gait variability reliable in older adults and Parkinson's disease? Towards an optimal testing protocol.

    Science.gov (United States)

    Galna, Brook; Lord, Sue; Rochester, Lynn

    2013-04-01

    Despite the widespread use of gait variability in research and clinical studies, testing protocols designed to optimise its reliability have not been established. This study evaluates the impact of testing protocol and pathology on the reliability of gait variability. To (i) estimate the reliability of gait variability during continuous and intermittent walking protocols in older adults and people with Parkinson's disease (PD), (ii) determine optimal number of steps for acceptable levels of reliability of gait variability and (iii) provide sample size estimates for use in clinical trials. Gait variability was measured twice, one week apart, in 27 older adults and 25 PD participants. Participants walked at their preferred pace during: (i) a continuous 2 min walk and (ii) 3 intermittent walks over a 12 m walkway. Gait variability was calculated as the within-person standard deviation for step velocity, length and width, and step, stance and swing duration. Reliability of gait variability ranged from poor to excellent (intra class correlations .041-.860; relative limits of agreement 34-89%). Gait variability was more reliable during continuous walks. Control and PD participants demonstrated similar reliability. Increasing the number of steps improved reliability, with most improvement seen across the first 30 steps. In this study, we identified testing protocols that improve the reliability of measuring gait variability. We recommend using a continuous walking protocol and to collect no fewer than 30 steps. Early PD does not appear to impact negatively on the reliability of gait variability. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. [Analyzing consumer preference by using the latest semantic model for verbal protocol].

    Science.gov (United States)

    Tamari, Yuki; Takemura, Kazuhisa

    2012-02-01

    This paper examines consumers' preferences for competing brands by using a preference model of verbal protocols. Participants were 150 university students, who reported their opinions and feelings about McDonalds and Mos Burger (competing hamburger restaurants in Japan). Their verbal protocols were analyzed by using the singular value decomposition method, and the latent decision frames were estimated. The verbal protocols having a large value in the decision frames could be interpreted as showing attributes that consumers emphasize. Based on the estimated decision frames, we predicted consumers' preferences using the logistic regression analysis method. The results indicate that the decision frames projected from the verbal protocol data explained consumers' preferences effectively.
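
    The abstract does not give implementation details, but the generic pipeline it describes — decompose a respondent-by-term matrix built from the verbal protocols with an SVD, interpret the leading components as latent decision frames, and feed the component scores to a logistic regression of brand choice — can be sketched as follows. All data, dimensions and the number of frames are invented for illustration.

        # Generic sketch of the analysis pipeline: SVD of a respondent-by-term
        # protocol matrix, then logistic regression of preference on the latent
        # decision-frame scores. Data and vocabulary are synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)

        n_respondents, n_terms, n_frames = 150, 40, 3
        counts = rng.poisson(1.0, size=(n_respondents, n_terms))   # term frequencies
        preference = rng.integers(0, 2, size=n_respondents)        # 0/1 brand choice

        # Truncated SVD: scores of each respondent on the latent decision frames
        U, s, Vt = np.linalg.svd(counts.astype(float), full_matrices=False)
        frame_scores = U[:, :n_frames] * s[:n_frames]

        # Terms with large loadings on a frame indicate attributes respondents emphasize
        top_terms = np.argsort(-np.abs(Vt[0]))[:5]
        print("terms loading most on frame 1:", top_terms)

        # Predict preference from the latent frame scores
        clf = LogisticRegression().fit(frame_scores, preference)
        print("in-sample accuracy:", clf.score(frame_scores, preference))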

  1. Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.

    Energy Technology Data Exchange (ETDEWEB)

    Parresol, Bernard, R.

    2004-02-01

    This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.

  2. Assessment of cardiorespiratory fitness using submaximal protocol in older adults with mood disorder and Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Natacha Alves de Oliveira

    2013-01-01

    Full Text Available BACKGROUND: Evidence has shown mental health benefits from aerobic training prescribed as a percentage of VO2max, indicating the importance of this variable for clinical practice. OBJECTIVE: To validate a method for estimating VO2max using a submaximal protocol in elderly patients with a clinical diagnosis of major depressive disorder (MDD) or Parkinson's disease (PD). METHODS: The sample comprised 18 patients (64.22 ± 9.92 years) with MDD (n = 7) or PD (n = 11). Three evaluations were performed: (I) disease staging, (II) direct measurement of VO2max and (III) a submaximal exercise test. Linear regression was used to assess the agreement between the VO2max measured by ergospirometry and the VO2max predicted from the submaximal test. Agreement was also analysed using Bland-Altman procedures. RESULTS: The regression analysis showed that VO2max values estimated by the submaximal protocol were associated with the measured VO2max, both in absolute terms (R² = 0.65; SEE = 0.26; p < 0.001) and in relative terms (R² = 0.56; SEE = 3.70; p < 0.001). The Bland-Altman analysis showed good agreement between the two measures. DISCUSSION: The VO2max predicted by the submaximal protocol demonstrated satisfactory criterion validity and simple execution compared with ergospirometry.

  3. Comparison of protocols for genomic DNA extraction from 'velame ...

    African Journals Online (AJOL)

    usuario

    2013-07-24

    Jul 24, 2013 ... involving C. linearifolius, we compared the efficiency of six protocols for genomic DNA extraction previously ... phytic, with diverse aspect and floristics, average rainfall between ..... The variation observed for DNA concentrations estimated with .... performed with protocol 1 (data not shown), or still, bands.

  4. Estimating cross-validatory predictive p-values with integrated importance sampling for disease mapping models.

    Science.gov (United States)

    Li, Longhai; Feng, Cindy X; Qiu, Shi

    2017-06-30

    An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on a full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution without reference to the actual observation. By following the general theory for importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS with three other existing methods in the literature on two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to the predictive p-values estimated with actual LOOCV and outperform those given by the three existing methods, namely, the posterior predictive checking, the ordinary importance sampling, and the ghosting method by Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.

  5. Improved sampling for airborne surveys to estimate wildlife population parameters in the African Savannah

    NARCIS (Netherlands)

    Khaemba, W.; Stein, A.

    2002-01-01

    Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like

  6. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  7. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
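
    A much simplified simulation of the power-estimation idea can be sketched as follows: simulate two groups of toy community profiles, compute Bray-Curtis distances, evaluate Anderson's pseudo-F with a permutation p-value, and count rejections across simulated studies. This does not reproduce the authors' distance-matrix simulation method or their R package; the group sizes, effect and metric are arbitrary choices.

        # Simplified PERMANOVA power sketch with synthetic community data.
        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(6)

        def pseudo_f(D, labels):
            """Anderson's PERMANOVA pseudo-F from a square distance matrix."""
            n = len(labels)
            ss_total = (D ** 2).sum() / (2 * n)
            ss_within = 0.0
            groups = np.unique(labels)
            for g in groups:
                idx = np.where(labels == g)[0]
                ss_within += (D[np.ix_(idx, idx)] ** 2).sum() / (2 * len(idx))
            a = len(groups)
            ss_among = ss_total - ss_within
            return (ss_among / (a - 1)) / (ss_within / (n - a))

        def permanova_pvalue(D, labels, n_perm=199):
            f_obs = pseudo_f(D, labels)
            perms = [pseudo_f(D, rng.permutation(labels)) for _ in range(n_perm)]
            return (1 + sum(f >= f_obs for f in perms)) / (n_perm + 1)

        def estimate_power(n_per_group=15, effect=0.6, n_sims=100):
            """Fraction of simulated studies with PERMANOVA p < 0.05."""
            labels = np.array([0] * n_per_group + [1] * n_per_group)
            hits = 0
            for _ in range(n_sims):
                # toy community data: group 2 shifted by `effect` in a few taxa
                x = rng.gamma(2.0, 1.0, size=(2 * n_per_group, 20))
                x[n_per_group:, :5] += effect
                D = squareform(pdist(x, metric="braycurtis"))
                hits += permanova_pvalue(D, labels) < 0.05
            return hits / n_sims

        print("estimated PERMANOVA power:", estimate_power())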

  8. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    Science.gov (United States)

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
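
    As a sketch of the quantity being estimated, the textbook biserial formula — the standardized mean difference rescaled by pq over the normal ordinate at the cut point — can be coded directly. This is the generic estimator assuming a normal variable underlies the dichotomy; it is not the specific meta-analytic estimator or the sampling-variance formulas studied in the paper.

        # Textbook estimator of the biserial correlation between a continuous
        # variable y and a dichotomized variable x (0/1).
        import numpy as np
        from scipy.stats import norm

        def biserial_r(y, x):
            y = np.asarray(y, float)
            x = np.asarray(x, int)
            p = x.mean()                      # proportion in the "1" group
            q = 1.0 - p
            m1, m0 = y[x == 1].mean(), y[x == 0].mean()
            s_y = y.std(ddof=0)               # total SD of the continuous variable
            ordinate = norm.pdf(norm.ppf(p))  # normal density at the cut point
            return (m1 - m0) / s_y * p * q / ordinate

        # quick check: dichotomize one of two correlated normals
        rng = np.random.default_rng(7)
        z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=5000)
        y, x = z[:, 0], (z[:, 1] > 0.3).astype(int)
        print("biserial estimate (true latent r = 0.5):", round(biserial_r(y, x), 3))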

  9. Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2014-10-01

    Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors have two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to “purify” multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high-frequency noise signals.

  10. Comparison of blood RNA isolation methods from samples stabilized in Tempus tubes and stored at a large human biobank.

    Science.gov (United States)

    Aarem, Jeanette; Brunborg, Gunnar; Aas, Kaja K; Harbak, Kari; Taipale, Miia M; Magnus, Per; Knudsen, Gun Peggy; Duale, Nur

    2016-09-01

    More than 50,000 adult and cord blood samples were collected in Tempus tubes and stored at the Norwegian Institute of Public Health Biobank for future use. In this study, we systematically evaluated and compared five blood-RNA isolation protocols: three blood-RNA isolation protocols optimized for simultaneous isolation of all blood-RNA species (MagMAX RNA Isolation Kit, both manual and semi-automated protocols; and Norgen Preserved Blood RNA kit I); and two protocols optimized for large RNAs only (Tempus Spin RNA, and Tempus 6-port isolation kit). We estimated the following parameters: RNA quality, RNA yield, processing time, cost per sample, and RNA transcript stability of six selected mRNAs and 13 miRNAs using real-time qPCR. Whole blood samples from adults (n = 59 tubes) and umbilical cord blood (n = 18 tubes) samples collected in Tempus tubes were analyzed. High-quality blood-RNAs with average RIN-values above seven were extracted using all five RNA isolation protocols. The transcript levels of the six selected genes showed minimal variation between the five protocols. Unexplained differences within the transcript levels of the 13 miRNA were observed; however, the 13 miRNAs had similar expression direction and they were within the same order of magnitude. Some differences in the RNA processing time and cost were noted. Sufficient amounts of high-quality RNA were obtained using all five protocols, and the Tempus blood RNA system therefore seems not to be dependent on one specific RNA isolation method.

  11. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    Science.gov (United States)

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-05

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. © 2016 The Authors.
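
    The core calculation behind an estimator of this kind can be sketched in a few lines: treat the per-species occurrence counts as Poisson, fit the rate to the zero-truncated counts, and divide the observed richness by the implied detection probability. The toy counts below are invented, and the sketch omits the binomial richness likelihood and confidence intervals used in TRiPS proper.

        # Sketch of Poisson-sampling richness estimation from occurrence counts.
        import numpy as np
        from scipy.optimize import brentq

        def poisson_sampling_richness(counts):
            counts = np.asarray(counts, float)      # times each observed species occurs
            s_obs = len(counts)
            mean_obs = counts.mean()
            # zero-truncated Poisson: E[k | k > 0] = lam / (1 - exp(-lam))
            lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - mean_obs, 1e-6, 1e3)
            p_detect = 1.0 - np.exp(-lam)           # P(species observed at least once)
            return s_obs / p_detect, lam, p_detect

        occurrences = [1, 1, 1, 2, 1, 3, 1, 2, 5, 1, 1, 2]   # toy fossil counts
        richness, lam, p = poisson_sampling_richness(occurrences)
        print(f"observed species = {len(occurrences)}, sampling rate = {lam:.2f}, "
              f"detection prob = {p:.2f}, estimated true richness = {richness:.1f}")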

  12. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Full Text Available Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and verification estimations rely on a high quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach we applied it for assessment of deforestation in an area characterized by frequent cloud cover and very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.

  13. Estimation of technetium 99m mercaptoacetyltriglycine plasma clearance by use of one single plasma sample

    International Nuclear Information System (INIS)

    Mueller-Suur, R.; Magnusson, G.; Karolinska Inst., Stockholm; Bois-Svensson, I.; Jansson, B.

    1991-01-01

    Recent studies have shown that technetium 99m mercaptoacetyltriglycine (MAG-3) is a suitable replacement for iodine 131 or 123 hippurate in gamma-camera renography. Also, the determination of its clearance is of value, since it correlates well with that of hippurate and thus may be an indirect measure of renal plasma flow. In order to simplify the clearance method we developed formulas for the estimation of plasma clearance of MAG-3 based on a single plasma sample and compared them with the multiple sample method based on 7 plasma samples. The correlation to effective renal plasma flow (ERPF) (according to Tauxe's method, using iodine 123 hippurate), which ranged from 75 to 654 ml/min per 1.73 m², was determined in these patients. Using the developed regression equations the error of estimate for the simplified clearance method was acceptably low (18-14 ml/min), when the single plasma sample was taken 44-64 min post-injection. Formulas for different sampling times at 44, 48, 52, 56, 60 and 64 min are given, and we recommend 60 min as optimal, with an error of estimate of 15.5 ml/min. The correlation between the MAG-3 clearances and ERPF was high (r=0.90). Since normal values for MAG-3 clearance are not yet available, transformation to estimated ERPF values by the regression equation (ERPF = 1.86 × C(MAG-3) + 4.6) could be of clinical value in order to compare it with the normal values for ERPF given in the literature. (orig.)

  14. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio.

    Directory of Open Access Journals (Sweden)

    João Fabrício Mota Rodrigues

    Full Text Available Sampling the biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient are the sampling methods commonly used in biodiversity surveys in estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex-ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns.

  15. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio.

    Science.gov (United States)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco

    2016-01-01

    Sampling the biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient are the sampling methods commonly used in biodiversity surveys in estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex-ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns.

  16. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  17. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the

  18. Uncertainty Estimation of Neutron Activation Analysis in Zinc Elemental Determination in Food Samples

    International Nuclear Information System (INIS)

    Endah Damastuti; Muhayatun; Diah Dwiana L

    2009-01-01

    Besides fulfilling the requirements of the international standard ISO/IEC 17025:2005, uncertainty estimation should be performed to increase the quality of and confidence in analysis results and to establish their traceability to SI units. Neutron activation analysis is a major technique used by the radiometric analysis laboratory and is included in the scope of accreditation under ISO/IEC 17025:2005; therefore, uncertainty estimation for neutron activation analysis needs to be carried out. Sample and standard preparation, as well as irradiation and measurement using gamma spectrometry, were the main activities contributing to the uncertainty. The components of the uncertainty sources are explained in detail. The expanded uncertainty was 4.0 mg/kg at a 95% level of confidence (coverage factor = 2), for a Zn concentration of 25.1 mg/kg. Counting statistics of the sample and standard were the major contributors to the combined uncertainty. The uncertainty estimation is expected to increase the quality of the analysis results and can be applied further to other kinds of samples. (author)

  19. Some remarks on estimating a covariance structure model from a sample correlation matrix

    OpenAIRE

    Maydeu Olivares, Alberto; Hernández Estrada, Adolfo

    2000-01-01

    A popular model in structural equation modeling involves a multivariate normal density with a structured covariance matrix that has been categorized according to a set of thresholds. In this setup one may estimate the covariance structure parameters from the sample tetrachoric/polychoric correlations but only if the covariance structure is scale invariant. Doing so when the covariance structure is not scale invariant results in estimating a more restricted covariance structure than the one i...

  20. Estimation of equivalent dose and its uncertainty in the OSL SAR protocol when count numbers do not follow a Poisson distribution

    International Nuclear Information System (INIS)

    Bluszcz, Andrzej; Adamiec, Grzegorz; Heer, Aleksandra J.

    2015-01-01

    The current work focuses on the estimation of equivalent dose and its uncertainty using the single aliquot regenerative protocol in optically stimulated luminescence measurements. The authors show that the count numbers recorded with the use of photomultiplier tubes are well described by negative binomial distributions, different ones for background counts and photon-induced counts. This fact is then exploited in pseudo-random count number generation and simulations of De determination assuming a saturating exponential growth. A least squares fitting procedure is applied using different types of weights to determine whether the obtained De values and their error estimates are unbiased and accurate. A weighting procedure is suggested that leads to almost unbiased De estimates. It is also shown that the assumption of a Poisson distribution in De estimation may lead to severe underestimation of the De error. - Highlights: • Detailed analysis of statistics of count numbers in luminescence readers. • Generation of realistically scattered pseudo-random numbers of counts in luminescence measurements. • A practical guide for stringent analysis of De values and error assessment.
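
    A toy version of the kind of simulation described here might proceed as follows: draw overdispersed (negative binomial) counts around a saturating-exponential dose response, fit the curve by weighted least squares, and recover De by inverting the fit. The doses, curve parameters, dispersion factor and weighting rule are all invented for illustration and do not reproduce the authors' analysis.

        # Toy De recovery with negative-binomial (overdispersed) luminescence counts.
        import numpy as np
        from scipy.optimize import curve_fit, brentq

        rng = np.random.default_rng(8)

        def growth(d, i_max, d0, bg):
            """Saturating exponential dose response plus background."""
            return bg + i_max * (1.0 - np.exp(-d / d0))

        def nb_counts(mean, dispersion=2.0):
            """Negative binomial counts with variance = dispersion * mean."""
            p = 1.0 / dispersion
            n = mean * p / (1.0 - p)
            return rng.negative_binomial(n, p)

        true = dict(i_max=50_000.0, d0=120.0, bg=200.0)
        true_de = 45.0
        doses = np.array([0.0, 20.0, 50.0, 100.0, 200.0])          # regeneration doses
        signals = np.array([nb_counts(growth(d, **true)) for d in doses], dtype=float)
        natural = float(nb_counts(growth(true_de, **true)))

        # weighted least squares: weights from the (overdispersed) count variance
        sigma = np.sqrt(2.0 * signals)
        popt, _ = curve_fit(growth, doses, signals, p0=[4e4, 100.0, 100.0],
                            sigma=sigma, absolute_sigma=True)

        de = brentq(lambda d: growth(d, *popt) - natural, 0.0, 1e4)
        print(f"recovered De = {de:.1f} (true {true_de}) with fit {np.round(popt, 1)}")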

  1. Correcting for Systematic Bias in Sample Estimates of Population Variances: Why Do We Divide by n-1?

    Science.gov (United States)

    Mittag, Kathleen Cage

    An important topic presented in introductory statistics courses is the estimation of population parameters using samples. Students learn that when estimating population variances using sample data, we always get an underestimate of the population variance if we divide by n rather than n-1. One implication of this correction is that the degree of…
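
    The claim is easy to verify numerically; the short simulation below, with an arbitrary normal population and small sample size, shows the divide-by-n estimator landing below the true variance on average while the divide-by-(n-1) version does not.

        # Quick demonstration of the bias removed by dividing by n - 1.
        import numpy as np

        rng = np.random.default_rng(9)
        pop_var = 4.0                       # population is N(0, 2^2)
        n, reps = 5, 100_000

        samples = rng.normal(0.0, 2.0, size=(reps, n))
        var_n = samples.var(axis=1, ddof=0)            # divide by n
        var_n_minus_1 = samples.var(axis=1, ddof=1)    # divide by n - 1

        print("true variance       :", pop_var)
        print("mean of s^2 (n)     :", var_n.mean())          # ~3.2, biased low
        print("mean of s^2 (n - 1) :", var_n_minus_1.mean())   # ~4.0, unbiased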

  2. Genotyping faecal samples of Bengal tiger Panthera tigris tigris for population estimation: A pilot study

    Directory of Open Access Journals (Sweden)

    Singh Lalji

    2006-10-01

    Full Text Available Abstract Background Bengal tiger Panthera tigris tigris the National Animal of India, is an endangered species. Estimating populations for such species is the main objective for designing conservation measures and for evaluating those that are already in place. Due to the tiger's cryptic and secretive behaviour, it is not possible to enumerate and monitor its populations through direct observations; instead indirect methods have always been used for studying tigers in the wild. DNA methods based on non-invasive sampling have not been attempted so far for tiger population studies in India. We describe here a pilot study using DNA extracted from faecal samples of tigers for the purpose of population estimation. Results In this study, PCR primers were developed based on tiger-specific variations in the mitochondrial cytochrome b for reliably identifying tiger faecal samples from those of sympatric carnivores. Microsatellite markers were developed for the identification of individual tigers with a sibling Probability of Identity of 0.005 that can distinguish even closely related individuals with 99.9% certainty. The effectiveness of using field-collected tiger faecal samples for DNA analysis was evaluated by sampling, identification and subsequently genotyping samples from two protected areas in southern India. Conclusion Our results demonstrate the feasibility of using tiger faecal matter as a potential source of DNA for population estimation of tigers in protected areas in India in addition to the methods currently in use.

  3. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    Science.gov (United States)

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  4. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Science.gov (United States)

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively, we discuss how

  5. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Directory of Open Access Journals (Sweden)

    Abhishek Mitra

    Full Text Available Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively

  6. Effect of variable rates of daily sampling of fly larvae on decomposition and carrion insect community assembly: implications for forensic entomology field study protocols.

    Science.gov (United States)

    Michaud, Jean-Philippe; Moreau, Gaétan

    2013-07-01

    Experimental protocols in forensic entomology successional field studies generally involve daily sampling of insects to document temporal changes in species composition on animal carcasses. One challenge with that method has been to adjust the sampling intensity to obtain the best representation of the community present without affecting that community. To date, little is known about how such investigator perturbations affect decomposition-related processes. Here, we investigated how different levels of daily sampling of fly eggs and fly larvae affected, over time, carcass decomposition rate and the carrion insect community. Results indicated that a daily sampling of … forensic entomology successional field studies.

  7. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required and the difficulty of attaining reliable estimates, we advise caution should such a mark-recapture effort be initiated. We make recommendations for what techniques show the most promise for mark-recapture studies of bats because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  8. Sex Estimation From Modern American Humeri and Femora, Accounting for Sample Variance Structure

    DEFF Research Database (Denmark)

    Boldsen, J. L.; Milner, G. R.; Boldsen, S. K.

    2015-01-01

    Objectives: A new procedure for skeletal sex estimation based on humeral and femoral dimensions is presented, based on skeletons from the United States. The approach specifically addresses the problem that arises from a lack of variance homogeneity between the sexes, taking into account prior information about the sample's sex ratio, if known. Material and methods: Three measurements useful for estimating the sex of adult skeletons, the humeral and femoral head diameters and the humeral epicondylar breadth, were collected from 258 Americans born between 1893 and 1980 who died within the past several decades. Results: For measurements individually and collectively, the probabilities of being one sex or the other were generated for samples with an equal distribution of males and females, taking into account the variance structure of the original measurements. The combination providing the best…

  9. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
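    A minimal example of the underlying idea, using a Gaussian tail probability rather than the paper's communication-system model: the importance-sampling estimate and its estimated variance are compared with direct Monte Carlo. The shifted proposal density is an assumption chosen for illustration.

```python
# Importance sampling of a small tail probability, with estimator variances.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
threshold, n = 4.0, 20_000
p_true = norm.sf(threshold)

# Direct Monte Carlo.
x = rng.standard_normal(n)
mc = (x > threshold).astype(float)

# Importance sampling: draw from N(threshold, 1) and reweight.
y = rng.normal(threshold, 1.0, n)
w = norm.pdf(y) / norm.pdf(y, loc=threshold)        # likelihood ratio
is_samples = w * (y > threshold)

for name, s in [("direct MC", mc), ("importance sampling", is_samples)]:
    est = s.mean()
    var = s.var(ddof=1) / n                          # estimator variance
    print(f"{name:20s} estimate={est:.2e}  var={var:.2e}  (true {p_true:.2e})")
```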

  10. Matrix algebra and sampling theory : The case of the Horvitz-Thompson estimator

    NARCIS (Netherlands)

    Dol, W.; Steerneman, A.G.M.; Wansbeek, T.J.

    Matrix algebra is a tool not commonly employed in sampling theory. The intention of this paper is to help change this situation by showing, in the context of the Horvitz-Thompson (HT) estimator, the convenience of the use of a number of matrix-algebra results. Sufficient conditions for the
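    For reference, the Horvitz-Thompson estimator itself is a one-line computation: each sampled value is weighted by the inverse of its inclusion probability (in matrix terms, the sampled values premultiplied by the inverse of a diagonal matrix of inclusion probabilities). The numbers below are invented for illustration.

```python
# Horvitz-Thompson estimate of a population total from an unequal-probability sample.
import numpy as np

y_sampled = np.array([12.0, 7.5, 30.2, 4.1])        # observed values
pi_sampled = np.array([0.10, 0.05, 0.25, 0.02])     # first-order inclusion probabilities

total_hat = np.sum(y_sampled / pi_sampled)
print(f"Horvitz-Thompson estimate of the population total: {total_hat:.1f}")
```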

  11. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. At the same time, this trend is accompanied by growing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples Monte Carlo sampling with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms that use iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
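    A minimal GLUE-style sketch of the workflow described above, using a two-parameter toy model, a Nash-Sutcliffe informal likelihood and an assumed behavioral threshold; the paper's contribution, replacing the plain random sampling step with heuristic optimization algorithms, is not reproduced here.

```python
# GLUE-style sketch: sample parameters, score them, keep behavioral sets,
# and form likelihood-weighted prediction bounds.  Toy model and threshold assumed.
import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(0.0, 10.0, 30)

def model(a, b):
    return a * np.exp(-b * t)                # toy two-parameter "hydrological" model

obs = model(5.0, 0.4) + rng.normal(0.0, 0.2, t.size)

# 1. Sample parameters from uniform priors.
a = rng.uniform(1.0, 10.0, 5000)
b = rng.uniform(0.05, 1.0, 5000)
sims = np.array([model(ai, bi) for ai, bi in zip(a, b)])

# 2. Informal likelihood: Nash-Sutcliffe efficiency, negative values set to 0.
nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
like = np.clip(nse, 0.0, None)

# 3. Keep "behavioral" sets and normalize their likelihoods into weights.
behavioral = like > 0.5                      # assumed behavioral threshold
w = like[behavioral] / like[behavioral].sum()

# 4. Likelihood-weighted 5-95% prediction bounds at each time step.
def weighted_quantile(values, q):
    order = np.argsort(values)
    cdf = np.cumsum(w[order])
    return np.interp(q, cdf, values[order])

bounds = np.array([[weighted_quantile(sims[behavioral, i], q) for q in (0.05, 0.95)]
                   for i in range(t.size)])
print(f"{behavioral.sum()} behavioral sets; 5-95% bounds at t=0: {bounds[0].round(2)}")
```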

  12. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    Directory of Open Access Journals (Sweden)

    Shanyou Zhu

    2014-01-01

    Full Text Available Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  13. Dysphonia risk screening protocol

    Science.gov (United States)

    Nemr, Katia; Simões-Zenari, Marcia; da Trindade Duarte, João Marcos; Lobrigate, Karen Elena; Bagatini, Flavia Alves

    2016-01-01

    OBJECTIVE: To propose and test the applicability of a dysphonia risk screening protocol with score calculation in individuals with and without dysphonia. METHOD: This descriptive cross-sectional study included 365 individuals (41 children, 142 adult women, 91 adult men and 91 seniors) divided into a dysphonic group and a non-dysphonic group. The protocol consisted of 18 questions and a score was calculated using a 10-cm visual analog scale. The measured value on the visual analog scale was added to the overall score, along with other partial scores. Speech samples allowed for analysis/assessment of the overall degree of vocal deviation and initial definition of the respective groups and after six months, the separation of the groups was confirmed using an acoustic analysis. RESULTS: The mean total scores were different between the groups in all samples. Values ranged between 37.0 and 57.85 in the dysphonic group and between 12.95 and 19.28 in the non-dysphonic group, with overall means of 46.09 and 15.55, respectively. High sensitivity and specificity were demonstrated when discriminating between the groups with the following cut-off points: 22.50 (children), 29.25 (adult women), 22.75 (adult men), and 27.10 (seniors). CONCLUSION: The protocol demonstrated high sensitivity and specificity in differentiating groups of individuals with and without dysphonia in different sample groups and is thus an effective instrument for use in voice clinics. PMID:27074171

  14. Dysphonia risk screening protocol

    Directory of Open Access Journals (Sweden)

    Katia Nemr

    2016-03-01

    Full Text Available OBJECTIVE: To propose and test the applicability of a dysphonia risk screening protocol with score calculation in individuals with and without dysphonia. METHOD: This descriptive cross-sectional study included 365 individuals (41 children, 142 adult women, 91 adult men and 91 seniors) divided into a dysphonic group and a non-dysphonic group. The protocol consisted of 18 questions and a score was calculated using a 10-cm visual analog scale. The measured value on the visual analog scale was added to the overall score, along with other partial scores. Speech samples allowed for analysis/assessment of the overall degree of vocal deviation and initial definition of the respective groups and after six months, the separation of the groups was confirmed using an acoustic analysis. RESULTS: The mean total scores were different between the groups in all samples. Values ranged between 37.0 and 57.85 in the dysphonic group and between 12.95 and 19.28 in the non-dysphonic group, with overall means of 46.09 and 15.55, respectively. High sensitivity and specificity were demonstrated when discriminating between the groups with the following cut-off points: 22.50 (children), 29.25 (adult women), 22.75 (adult men), and 27.10 (seniors). CONCLUSION: The protocol demonstrated high sensitivity and specificity in differentiating groups of individuals with and without dysphonia in different sample groups and is thus an effective instrument for use in voice clinics.

  15. Estimating instream constituent loads using replicate synoptic sampling, Peru Creek, Colorado

    Science.gov (United States)

    Runkel, Robert L.; Walton-Day, Katherine; Kimball, Briant A.; Verplanck, Philip L.; Nimick, David A.

    2013-01-01

    The synoptic mass balance approach is often used to evaluate constituent mass loading in streams affected by mine drainage. Spatial profiles of constituent mass load are used to identify sources of contamination and prioritize sites for remedial action. This paper presents a field scale study in which replicate synoptic sampling campaigns are used to quantify the aggregate uncertainty in constituent load that arises from (1) laboratory analyses of constituent and tracer concentrations, (2) field sampling error, and (3) temporal variation in concentration from diel constituent cycles and/or source variation. Consideration of these factors represents an advance in the application of the synoptic mass balance approach by placing error bars on estimates of constituent load and by allowing all sources of uncertainty to be quantified in aggregate; previous applications of the approach have provided only point estimates of constituent load and considered only a subset of the possible errors. Given estimates of aggregate uncertainty, site specific data and expert judgement may be used to qualitatively assess the contributions of individual factors to uncertainty. This assessment can be used to guide the collection of additional data to reduce uncertainty. Further, error bars provided by the replicate approach can aid the investigator in the interpretation of spatial loading profiles and the subsequent identification of constituent source areas within the watershed.The replicate sampling approach is applied to Peru Creek, a stream receiving acidic, metal-rich effluent from the Pennsylvania Mine. Other sources of acidity and metals within the study reach include a wetland area adjacent to the mine and tributary inflow from Cinnamon Gulch. Analysis of data collected under low-flow conditions indicates that concentrations of Al, Cd, Cu, Fe, Mn, Pb, and Zn in Peru Creek exceed aquatic life standards. Constituent loading within the study reach is dominated by effluent from the
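    The core replicate-load calculation can be sketched in a few lines: at each synoptic site the load is streamflow times concentration, and replicate campaigns give a mean load with an aggregate-uncertainty error bar. The discharges and zinc concentrations below are invented for illustration.

```python
# Replicate synoptic load calculation: load = Q * C, replicates give error bars.
import numpy as np

q = np.array([[0.21, 0.24, 0.30],            # discharge (m^3/s), 2 replicate
              [0.22, 0.25, 0.31]])           # campaigns at 3 downstream sites
c_zn = np.array([[150., 480., 430.],         # Zn concentration (ug/L)
                 [160., 500., 410.]])

loads = q * c_zn * 86.4 / 1000.0             # kg/day per replicate and site
mean_load = loads.mean(axis=0)
sd_load = loads.std(axis=0, ddof=1)

for site, (m, s) in enumerate(zip(mean_load, sd_load), start=1):
    print(f"site {site}: {m:.1f} +/- {s:.1f} kg/day")
```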

  16. Estimating instream constituent loads using replicate synoptic sampling, Peru Creek, Colorado

    Science.gov (United States)

    Runkel, Robert L.; Walton-Day, Katherine; Kimball, Briant A.; Verplanck, Philip L.; Nimick, David A.

    2013-05-01

    The synoptic mass balance approach is often used to evaluate constituent mass loading in streams affected by mine drainage. Spatial profiles of constituent mass load are used to identify sources of contamination and prioritize sites for remedial action. This paper presents a field scale study in which replicate synoptic sampling campaigns are used to quantify the aggregate uncertainty in constituent load that arises from (1) laboratory analyses of constituent and tracer concentrations, (2) field sampling error, and (3) temporal variation in concentration from diel constituent cycles and/or source variation. Consideration of these factors represents an advance in the application of the synoptic mass balance approach by placing error bars on estimates of constituent load and by allowing all sources of uncertainty to be quantified in aggregate; previous applications of the approach have provided only point estimates of constituent load and considered only a subset of the possible errors. Given estimates of aggregate uncertainty, site specific data and expert judgement may be used to qualitatively assess the contributions of individual factors to uncertainty. This assessment can be used to guide the collection of additional data to reduce uncertainty. Further, error bars provided by the replicate approach can aid the investigator in the interpretation of spatial loading profiles and the subsequent identification of constituent source areas within the watershed. The replicate sampling approach is applied to Peru Creek, a stream receiving acidic, metal-rich effluent from the Pennsylvania Mine. Other sources of acidity and metals within the study reach include a wetland area adjacent to the mine and tributary inflow from Cinnamon Gulch. Analysis of data collected under low-flow conditions indicates that concentrations of Al, Cd, Cu, Fe, Mn, Pb, and Zn in Peru Creek exceed aquatic life standards. Constituent loading within the study reach is dominated by effluent

  17. Understanding protocol performance: impact of test performance.

    Science.gov (United States)

    Turner, Robert G

    2013-01-01

    This is the second of two articles that examine the factors that determine protocol performance. The objective of these articles is to provide a general understanding of protocol performance that can be used to estimate performance, establish limits on performance, decide if a protocol is justified, and ultimately select a protocol. The first article was concerned with protocol criteria and test correlation. It demonstrated the advantages and disadvantages of different criteria when all tests had the same performance. It also examined the impact of increasing test correlation on protocol performance and the characteristics of the different criteria. To examine the impact on protocol performance when individual tests in a protocol have different performance. This is evaluated for different criteria and test correlations. The results of the two articles are combined and summarized. A mathematical model is used to calculate protocol performance for different protocol criteria and test correlations when there are small to large variations in the performance of individual tests in the protocol. The performance of the individual tests that make up a protocol has a significant impact on the performance of the protocol. As expected, the better the performance of the individual tests, the better the performance of the protocol. Many of the characteristics of the different criteria are relatively independent of the variation in the performance of the individual tests. However, increasing test variation degrades some criteria advantages and causes a new disadvantage to appear. This negative impact increases as test variation increases and as more tests are added to the protocol. Best protocol performance is obtained when individual tests are uncorrelated and have the same performance. In general, the greater the variation in the performance of tests in the protocol, the more detrimental this variation is to protocol performance. Since this negative impact is increased as

  18. 21 CFR 660.6 - Samples; protocols; official release.

    Science.gov (United States)

    2010-04-01

    ... product iodinated with 125I means a sample from each lot of diagnostic test kits in a finished package... manufacturer has satisfactorily completed all tests on the samples: (i) One sample until written notification... of this section, a sample of product not iodinated with 125I means a sample from each filling of each...

  19. Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.

    Energy Technology Data Exchange (ETDEWEB)

    Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
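    As background to the second result, the sketch below shows the classic one-pass weighted reservoir sampling scheme (Efraimidis-Spirakis keys kept in a small heap) for static weights; the report's algorithm for dynamically changing weights is not reproduced here.

```python
# Weighted reservoir sampling with static weights: keep the k items with the
# largest random keys u**(1/w), using a min-heap as the reservoir.
import heapq
import random

def weighted_reservoir(stream, k, seed=0):
    """stream yields (item, weight) pairs with weight > 0; returns k items."""
    rng = random.Random(seed)
    heap = []                                   # min-heap of (key, item)
    for item, w in stream:
        key = rng.random() ** (1.0 / w)         # larger weight -> larger key on average
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

# Usage: items with weight 10 should appear far more often than weight-1 items.
data = [(f"item{i}", 10.0 if i % 100 == 0 else 1.0) for i in range(10_000)]
print(weighted_reservoir(data, k=5))
```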

  20. Improving ambulatory saliva-sampling compliance in pregnant women: a randomized controlled study.

    Directory of Open Access Journals (Sweden)

    Julian Moeller

    Full Text Available OBJECTIVE: Noncompliance with scheduled ambulatory saliva sampling is common and has been associated with biased cortisol estimates in nonpregnant subjects. This study is the first to investigate in pregnant women strategies to improve ambulatory saliva-sampling compliance, and the association between sampling noncompliance and saliva cortisol estimates. METHODS: We instructed 64 pregnant women to collect eight scheduled saliva samples on two consecutive days each. Objective compliance with scheduled sampling times was assessed with a Medication Event Monitoring System and self-reported compliance with a paper-and-pencil diary. In a randomized controlled study, we estimated whether a disclosure intervention (informing women about objective compliance monitoring) and a reminder intervention (use of acoustical reminders) improved compliance. A mixed model analysis was used to estimate associations between women's objective compliance and their diurnal cortisol profiles, and between deviation from scheduled sampling and the cortisol concentration measured in the related sample. RESULTS: Self-reported compliance with a saliva-sampling protocol was 91%, and objective compliance was 70%. The disclosure intervention was associated with improved objective compliance (informed: 81%, noninformed: 60%; F(1,60) = 17.64, p<0.001), but not the reminder intervention (reminders: 68%, without reminders: 72%; F(1,60) = 0.78, p = 0.379). Furthermore, a woman's increased objective compliance was associated with a higher diurnal cortisol profile, F(2,64) = 8.22, p<0.001. Altered cortisol levels were observed in less objective compliant samples, F(1,705) = 7.38, p = 0.007, with delayed sampling associated with lower cortisol levels. CONCLUSIONS: The results suggest that in pregnant women, objective noncompliance with scheduled ambulatory saliva sampling is common and is associated with biased cortisol estimates. To improve sampling compliance, results suggest

  1. Influence of feed provisioning prior to digesta sampling on precaecal amino acid digestibility in broiler chickens.

    Science.gov (United States)

    Siegert, Wolfgang; Ganzer, Christian; Kluth, Holger; Rodehutscord, Markus

    2018-06-01

    A regression approach was applied to determine the influence of feed provisioning prior to digesta sampling on precaecal (pc) amino acid (AA) digestibility in broiler chickens. Soybean meal was used as an example test ingredient. Five feed-provisioning protocols were investigated, four with restricted provision and one with ad libitum provision. When provision was restricted, feed was provided for 30 min after a withdrawal period of 12 h. Digesta were sampled 1, 2, 4 and 6 h after feeding commenced. A diet containing 300 g maize starch/kg was prepared. Half or all the maize starch was replaced with soybean meal in two other diets. Average pc digestibility of all determined AA in the soybean meal was 86% for the 4 and 6-h protocols and 66% and 60% for the 2 and 1-h protocols, respectively. Average pc AA digestibility of soybean meal was 76% for ad libitum feed provision. Feed provisioning also influenced the determined variance. Variance in digestibility ranked in magnitude 1 h > ad libitum > 2 h > 6 h > 4 h for all AA. Owing to the considerable influence of feed-provisioning protocols found in this study, comparisons of pc AA digestibility between studies applying different protocols prior to digesta sampling must be treated with caution. Digestibility experiments aimed at providing estimates for practical feed formulation should use feed-provisioning procedures similar to those used in practice.

  2. Estimation of uranium in bioassay samples of occupational workers by laser fluorimetry

    International Nuclear Information System (INIS)

    Suja, A.; Prabhu, S.P.; Sawant, P.D.; Sarkar, P.K.; Tiwari, A.K.; Sharma, R.

    2010-01-01

    A newly established uranium processing facility has been commissioned at BARC, Trombay. Monitoring of occupational workers at regular intervals is essential to assess intake of uranium by the workers in this facility. The design and engineering safety features of the plant are such that there is very low probability of uranium becoming airborne during normal operations. However, leakages from the system during routine maintenance of the plant may result in intake of uranium by workers. As per the new biokinetic model for uranium, 63% of uranium entering the blood stream gets directly excreted in urine. Therefore, bioassay monitoring (urinalysis) was recommended for these workers. A group of 21 workers was selected for bioassay monitoring to assess the existing urinary excretion levels of uranium before the commencement of actual work. For this purpose, a sample collection kit along with an instruction slip was provided to the workers. Bioassay samples received were wet ashed with conc. nitric acid and hydrogen peroxide to break down the metabolized complexes of uranium, and the uranium was co-precipitated with calcium phosphate. Separation of uranium from the matrix was done using an ion exchange technique, and final activity quantification in these samples was done using a laser fluorimeter (Quantalase, Model No. NFL/02). Calibration of the laser fluorimeter is done using a 10 ppb uranium standard (WHO, France Ref. No. 180000). Verification of the system performance is done by measuring the concentration of uranium in standards (1 ppb to 100 ppb). The standard addition method was followed for estimation of uranium concentration in the samples. Uranyl ions present in the sample are excited by a pulsed nitrogen laser at 337.1 nm and, on de-excitation, emit fluorescence light (540 nm), the intensity of which is measured by the PMT. To estimate the uranium in the bioassay samples, a known aliquot of the sample was mixed with 5% sodium pyrophosphate and the fluorescence intensity was measured
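    The standard-addition calculation mentioned above reduces to a linear fit: fluorescence is measured for the aliquot spiked with increasing amounts of standard, and the original concentration is the magnitude of the x-axis intercept. The readings below are invented for illustration.

```python
# Standard-addition estimate: fit F = k*(C0 + Cadded), so C0 = intercept/slope.
import numpy as np

added_ppb = np.array([0.0, 2.0, 4.0, 6.0])               # standard added to the aliquot
fluorescence = np.array([310.0, 565.0, 820.0, 1090.0])   # arbitrary fluorimeter units

slope, intercept = np.polyfit(added_ppb, fluorescence, 1)
c_sample = intercept / slope                              # ppb in the measured aliquot
print(f"estimated uranium concentration in aliquot: {c_sample:.2f} ppb")
```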

  3. Coded-Wire Tag Expansion Factors for Chinook Salmon Carcass Surveys in California: Estimating the Numbers and Proportions of Hatchery-Origin Fish

    Directory of Open Access Journals (Sweden)

    Michael S. Mohr

    2013-12-01

    Full Text Available Recovery of fish with adipose fin clips (adc) and coded-wire tags (cwt) in escapement surveys allows calculation of expansion factors used in estimation of the total number of fish from each adc,cwt release group, allowing escapement to be resolved by age and stock of origin. Expanded recoveries are used to derive important estimates such as the total number and proportion of hatchery-origin fish present. The standard estimation scheme assumes accurate visual classification of adc status, which can be problematic for decomposing carcasses. Failure to account for this potential misclassification can lead to significant estimation bias. We reviewed sample expansion factors used for the California Central Valley Chinook salmon 2010 carcass surveys in this context. For upper Sacramento River fall-run and late fall-run carcass surveys, the estimated proportions of adc,cwt fish for fresh and non-fresh carcasses differed substantially, likely from the under-recognition of adc fish in non-fresh carcasses. The resulting estimated proportions of hatchery-origin fish in the upper Sacramento River fall-run and late fall-run carcass surveys were 2.33 to 2.89 times higher if only fresh carcasses are considered. Similar biases can be avoided by consideration of only fresh carcasses for which determination of adc status is relatively straightforward; however, restricting the analysis entirely to fresh carcasses may limit precision because of reduced sample size, and is only possible if protocols for sampling and recording data ensure that the sample data and results for fresh carcasses can be extracted. Thus we recommend sampling protocols that are clearly documented and separately track fresh versus non-fresh carcasses, either collecting only definitively adc fish or that carefully track non-fresh carcasses that are definitively adc versus those that are possibly adc. This would allow judicious use of non-fresh carcass data when sample sizes are otherwise

  4. Expedited Radiation Biodosimetry by Automated Dicentric Chromosome Identification (ADCI) and Dose Estimation.

    Science.gov (United States)

    Shirley, Ben; Li, Yanxin; Knoll, Joan H M; Rogan, Peter K

    2017-09-04

    Biological radiation dose can be estimated from dicentric chromosome frequencies in metaphase cells. Performing these cytogenetic dicentric chromosome assays is traditionally a manual, labor-intensive process not well suited to handle the volume of samples which may require examination in the wake of a mass casualty event. Automated Dicentric Chromosome Identifier and Dose Estimator (ADCI) software automates this process by examining sets of metaphase images using machine learning-based image processing techniques. The software selects appropriate images for analysis by removing unsuitable images, classifies each object as either a centromere-containing chromosome or non-chromosome, further distinguishes chromosomes as monocentric chromosomes (MCs) or dicentric chromosomes (DCs), determines DC frequency within a sample, and estimates biological radiation dose by comparing sample DC frequency with calibration curves computed using calibration samples. This protocol describes the usage of ADCI software. Typically, both calibration (known dose) and test (unknown dose) sets of metaphase images are imported to perform accurate dose estimation. Optimal images for analysis can be found automatically using preset image filters or can also be filtered through manual inspection. The software processes images within each sample and DC frequencies are computed at different levels of stringency for calling DCs, using a machine learning approach. Linear-quadratic calibration curves are generated based on DC frequencies in calibration samples exposed to known physical doses. Doses of test samples exposed to uncertain radiation levels are estimated from their DC frequencies using these calibration curves. Reports can be generated upon request and provide summary of results of one or more samples, of one or more calibration curves, or of dose estimation.
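    The dose-estimation step can be sketched as fitting a linear-quadratic calibration curve to dicentric frequencies from calibration samples and inverting it for a test sample; the calibration doses and frequencies below are invented and are not ADCI output.

```python
# Fit Y = c + alpha*D + beta*D^2 to calibration data, then invert for a test sample.
import numpy as np
from scipy.optimize import curve_fit

cal_dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])              # Gy
cal_freq = np.array([0.001, 0.012, 0.035, 0.110, 0.240, 0.410])  # dicentrics per cell

def lq(D, c, alpha, beta):
    return c + alpha * D + beta * D ** 2

(c, alpha, beta), _ = curve_fit(lq, cal_dose, cal_freq, p0=[0.001, 0.02, 0.05])

def estimate_dose(freq):
    """Invert the fitted linear-quadratic curve (positive root of the quadratic)."""
    disc = alpha ** 2 + 4.0 * beta * (freq - c)
    return (-alpha + np.sqrt(disc)) / (2.0 * beta)

test_freq = 0.18                      # dicentric frequency of a test sample (assumed)
print(f"estimated dose: {estimate_dose(test_freq):.2f} Gy")
```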

  5. Multiple surveys employing a new sample-processing protocol reveal the genetic diversity of placozoans in Japan.

    Science.gov (United States)

    Miyazawa, Hideyuki; Nakano, Hiroaki

    2018-03-01

    Placozoans, flat free-living marine invertebrates, possess an extremely simple bauplan lacking neurons and muscle cells and represent one of the earliest-branching metazoan phyla. They are widely distributed from temperate to tropical oceans. Based on mitochondrial 16S rRNA sequences, 19 haplotypes forming seven distinct clades have been reported in placozoans to date. In Japan, placozoans have been found at nine locations, but 16S genotyping has been performed at only two of these locations. Here, we propose a new processing protocol, "ethanol-treated substrate sampling," for collecting placozoans from natural environments. We also report the collection of placozoans from three new locations, the islands of Shikine-jima, Chichi-jima, and Haha-jima, and we present the distribution of the 16S haplotypes of placozoans in Japan. Multiple surveys conducted at multiple locations yielded five haplotypes that were not reported previously, revealing high genetic diversity in Japan, especially at Shimoda and Shikine-jima Island. The observed geographic distribution patterns were different among haplotypes; some were widely distributed, while others were sampled only from a single location. However, samplings conducted on different dates at the same sites yielded different haplotypes, suggesting that placozoans of a given haplotype do not inhabit the same site constantly throughout the year. Continued sampling efforts conducted during all seasons at multiple locations worldwide and the development of molecular markers within the haplotypes are needed to reveal the geographic distribution pattern and dispersal history of placozoans in greater detail.

  6. Bayesian adaptive survey protocols for resource management

    Science.gov (United States)

    Halstead, Brian J.; Wylie, Glenn D.; Coates, Peter S.; Casazza, Michael L.

    2011-01-01

    Transparency in resource management decisions requires a proper accounting of uncertainty at multiple stages of the decision-making process. As information becomes available, periodic review and updating of resource management protocols reduces uncertainty and improves management decisions. One of the most basic steps to mitigating anthropogenic effects on populations is determining if a population of a species occurs in an area that will be affected by human activity. Species are rarely detected with certainty, however, and falsely declaring a species absent can cause improper conservation decisions or even extirpation of populations. We propose a method to design survey protocols for imperfectly detected species that accounts for multiple sources of uncertainty in the detection process, is capable of quantitatively incorporating expert opinion into the decision-making process, allows periodic updates to the protocol, and permits resource managers to weigh the severity of consequences if the species is falsely declared absent. We developed our method using the giant gartersnake (Thamnophis gigas), a threatened species precinctive to the Central Valley of California, as a case study. Survey date was negatively related to the probability of detecting the giant gartersnake, and water temperature was positively related to the probability of detecting the giant gartersnake at a sampled location. Reporting sampling effort, timing and duration of surveys, and water temperatures would allow resource managers to evaluate the probability that the giant gartersnake occurs at sampled sites where it is not detected. This information would also allow periodic updates and quantitative evaluation of changes to the giant gartersnake survey protocol. Because it naturally allows multiple sources of information and is predicated upon the idea of updating information, Bayesian analysis is well-suited to solving the problem of developing efficient sampling protocols for species of
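    The core calculation behind such protocols is the posterior probability that a species is present given repeated non-detections; a minimal sketch is shown below with an assumed prior occupancy and per-survey detection probability (the values are illustrative, not estimates from the giant gartersnake study).

```python
# Posterior probability of presence after n surveys with no detections (Bayes' rule).
def prob_present_given_no_detection(psi: float, p: float, n_surveys: int) -> float:
    missed = psi * (1.0 - p) ** n_surveys        # present but never detected
    return missed / (missed + (1.0 - psi))

for n in (1, 3, 5, 10):
    print(n, round(prob_present_given_no_detection(psi=0.5, p=0.3, n_surveys=n), 3))
```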

  7. Estimation of sampling error uncertainties in observed surface air temperature change in China

    Science.gov (United States)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear at the station-sparse area of northern and western China with the maximum value exceeding 2.0 K², while small sampling error variances are found at the station-dense area of southern and eastern China with most grid values being less than 0.05 K². In general, temperature anomalies were negative in each month prior to the 1980s, and warming began thereafter, accelerating in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of the persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  8. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence...

  9. Generalized estimators of avian abundance from count survey data

    Directory of Open Access Journals (Sweden)

    Royle, J. A.

    2004-01-01

    Full Text Available I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitute a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be considered, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
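    A minimal sketch of the hierarchical (binomial-Poisson) structure described above: local abundance is Poisson at each sample location, replicate counts are binomial given that abundance, and the likelihood marginalizes over the unobserved abundances. The simulated data and the upper truncation bound are assumptions for illustration; this is not the paper's code.

```python
# Binomial-Poisson mixture for replicated point counts, fit by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(5)
R, J, lam_true, p_true, K = 100, 4, 3.0, 0.4, 60   # sites, visits, truth, truncation

N = rng.poisson(lam_true, R)                       # latent local abundances
y = rng.binomial(N[:, None], p_true, size=(R, J))  # replicate counts

def neg_log_lik(theta):
    lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
    n_grid = np.arange(K + 1)                      # marginalize N_i over 0..K
    log_prior = poisson.logpmf(n_grid, lam)
    ll = 0.0
    for i in range(R):
        log_obs = binom.logpmf(y[i][:, None], n_grid[None, :], p).sum(axis=0)
        ll += np.logaddexp.reduce(log_prior + log_obs)
    return -ll

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
lam_hat = np.exp(fit.x[0])
p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"lambda_hat={lam_hat:.2f} (true {lam_true}), p_hat={p_hat:.2f} (true {p_true})")
```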

  10. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal…

  11. A simple method for estimating the effective dose in dental CT. Conversion factors and calculation for a clinical low-dose protocol

    International Nuclear Information System (INIS)

    Homolka, P.; Kudler, H.; Nowotny, R.; Gahleitner, A.; Wien Univ.

    2001-01-01

    An easily applicable method to estimate the effective dose from dental computed tomography, including in its definition the high radiosensitivity of the salivary glands, is presented. Effective doses were calculated for a markedly dose-reduced dental CT protocol as well as for standard settings. Data are compared with effective doses from the literature obtained with other modalities frequently used in dental care. Methods: Conversion factors based on the weighted Computed Tomography Dose Index were derived from published data to calculate effective dose values for various CT exposure settings. Results: The conversion factors determined can be used for clinically used kVp settings and prefiltrations. With reduced tube current, an effective dose for a CT examination of the maxilla of 22 μSv can be achieved, which compares to values typically obtained with panoramic radiography (26 μSv). A CT scan of the mandible gives 123 μSv, comparable to a full mouth survey with intraoral films (150 μSv). Conclusion: For standard CT scan protocols of the mandible, effective doses exceed 600 μSv. Hence, low dose protocols for dental CT should be considered whenever feasible, especially for paediatric patients. If hard tissue diagnosis is performed, the potential for dose reduction is significant despite the higher image noise levels, as readability is still adequate. (orig.)

  12. Standardizing serum 25-hydroxyvitamin D data from four Nordic population samples using the Vitamin D Standardization Program protocols: Shedding new light on vitamin D status in Nordic individuals

    DEFF Research Database (Denmark)

    Cashman, Kevin D; Dowling, Kirsten G; Škrabáková, Zuzana

    2015-01-01

    … for the European Union are of variable quality, making it difficult to estimate the prevalence of vitamin D deficiency across member states. As a consequence of the widespread, method-related differences in measurements of serum 25(OH)D concentrations, the Vitamin D Standardization Program (VDSP) developed protocols for standardizing existing serum 25(OH)D data from national surveys around the world. The objective of the present work was to apply the VDSP protocols to existing serum 25(OH)D data from a Danish, a Norwegian, and a Finnish population-based health survey and from a Danish randomized controlled…

  13. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  14. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    Science.gov (United States)

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5 ) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m(3) for simulated children using the stratified population sampling method, and 12.2 μg/m(3) using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m(3) due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.

  15. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

    Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
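    A sample selection model of this general type can be sketched with a two-step (Heckman-style) correction on simulated data: a probit selection equation yields an inverse Mills ratio that is added to the outcome equation estimated on the selected sample. The data-generating values below are assumptions and do not reproduce the paper's specification.

```python
# Two-step correction for sample selection bias on simulated data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000
income = rng.normal(0.0, 1.0, n)
z_extra = rng.normal(0.0, 1.0, n)                 # variable in the selection equation only

# Correlated errors induce selection bias in a naive OLS on the selected sample.
e_sel, e_out = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n).T
selected = (0.5 + 0.8 * income + 1.0 * z_extra + e_sel > 0).astype(int)
demand = 2.0 + 1.5 * income + e_out               # observed only if selected

# Step 1: probit selection equation and inverse Mills ratio.
Z = sm.add_constant(np.column_stack([income, z_extra]))
probit = sm.Probit(selected, Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome equation on the selected sample, augmented with the IMR.
X = sm.add_constant(np.column_stack([income, imr]))[selected == 1]
ols = sm.OLS(demand[selected == 1], X).fit()
print(ols.params)      # constant, income effect, IMR coefficient
```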

  16. Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.

    Science.gov (United States)

    Azzari, Lucio; Foi, Alessandro

    2014-08-01

    We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
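    For contrast with the approach above, the conventional scatterplot route can be sketched on a simulated image with signal-dependent (Poissonian-Gaussian-like) noise: local patch means and variances are collected and a line var = a*mean + b is fitted. All parameters are invented for illustration; the paper's estimator instead works with large heterogeneous patches.

```python
# Estimate a signal-dependent noise model var = a*mean + b from local patch statistics.
import numpy as np

rng = np.random.default_rng(4)
a_true, b_true = 0.5, 4.0

# Piecewise-constant "image" so small patches are roughly homogeneous.
clean = np.kron(rng.uniform(10.0, 200.0, size=(16, 16)), np.ones((8, 8)))
noisy = clean + rng.normal(0.0, np.sqrt(a_true * clean + b_true))

# Local means and variances from non-overlapping 8x8 patches.
patches = noisy.reshape(16, 8, 16, 8).swapaxes(1, 2).reshape(-1, 64)
means, variances = patches.mean(axis=1), patches.var(axis=1, ddof=1)

a_hat, b_hat = np.polyfit(means, variances, 1)
print(f"estimated noise model: var = {a_hat:.2f}*mean + {b_hat:.2f} "
      f"(true {a_true}*mean + {b_true})")
```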

  17. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has limitations if the binary data contain separation. Separation is the condition in which one or several independent variables perfectly separate the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to compare the chance of separation occurring in the binary probit regression model between the MLE method and Firth's approach. Second, to compare the performance of binary probit regression model estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are evaluated using simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreased and was roughly equal for the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimator.

  18. Background estimation in short-wave region during determination of total sample composition by x-ray fluorescence method

    International Nuclear Information System (INIS)

    Simakov, V.A.; Kordyukov, S.V.; Petrov, E.N.

    1988-01-01

    A method of background estimation in the short-wave spectral region during determination of total sample composition by the X-ray fluorescence method is described. Thirteen different rock types with considerable variation in base composition and with Zr, Nb, Th, and U contents below 7×10⁻³ % are investigated. The suggested method of background accounting provides a smaller statistical error in the background estimate than direct isolated measurement, and its determination in the short-wave region is reliable independent of the sample base. The possibilities of the suggested method are assessed for artificial mixtures whose main-component content corresponds to technological concentrates of niobium, zirconium, and tantalum.

  19. Using semantics for representing experimental protocols.

    Science.gov (United States)

    Giraldo, Olga; García, Alexander; López, Federico; Corcho, Oscar

    2017-11-13

    An experimental protocol is a sequence of tasks and operations executed to perform experimental research in biological and biomedical areas, e.g. biology, genetics, immunology, neurosciences, virology. Protocols often include references to equipment, reagents, descriptions of critical steps, troubleshooting and tips, as well as any other information that researchers deem important for facilitating the reusability of the protocol. Although experimental protocols are central to reproducibility, the descriptions are often cursory. There is a need for a unified framework with respect to the syntactic structure and the semantics for representing experimental protocols. In this paper we present the "SMART Protocols ontology", an ontology for representing experimental protocols. Our ontology represents the protocol as a workflow with domain-specific knowledge embedded within a document. We also present the Sample Instrument Reagent Objective (SIRO) model, which represents the minimal common information shared across experimental protocols. SIRO was conceived in the same realm as the Patient Intervention Comparison Outcome (PICO) model that supports search, retrieval and classification purposes in evidence-based medicine. We evaluate our approach against a set of competency questions modeled as SPARQL queries and processed against a set of published and unpublished protocols modeled with the SP Ontology and the SIRO model. Our approach makes it possible to answer queries such as "Which protocols use tumor tissue as a sample?". Improving reporting structures for experimental protocols requires collective efforts from authors, peer reviewers, editors and funding bodies. The SP Ontology is a contribution towards this goal. We build upon previous experience, bringing together the views of researchers managing protocols in their laboratory work. Website: https://smartprotocols.github.io/.
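    The competency question quoted above can be expressed as a SPARQL query and run with rdflib; the file name, namespace IRI and property names in the sketch are hypothetical placeholders rather than the actual SMART Protocols identifiers.

```python
# Run a competency question as a SPARQL query over an RDF dump of protocols.
# File name, namespace and property names below are hypothetical placeholders.
from rdflib import Graph

g = Graph()
g.parse("protocols.ttl", format="turtle")          # hypothetical RDF dump of protocols

QUERY = """
PREFIX sp: <https://example.org/smartprotocols#>
SELECT ?protocol WHERE {
    ?protocol a sp:Protocol ;
              sp:hasSample ?sample .
    ?sample   sp:label "tumor tissue" .
}
"""

for row in g.query(QUERY):
    print(row.protocol)
```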

  20. Estimating black bear density in New Mexico using noninvasive genetic sampling coupled with spatially explicit capture-recapture methods

    Science.gov (United States)

    Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.

    2016-01-01

    During the 2004–2005 to 2015–2016 hunting seasons, the New Mexico Department of Game and Fish (NMDGF) estimated black bear (Ursus americanus) abundance across the state by coupling density estimates with the distribution of primary habitat generated by Costello et al. (2001). These estimates have been used to set harvest limits. For example, a density of 17 bears/100 km² for the Sangre de Cristo and Sacramento Mountains and 13.2 bears/100 km² for the Sandia Mountains were used to set harvest levels. The advancement and widespread acceptance of non-invasive sampling and mark-recapture methods prompted the NMDGF to collaborate with the New Mexico Cooperative Fish and Wildlife Research Unit and New Mexico State University to update their density estimates for black bear populations in select mountain ranges across the state. We established 5 study areas in 3 mountain ranges: the northern (NSC; sampled in 2012) and southern Sangre de Cristo Mountains (SSC; sampled in 2013), the Sandia Mountains (Sandias; sampled in 2014), and the northern (NSacs) and southern Sacramento Mountains (SSacs; both sampled in 2014). We collected hair samples from black bears using two concurrent non-invasive sampling methods, hair traps and bear rubs. We used a gender marker and a suite of microsatellite loci to determine the individual identification of hair samples that were suitable for genetic analysis. We used these data to generate mark-recapture encounter histories for each bear and estimated density in a spatially explicit capture-recapture (SECR) framework. We constructed a suite of SECR candidate models using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We used Akaike’s Information Criterion corrected for small sample size (AICc) to rank and select the most supported model from which we estimated density. We set 554 hair traps, 117 bear rubs and collected 4,083 hair

  1. A Class of Estimators for Finite Population Mean in Double Sampling under Nonresponse Using Fractional Raw Moments

    Directory of Open Access Journals (Sweden)

    Manzoor Khan

    2014-01-01

    Full Text Available This paper presents new classes of estimators for estimating the finite population mean under double sampling in the presence of nonresponse when using information on fractional raw moments. The expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that a proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of a proposed class of estimators.

  2. On the choice of statistical models for estimating occurrence and extinction from animal surveys

    Science.gov (United States)

    Dorazio, R.M.

    2007-01-01

    In surveys of natural animal populations the number of animals that are present and available to be detected at a sample location is often low, resulting in few or no detections. Low detection frequencies are especially common in surveys of imperiled species; however, the choice of sampling method and protocol also may influence the size of the population that is vulnerable to detection. In these circumstances, probabilities of animal occurrence and extinction will generally be estimated more accurately if the models used in data analysis account for differences in abundance among sample locations and for the dependence between site-specific abundance and detection. Simulation experiments are used to illustrate conditions wherein these types of models can be expected to outperform alternative estimators of population site occupancy and extinction. © 2007 by the Ecological Society of America.

  3. Spacecraft Trajectory Estimation Using a Sampled-Data Extended Kalman Filter with Range-Only Measurements

    National Research Council Canada - National Science Library

    Erwin, R. S; Bernstein, Dennis S

    2005-01-01

    .... In this paper we use a sampled-data extended Kalman filter to estimate the trajectory of a target satellite when only range measurements are available from a constellation of orbiting spacecraft...

  4. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.

  5. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for physical and chemical property sets and investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimate variance at these locations. The addition of optimal samples, for specific regions, increased the accuracy up to 2 % for chemical and 1 % for physical properties. The use of a sample grid and medium-scale variogram, as prior information for the conception of additional sampling schemes, was very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.

  6. Reliability of sampling strategies for measuring dairy cattle welfare on commercial farms.

    Science.gov (United States)

    Van Os, Jennifer M C; Winckler, Christoph; Trieb, Julia; Matarazzo, Soraia V; Lehenbauer, Terry W; Champagne, John D; Tucker, Cassandra B

    2018-02-01

    Our objective was to evaluate how the proportion of high-producing lactating cows sampled on each farm and the selection method affect prevalence estimates for animal-based measures. We assessed the entire high-producing pen (days in milk size calculations from the Welfare Quality Protocol; and (4) selecting the first, middle, or final third of cows exiting the milking parlor. Estimates were compared with true values using regression analysis and were considered accurate if they met 3 criteria: the coefficient of determination was ≥0.9 and the slope and intercept did not differ significantly from 1 and 0, respectively. All estimates met the slope and intercept criteria, whereas the coefficient of determination increased when more cows were sampled. All estimates were accurate for neck alterations, ocular discharge (22.2 ± 27.4%), and carpal joint hair loss (14.1 ± 17.4%). Selecting a third of the milking order or using the Welfare Quality sample size calculations failed to accurately estimate all measures simultaneously. However, all estimates were accurate when selecting at least 2 of every 3 cows locked at the feed bunk. Using restraint position at the feed bunk did not differ systematically from computer-selecting the same proportion of cows randomly, and the former may be a simpler approach for welfare assessments. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. [Sampling, storage and transport of biological materials collected from living and deceased subjects for determination of concentration levels of ethyl alcohol and similarly acting substances. A proposal of updating the blood and urine sampling protocol].

    Science.gov (United States)

    Wiergowski, Marek; Reguła, Krystyna; Pieśniak, Dorota; Galer-Tatarowicz, Katarzyna; Szpiech, Beata; Jankowski, Zbigniew

    2007-01-01

    The present paper highlights the most common mistakes committed at the beginning of an analytical procedure. To shorten the time and decrease the cost of determining substances with alcohol-like activity, the authors postulate introducing mass-scale screening analysis of saliva collected from the living subject at the site of the event, with all positive results confirmed in blood or urine samples. If no saliva sample is collected for toxicology, a urine sample, allowing a fast screening analysis, and a blood sample, to confirm the result, should be secured. Inappropriate storage of a blood sample in a tube without a preservative can cause sample spillage and its irretrievable loss. The authors propose updating the "Blood/urine sampling protocol", with the updated version to be introduced into practice following consultations and revisions.

  8. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    Science.gov (United States)

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation and so can be expensive in models with a large computational cost.

  9. Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers and Improve Data-Limited Stock Assessments. This biosampling project...

  10. Physical Therapy Protocols for Arthroscopic Bankart Repair.

    Science.gov (United States)

    DeFroda, Steven F; Mehta, Nabil; Owens, Brett D

    Outcomes after arthroscopic Bankart repair can be highly dependent on compliance and participation in physical therapy. Additionally, there are many variations in physician-recommended physical therapy protocols. The rehabilitation protocols of academic orthopaedic surgery departments vary widely despite the presence of consensus protocols. Descriptive epidemiology study. Level 3. Web-based arthroscopic Bankart rehabilitation protocols available online from Accreditation Council for Graduate Medical Education (ACGME)-accredited orthopaedic surgery programs were included for review. Individual protocols were reviewed to evaluate for the presence or absence of recommended therapies, goals for completion of ranges of motion, functional milestones, exercise start times, and recommended time to return to sport. Thirty protocols from 27 (16.4%) total institutions were identified out of 164 eligible for review. Overall, 9 (30%) protocols recommended an initial period of strict immobilization. Variability existed between the recommended time periods for sling immobilization (mean, 4.8 ± 1.8 weeks). The types of exercises and their start dates were also inconsistent. Goals to full passive range of motion (mean, 9.2 ± 2.8 weeks) and full active range of motion (mean, 12.2 ± 2.8 weeks) were consistent with other published protocols; however, wide ranges existed within the reviewed protocols as a whole. Only 10 protocols (33.3%) included a timeline for return to sport, and only 3 (10%) gave an estimate for return to game competition. Variation also existed when compared with the American Society of Shoulder and Elbow Therapists' (ASSET) consensus protocol. Rehabilitation protocols after arthroscopic Bankart repair were found to be highly variable. They also varied with regard to published consensus protocols. This discrepancy may lead to confusion among therapists and patients. This study highlights the importance of attending surgeons being very clear and specific with

  11. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    Science.gov (United States)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.

  12. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    Methods for interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the confidence intervals for the mean are carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small-sample situations. (authors)
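
    A minimal sketch of such a comparison for a very small sample (n = 5); the data are synthetic and the code is illustrative, not the authors':

      # Classical t-interval vs. percentile bootstrap interval for the mean.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      sample = rng.normal(loc=10.0, scale=2.0, size=5)

      m, s, n = sample.mean(), sample.std(ddof=1), len(sample)
      t = stats.t.ppf(0.975, df=n - 1)
      classical = (m - t * s / np.sqrt(n), m + t * s / np.sqrt(n))

      boot_means = np.array([rng.choice(sample, size=n, replace=True).mean()
                             for _ in range(10_000)])
      bootstrap = tuple(np.percentile(boot_means, [2.5, 97.5]))

      print("classical 95% CI:", classical)
      print("bootstrap 95% CI:", bootstrap)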

  13. A NEW METHOD FOR NON DESTRUCTIVE ESTIMATION OF Jc IN YBaCuO CERAMIC SAMPLES

    Directory of Open Access Journals (Sweden)

    Giancarlo Cordeiro Costa

    2014-12-01

    Full Text Available This work presents a new method for estimating Jc as a bulk characteristic of YBCO blocks. The experimentally measured magnetic interaction force between a SmCo permanent magnet and a YBCO block was compared with finite element method (FEM) simulation results, allowing a search for the best-fitting value of the critical current of the superconducting sample. As the FEM simulations were based on the Bean model, the critical current density was taken as an unknown parameter. This is a non-destructive estimation method, since there is no need to break off even a small piece of the sample for analysis.

  14. An Improved Estimation of Regional Fractional Woody/Herbaceous Cover Using Combined Satellite Data and High-Quality Training Samples

    Directory of Open Access Journals (Sweden)

    Xu Liu

    2017-01-01

    Full Text Available Mapping vegetation cover is critical for understanding and monitoring ecosystem functions in semi-arid biomes. As existing estimates tend to underestimate the woody cover in areas with dry deciduous shrubland and woodland, we present an approach to improve the regional estimation of woody and herbaceous fractional cover in the East Asia steppe. The approach uses Random Forest models that combine multiple remote sensing data sources: training samples derived from high-resolution imagery through a tailored spatial sampling, and model inputs composed of specific metrics from the MODIS sensor together with ancillary variables including topographic, bioclimatic, and land surface information. We emphasize that effective spatial sampling, high-quality classification, and adequate geospatial information are important prerequisites for establishing appropriate model inputs and achieving high-quality training samples. This study suggests that the optimal models improve estimation accuracy (NMSE 0.47 for woody and 0.64 for herbaceous plants) and show consistent agreement with field observations. Compared with an existing woody cover product, the proposed woody cover estimation can delineate regions with subshrubs and shrubs, showing an improved capability of capturing the spatialized detail of vegetation signals. This approach is applicable over sizable semi-arid areas such as temperate steppes, savannas, and prairies.
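
    A minimal sketch of the modeling step on synthetic data (the predictors, sample size, and NMSE definition below are illustrative assumptions, not the authors' pipeline):

      # Random Forest regression of fractional woody cover on per-pixel predictors,
      # evaluated with a normalized mean squared error (NMSE).
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(42)
      n_pixels, n_features = 2000, 8
      X = rng.normal(size=(n_pixels, n_features))                  # stand-ins for MODIS/ancillary metrics
      cover = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))         # synthetic fractional cover
      cover = np.clip(cover + rng.normal(scale=0.05, size=n_pixels), 0, 1)

      X_tr, X_te, y_tr, y_te = train_test_split(X, cover, test_size=0.3, random_state=0)
      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
      pred = rf.predict(X_te)
      nmse = np.mean((pred - y_te) ** 2) / np.var(y_te)            # one common NMSE definition
      print("NMSE:", round(nmse, 3))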

  15. Bounding the per-protocol effect in randomized trials: An application to colorectal cancer screening

    NARCIS (Netherlands)

    S.A. Swanson (Sonja); Holme (Øyvind); M. Løberg (Magnus); M. Kalager (Mette); M. Bretthauer (Michael); G. Hoff (G.); E. Aas (Eline); M.A. Hernán (M.)

    2015-01-01

    textabstractBackground: The per-protocol effect is the effect that would have been observed in a randomized trial had everybody followed the protocol. Though obtaining a valid point estimate for the per-protocol effect requires assumptions that are unverifiable and often implausible, lower and upper

  16. SNP calling, genotype calling, and sample allele frequency estimation from new-generation sequencing data

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    We present a statistical framework for estimation and application of sample allele frequency spectra from New-Generation Sequencing (NGS) data. In this method, we first estimate the allele frequency spectrum using maximum likelihood. In contrast to previous methods, the likelihood function is cal...... be extended to various other cases including cases with deviations from Hardy-Weinberg equilibrium. We evaluate the statistical properties of the methods using simulations and by application to a real data set....

  17. A practical way to estimate retail tobacco sales violation rates more accurately.

    Science.gov (United States)

    Levinson, Arnold H; Patnaik, Jennifer L

    2013-11-01

    U.S. states annually estimate retailer propensity to sell adolescents cigarettes, which is a violation of law, by staging a single purchase attempt among a random sample of tobacco businesses. The accuracy of single-visit estimates is unknown. We examined this question using a novel test-retest protocol. Supervised minors attempted to purchase cigarettes at all retail tobacco businesses located in 3 Colorado counties. The attempts observed federal standards: Minors were aged 15-16 years, were nonsmokers, and were free of visible tattoos and piercings, and were allowed to enter stores alone or in pairs to purchase a small item while asking for cigarettes and to show or not show genuine identification (ID, e.g., driver's license). Unlike federal standards, stores received a second purchase attempt within a few days unless minors were firmly told not to return. Separate violation rates were calculated for first visits, second visits, and either visit. Eleven minors attempted to purchase cigarettes 1,079 times from 671 retail businesses. One sixth of first visits (16.8%) resulted in a violation; the rate was similar for second visits (15.7%). Considering either visit, 25.3% of businesses failed the test. Factors predictive of violation were whether clerks asked for ID, whether the clerks closely examined IDs, and whether minors included snacks or soft drinks in cigarette purchase attempts. A test-retest protocol for estimating underage cigarette sales detected half again as many businesses in violation as the federally approved one-test protocol. Federal policy makers should consider using the test-retest protocol to increase accuracy and awareness of widespread adolescent access to cigarettes through retail businesses.
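
    A short worked example using the rates reported above shows why a second visit detects more violators, and how the observed either-visit rate compares with what independent visits would predict:

      # Single-visit vs. either-of-two-visits violation rates (figures from the abstract).
      p1, p2 = 0.168, 0.157              # first- and second-visit violation rates
      either_observed = 0.253            # businesses failing at least one visit

      either_if_independent = 1 - (1 - p1) * (1 - p2)
      print("either-visit rate if visits were independent:", round(either_if_independent, 3))  # ~0.30
      print("observed either-visit rate:", either_observed)
      # The observed 25.3% lies between the single-visit rate (~17%) and the
      # independence prediction (~30%): outcomes are positively correlated within
      # stores, yet the retest still uncovers roughly half again as many violators.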

  18. Verbal protocols as methodological resources: research evidence

    Directory of Open Access Journals (Sweden)

    Alessandra Baldo

    2012-01-01

    Full Text Available This article aims at reflecting on the use of verbal protocols as a methodological resource in qualitative research, more specifically on the aspect regarded as the main limitation of a study about lexical inferencing in L2 (BALDO; VELASQUES, 2010): its subjective trait. The article begins with a brief literature review on protocols, followed by a description of the study in which they were employed as methodological resources. Based on that, protocol subjectivity is illustrated through samples of divergent data classifications, carried out independently by two researchers. In the final section, the path followed to minimize the problem is presented, intending to contribute to improving efficiency in the use of verbal protocols in future research.

  19. Cost-effective sampling of 137Cs-derived net soil redistribution: part 1 – estimating the spatial mean across scales of variation

    International Nuclear Information System (INIS)

    Li, Y.; Chappell, A.; Nyamdavaa, B.; Yu, H.; Davaasuren, D.; Zoljargal, K.

    2015-01-01

    The 137Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many 137Cs measurements are required and hence estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of 137Cs-derived net soil redistribution to apply their often limited resources across scales of variation (field, catchment, region etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance and a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954–2012) soil redistribution as a case study on the Chinese Loess Plateau is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches to cut across scales of variation, we extrapolate our estimate of net soil redistribution across the region, show that for the same resources, estimates from many fields could have been provided and would elucidate the cause of differences within and between regional estimates. We recommend that future studies evaluate carefully the sampling design to consider the opportunity to investigate 137Cs-derived net soil redistribution across scales of variation. - Highlights: • The 137Cs technique estimates net time-integrated soil redistribution by all processes. • It is time-consuming and dominated by studies of individual fields. • We use limited resources to estimate soil

  20. Three-dimensional reconstruction of highly complex microscopic samples using scanning electron microscopy and optical flow estimation.

    Directory of Open Access Journals (Sweden)

    Ahmadreza Baghaie

    Full Text Available The Scanning Electron Microscope (SEM), as one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has attracted extensive attention since its emergence. However, the acquired micrographs still remain two-dimensional (2D). In the current work a novel and highly accurate approach is proposed to recover the hidden third dimension by use of multi-view image acquisition of the microscopic samples combined with pre/post-processing steps including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of the topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state of the art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.

  1. Variations among animals when estimating the undegradable fraction of fiber in forage samples

    Directory of Open Access Journals (Sweden)

    Cláudia Batista Sampaio

    2014-10-01

    Full Text Available The objective of this study was to assess the variability among animals regarding the critical time to estimate the undegradable fraction of fiber (ct), using an in situ incubation procedure. Five rumen-fistulated Nellore steers were used to estimate the degradation profile of fiber. Animals were fed a standard diet with an 80:20 forage:concentrate ratio. Sugarcane, signal grass hay, corn silage and fresh elephant grass samples were assessed. Samples were put in F57 Ankom® bags and were incubated in the rumens of the animals for 0, 6, 12, 18, 24, 48, 72, 96, 120, 144, 168, 192, 216, 240 and 312 hours. The degradation profiles were interpreted using a mixed non-linear model in which a random effect was associated with the degradation rate. For sugarcane, signal grass hay and corn silage, there were no significant variations among animals regarding the fractional degradation rate of neutral and acid detergent fiber; consequently, the ct required to estimate the undegradable fiber fraction did not vary among animals for those forages. However, a significant variability among animals was found for the fresh elephant grass. The results seem to suggest that the variability among animals regarding the degradation rate of fibrous components can be significant.

  2. Technical Note: Comparison of storage strategies of sea surface microlayer samples

    Directory of Open Access Journals (Sweden)

    K. Schneider-Zapp

    2013-07-01

    Full Text Available The sea surface microlayer (SML) is an important biogeochemical system whose physico-chemical analysis often necessitates some degree of sample storage. However, many SML components degrade with time, so the development of optimal storage protocols is paramount. Here we briefly review some commonly used treatment and storage protocols. Using freshwater and saline SML samples from a river estuary, we investigated temporal changes in surfactant activity (SA) and the absorbance and fluorescence of chromophoric dissolved organic matter (CDOM) over four weeks, following selected sample treatment and storage protocols. Some variability in the effectiveness of individual protocols most likely reflects sample provenance. None of the various protocols examined performed any better than dark storage at 4 °C without pre-treatment. We therefore recommend storing samples refrigerated in the dark.

  3. Estimation of uranium in bioassay samples of occupational workers by laser fluorimetry

    International Nuclear Information System (INIS)

    Suja, A.; Prabhu, S.P.; Sawant, P.D.; Sarkar, P.K.; Tiwari, A.K.; Sharma, R.

    2012-01-01

    A newly established uranium processing facility has been commissioned at BARC, Trombay. Monitoring of occupational workers is essential to assess the intake of uranium in this facility. A group of 21 workers was selected for bioassay monitoring to assess the existing urinary excretion levels of uranium before the commencement of actual work. Bioassay samples collected from these workers were analyzed by an ion-exchange technique followed by laser fluorimetry. The standard addition method was used to estimate the uranium concentration in the samples. The minimum detectable amount by this technique is about 0.2 ng. The uranium observed in these samples ranges from 19 to 132 ng/L. A few of these samples were also analyzed by the fission track analysis technique, and the results were found to be comparable to those obtained by laser fluorimetry. The urinary excretion rate observed for an individual can be regarded as a 'personal baseline' and will be treated as the existing level of uranium in urine for these workers at the facility. (author)

  4. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    Science.gov (United States)

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work is to reduce the cost of the sampling required to estimate the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of Jack-Knife validation showed that 10 (25.0 %) of the 40 LSS based on the regression analysis were not within an APE of 15 % using one concentration-time point. 90.2, 91.5 and 92.4 % of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating the AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model. It also shows that 12, 6, 4 and 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical application, according to requirements.
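
    A minimal sketch of the limited sampling idea on synthetic pharmacokinetic profiles (the one-compartment model, parameter ranges, and chosen time points are illustrative assumptions, not the study data):

      # Predict AUC(0-60) from concentrations at a few sampling times via multiple regression.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      t = np.arange(0, 60.5, 0.5)                       # dense grid for the reference AUC
      subjects = 50
      ka = rng.uniform(0.8, 1.5, subjects)              # absorption rate constants
      ke = rng.uniform(0.05, 0.15, subjects)            # elimination rate constants
      dose_f = rng.uniform(8, 12, subjects)             # dose/volume scale

      conc = dose_f[:, None] * (ka / (ka - ke))[:, None] * (
          np.exp(-ke[:, None] * t) - np.exp(-ka[:, None] * t))
      auc = np.sum((conc[:, 1:] + conc[:, :-1]) / 2 * np.diff(t), axis=1)   # trapezoid rule

      sample_times = [2.0, 8.0, 12.0]                   # an illustrative limited sampling schedule
      idx = [int(np.argmin(np.abs(t - st))) for st in sample_times]
      X = conc[:, idx]

      lss = LinearRegression().fit(X, auc)
      ape = np.abs(lss.predict(X) - auc) / auc * 100    # absolute prediction error (%)
      print("mean APE (%):", round(ape.mean(), 2))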

  5. Sample preparation guidelines for two-dimensional electrophoresis.

    Science.gov (United States)

    Posch, Anton

    2014-12-01

    Sample preparation is one of the key technologies for successful two-dimensional electrophoresis (2DE). Due to the great diversity of protein sample types and sources, no single sample preparation method works with all proteins; for any sample the optimum procedure must be determined empirically. This review is meant to provide a broad overview of the most important principles in sample preparation in order to avoid a multitude of possible pitfalls. Sample preparation protocols from experts in the field were screened and evaluated. On the basis of these protocols and my own comprehensive practical experience, important guidelines are given in this review. The presented guidelines will facilitate straightforward protocol development for researchers new to gel-based proteomics. In addition, the available choices are rationalized in order to successfully prepare a protein sample for 2DE separations. The strategies described here are not limited to 2DE and can also be applied to other protein separation techniques.

  6. Protocol voor meting van lachgasemissie uit huisvestingssystemen in de veehouderij 2010 = Measurement protocol for nitrous oxide emission from housing systems in livestock production 2010

    NARCIS (Netherlands)

    Mosquera Losada, J.; Groenestein, C.M.; Ogink, N.W.M.

    2011-01-01

    This report describes a measurement protocol for nitrous oxide emissions from animal housing systems. The protocol is based on sampling periods of 24 hours spread over one year and can be applied in specified animal categories.

  7. Validation of a Sampling Method to Collect Exposure Data for Central-Line-Associated Bloodstream Infections.

    Science.gov (United States)

    Hammami, Naïma; Mertens, Karl; Overholser, Rosanna; Goetghebeur, Els; Catry, Boudewijn; Lambert, Marie-Laurence

    2016-05-01

    Surveillance of central-line-associated bloodstream infections requires the labor-intensive counting of central-line days (CLDs). This workload could be reduced by sampling. Our objective was to evaluate the accuracy of various sampling strategies in the estimation of CLDs in intensive care units (ICUs) and to establish a set of rules to identify optimal sampling strategies depending on ICU characteristics. We analyzed existing data collected according to the European protocol for patient-based surveillance of ICU-acquired infections in Belgium between 2004 and 2012. CLD data were reported by 56 ICUs in 39 hospitals during 364 trimesters. We compared estimated CLD data obtained from weekly and monthly sampling schemes with the observed exhaustive CLD data over the trimester by assessing the CLD percentage error (i.e., [observed CLDs - estimated CLDs]/observed CLDs). We identified predictors of improved accuracy using linear mixed models. When sampling once per week or 3 times per month, 80% of ICU trimesters had a CLD percentage error within 10%. When sampling twice per week, this was >90% of ICU trimesters. Sampling on Tuesdays provided the best estimations. In the linear mixed model, the observed CLD count was the best predictor of a smaller percentage error. The following sampling strategies provided an estimate within 10% of the actual CLD for 97% of the ICU trimesters with 90% confidence: 3 times per month in an ICU with >650 CLDs per trimester or each Tuesday in an ICU with >480 CLDs per trimester. Sampling of CLDs provides an acceptable alternative to daily collection of CLD data.
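
    A minimal sketch of the sampling idea on synthetic data (the daily CLD counts and their distribution are made up; this is not the surveillance dataset):

      # Estimate trimester CLDs from a once-weekly (Tuesday) sample and compute the
      # CLD percentage error used above.
      import numpy as np

      rng = np.random.default_rng(3)
      days = 91                                    # one trimester
      daily_clds = rng.poisson(lam=8, size=days)   # hypothetical central-line days per calendar day
      observed_total = daily_clds.sum()

      tuesday_idx = np.arange(1, days, 7)          # assume day 1 is a Tuesday
      estimated_total = daily_clds[tuesday_idx].mean() * days

      pct_error = (observed_total - estimated_total) / observed_total * 100
      print("observed:", observed_total, "estimated:", round(estimated_total, 1),
            "percentage error (%):", round(pct_error, 2))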

  8. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF estimator independent, time-delayed mutual information bias estimate. ► Quantification of data-set-size limits of the time-delayed mutual calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.

  9. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    Science.gov (United States)

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process are dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean when using small samples, which allows researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
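
    For a normal population with standard deviation sigma, the standard result behind this kind of calculation is P(|sample mean - true mean| <= f*sigma) = 2*Phi(f*sqrt(n)) - 1; a short sketch of it follows (illustrative only, not the authors' exact procedure):

      # Probability that the sample mean lies within a fraction f of sigma from the true mean.
      import numpy as np
      from scipy.stats import norm

      def prob_within_fraction(n, f):
          """P(|sample mean - true mean| <= f * sigma) for a sample of size n."""
          return 2 * norm.cdf(f * np.sqrt(n)) - 1

      for n in (4, 10, 25):
          print(n, round(prob_within_fraction(n, f=0.5), 3))
      # e.g. with n = 10 the sample mean is within 0.5*sigma of the true mean
      # with probability ~0.886.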

  10. Designing a monitoring program to estimate estuarine survival of anadromous salmon smolts: simulating the effect of sample design on inference

    Science.gov (United States)

    Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.

    2015-01-01

    A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data

  11. A Weak Value Based QKD Protocol Robust Against Detector Attacks

    Science.gov (United States)

    Troupe, James

    2015-03-01

    We propose a variation of the BB84 quantum key distribution protocol that utilizes the properties of weak values to insure the validity of the quantum bit error rate estimates used to detect an eavesdropper. The protocol is shown theoretically to be secure against recently demonstrated attacks utilizing detector blinding and control and should also be robust against all detector based hacking. Importantly, the new protocol promises to achieve this additional security without negatively impacting the secure key generation rate as compared to that originally promised by the standard BB84 scheme. Implementation of the weak measurements needed by the protocol should be very feasible using standard quantum optical techniques.

  12. Shortened protocol in practical [11C]SA4503-PET studies for sigma1 receptor quantification

    International Nuclear Information System (INIS)

    Sakata, Muneyuki; Kimura, Yuichi; Ishikawa, Masatomo; Oda, Keiichi; Ishii, Kenji; Ishiwata, Kiichi; Naganawa, Mika; Hashimoto, Kenji; Chihara, Kunihiro

    2008-01-01

    In practical positron emission tomography (PET) diagnosis, a shortened protocol is preferred for patients with brain disorders. In this study, the applicability of a shortened protocol as an alternative to the 90-min PET scan with [11C]SA4503 for quantitative sigma1 receptor measurement was investigated. Tissue time-activity curves of 288 regions of interest in the brain from 32 [11C]SA4503-PET scans of 16 healthy subjects prior to and following administration of a selective serotonin reuptake inhibitor (fluvoxamine or paroxetine) were applied to two algorithms of quantitative analysis; binding potential (BP) was derived from compartmental analysis based on nonlinear estimation, and total distribution volume (tDV) was derived from Logan plot analysis. As a result, although both BP and tDV tended to be underestimated by the shortened method, the estimates from the shortened protocol had good linear relationships with those of the full-length protocol. In conclusion, if approximately 10% differences in the estimated results are acceptable for a specific purpose, then a 60-min measurement protocol is capable of providing reliable results. (author)

  13. An econometric method for estimating population parameters from non-random samples: An application to clinical case finding.

    Science.gov (United States)

    Burger, Rulof P; McLaren, Zoë M

    2017-09-01

    The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision. We use a generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), which is the leading infectious disease cause of death worldwide. We use a national database of TB test data from South Africa to examine testing for multidrug resistant TB (MDR-TB). Approximately one quarter of MDR-TB cases was undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low, and MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Estimation of the deoxynivalenol and moisture contents of bulk wheat grain samples by FT-NIR spectroscopy

    Science.gov (United States)

    Deoxynivalenol (DON) levels in harvested grain samples are used to evaluate the Fusarium head blight (FHB) resistance of wheat cultivars and breeding lines. Fourier transform near-infrared (FT-NIR) calibrations were developed to estimate the DON and moisture content (MC) of bulk wheat grain samples ...

  15. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  16. Asymptotic analysis of the role of spatial sampling for covariance parameter estimation of Gaussian processes

    International Nuclear Information System (INIS)

    Bachoc, Francois

    2014-01-01

    Covariance parameter estimation of Gaussian processes is analyzed in an asymptotic framework. The spatial sampling is a randomly perturbed regular grid and its deviation from the perfect regular grid is controlled by a single scalar regularity parameter. Consistency and asymptotic normality are proved for the Maximum Likelihood and Cross Validation estimators of the covariance parameters. The asymptotic covariance matrices of the covariance parameter estimators are deterministic functions of the regularity parameter. By means of an exhaustive study of the asymptotic covariance matrices, it is shown that the estimation is improved when the regular grid is strongly perturbed. Hence, an asymptotic confirmation is given to the commonly admitted fact that using groups of observation points with small spacing is beneficial to covariance function estimation. Finally, the prediction error, using a consistent estimator of the covariance parameters, is analyzed in detail. (authors)
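
    A minimal sketch of covariance parameter estimation by maximum likelihood on a randomly perturbed regular grid (illustrative only; scikit-learn's marginal-likelihood optimizer stands in for the paper's asymptotic analysis, and the true length-scale and perturbation amplitude are made-up values):

      # Recover the length-scale of a Gaussian process sampled on a perturbed grid.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(5)
      n = 80
      grid = np.linspace(0, 10, n)
      eps = 0.4                                     # regularity parameter: perturbation amplitude
      x = (grid + rng.uniform(-eps, eps, n)).reshape(-1, 1)

      true_ls = 1.5                                 # true length-scale to be recovered
      K = np.exp(-0.5 * (x - x.T) ** 2 / true_ls ** 2) + 1e-4 * np.eye(n)
      y = rng.multivariate_normal(np.zeros(n), K)

      kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
      gp = GaussianProcessRegressor(kernel=kernel).fit(x, y)
      print("maximum-likelihood length-scale estimate:", gp.kernel_.k1.length_scale)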

  17. Assessing NIR & MIR Spectral Analysis as a Method for Soil C Estimation Across a Network of Sampling Sites

    Science.gov (United States)

    Spencer, S.; Ogle, S.; Borch, T.; Rock, B.

    2008-12-01

    Monitoring soil C stocks is critical to assess the impact of future climate and land use change on carbon sinks and sources in agricultural lands. A benchmark network for soil carbon monitoring of stock changes is being designed for US agricultural lands with 3000-5000 sites anticipated and re-sampling on a 5- to 10-year basis. Approximately 1000 sites would be sampled per year, producing around 15,000 soil samples to be processed for total, organic, and inorganic carbon, as well as bulk density and nitrogen. Laboratory processing of soil samples is cost- and time-intensive; therefore we are testing the efficacy of using near-infrared (NIR) and mid-infrared (MIR) spectral methods for estimating soil carbon. As part of an initial implementation of national soil carbon monitoring, we collected over 1800 soil samples from 45 cropland sites in the mid-continental region of the U.S. Samples were processed using standard laboratory methods to determine the variables above. Carbon and nitrogen were determined by dry combustion and inorganic carbon was estimated with an acid-pressure test. 600 samples are being scanned using a bench-top NIR reflectance spectrometer (30 g of 2 mm oven-dried soil and 30 g of 8 mm air-dried soil) and 500 samples using a MIR Fourier-Transform Infrared Spectrometer (FTIR) with a DRIFT reflectance accessory (0.2 g oven-dried ground soil). Lab-measured carbon will be compared to spectrally-estimated carbon contents using a Partial Least Squares (PLS) multivariate statistical approach. PLS attempts to develop a soil C predictive model that can then be used to estimate C in soil samples not lab-processed. The spectral analysis of soil samples, either whole or partially processed, can potentially save both funding resources and the time needed to process samples. This is particularly relevant for the implementation of a national monitoring network for soil carbon. This poster will discuss our methods, initial results and potential for using NIR and MIR spectral
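
    A minimal sketch of the planned calibration step on synthetic spectra (band positions, sample sizes, and the spectral model are illustrative assumptions, not project data):

      # Partial Least Squares (PLS) regression relating spectra to lab-measured soil carbon.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(11)
      n_samples, n_bands = 300, 500
      carbon = rng.uniform(0.5, 4.0, n_samples)            # hypothetical soil C (%)
      bands = np.linspace(0, 1, n_bands)
      spectra = (carbon[:, None] * np.exp(-((bands - 0.6) ** 2) / 0.002)   # C-related absorption feature
                 + rng.normal(scale=0.05, size=(n_samples, n_bands)))      # noise / baseline

      pls = PLSRegression(n_components=5)
      pred = cross_val_predict(pls, spectra, carbon, cv=10).ravel()
      rmse = np.sqrt(np.mean((pred - carbon) ** 2))
      r2 = 1 - np.sum((pred - carbon) ** 2) / np.sum((carbon - carbon.mean()) ** 2)
      print("cross-validated RMSE:", round(rmse, 3), " R2:", round(r2, 3))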

  18. Statistical properties of mean stand biomass estimators in a LIDAR-based double sampling forest survey design.

    Science.gov (United States)

    H.E. Anderson; J. Breidenbach

    2007-01-01

    Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...

  19. Direct comparisons of Illumina vs. Roche 454 sequencing technologies on the same microbial community DNA sample.

    Science.gov (United States)

    Luo, Chengwei; Tsementzi, Despina; Kyrpides, Nikos; Read, Timothy; Konstantinidis, Konstantinos T

    2012-01-01

    Next-generation sequencing (NGS) is commonly used in metagenomic studies of complex microbial communities, but whether different NGS platforms recover the same diversity from a sample, and whether their assembled sequences are of comparable quality, remains unclear. We compared the two most frequently used platforms, the Roche 454 FLX Titanium and the Illumina Genome Analyzer (GA) II, on the same DNA sample obtained from a complex freshwater planktonic community. Despite the substantial differences in read length and sequencing protocols, the platforms provided a comparable view of the community sampled. For instance, derived assemblies overlapped in ~90% of their total sequences and in situ abundances of genes and genotypes (estimated based on sequence coverage) correlated highly between the two platforms (R² > 0.9). Evaluation of base-call error, frameshift frequency, and contig length suggested that Illumina offered equivalent, if not better, assemblies than Roche 454. The results from metagenomic samples were further validated against DNA samples of eighteen isolate genomes, which showed a range of genome sizes and G+C% content. We also provide quantitative estimates of the errors in gene and contig sequences assembled from datasets characterized by different levels of complexity and G+C% content. For instance, we noted that homopolymer-associated, single-base errors affected ~1% of the protein sequences recovered in Illumina contigs of 10× coverage and 50% G+C; this frequency increased to ~3% when non-homopolymer errors were also considered. Collectively, our results should serve as a useful practical guide for choosing proper sampling strategies and data processing protocols for future metagenomic studies.

  20. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    Science.gov (United States)

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to

  1. Software documentation and user's manual for fish-impingement sampling design and estimation method computer programs

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-11-01

    This report contains a description of three computer programs that implement the theory of sampling designs and the methods for estimating fish-impingement at the cooling-water intakes of nuclear power plants as described in companion report ANL/ES-60. Complete FORTRAN listings of these programs, named SAMPLE, ESTIMA, and SIZECO, are given and augmented with examples of how they are used

  2. Protocolo preoperatorio para estimar morbilidad y mortalidad quirúrgicas. Un enfoque social = Preoperative protocol to estimate surgical morbidity and mortality. A social approach

    Directory of Open Access Journals (Sweden)

    Zaily Fuentes Díaz

    2012-04-01

    Full Text Available The study addresses some of the anesthetic incidents that occur today and that are associated with an incomplete or absent pre-anesthetic assessment. A protocolized procedure aimed at optimizing the choice of anesthetic strategy according to the patient's own characteristics would reduce immediate morbidity and mortality. The aim of this work is to determine the social conditions of the research project "Preoperative protocol to estimate surgical morbidity and mortality" from a social approach.

  3. Soil Gas Sample Handling: Evaluation of Water Removal and Sample Ganging

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Brad G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Abrecht, David G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hayes, James C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mendoza, Donaldo P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    Soil gas sampling is currently conducted in support of Nuclear Test Ban treaty verification. Soil gas samples are collected and analyzed for isotopes of interest. Some issues that can impact sampling and analysis of these samples are excess moisture and sample processing time. Here we discuss three potential improvements to the current sampling protocol: a desiccant for water removal, use of a molecular sieve to remove CO2 from the sample during collection, and a ganging manifold to allow composite analysis of multiple samples.

  4. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...

  5. The evaluation of an analytical protocol for the determination of substances in waste for hazard classification

    Energy Technology Data Exchange (ETDEWEB)

    Hennebert, Pierre, E-mail: pierre.hennebert@ineris.fr [INERIS – Institut National de l’Environnement Industriel et des Risques, Domaine du Petit Arbois BP33, F-13545 Aix-en-Provence (France); Papin, Arnaud [INERIS, Parc Technologique ALATA, BP No. 2, 60550 Verneuil en Halatte (France); Padox, Jean-Marie [INERIS – Institut National de l’Environnement Industriel et des Risques, Domaine du Petit Arbois BP33, F-13545 Aix-en-Provence (France); Hasebrouck, Benoît [INERIS, Parc Technologique ALATA, BP No. 2, 60550 Verneuil en Halatte (France)

    2013-07-15

    results. Despite discrepancies in some parameters, a satisfactory sum of estimated or measured concentrations (analytical balance) of 90% was reached for 20 samples (63% of the overall total) during this first test exercise, with identified reasons for most of the unsatisfactory results. Regular use of this protocol (which is now included in the French legislation) has enabled service laboratories to reach a 90% mass balance for nearly all the solid samples tested, and for most of the liquid samples (difficulties were caused in some samples by polymers in solution and vegetable oil). The protocol has been submitted to the French and European standardization bodies (AFNOR and CEN), and further improvements are awaited.

  6. Protocol for Measuring the Thermal Properties of a Supercooled Synthetic Sand-water-gas-methane Hydrate Sample.

    Science.gov (United States)

    Muraoka, Michihiro; Susuki, Naoko; Yamaguchi, Hiroko; Tsuji, Tomoya; Yamamoto, Yoshitaka

    2016-03-21

    Methane hydrates (MHs) are present in large amounts in the ocean floor and permafrost regions. Methane and hydrogen hydrates are being studied as future energy resources and energy storage media. To develop a method for gas production from natural MH-bearing sediments and hydrate-based technologies, it is imperative to understand the thermal properties of gas hydrates. Measuring the thermal properties of samples comprising sand, water, methane, and MH is difficult because the melting heat of MH may affect the measurements. To solve this problem, we measured the thermal properties under supercooled conditions during MH formation. The measurement protocol, the method for calculating the saturation change, and tips for analyzing the thermal constants of the sample using transient plane source techniques are described here. The effect of the formation heat of MH on the measurement is very small because the gas hydrate formation rate is very slow. This measurement method can be applied to the thermal properties of the gas hydrate-water-guest gas system, which includes hydrogen, CO2, and ozone hydrates, because the characteristically low formation rate of gas hydrate is not unique to MH. The key point of this method is the low rate of phase transition of the target material. Hence, this method may be applied to other materials having low phase-transition rates.

  7. Communication Estimation for Hardware/Software Codesign

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1998-01-01

    This paper presents a general high level estimation model of communication throughput for the implementation of a given communication protocol. The model, which is part of a larger model that includes component price, software driver object code size and hardware driver area, is intended...... to be general enough to be able to capture the characteristics of a wide range of communication protocols and yet to be sufficiently detailed as to allow the designer or design tool to efficiently explore tradeoffs between throughput, bus widths, burst/non-burst transfers and data packing strategies. Thus...... it provides a basis for decision making with respect to communication protocols/components and communication driver design in the initial design space exploration phase of a co-synthesis process where a large number of possibilities must be examined and where fast estimators are therefore necessary. The fill...
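
    As a loose illustration of the kind of trade-off such an estimator explores, the sketch below computes a rough cycle count and throughput for a message transferred over a bus under burst/non-burst and packed/unpacked options. The cost model, parameter names and default values are assumptions for the example and are not the estimation model of the paper.

```python
# Illustrative sketch (not the paper's estimation model): rough throughput of a
# bus transfer given bus width, burst length, packing and per-burst overhead.
def estimated_throughput(msg_bytes, bus_width_bytes, burst_len,
                         cycles_per_word=1, setup_cycles_per_burst=4,
                         clock_hz=50e6, pack=True):
    """Return (cycles, bytes_per_second) for one message transfer."""
    if pack:
        words = -(-msg_bytes // bus_width_bytes)        # ceil: bytes packed into bus words
    else:
        words = msg_bytes                               # one (partially filled) word per byte
    bursts = -(-words // burst_len)                     # ceil: number of bursts needed
    cycles = words * cycles_per_word + bursts * setup_cycles_per_burst
    return cycles, msg_bytes * clock_hz / cycles

# Example trade-off exploration: burst vs. non-burst, packed vs. unpacked.
for pack in (True, False):
    for burst in (1, 8):
        c, bps = estimated_throughput(1024, bus_width_bytes=4, burst_len=burst, pack=pack)
        print(f"pack={pack} burst={burst}: {c} cycles, {bps / 1e6:.1f} MB/s")
```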

  8. A new fractionator principle with varying sampling fractions: exemplified by estimation of synapse number using electron microscopy

    DEFF Research Database (Denmark)

    Witgen, Brent Marvin; Grady, M. Sean; Nyengaard, Jens Randel

    2006-01-01

    The quantification of ultrastructure has been permanently improved by the application of new stereological principles. Both precision and efficiency have been enhanced. Here we report for the first time a fractionator method that can be applied at the electron microscopy level. This new design...... the total object number using section sampling fractions based on the average thickness of sections of variable thicknesses. As an alternative, this approach estimates the correct particle section sampling probability based on an estimator of the Horvitz-Thompson type, resulting in a theoretically more...
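
    The Horvitz–Thompson idea referred to above can be illustrated generically: if each particle i is included with a known (possibly unequal) probability p_i, then summing 1/p_i over the sampled particles estimates the total count without bias. The sketch below is a hypothetical illustration of that principle, not the published fractionator design.

```python
# Hedged sketch of a Horvitz-Thompson-type count estimate with varying
# sampling fractions (illustrative only; not the published fractionator design).
import numpy as np

rng = np.random.default_rng(1)
N = 10_000                                    # true (unknown) number of particles
# Sampling probability varies with local section thickness (hypothetical):
thickness = rng.uniform(60, 90, size=N)       # nm, per-particle section thickness
p = thickness / thickness.max() * 0.02        # known inclusion probability per particle

sampled = rng.random(N) < p
N_hat = np.sum(1.0 / p[sampled])              # Horvitz-Thompson estimator of the total
print(f"true N = {N}, HT estimate = {N_hat:.0f}")
```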

  9. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity......, thus, optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...
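
    For a random-intercept logistic model, the variance partition coefficient is commonly computed on the latent scale as VPC = σ_u² / (σ_u² + π²/3). The sketch below shows that generic textbook calculation for a few hypothetical cluster-level variances; it is not the authors' VPC-based predictive simulation method.

```python
# Minimal sketch: latent-scale VPC for clustered binary data under a
# random-intercept logistic model (generic textbook formula, not the
# paper's VPC-based predictive simulation for freedom-from-disease).
import math

def vpc_logistic(sigma_u2):
    """Variance partition coefficient: between-cluster variance over total latent variance."""
    return sigma_u2 / (sigma_u2 + math.pi ** 2 / 3)

for sigma_u2 in (0.1, 0.5, 2.0):   # hypothetical cluster-level variances
    print(f"sigma_u^2 = {sigma_u2}: VPC = {vpc_logistic(sigma_u2):.3f}")
```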

  10. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    2016-01-01

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independen...... of the methods often (but not always) varies with the features of the data generating process....

  11. A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes

    Science.gov (United States)

    Bundy, Brian; Krischer, Jeffrey P.

    2016-01-01

    The area under the curve of C-peptide following a 2-hour mixed meal tolerance test, measured from baseline to 12 months after enrollment in 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
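
    A generic way to see how ANCOVA adjustment shrinks a target sample size is the standard two-arm formula with the outcome variance deflated by 1 − R². The sketch below uses made-up planning values (effect size, SD, R²), not the TrialNet estimates; with R² ≈ 0.5 the reduction is close to the ~50% mentioned above.

```python
# Hedged sketch: two-arm sample size with and without ANCOVA covariate adjustment.
# Planning values (effect size, SD, R^2) are made up for illustration; they are
# not the TrialNet planning parameters.
from scipy.stats import norm

def n_per_arm(delta, sd, r2=0.0, alpha=0.05, power=0.9):
    """Normal-approximation sample size per arm; ANCOVA deflates variance by (1 - r2)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd) ** 2 * (1 - r2) / delta ** 2

unadjusted = n_per_arm(delta=0.2, sd=0.45)            # no covariate adjustment
adjusted   = n_per_arm(delta=0.2, sd=0.45, r2=0.5)    # covariates explain ~50% of variance
print(f"unadjusted: {unadjusted:.0f}/arm, ANCOVA-adjusted: {adjusted:.0f}/arm "
      f"({100 * (1 - adjusted / unadjusted):.0f}% reduction)")
```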

  12. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
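
    The notion of sampling-related uncertainty from discrete temporal sampling can be illustrated with a toy calculation: compare accumulation-period averages built from observations taken every h hours (over all possible sampling phases) with the average of the full series. The synthetic rain series below is purely illustrative and is not the radar data set analysed in the study.

```python
# Toy sketch: sampling uncertainty of time-averaged rainfall when the field is
# only observed every h hours. The synthetic rain series is illustrative; it is
# not the radar data set analysed in the study.
import numpy as np

rng = np.random.default_rng(42)
hours = 30 * 24                                    # a 30-day accumulation period
# Intermittent synthetic rain: mostly dry, occasional bursts.
rain = rng.gamma(0.5, 2.0, size=hours) * (rng.random(hours) < 0.1)

true_mean = rain.mean()
for h in (1, 3, 6, 12):
    # All possible sampling phases for interval h give a distribution of estimates.
    est = np.array([rain[offset::h].mean() for offset in range(h)])
    rel_rmse = np.sqrt(np.mean((est - true_mean) ** 2)) / true_mean
    print(f"sampling every {h:2d} h: relative sampling error ~ {100 * rel_rmse:.1f}%")
```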

  13. Ab initio quantum-enhanced optical phase estimation using real-time feedback control

    DEFF Research Database (Denmark)

    Berni, Adriano; Gehring, Tobias; Nielsen, Bo Melholt

    2015-01-01

    of a quantum-enhanced and fully deterministic ab initio phase estimation protocol based on real-time feedback control. Using robust squeezed states of light combined with a real-time Bayesian adaptive estimation algorithm, we demonstrate deterministic phase estimation with a precision beyond the quantum shot...... noise limit. The demonstrated protocol opens up new opportunities for quantum microscopy, quantum metrology and quantum information processing....

  14. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields a statistically valid sample size for an anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram, using varying absolute precision, known prevalence of disease, and a 95% confidence level, with the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels, respectively, can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75. A nomogram instantly provides the required number of subjects by just moving a ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard yield dichotomous results.
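
    The relationship the nomogram encodes is the standard precision-based formula found in the literature: the number of diseased (or non-diseased) subjects needed is z²·p(1−p)/d², inflated by the disease prevalence to give the total recruitment target. The sketch below implements that generic formula; it is a calculation aid, not the nomogram itself.

```python
# Sketch of the standard sample-size formula for estimating sensitivity or
# specificity with a given absolute precision (generic literature formula; the
# nomogram in the paper is a graphical presentation of the same relationship).
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(sens, precision, prevalence, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    n_diseased = z ** 2 * sens * (1 - sens) / precision ** 2
    return ceil(n_diseased / prevalence)               # total subjects to recruit

def n_for_specificity(spec, precision, prevalence, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    n_healthy = z ** 2 * spec * (1 - spec) / precision ** 2
    return ceil(n_healthy / (1 - prevalence))

# Example: anticipated sensitivity 0.85, +/-0.05 precision, 20% prevalence, 95% CI.
print(n_for_sensitivity(0.85, 0.05, 0.20))
print(n_for_specificity(0.90, 0.05, 0.20))
```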

  15. Reactivity-worth estimates of the OSMOSE samples in the MINERVE reactor R1-UO2 configuration.

    Energy Technology Data Exchange (ETDEWEB)

    Klann, R. T.; Perret, G.; Nuclear Engineering Division

    2007-10-03

    An initial series of calculations of the reactivity-worth of the OSMOSE samples in the MINERVE reactor with the R1-UO2 core configuration were completed. The reactor model was generated using the REBUS code developed at Argonne National Laboratory. The calculations are based on the specifications for fabrication, so they are considered preliminary until sampling and analysis have been completed on the fabricated samples. The estimates indicate a range of reactivity effect from -22 pcm to +25 pcm compared to the natural U sample.

  16. EMS Adherence to a Pre-hospital Cervical Spine Clearance Protocol

    Directory of Open Access Journals (Sweden)

    Johnson, David

    2001-10-01

    Full Text Available Purpose: To determine the degree of adherence to a cervical spine (c-spine) clearance protocol by pre-hospital Emergency Medical Services (EMS) personnel by both self-assessment and receiving hospital assessment, to describe deviations from the protocol, and to determine if the rate of compliance by paramedic self-assessment differed from receiving hospital assessment. Methods: A retrospective sample of pre-hospital (consecutive series) and receiving hospital (convenience sample) assessments of the compliance with and appropriateness of c-spine immobilization. The c-spine clearance protocol was implemented for Orange County EMS just prior to the April-November 1999 data collection period. Results: We collected 396 pre-hospital and 162 receiving hospital data forms. From the pre-hospital data sheets, the percentage of deviation from the protocol was 4.0% (16/396). Only one of the 16 cases that did not comply with the protocol was due to over-immobilization (0.2%). The remaining 15 cases were under-immobilized according to protocol. Nine of the under-immobilized cases (66%) that should have been placed in c-spine precautions met physical assessment criteria in the protocol, while the other five cases met mechanism-of-injury criteria. The rate of deviations from protocol did not differ over time. The receiving hospital identified deviations from the protocol in 8.0% of patients (13/162; 6/13 over-immobilized, 7/13 under-immobilized); none was determined to have an actual c-spine injury. Conclusion: The implementation of a pre-hospital c-spine clearance protocol in Orange County was associated with a moderate overall adherence rate (96% from the pre-hospital perspective and 92% from the hospital perspective, p=.08 for the two evaluation methods). Most patients who deviated from protocol were under-immobilized, but no c-spine injuries were missed. The rate of over-immobilization was better than previously reported, implying a saving of resources.

  17. An optimised protocol for molecular identification of Eimeria from chickens☆

    Science.gov (United States)

    Kumar, Saroj; Garg, Rajat; Moftah, Abdalgader; Clark, Emily L.; Macdonald, Sarah E.; Chaudhry, Abdul S.; Sparagano, Olivier; Banerjee, Partha S.; Kundu, Krishnendu; Tomley, Fiona M.; Blake, Damer P.

    2014-01-01

    Molecular approaches supporting identification of Eimeria parasites infecting chickens have been available for more than 20 years, although they have largely failed to replace traditional measures such as microscopy and pathology. Limitations of microscopy-led diagnostics, including a requirement for specialist parasitological expertise and low sample throughput, are yet to be outweighed by the difficulties associated with accessing genomic DNA from environmental Eimeria samples. A key step towards the use of Eimeria species-specific PCR as a sensitive and reproducible discriminatory tool for use in the field is the production of a standardised protocol that includes sample collection and DNA template preparation, as well as primer selection from the numerous PCR assays now published. Such a protocol will facilitate development of valuable epidemiological datasets which may be easily compared between studies and laboratories. The outcome of an optimisation process undertaken in laboratories in India and the UK is described here, identifying four steps. First, samples were collected into a 2% (w/v) potassium dichromate solution. Second, oocysts were enriched by flotation in saturated saline. Third, genomic DNA was extracted using a QIAamp DNA Stool mini kit protocol including a mechanical homogenisation step. Finally, nested PCR was carried out using previously published primers targeting the internal transcribed spacer region 1 (ITS-1). Alternative methods tested included sample processing in the presence of faecal material, DNA extraction using a traditional phenol/chloroform protocol, the use of SCAR multiplex PCR (one tube and two tube versions) and speciation using the morphometric tool COCCIMORPH for the first time with field samples. PMID:24138724

  18. Do we need 3D tube current modulation information for accurate organ dosimetry in chest CT? Protocols dose comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Rendon, Xochitl; Develter, Wim [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Leuven (Belgium); Zhang, Guozhi; Coudyzer, Walter; Zanca, Federica [University Hospitals of the KU Leuven, Department of Radiology, Leuven (Belgium); Bosmans, Hilde [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Leuven (Belgium); University Hospitals of the KU Leuven, Department of Radiology, Leuven (Belgium)

    2017-11-15

    To compare the lung and breast dose associated with three chest protocols: standard, organ-based tube current modulation (OBTCM) and fast-speed scanning; and to estimate the error associated with organ dose when modelling the longitudinal (z-) TCM versus the 3D-TCM in Monte Carlo simulations (MC) for these three protocols. Five adult and three paediatric cadavers with different BMI were scanned. The CTDIvol of the OBTCM and the fast-speed protocols were matched to the patient-specific CTDIvol of the standard protocol. Lung and breast doses were estimated using MC with both z- and 3D-TCM simulated and compared between protocols. The fast-speed scanning protocol delivered the highest doses. A slight reduction in breast dose (up to 5.1%) was observed for two of the three female cadavers with the OBTCM in comparison to the standard. For both adult and paediatric cadavers, the implementation of the z-TCM data only for organ dose estimation resulted in 10.0% accuracy for the standard and fast-speed protocols, while relative dose differences were up to 15.3% for the OBTCM protocol. At identical CTDIvol values, the standard protocol delivered the lowest overall doses. Only for the OBTCM protocol is the 3D-TCM needed if accurate (<10.0%) organ dosimetry is desired. (orig.)

  19. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  20. An online method for lithium-ion battery remaining useful life estimation using importance sampling and neural networks

    International Nuclear Information System (INIS)

    Wu, Ji; Zhang, Chenbin; Chen, Zonghai

    2016-01-01

    Highlights: • An online RUL estimation method for lithium-ion batteries is proposed. • RUL is described by the difference among battery terminal voltage curves. • A feed forward neural network is employed for RUL estimation. • Importance sampling is utilized to select feed forward neural network inputs. - Abstract: An accurate battery remaining useful life (RUL) estimation can facilitate the design of a reliable battery system as well as the safety and reliability of actual operation. A reasonable definition and an effective prediction algorithm are indispensable for achieving an accurate RUL estimation result. In this paper, the analysis of battery terminal voltage curves under different cycle numbers during the charge process is utilized for the RUL definition. Moreover, the relationship between RUL and the charge curve is modelled by a feed forward neural network (FFNN) for its simplicity and effectiveness. Considering the nonlinearity of the lithium-ion charge curve, importance sampling (IS) is employed for FFNN input selection. Based on these results, an online approach using FFNN and IS is presented to estimate lithium-ion battery RUL in this paper. Experiments and numerical comparisons are conducted to validate the proposed method. The results show that the FFNN with IS is an accurate estimation method for actual operation.
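
    A very loose sketch of the idea described above is given below: informative points on the charge voltage curve are chosen by importance-style weighted sampling, and a small feed-forward network regresses RUL on those points. The synthetic curves, the weighting rule and the toy RUL definition are assumptions for the example, not the paper's algorithm (scikit-learn's MLPRegressor stands in for the FFNN).

```python
# Loose, hypothetical sketch: pick informative points on the charge voltage curve
# via importance-style weighted sampling, then regress remaining useful life (RUL)
# on those features with a feed-forward network. Synthetic data and the weighting
# rule are assumptions, not the paper's algorithm.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_cycles, n_points = 200, 100
t = np.linspace(0, 1, n_points)

# Synthetic charge-voltage curves that drift as the battery ages.
age = np.arange(n_cycles) / n_cycles
curves = 3.0 + 1.2 * t[None, :] ** (1 + 2 * age[:, None]) \
         + 0.01 * rng.standard_normal((n_cycles, n_points))
rul = (n_cycles - np.arange(n_cycles)).astype(float)           # cycles remaining (toy definition)

# Importance sampling of input points: weight by how much each point changes with ageing.
sensitivity = np.abs(curves[-1] - curves[0])
weights = sensitivity / sensitivity.sum()
idx = rng.choice(n_points, size=10, replace=False, p=weights)  # 10 informative voltage points

X, y = curves[:, idx], rul
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])
print("mean abs error on held-out cycles:", np.mean(np.abs(model.predict(X[150:]) - y[150:])))
```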

  1. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral......, the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections...... geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation to be positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noises in possibly realistic ranges. In all cases examined, the proportionator...
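
    The core mechanism, sampling fields with probability proportional to an automatically derived (and possibly noisy) weight and then dividing counts by those probabilities, can be sketched generically as below. The numbers and the Hansen–Hurwitz-style estimate are illustrative; this is not the published proportionator simulation.

```python
# Hedged sketch of non-uniform probability-proportional-to-size (PPS) sampling:
# fields are chosen with probability proportional to an automatic (biased) weight,
# and counts are divided by those probabilities, giving an unbiased total estimate.
import numpy as np

rng = np.random.default_rng(7)
n_fields = 400
true_counts = rng.poisson(3, size=n_fields)                      # particles per field (unknown in practice)
weight = 1.0 + 2.0 * true_counts + rng.normal(0, 2, n_fields)    # noisy image-analysis weight
weight = np.clip(weight, 0.1, None)                              # weights must stay positive

p = weight / weight.sum()                                        # selection probability per draw
n_sample = 30
picked = rng.choice(n_fields, size=n_sample, p=p)                # PPS sampling with replacement
# Hansen-Hurwitz style estimate of the total number of particles:
total_hat = np.mean(true_counts[picked] / p[picked])
print("true total:", true_counts.sum(), "estimate:", round(total_hat))
```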

  2. Direct comparisons of Illumina vs. Roche 454 sequencing technologies on the same microbial community DNA sample.

    Directory of Open Access Journals (Sweden)

    Chengwei Luo

    Full Text Available Next-generation sequencing (NGS) is commonly used in metagenomic studies of complex microbial communities, but whether or not different NGS platforms recover the same diversity from a sample and whether their assembled sequences are of comparable quality remain unclear. We compared the two most frequently used platforms, the Roche 454 FLX Titanium and the Illumina Genome Analyzer (GA II), on the same DNA sample obtained from a complex freshwater planktonic community. Despite the substantial differences in read length and sequencing protocols, the platforms provided a comparable view of the community sampled. For instance, derived assemblies overlapped in ~90% of their total sequences and in situ abundances of genes and genotypes (estimated based on sequence coverage) correlated highly between the two platforms (R² > 0.9). Evaluation of base-call error, frameshift frequency, and contig length suggested that Illumina offered equivalent, if not better, assemblies than Roche 454. The results from metagenomic samples were further validated against DNA samples of eighteen isolate genomes, which showed a range of genome sizes and G+C% content. We also provide quantitative estimates of the errors in gene and contig sequences assembled from datasets characterized by different levels of complexity and G+C% content. For instance, we noted that homopolymer-associated, single-base errors affected ~1% of the protein sequences recovered in Illumina contigs of 10× coverage and 50% G+C; this frequency increased to ~3% when non-homopolymer errors were also considered. Collectively, our results should serve as a useful practical guide for choosing proper sampling strategies and data processing protocols for future metagenomic studies.

  3. Dosimetric evaluation of cone beam computed tomography scanning protocols

    International Nuclear Information System (INIS)

    Soares, Maria Rosangela

    2015-01-01

    Cone beam computed tomography (CBCT) scanning protocols were evaluated. CBCT was introduced in dental radiology at the end of the 1990s and quickly became a fundamental examination for various procedures. Its main characteristic, which differentiates it from medical CT, is the beam shape. This study aimed to calculate the absorbed dose in eight tissues/organs of the head and neck, and to estimate the effective dose for 13 protocols and two techniques (stitched FOV and single FOV) on 5 cone beam CT units from different manufacturers. For that purpose, a female anthropomorphic phantom representing a reference woman was used, in which thermoluminescent dosimeters were inserted at several points representing organs/tissues with the weighting values presented in ICRP standard 103. The results were evaluated by comparing the doses according to the purpose of the tomographic image. Among the results, there is a difference of up to 325% in effective dose between protocols with the same imaging goal. Regarding the image acquisition technique, the stitched FOV technique resulted in an effective dose 5.3 times greater than the single FOV technique for protocols with the same imaging goal. In terms of individual contributions, the salivary glands are responsible for 31% of the effective dose in CBCT exams; the remaining tissues also make a significant contribution, 36%. The results draw attention to the need to estimate the effective dose for the different units and protocols on the market, in addition to knowledge of the radiation parameters and the equipment engineering used to obtain the image. (author)

  4. A novel staining protocol for multiparameter assessment of cell heterogeneity in Phormidium populations (cyanobacteria employing fluorescent dyes.

    Directory of Open Access Journals (Sweden)

    Daria Tashyreva

    Full Text Available Bacterial populations display high heterogeneity in viability and physiological activity at the single-cell level, especially under stressful conditions. We demonstrate a novel staining protocol for multiparameter assessment of individual cells in physiologically heterogeneous populations of cyanobacteria. The protocol employs fluorescent probes, i.e., the redox dye 5-cyano-2,3-ditolyl tetrazolium chloride, the 'dead cell' nucleic acid stain SYTOX Green, and the DNA-specific fluorochrome 4',6-diamidino-2-phenylindole, combined with microscopy image analysis. Our method allows simultaneous estimates of cellular respiration activity, membrane and nucleoid integrity, and allows the detection of photosynthetic pigment fluorescence along with morphological observations. The staining protocol has been adjusted for both laboratory and natural populations of the genus Phormidium (Oscillatoriales), and tested on 4 field-collected samples and 12 laboratory strains of cyanobacteria. Based on the mentioned cellular functions we suggest a classification of cells in cyanobacterial populations into four categories: (i) active and intact; (ii) injured but active; (iii) metabolically inactive but intact; (iv) inactive and injured, or dead.

  5. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying the sample size per group, the number of sacrifices, the number of sacrificed animals at each interval, if any, and the scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
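
    A stripped-down version of the Monte Carlo idea is sketched below: simulate many experiments, apply a trend test across dose groups, and report the fraction of rejections as the estimated power. It deliberately omits the occult tumours, competing risks and sacrifice schedules handled by the actual tool, and the tumour incidences and the Cochran–Armitage-style test are assumptions for the example.

```python
# Simplified Monte Carlo power sketch for detecting a dose-related trend in tumour
# incidence. It deliberately omits occult tumours, competing risks and sacrifice
# schedules that the Web-based simulator handles; incidences below are hypothetical.
import numpy as np
from scipy.stats import norm

def cochran_armitage_p(counts, n, scores):
    """One-sided Cochran-Armitage trend test p-value for binomial counts per group."""
    counts, n, scores = map(np.asarray, (counts, n, scores))
    p_bar = counts.sum() / n.sum()
    num = np.sum(scores * (counts - n * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(n * scores ** 2) - np.sum(n * scores) ** 2 / n.sum())
    return norm.sf(num / np.sqrt(var))

def power(n_per_group, incidence, scores=(0, 1, 2, 3), alpha=0.05, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        counts = rng.binomial(n_per_group, incidence)
        if cochran_armitage_p(counts, [n_per_group] * len(incidence), scores) < alpha:
            rejections += 1
    return rejections / n_sim

# Hypothetical dose-response: control, low, mid, high dose tumour incidences.
print(power(n_per_group=50, incidence=[0.05, 0.08, 0.12, 0.18]))
```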

  6. Multisensor sampling of pelagic ecosystem variables in a coastal environment to estimate zooplankton grazing impact

    Science.gov (United States)

    Sutton, Tracey; Hopkins, Thomas; Remsen, Andrew; Burghart, Scott

    2001-01-01

    Sampling was conducted on the west Florida continental shelf ecosystem modeling site to estimate zooplankton grazing impact on primary production. Samples were collected with the high-resolution sampler, a towed array bearing electronic and optical sensors operating in tandem with a paired net/bottle verification system. A close biological-physical coupling was observed, with three main plankton communities: 1. a high-density inshore community dominated by larvaceans coincident with a salinity gradient; 2. a low-density offshore community dominated by small calanoid copepods coincident with the warm mixed layer; and 3. a high-density offshore community dominated by small poecilostomatoid and cyclopoid copepods and ostracods coincident with cooler, sub-pycnocline oceanic water. Both high-density communities were associated with relatively turbid water. Applying available grazing rates from the literature to our abundance data, grazing pressure mirrored the above bio-physical pattern, with the offshore sub-pycnocline community contributing ˜65% of grazing pressure despite representing only 19% of the total volume of the transect. This suggests that grazing pressure is highly localized, emphasizing the importance of high-resolution sampling to better understand plankton dynamics. A comparison of our grazing rate estimates with primary production estimates suggests that mesozooplankton do not control the fate of phytoplankton over much of the area studied (<5% grazing of daily primary production), but "hot spots" (˜25-50% grazing) do occur which may have an effect on floral composition.

  7. Isolation of cancer cells by "in situ" microfluidic biofunctionalization protocols

    KAUST Repository

    De Vitis, Stefania; Matarise, Giuseppina; Pardeo, Francesca; Catalano, Rossella; Malara, Natalia Maria; Trunzo, Valentina; Tallerico, Rossana; Gentile, Francesco T.; Candeloro, Patrizio; Coluccio, Maria Laura; Massaro, Alessandro S.; Viglietto, Giuseppe; Carbone, Ennio; Kutter, Jö rg Peter; Perozziello, Gerardo; Di Fabrizio, Enzo M.

    2014-01-01

    The aim of this work is the development of a microfluidic immunosensor for the immobilization of cancer cells and their separation from healthy cells by using "in situ" microfluidic biofunctionalization protocols. These protocols make it possible to link antibodies to microfluidic device surfaces and can be used to study the interaction between cell membranes and biomolecules. Moreover, they allow analyses to be performed with high processing speed, small quantities of reagents and samples, short reaction times and low production costs. In this work the developed protocols were used in microfluidic devices for the isolation of cancer cells from heterogeneous blood samples by exploiting the binding of a specific antibody to an adhesion protein (EpCAM) overexpressed on tumor cell membranes. The presented biofunctionalization protocols can be performed right before running the experiment: this provides a flexible platform where biomolecules of interest can be linked on the device surface according to the user's needs. © 2014 Elsevier B.V. All rights reserved.

  9. Cooperative Fault Tolerant Tracking Control for Multiagent Systems: An Intermediate Estimator-Based Approach.

    Science.gov (United States)

    Zhu, Jun-Wei; Yang, Guang-Hong; Zhang, Wen-An; Yu, Li

    2017-10-17

    This paper studies the observer based fault tolerant tracking control problem for linear multiagent systems with multiple faults and mismatched disturbances. A novel distributed intermediate estimator based fault tolerant tracking protocol is presented. The leader's input is nonzero and unavailable to the followers. By applying a projection technique, the mismatched disturbances are separated into matched and unmatched components. For each node, a tracking error system is established, for which an intermediate estimator driven by the relative output measurements is constructed to estimate the sensor faults and a combined signal of the leader's input, process faults, and matched disturbance component. Based on the estimation, a fault tolerant tracking protocol is designed to eliminate the effects of the combined signal. Besides, the effect of unmatched disturbance component can be attenuated by directly adjusting some specified parameters. Finally, a simulation example of aircraft demonstrates the effectiveness of the designed tracking protocol.

  10. Publication trends of study protocols in rehabilitation.

    Science.gov (United States)

    Jesus, Tiago S; Colquhoun, Heather L

    2017-09-04

    Growing evidence points to the need to publish study protocols in the health field. To observe whether the growing interest in publishing study protocols in the broader health field has been translated into increased publications of rehabilitation study protocols. Observational study using publication data and its indexation in PubMed. PubMed was searched with appropriate combinations of Medical Subject Headings up to December 2014. The effective presence of study protocols was manually screened. Regression models analyzed the yearly growth of publications. Two-sample Z-tests analyzed whether the proportion of Systematic Reviews (SRs) and Randomized Controlled Trials (RCTs) among study protocols differed from that of the same designs for the broader rehabilitation research. Up to December 2014, 746 publications of rehabilitation study protocols were identified, with an exponential growth since 2005 (r²=0.981; p<0.001). RCT protocols were the most common among rehabilitation study protocols (83%), and RCTs were significantly more prevalent among study protocols than among the broader rehabilitation research (83% vs. 35.8%; p<0.001). For SRs, the picture was reversed: significantly less common among study protocols (2.8% vs. 9.3%; p<0.001). Funding was more often reported by rehabilitation study protocols than by the broader rehabilitation research (90% vs. 53.1%; p<0.001). Rehabilitation journals published a significantly lower share of rehabilitation study protocols than they did for the broader rehabilitation research (1.8% vs. 16.7%; p<0.001). Identifying the reasons for these discrepancies and reversing unwarranted disparities (e.g. the low rate of publication of rehabilitation SR protocols) are likely new avenues for rehabilitation research and its publication. SRs, particularly those aggregating RCT results, are considered the best standard of evidence to guide rehabilitation clinical practice; however, that standard can be improved

  11. Protocol compliance of administering parenteral medication in Dutch hospitals: an evaluation and cost-estimation of the implementation.

    NARCIS (Netherlands)

    Schilp, J.; Boot, S.; Blok, C. de; Spreeuwenberg, P.; Wagner, C.

    2014-01-01

    Objectives: Preventable adverse drug events (ADEs) are closely related to the administration processes of parenteral medication. The Dutch Patient Safety Program provided a protocol for administering parenteral medication to reduce the number of ADEs. The execution of the protocol was evaluated and a

  12. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. The approach was originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...

  13. Phase I Forest Area Estimation Using Landsat TM and Iterative Guided Spectral Class Rejection: Assessment of Possible Training Data Protocols

    Science.gov (United States)

    John A. Scrivani; Randolph H. Wynne; Christine E. Blinn; Rebecca F. Musy

    2001-01-01

    Two methods of training data collection for automated image classification were tested in Virginia as part of a larger effort to develop an objective, repeatable, and low-cost method to provide forest area classification from satellite imagery. The derived forest area estimates were compared to estimates derived from a traditional photo-interpreted double sample. One...

  14. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    1999-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of quantity of sample results and the number of tanks characterized. More and better data is available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999) and the Final Safety Analysis Report (FSAR) (FDH 1999) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in developing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks

  15. Hybrid Cubature Kalman filtering for identifying nonlinear models from sampled recording: Estimation of neuronal dynamics.

    Science.gov (United States)

    Madi, Mahmoud K; Karameh, Fadi N

    2017-01-01

    Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the Cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potential (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, a sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate
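
    For readers unfamiliar with the filter itself, a minimal one-dimensional CKF (spherical-radial cubature with 2n points) is sketched below on a classic toy nonlinear benchmark. The process and measurement models are illustrative only; they are not the neural-circuit or hemodynamic models investigated in the paper.

```python
# Minimal 1-D Cubature Kalman filter on a toy nonlinear benchmark (illustrative
# only; not the neural or hemodynamic models of the paper).
import numpy as np

def ckf(y, f, h, Q, R, x0, P0):
    """Scalar CKF: 2n cubature points, here n = 1, i.e. points at x +/- sqrt(P)."""
    xi = np.array([1.0, -1.0])            # unit cubature points for n = 1
    x, P = x0, P0
    est = []
    for k, yk in enumerate(y):
        # Time update: propagate cubature points through the process model.
        pts = x + np.sqrt(P) * xi
        pts_f = f(pts, k)
        x_pred = pts_f.mean()
        P_pred = np.mean((pts_f - x_pred) ** 2) + Q
        # Measurement update: regenerate points around the prediction.
        pts = x_pred + np.sqrt(P_pred) * xi
        pts_h = h(pts)
        y_pred = pts_h.mean()
        P_yy = np.mean((pts_h - y_pred) ** 2) + R
        P_xy = np.mean((pts - x_pred) * (pts_h - y_pred))
        K = P_xy / P_yy
        x = x_pred + K * (yk - y_pred)
        P = P_pred - K * P_yy * K
        est.append(x)
    return np.array(est)

# Classic toy benchmark: strongly nonlinear growth model with quadratic observation.
rng = np.random.default_rng(3)
f = lambda x, k: 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * k)
h = lambda x: x ** 2 / 20
Q, R = 1.0, 1.0
x_true, xs, ys = 0.1, [], []
for k in range(100):
    x_true = f(x_true, k) + rng.normal(0, np.sqrt(Q))
    xs.append(x_true)
    ys.append(h(x_true) + rng.normal(0, np.sqrt(R)))
x_hat = ckf(np.array(ys), f, h, Q, R, x0=0.0, P0=1.0)
print("RMSE:", np.sqrt(np.mean((x_hat - np.array(xs)) ** 2)))
```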

  17. A model-based approach to sample size estimation in recent onset type 1 diabetes.

    Science.gov (United States)

    Bundy, Brian N; Krischer, Jeffrey P

    2016-11-01

    The area under the curve of C-peptide following a 2-h mixed meal tolerance test, measured from baseline to 12 months after enrolment in 498 individuals enrolled in five prior TrialNet studies of recent onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Do we need 3D tube current modulation information for accurate organ dosimetry in chest CT? Protocols dose comparisons.

    Science.gov (United States)

    Lopez-Rendon, Xochitl; Zhang, Guozhi; Coudyzer, Walter; Develter, Wim; Bosmans, Hilde; Zanca, Federica

    2017-11-01

    To compare the lung and breast dose associated with three chest protocols: standard, organ-based tube current modulation (OBTCM) and fast-speed scanning; and to estimate the error associated with organ dose when modelling the longitudinal (z-) TCM versus the 3D-TCM in Monte Carlo simulations (MC) for these three protocols. Five adult and three paediatric cadavers with different BMI were scanned. The CTDIvol of the OBTCM and the fast-speed protocols were matched to the patient-specific CTDIvol of the standard protocol. Lung and breast doses were estimated using MC with both z- and 3D-TCM simulated and compared between protocols. The fast-speed scanning protocol delivered the highest doses. A slight reduction in breast dose (up to 5.1%) was observed for two of the three female cadavers with the OBTCM in comparison to the standard. For both adult and paediatric cadavers, the implementation of the z-TCM data only for organ dose estimation resulted in 10.0% accuracy for the standard and fast-speed protocols, while relative dose differences were up to 15.3% for the OBTCM protocol. At identical CTDIvol values, the standard protocol delivered the lowest overall doses. Only for the OBTCM protocol is the 3D-TCM needed if accurate (<10.0%) organ dosimetry is desired. • The z-TCM information is sufficient for accurate dosimetry for standard protocols. • The z-TCM information is sufficient for accurate dosimetry for fast-speed scanning protocols. • For organ-based TCM schemes, the 3D-TCM information is necessary for accurate dosimetry. • At identical CTDIvol, the fast-speed scanning protocol delivered the highest doses. • Lung dose was higher with XCare than with the standard protocol at identical CTDIvol.

  19. Protocol for Cohesionless Sample Preparation for Physical Experimentation

    Science.gov (United States)

    2016-05-01

    Standard test method for consolidated drained triaxial compression test for soils. In Annual book of ASTM standards. West Conshohocken, PA: ASTM...derived wherein uncertainties and laboratory scatter associated with soil fabric-behavior variance during sample preparation are mitigated. Samples of...wherein comparable analysis between different laboratory tests' results can be made by ensuring a comparable soil fabric prior to laboratory testing.

  20. Optimistic protocol for partitioned distributed database systems

    International Nuclear Information System (INIS)

    Davidson, S.B.

    1982-01-01

    A protocol for transaction processing during partition failures is presented which guarantees mutual consistency between copies of data-items after repair is completed. The protocol is optimistic in that transactions are processed without restrictions during the failure; conflicts are detected at repair time using a precedence graph and are resolved by backing out transactions according to some backout strategy. The protocol is then evaluated using simulation and probabilistic modeling. In the simulation, several parameters are varied such as the number of transactions processed in a group, the type of transactions processed, the number of data-items present in the database, and the distribution of references to data-items. The simulation also uses different backout strategies. From these results we note conditions under which the protocol performs well, i.e., conditions under which the protocol backs out a small percentage of the transaction run. A probabilistic model is developed to estimate the expected number of transactions backed out using most of the above database and transaction parameters, and is shown to agree with simulation results. Suggestions are then made on how to improve the performance of the protocol. Insights gained from the simulation and probabilistic modeling are used to develop a backout strategy which takes into account individual transaction costs and attempts to minimize total backout cost. Although the problem of choosing transactions to minimize total backout cost is, in general, NP-complete, the backout strategy is efficient and produces very good results
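
    A heavily simplified sketch of the reconciliation step described above is given below: transactions from the two partitions are joined into a precedence (conflict) graph, and transactions are backed out until the graph is acyclic. The data structures, the way conflict edges are formed, and the greedy cheapest-first backout strategy are assumptions for the example, not the protocol's actual graph construction or backout strategies.

```python
# Rough illustrative sketch of optimistic partition-failure reconciliation:
# build a precedence graph from read/write conflicts between transactions run
# in different partitions, then back out transactions until the graph is acyclic.
class Txn:
    def __init__(self, name, reads, writes, cost=1):
        self.name, self.reads, self.writes, self.cost = name, set(reads), set(writes), cost

def precedence_edges(txns):
    edges = set()
    for a in txns:
        for b in txns:
            if a is not b and (a.writes & (b.reads | b.writes) or a.reads & b.writes):
                edges.add((a.name, b.name))   # a and b conflict on some data item
    return edges

def has_cycle(nodes, edges):
    graph = {n: [w for v, w in edges if v == n] for n in nodes}
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    def dfs(u):
        color[u] = GREY
        for w in graph[u]:
            if color[w] == GREY or (color[w] == WHITE and dfs(w)):
                return True
        color[u] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in nodes)

def backout(txns):
    """Greedy strategy: remove the cheapest transactions until no precedence cycle remains."""
    alive = sorted(txns, key=lambda t: -t.cost)       # keep expensive transactions if possible
    backed_out = []
    while has_cycle({t.name for t in alive}, precedence_edges(alive)):
        backed_out.append(alive.pop())                # pop the cheapest remaining transaction
    return backed_out

# Two partitions both updated item 'x' during the failure -> conflict cycle.
t1 = Txn("T1-partitionA", reads={"x"}, writes={"x"}, cost=3)
t2 = Txn("T2-partitionB", reads={"x"}, writes={"x"}, cost=1)
print([t.name for t in backout([t1, t2])])            # -> ['T2-partitionB']
```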

  1. Convenience Sampling of Children Presenting to Hospital-Based Outpatient Clinics to Estimate Childhood Obesity Levels in Local Surroundings.

    Science.gov (United States)

    Gilliland, Jason; Clark, Andrew F; Kobrzynski, Marta; Filler, Guido

    2015-07-01

    Childhood obesity is a critical public health matter associated with numerous pediatric comorbidities. Local-level data are required to monitor obesity and to help administer prevention efforts when and where they are most needed. We hypothesized that samples of children visiting hospital clinics could provide representative local population estimates of childhood obesity using data from 2007 to 2013. Such data might provide more accurate, timely, and cost-effective obesity estimates than national surveys. Results revealed that our hospital-based sample could not serve as a population surrogate. Further research is needed to confirm this finding.

  2. HOW TO ESTIMATE THE AMOUNT OF IMPORTANT CHARACTERISTICS MISSING IN A CONSUMERS SAMPLE BY USING BAYESIAN ESTIMATORS

    Directory of Open Access Journals (Sweden)

    Sueli A. Mingoti

    2001-06-01

    Full Text Available Consumer surveys are conducted very often by many companies with the main objective of obtaining information about the opinions consumers have about a specific prototype, product or service. In many situations the goal is to identify the characteristics that are considered important by the consumers when taking the decision to buy or use the products or services. When the survey is performed, some characteristics that are present in the consumer population might not be reported by the consumers in the observed sample. Therefore, some important characteristics of the product according to the consumers' opinions could be missing in the observed sample. The main objective of this paper is to show how the number of characteristics missing in the observed sample can be easily estimated by using some Bayesian estimators proposed by Mingoti & Meeden (1992) and Mingoti (1999). An example of application related to an automobile survey is presented.

  3. Identification of a research protocol to study orthodontic tooth movement

    Directory of Open Access Journals (Sweden)

    Annalisa Dichicco

    2014-06-01

    Full Text Available Aim: Orthodontic movement is associated with a process of tissue remodeling together with the release of several chemical mediators in periodontal tissues. Each mediator is a potential marker of tooth movement and expresses biological processes such as tissue inflammation and bone remodeling. Different amounts of every mediator are present in several tissues and fluids of the oral cavity, and different methods allow sampling with varying degrees of invasiveness. Chemical mediators are also substances of different molecular nature, and multiple kinds of analysis methods allow their detection. The purpose of this study was to draft the best research protocol for an optimal study of orthodontic movement efficiency. Methods: An analysis of the international literature was made to identify the gold standard for each aspect of the protocol: type of mediator, source and method of sampling, and analysis method. Results: From the analysis of the international literature, an original research protocol was created for the study and assessment of orthodontic movement using the biomarkers of tooth movement. Conclusions: The protocol created is based on the choice of the gold standard for every aspect already analyzed in the literature and in existing protocols for monitoring orthodontic tooth movement through the markers of tooth movement. Clinical trials are required for the evaluation and validation of the protocol created.

  4. Total reflection x-ray fluorescence spectroscopy as a tool for evaluation of iron concentration in ferrofluids and yeast samples

    Energy Technology Data Exchange (ETDEWEB)

    Kulesh, N.A., E-mail: nikita.kulesh@urfu.ru [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); Novoselova, I.P. [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); Immanuel Kant Baltic Federal University, 236041 Kaliningrad (Russian Federation); Safronov, A.P. [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); Institute of Electrophysics UD RAS, Amundsen 106, 620016 Ekaterinburg (Russian Federation); Beketov, I.V.; Samatov, O.M. [Institute of Electrophysics UD RAS, Amundsen 106, 620016 Ekaterinburg (Russian Federation); Kurlyandskaya, G.V. [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); University of the Basque Country UPV-EHU, 48940 Leioa (Spain); Morozova, M. [Ural Federal University, Mira 19, 620002 Ekaterinburg (Russian Federation); Denisova, T.P. [Irkutsk State University, Karl Marks 1, 664003 Irkutsk (Russian Federation)

    2016-10-01

    In this study, total reflection x-ray fluorescence (TXRF) spectrometry was applied for the evaluation of iron concentration in ferrofluids and biological samples containing iron oxide magnetic nanoparticles obtained by the laser target evaporation technique. Suspensions of maghemite nanoparticles of different concentrations were used to estimate the limitations of the method for the evaluation of nanoparticle concentration in the range of 1–5000 ppm in the absence of an organic matrix. Samples of single-cell yeasts grown in nutrient media containing maghemite nanoparticles were used to study the nanoparticle absorption mechanism. The obtained results were analyzed in terms of the applicability of TXRF for quantitative analysis over a wide range of iron oxide nanoparticle concentrations in biological samples and ferrofluids, with a simple established protocol of specimen preparation. - Highlights: • Ferrofluid and yeast samples were analysed by TXRF spectroscopy. • A simple protocol for iron quantification by means of TXRF was proposed. • Results were combined with magnetic, structural, and morphological characterization. • A preliminary conclusion on the nanoparticle uptake mechanism was made.

  5. Total reflection x-ray fluorescence spectroscopy as a tool for evaluation of iron concentration in ferrofluids and yeast samples

    International Nuclear Information System (INIS)

    Kulesh, N.A.; Novoselova, I.P.; Safronov, A.P.; Beketov, I.V.; Samatov, O.M.; Kurlyandskaya, G.V.; Morozova, M.; Denisova, T.P.

    2016-01-01

    In this study, total reflection x-ray fluorescence (TXRF) spectrometry was applied for the evaluation of iron concentration in ferrofluids and biological samples containing iron oxide magnetic nanoparticles obtained by the laser target evaporation technique. Suspensions of maghemite nanoparticles of different concentrations were used to estimate the limitations of the method for the evaluation of nanoparticle concentration in the range of 1–5000 ppm in the absence of an organic matrix. Samples of single-cell yeasts grown in nutrient media containing maghemite nanoparticles were used to study the nanoparticle absorption mechanism. The obtained results were analyzed in terms of the applicability of TXRF for quantitative analysis over a wide range of iron oxide nanoparticle concentrations in biological samples and ferrofluids, with a simple established protocol of specimen preparation. - Highlights: • Ferrofluid and yeast samples were analysed by TXRF spectroscopy. • A simple protocol for iron quantification by means of TXRF was proposed. • Results were combined with magnetic, structural, and morphological characterization. • A preliminary conclusion on the nanoparticle uptake mechanism was made.

  6. May the Kyoto protocol produce results?

    International Nuclear Information System (INIS)

    Jaureguy-Naudin, M.

    2009-01-01

    A poorly managed, drastic reduction of greenhouse gas emissions might result in a significant decrease in living standards, but without such reduction efforts, climate change might cost five to twenty times more. Thus, while indicating estimated consequences and evolutions of greenhouse emissions and temperature, the author stresses the need for emission reduction. She discusses the role of economic instruments which can be used in policies aimed at the struggle against climate change. She recalls the emission reduction commitments specified in the Kyoto protocol, discusses the present status, operation and results of the international emission trading scheme and the lessons learned after its first years of operation, and comments on the involvement of emerging countries in relation to another mechanism defined in the protocol: the Clean Development Mechanism

  7. Time-Frequency Based Instantaneous Frequency Estimation of Sparse Signals from an Incomplete Set of Samples

    Science.gov (United States)

    2014-06-17

    (Figure panels: Wigner distribution, auto-correlation function, L-Wigner distribution.) ...bilinear or higher order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with

  8. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    Science.gov (United States)

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar in pancreatic cancer treatment. However, toxicities are frequently observed. Dose adjustment based on therapeutic drug monitoring might help decrease the occurrence of toxicities. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The 3 optimal sampling times were determined using KineticPro and the population PK analysis was performed on Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. Correlation between PK parameters and occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities was correlated to PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable to estimate population and individual PK parameters in patients, which will be usable to individualize the dose for an optimized benefit-to-risk ratio.
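
    As a rough illustration of the reported structural model, the sketch below simulates a simplified parent-metabolite system using the published typical values (V1, CL1 for gemcitabine; V2, CL2 for dFdU). The dose, infusion duration, fraction metabolized, and the reduction to one compartment per species are assumptions for illustration, not the fitted Monolix model.

```python
import numpy as np

# Reported typical values (gemcitabine: V1, CL1; dFdU: V2, CL2).
V1, CL1 = 45.0, 4.03        # L, L/min
V2, CL2 = 36.0, 0.226       # L, L/min
dose_mg, t_inf = 1800.0, 30.0   # hypothetical dose (mg) and infusion time (min)
fm = 1.0                        # assumed fraction of gemcitabine converted to dFdU

dt, t_end = 0.1, 600.0
t = np.arange(0.0, t_end, dt)
A1 = np.zeros_like(t)   # amount of gemcitabine (mg)
A2 = np.zeros_like(t)   # amount of dFdU (mg, gemcitabine equivalents)

for i in range(1, t.size):
    rate_in = dose_mg / t_inf if t[i] <= t_inf else 0.0   # zero-order infusion
    elim1 = CL1 / V1 * A1[i - 1]                          # first-order elimination
    elim2 = CL2 / V2 * A2[i - 1]
    A1[i] = A1[i - 1] + dt * (rate_in - elim1)
    A2[i] = A2[i - 1] + dt * (fm * elim1 - elim2)

print(f"Cmax gemcitabine ~ {(A1 / V1).max():.1f} mg/L")
print(f"Cmax dFdU        ~ {(A2 / V2).max():.1f} mg/L")
```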

  9. Estimation of the radiation exposure of a chest pain protocol with ECG-gating in dual-source computed tomography

    International Nuclear Information System (INIS)

    Ketelsen, Dominik; Luetkhoff, Marie H.; Thomas, Christoph; Werner, Matthias; Tsiflikas, Ilias; Reimann, Anja; Kopp, Andreas F.; Claussen, Claus D.; Heuschmid, Martin; Buchgeister, Markus; Burgstahler, Christof

    2009-01-01

    The aim of the study was to evaluate radiation exposure of a chest pain protocol with ECG-gated dual-source computed tomography (DSCT). An Alderson Rando phantom equipped with thermoluminescent dosimeters was used for dose measurements. Exposure was performed on a dual-source computed tomography system with a standard protocol for chest pain evaluation (120 kV, 320 mAs/rot) with different simulated heart rates (HRs). The dose of a standard chest CT examination (120 kV, 160 mAs) was also measured. Effective dose of the chest pain protocol was 19.3/21.9 mSv (male/female, HR 60), 17.9/20.4 mSv (male/female, HR 80) and 14.7/16.7 mSv (male/female, HR 100). Effective dose of a standard chest examination was 6.3 mSv (males) and 7.2 mSv (females). Radiation dose of the chest pain protocol increases significantly with a lower heart rate for both males (p = 0.040) and females (p = 0.044). The average radiation dose of a standard chest CT examination is about 36.5% that of a CT examination performed for chest pain. Using DSCT, the evaluated chest pain protocol revealed a higher radiation exposure compared with standard chest CT. Furthermore, HRs markedly influenced the dose exposure when using the ECG-gated chest pain protocol. (orig.)
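
    A quick arithmetic check of the reported ~36.5% figure, averaging the chest pain protocol doses over the three heart rates and both sexes:

```python
# Effective doses (mSv) reported for the ECG-gated chest pain protocol
# at heart rates 60, 80 and 100 bpm, and for a standard chest CT.
chest_pain = {"male": [19.3, 17.9, 14.7], "female": [21.9, 20.4, 16.7]}
standard = {"male": 6.3, "female": 7.2}

mean_pain = sum(sum(v) for v in chest_pain.values()) / 6
mean_std = sum(standard.values()) / 2
print(f"mean chest pain protocol: {mean_pain:.2f} mSv")
print(f"mean standard chest CT:   {mean_std:.2f} mSv")
print(f"standard / chest pain:    {100 * mean_std / mean_pain:.1f} %")   # ~36.5 %
```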

  10. Effect of stress on turbine fish passage mortality estimates

    International Nuclear Information System (INIS)

    Ruggles, C.P.

    1993-01-01

    Tests were conducted with juvenile alewife to determine the effects of four experimental protocols upon turbine fish passage mortality estimates. Three protocols determined the effect of cumulative stresses upon fish, while the fourth determined the effect of long-range truck transportation prior to release into the penstock or tailrace. The wide range in results was attributed to the presence or absence of additional stress factors associated with the experiments. For instance, fish may survive passage through a turbine, or non-turbine-related stresses imposed by the investigator; however, when both are imposed, the cumulative stresses may be lethal. The impact of protocol stress on turbine mortality estimates becomes almost exponential after control mortality exceeds 10%. Valid turbine-related mortalities may be determined only after stresses associated with the experimental protocol are adequately reduced. This is usually indicated by a control mortality of less than 10%. 14 refs., 5 figs., 6 tabs

  11. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    Directory of Open Access Journals (Sweden)

    Manan Gupta

    Full Text Available Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates
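
    The POPAN and Robust Design analyses of the study are not reproduced here; the minimal sketch below uses the simpler two-occasion Chapman (Lincoln-Petersen) estimator to illustrate the underlying point that persistent group-level capture heterogeneity (a crude proxy for social organization) biases abundance estimates downward. Population size, group size, and capture probabilities are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, group_size, n_rep = 1000, 10, 2000
n_groups = N // group_size

def chapman(caught1, caught2):
    """Chapman's bias-corrected Lincoln-Petersen estimator for two occasions."""
    n1, n2 = caught1.sum(), caught2.sum()
    m2 = (caught1 & caught2).sum()
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def simulate(heterogeneous):
    # Each group shares one capture probability; under heterogeneity the
    # same uneven probabilities persist across both occasions.
    if heterogeneous:
        p_group = rng.beta(0.5, 4.5, n_groups)      # mean ~0.1, very uneven
    else:
        p_group = np.full(n_groups, 0.1)
    p_ind = np.repeat(p_group, group_size)
    c1 = rng.random(N) < p_ind
    c2 = rng.random(N) < p_ind
    return chapman(c1, c2)

for label, het in [("homogeneous capture", False), ("group heterogeneity", True)]:
    est = [simulate(het) for _ in range(n_rep)]
    print(f"{label:20s} mean N-hat = {np.mean(est):7.0f} (true N = {N})")
```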

  12. Patient-Specific Internal Dosimetry Protocol for 131I treatment of differentiated thyroid cancer

    International Nuclear Information System (INIS)

    Deluca, G.M.; Rojo, Ana M.; Llina Fuentes, C.S.; Cabrejas, Mariana L.; Cabrejas, R.; Fadel, A.M.

    2008-01-01

    Full text: The most effective treatment for Differentiated Thyroid Cancer (DTC), in its most frequent types, papillary and follicular, is the administration of radioiodine. As a result of multidisciplinary work, a dosimetric protocol for radiological protection purposes has been developed that suggests the standards and formalisms for the determination of absorbed doses due to the administration of 131I activity to DTC patients. This dosimetric protocol takes into account individual data of each patient (age, gender, the presence or absence of metastases, physiology, physiopathology, biochemical parameters) and involves clinical aspects, the equipment that should be used and the dose assessment procedure of each treatment. Based on the Medical Internal Radiation Dose (MIRD) scheme and considering the major critical organs for this therapy, the dosimetric protocol states the 'how-to' of the following procedures, in adults and paediatric cases: 1) estimation of the red marrow dose (with/without bone metastases) to avoid myelotoxicity (200 cGy); 2) estimation of the retention / dose rate / dose in lungs 48 hours after the administration of radioiodine to avoid lung fibrosis; 3) estimation of the testes dose in young male patients to avoid oligospermia; 4) estimation of the maximum activity which can be safely administered without damaging the most critical organ for each patient; and 5) acquisition of images and retention data from patients. This dosimetric protocol also specifies the requirements and basic steps that should be followed, the essential information, the complementary studies and the basic equipment required to perform an appropriate internal dosimetry evaluation. To be fully implemented, the dosimetric protocol requires a multidisciplinary team including physicians, medical physicists and technicians. Clear instructions should be provided to the patient, as their full collaboration is essential. Even though empirical
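
    A minimal MIRD-style sketch of the dose assessment step, D(target) = sum over source organs of cumulated activity times S-value; the cumulated activities and S-values below are hypothetical placeholders, not values from the protocol.

```python
# Minimal MIRD-style absorbed dose sketch: D(target) = sum_s A_tilde(s) * S(target <- s).
# Cumulated activities (MBq*h) and S-values (mGy per MBq*h) are hypothetical placeholders.
cumulated_activity = {"remnant": 500.0, "lungs": 120.0, "whole_body": 4000.0}
S_to_red_marrow = {"remnant": 1.0e-4, "lungs": 5.0e-5, "whole_body": 2.0e-5}

dose_marrow = sum(cumulated_activity[s] * S_to_red_marrow[s] for s in cumulated_activity)
print(f"Red-marrow absorbed dose ~ {dose_marrow:.3f} mGy (hypothetical inputs)")
```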

  13. Detection and identification of Leishmania spp.: application of two hsp70-based PCR-RFLP protocols to clinical samples from the New World.

    Science.gov (United States)

    Montalvo, Ana M; Fraga, Jorge; Tirado, Dídier; Blandón, Gustavo; Alba, Annia; Van der Auwera, Gert; Vélez, Iván Darío; Muskus, Carlos

    2017-07-01

    Leishmaniasis is highly prevalent in New World countries, where several methods are available for detection and identification of Leishmania spp. Two hsp70-based PCR protocols (PCR-N and PCR-F) and their corresponding restriction fragment length polymorphism (RFLP) analyses were applied for detection and identification of Leishmania spp. in clinical samples collected in Colombia, Guatemala, and Honduras. A total of 93 cases were studied. The samples were classified as positive or suspected of leishmaniasis according to parasitological criteria. Molecular amplification of two different hsp70 gene fragments and further RFLP analysis for identification of Leishmania species were performed. The detection in parasitologically positive samples was higher using PCR-N than PCR-F. Among the samples studied, the main species identified were Leishmania panamensis, Leishmania braziliensis, and Leishmania infantum (chagasi). Although RFLP-N was more efficient for the identification, RFLP-F is necessary for discrimination between L. panamensis and Leishmania guyanensis, which is of great importance in Colombia. Unexpectedly, one sample from this country revealed an RFLP pattern corresponding to Leishmania naiffi. Both molecular variants are applicable for the study of clinical samples originating in Colombia, Honduras, and Guatemala. Choosing the better tool for each setting depends on the species circulating. More studies are needed to confirm the presence of L. naiffi in Colombian territory.

  14. Proposed quality control protocol of a dual energy bone densitometer from Spanish protocol for quality control of radiology

    International Nuclear Information System (INIS)

    Saez, F.; Benito, M. A.; Collado, P.; Saez, M.

    2011-01-01

    In this paper we propose additional tests to complement those in the Spanish Protocol for Quality Control in Diagnostic Radiology, taking into account the particular characteristics of these units, and we include these tests in the estimation of patient dose within the dose assessment area. It is also possible to independently verify the quality control tests that are performed automatically.

  15. Integration and analysis of neighbor discovery and link quality estimation in wireless sensor networks.

    Science.gov (United States)

    Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor

    2014-01-01

    Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have only focused on the neighbor discovery problem, while only a few of them provide an integrated neighbor search and link estimation. As these protocols require a careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not yet been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications.

  16. Integration and Analysis of Neighbor Discovery and Link Quality Estimation in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Marjan Radi

    2014-01-01

    Full Text Available Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have only focused on the neighbor discovery problem, while only a few of them provide an integrated neighbor search and link estimation. As these protocols require a careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not yet been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications.
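
    The adaptive initialization protocols themselves are not reproduced here; as a sketch of the link quality estimation building block they integrate, the code below implements a window-based EWMA packet-reception-ratio estimator with illustrative parameter values.

```python
import random

class EwmaLinkEstimator:
    """Window-based EWMA link quality estimator: after every `window`
    probe packets, the packet reception ratio (PRR) of that window is
    folded into a smoothed estimate. Parameters are illustrative."""

    def __init__(self, alpha=0.6, window=8):
        self.alpha = alpha          # weight of the previous estimate
        self.window = window
        self.received = 0
        self.count = 0
        self.quality = None         # smoothed PRR in [0, 1]

    def update(self, packet_received: bool):
        self.received += packet_received
        self.count += 1
        if self.count == self.window:
            prr = self.received / self.window
            self.quality = prr if self.quality is None else (
                self.alpha * self.quality + (1 - self.alpha) * prr)
            self.received = self.count = 0
        return self.quality

random.seed(0)
est = EwmaLinkEstimator()
for _ in range(64):                       # simulate a lossy link with PRR ~ 0.7
    q = est.update(random.random() < 0.7)
print(f"estimated link quality ~ {q:.2f}")
```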

  17. The evaluation of an analytical protocol for the determination of substances in waste for hazard classification.

    Science.gov (United States)

    Hennebert, Pierre; Papin, Arnaud; Padox, Jean-Marie; Hasebrouck, Benoît

    2013-07-01

    The classification of waste as hazardous could soon be assessed in Europe largely using the hazard properties of its constituents, according to the Classification, Labelling and Packaging (CLP) regulation. Comprehensive knowledge of the component constituents of a given waste will therefore be necessary. An analytical protocol for determining waste composition is proposed, which includes using inductively coupled plasma (ICP) screening methods to identify major elements and gas chromatography/mass spectrometry (GC-MS) screening techniques to measure organic compounds. The method includes a gross or indicator measure of 'pools' of higher molecular weight organic substances that are taken to be less bioactive and less hazardous, and of unresolved 'mass' during the chromatography of volatile and semi-volatile compounds. The concentrations of some elements and specific compounds that are linked to specific hazard properties and are subject to specific regulation (examples include: heavy metals, chromium(VI), cyanides, organo-halogens, and PCBs) are determined by classical quantitative analysis. To check the consistency of the analysis, the sum of the concentrations (including unresolved 'pools') should give a mass balance between 90% and 110%. Thirty-two laboratory samples comprising different industrial wastes (liquids and solids) were tested by two routine service laboratories, to give circa 7000 parameter results. Despite discrepancies in some parameters, a satisfactory sum of estimated or measured concentrations (analytical balance) of 90% was reached for 20 samples (63% of the overall total) during this first test exercise, with identified reasons for most of the unsatisfactory results. Regular use of this protocol (which is now included in the French legislation) has enabled service laboratories to reach a 90% mass balance for nearly all the solid samples tested, and most of the liquid samples (difficulties were caused in some samples from polymers in solution and
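
    A minimal sketch of the 90-110% analytical mass balance check described above; the constituent list and concentrations are hypothetical.

```python
def mass_balance(constituents_mg_per_kg):
    """Sum measured constituent concentrations (mg/kg of waste, including
    unresolved organic 'pools') and express them as % of the sample mass."""
    total_mg_per_kg = sum(constituents_mg_per_kg.values())
    return 100.0 * total_mg_per_kg / 1.0e6   # 1 kg = 1e6 mg

# Hypothetical waste sample: major elements, specific compounds, organic pools.
sample = {"Ca": 180_000, "Fe": 95_000, "Zn": 12_000, "Cr(VI)": 150,
          "mineral oil pool": 310_000, "unresolved GC-MS mass": 240_000,
          "chloride": 85_000, "sulphate": 60_000}

balance = mass_balance(sample)
print(f"analytical balance = {balance:.0f} %  ->",
      "consistent" if 90 <= balance <= 110 else "re-examine analysis")
```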

  18. Weighted Moments Estimators of the Parameters for the Extreme Value Distribution Based on the Multiply Type II Censored Sample

    Directory of Open Access Journals (Sweden)

    Jong-Wuu Wu

    2013-01-01

    Full Text Available We propose the weighted moments estimators (WMEs) of the location and scale parameters for the extreme value distribution based on the multiply type II censored sample. Simulated mean squared errors (MSEs) of the best linear unbiased estimator (BLUE) and exact MSEs of the WMEs are compared to study the behavior of the different estimation methods. The results show the best estimator among the WMEs and BLUE under different combinations of censoring schemes.
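
    The weighted moments estimators for multiply type II censored samples are not reproduced here; for orientation, the sketch below shows a plain method-of-moments fit of the (uncensored) extreme value (Gumbel) distribution, using mean = mu + gamma*beta and variance = pi^2*beta^2/6.

```python
import math
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_moments_fit(x):
    """Ordinary method-of-moments estimates (location mu, scale beta)
    for the Gumbel (extreme value type I, maxima) distribution."""
    x = np.asarray(x, dtype=float)
    beta = x.std(ddof=1) * math.sqrt(6.0) / math.pi
    mu = x.mean() - EULER_GAMMA * beta
    return mu, beta

rng = np.random.default_rng(0)
sample = rng.gumbel(loc=10.0, scale=2.0, size=200)   # synthetic complete sample
mu_hat, beta_hat = gumbel_moments_fit(sample)
print(f"mu_hat = {mu_hat:.2f}, beta_hat = {beta_hat:.2f}")
```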

  19. A sampling scheme intended for tandem measurements of sodium transport and microvillous surface area in the coprodaeal epithelium of hens on high- and low-salt diets.

    Science.gov (United States)

    Mayhew, T M; Dantzer, V; Elbrønd, V S; Skadhauge, E

    1990-12-01

    A tissue sampling protocol for combined morphometric and physiological studies on the mucosa of the avian coprodaeum is presented. The morphometric goal is to estimate the surface area due to microvilli at the epithelial cell apex and the proposed scheme is illustrated using material from three White Plymouth Rock hens. The scheme is designed to satisfy sampling requirements for the unbiased estimation of surface areas by vertical sectioning coupled with cycloid test lines and it incorporates a number of useful internal checks. It relies on multi-level sampling with four levels of stereological estimation. At Level I, macroscopic estimates of coprodaeal volume are obtained. Light microscopy is employed at Level II to calculate epithelial volume density. Levels III and IV require low and high power electron microscopy to estimate the surface density of the epithelial apical border and the amplification factor due to microvilli. Worked examples of the calculation steps are provided.
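
    A minimal sketch of the Level III/IV surface estimation step, using the standard vertical-sections relation S_V = 2*I/L_T for cycloid test lines; the counts and test-line length per point are hypothetical.

```python
def surface_density(intersections, cycloid_length_per_point_um, points_on_reference):
    """Surface density from vertical sections with cycloid test lines:
    S_V = 2 * I / L_T, where I is the number of intersections of the cycloids
    with the surface trace and L_T the total cycloid length lying on the
    reference space (test-line length per point times points on reference)."""
    L_T = cycloid_length_per_point_um * points_on_reference   # um of test line
    return 2.0 * intersections / L_T                          # um^2 per um^3

# Hypothetical counts from one micrograph series.
S_V = surface_density(intersections=412, cycloid_length_per_point_um=18.4,
                      points_on_reference=96)
print(f"S_V ~ {S_V:.3f} um^2/um^3")
```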

  20. Estimating national crop yield potential and the relevance of weather data sources

    Science.gov (United States)

    Van Wart, Justin

    2011-12-01

    To determine where, when, and how to increase yields, researchers often analyze the yield gap (Yg), the difference between actual current farm yields and crop yield potential. Crop yield potential (Yp) is the yield of a crop cultivar grown under specific management limited only by temperature and solar radiation, and also by precipitation for water-limited yield potential (Yw). Yp and Yw are critical components of Yg estimations, but are very difficult to quantify, especially at larger scales because management data and especially daily weather data are scarce. A protocol was developed to estimate Yp and Yw at national scales using site-specific weather, soils and management data. Protocol procedures and inputs were evaluated to determine how to improve accuracy of Yp, Yw and Yg estimates. The protocol was also used to evaluate raw, site-specific and gridded weather database sources for use in simulations of Yp or Yw. The protocol was applied to estimate crop Yp in US irrigated maize and Chinese irrigated rice and Yw in US rainfed maize and German rainfed wheat. These crops and countries account for >20% of global cereal production. The results have significant implications for past and future studies of Yp, Yw and Yg. Accuracy of national long-term average Yp and Yw estimates was significantly improved if (i) > 7 years of simulations were performed for irrigated and > 15 years for rainfed sites, (ii) > 40% of nationally harvested area was within 100 km of all simulation sites, (iii) observed weather data coupled with satellite-derived solar radiation data were used in simulations, and (iv) planting and harvesting dates were specified within +/- 7 days of farmers' actual practices. These are much higher standards than have been applied in national estimates of Yp and Yw and this protocol is a substantial step in making such estimates more transparent, robust, and straightforward. Finally, this protocol may be a useful tool for understanding yield trends and directing

  1. Protocols for BNCT of glioblastoma multiforme at Brookhaven: Practical considerations

    Energy Technology Data Exchange (ETDEWEB)

    Chanana, A.D.; Coderre, J.A.; Joel, D.D.; Slatkin, D.N.

    1996-12-31

    In this report we discuss some issues considered in selecting initial protocols for boron neutron capture therapy (BNCT) of human glioblastoma multiforme, first among them the tolerance of normal tissues, especially the brain, to the radiation field. Radiation dose limits were based on results of human and animal exposures. Estimates of tumor control doses were based on the results of single-fraction photon therapy and single-fraction BNCT, both in humans and in experimental animals. Of the two boron compounds (BSH and BPA), BPA was chosen since an FDA-sanctioned protocol for distribution in humans was in effect at the time the first BNCT protocols were written, and therapy studies in experimental animals had shown it to be more effective than BSH.

  2. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asymptotically efficient estimates from ecologically sampled Anopheles arabiensis aquatic habitat covariates

    Directory of Open Access Journals (Sweden)

    Githure John I

    2009-09-01

    Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction
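
    The eigenfunction spatial filtering is not reproduced here; the sketch below computes the global Moran's I mentioned above for a hypothetical set of habitat counts and a symmetric 0/1 adjacency matrix.

```python
import numpy as np

def morans_I(x, W):
    """Global Moran's I: I = (n / sum(W)) * (z' W z) / (z' z),
    with z the mean-centred attribute and W a spatial weights matrix."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n = x.size
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Hypothetical larval counts at 5 habitats and a symmetric 0/1 adjacency matrix.
counts = [12, 15, 14, 3, 2]
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
# Values well above the null expectation -1/(n-1) suggest spatial clustering.
print(f"Moran's I = {morans_I(counts, W):.3f}")
```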

  3. An empirical comparison of isolate-based and sample-based definitions of antimicrobial resistance and their effect on estimates of prevalence.

    Science.gov (United States)

    Humphry, R W; Evans, J; Webster, C; Tongue, S C; Innocent, G T; Gunn, G J

    2018-02-01

    Antimicrobial resistance is primarily a problem in human medicine but there are unquantified links of transmission in both directions between animal and human populations. Quantitative assessment of the costs and benefits of reduced antimicrobial usage in livestock requires robust quantification of transmission of resistance between animals, the environment and the human population. This in turn requires appropriate measurement of resistance. To tackle this we selected two different methods for determining whether a sample is resistant - one based on screening a sample, the other on testing individual isolates. Our overall objective was to explore the differences arising from choice of measurement. A literature search demonstrated the widespread use of testing of individual isolates. The first aim of this study was to compare, quantitatively, sample level and isolate level screening. Cattle or sheep faecal samples (n=41) submitted for routine parasitology were tested for antimicrobial resistance in two ways: (1) "streak" direct culture onto plates containing the antimicrobial of interest; (2) determination of minimum inhibitory concentration (MIC) of 8-10 isolates per sample compared to published MIC thresholds. Two antibiotics (ampicillin and nalidixic acid) were tested. With ampicillin, direct culture resulted in more than double the number of resistant samples than the MIC method based on eight individual isolates. The second aim of this study was to demonstrate the utility of the observed relationship between these two measures of antimicrobial resistance to re-estimate the prevalence of antimicrobial resistance from a previous study, in which we had used "streak" cultures. Boot-strap methods were used to estimate the proportion of samples that would have tested resistant in the historic study, had we used the isolate-based MIC method instead. Our boot-strap results indicate that our estimates of prevalence of antimicrobial resistance would have been
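
    A hedged sketch of the bootstrap step: re-estimating what a streak-culture prevalence would have been under the isolate-based MIC definition, given an assumed conversion probability derived from paired samples; all counts are hypothetical, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired data: of the samples resistant by direct ("streak") culture,
# the fraction also called resistant by the 8-isolate MIC method.
n_paired, paired_mic_positive = 24, 11
p_mic_given_streak = paired_mic_positive / n_paired

# Historic survey measured with streak culture only (hypothetical counts).
n_historic, streak_positive = 300, 96

def bootstrap_prevalence(n_boot=10_000):
    est = np.empty(n_boot)
    for b in range(n_boot):
        # Resample the historic streak-positive count, then thin it by a
        # resampled conversion probability to mimic the MIC-based definition.
        pos = rng.binomial(n_historic, streak_positive / n_historic)
        conv = rng.binomial(n_paired, p_mic_given_streak) / n_paired
        est[b] = pos * conv / n_historic
    return np.percentile(est, [2.5, 50, 97.5])

lo, med, hi = bootstrap_prevalence()
print(f"MIC-based prevalence ~ {med:.2%} (95% interval {lo:.2%}-{hi:.2%})")
```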

  4. Bias in estimating the cross-sectional smoking, alcohol, obesity and diabetes associations with moderate-severe periodontitis in the Atherosclerosis Risk in Communities study: comparison of full versus partial-mouth estimates.

    Science.gov (United States)

    Akinkugbe, Aderonke A; Saraiya, Veeral M; Preisser, John S; Offenbacher, Steven; Beck, James D

    2015-07-01

    To assess whether partial-mouth protocols (PRPs) result in biased estimates of the associations between smoking, alcohol, obesity and diabetes with periodontitis. Using a sample (n = 6129) of the 1996-1998 Atherosclerosis Risk in Communities study, we used measures of probing pocket depth and clinical attachment level to identify moderate-severe periodontitis. Adjusting for confounders, unconditional binary logistic regression estimated prevalence odds ratios (POR) and 95% confidence limits. Specifically, we compared POR for smoking, alcohol, obesity and diabetes with periodontitis derived from full-mouth to those derived from 4-PRPs (Ramfjörd, National Health and Nutrition Examination survey-III, modified-NHANES-IV and 42-site-Random-site selection-method). Finally, we conducted a simple sensitivity analysis of periodontitis misclassification by changing the case definition threshold for each PRP. In comparison to full-mouth PORs, PRP PORs were biased in terms of magnitude and direction. Holding the full-mouth case definition at moderate-severe periodontitis and setting it at mild-moderate-severe for the PRPs did not consistently produce POR estimates that were either biased towards or away from the null in comparison to full-mouth estimates. Partial-mouth protocols result in misclassification of periodontitis and may bias epidemiologic measures of association. The magnitude and direction of this bias depends on choice of PRP and case definition threshold used. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
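
    For orientation, a minimal prevalence odds ratio with a Wald confidence interval from a 2x2 table is sketched below with hypothetical counts; the study itself additionally adjusts for confounders with logistic regression.

```python
import math

def prevalence_odds_ratio(a, b, c, d):
    """2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Returns the POR and its 95% Wald confidence interval."""
    por = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(por) - 1.96 * se_log)
    hi = math.exp(math.log(por) + 1.96 * se_log)
    return por, (lo, hi)

# Hypothetical counts: current smokers vs never-smokers, moderate-severe periodontitis.
por, ci = prevalence_odds_ratio(a=310, b=640, c=480, d=2400)
print(f"POR = {por:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```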

  5. On-farm comparisons of different cleaning protocols in broiler houses.

    Science.gov (United States)

    Luyckx, K Y; Van Weyenberg, S; Dewulf, J; Herman, L; Zoons, J; Vervaet, E; Heyndrickx, M; De Reu, K

    2015-08-01

    The present study evaluated the effectiveness of 4 cleaning protocols designed to reduce the bacteriological infection pressure on broiler farms and prevent food-borne zoonoses. Additionally, difficult-to-clean locations and possible sources of infection were identified. Cleaning and disinfection rounds were evaluated in 12 broiler houses on 5 farms through microbiological analyses and adenosine triphosphate hygiene monitoring. Samples were taken at 3 different times: before cleaning, after cleaning, and after disinfection. At each sampling time, swabs were taken from various locations for enumeration of the total aerobic flora and Enterococcus spp. In addition, before cleaning and after disinfection, testing for Escherichia coli and Salmonella was carried out. Finally, adenosine triphosphate swabs and agar contact plates for total aerobic flora counts were taken after cleaning and disinfection, respectively. Total aerobic flora and Enterococcus spp. counts on the swab samples showed that cleaning protocols which were preceded by an overnight soaking with water caused a higher bacterial reduction compared to protocols without a preceding soaking step. Moreover, soaking of broiler houses leads to less water consumption and reduced working time during high-pressure cleaning. No differences were found between protocols using cold or warm water during cleaning. Drinking cups, drain holes, and floor cracks were identified as critical locations for cleaning and disinfection in broiler houses. © 2015 Poultry Science Association Inc.

  6. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    Science.gov (United States)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.

  7. The Protocol of Choice for Treatment of Snake Bite

    Directory of Open Access Journals (Sweden)

    Afshin Mohammad Alizadeh

    2016-01-01

    Full Text Available The aim of the current study is to compare three different methods of treatment of snake bite to determine the most efficient one. To unify the protocol of snake bite treatment in our center, we retrospectively reviewed files of the snake-bitten patients who had been referred to us between 2010 and 2014. They were contacted for follow-up using phone calls. Demographic and on-arrival characteristics, protocol used for treatment (WHO/Haddad/GF), and outcome/complications were evaluated. Patients were entered into one of the protocol groups and compared. Of a total of 63 patients, 56 (89%) were males. Five, 19, and 28 patients were managed by Haddad, WHO, or GF protocols, respectively. Eleven patients had fallen into both GF and WHO protocols and were excluded. Serum sickness was significantly more common when the WHO protocol was used, while 100% of the compartment syndromes and 71% of deformities had been reported after the GF protocol. The most important complications were considered to be deformity, compartment syndrome, and amputation and were more frequent after the use of WHO and GF protocols (23.1% versus 76.9%; none in Haddad; P = NS). The Haddad protocol seems to be the best for treatment of snake-bitten patients in our region. However, this cannot be strictly concluded because of the limited sample size and nonsignificant P values.

  8. Analytical Method to Estimate the Complex Permittivity of Oil Samples

    Directory of Open Access Journals (Sweden)

    Lijuan Su

    2018-03-01

    Full Text Available In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by the LUT, and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated by measuring the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.

  9. Seasonal and temporal variation in release of antibiotics in hospital wastewater: estimation using continuous and grab sampling.

    Science.gov (United States)

    Diwan, Vishal; Stålsby Lundborg, Cecilia; Tamhankar, Ashok J

    2013-01-01

    The presence of antibiotics in the environment and their subsequent impact on resistance development has raised concerns globally. Hospitals are a major source of antibiotics released into the environment. To reduce these residues, research to improve knowledge of the dynamics of antibiotic release from hospitals is essential. Therefore, we undertook a study to estimate seasonal and temporal variation in antibiotic release from two hospitals in India over a period of two years. For this, 6 sampling sessions of 24 hours each were conducted in the three prominent seasons of India, at all wastewater outlets of the two hospitals, using continuous and grab sampling methods. An in-house wastewater sampler was designed for continuous sampling. Eight antibiotics from four major antibiotic groups were selected for the study. To understand the temporal pattern of antibiotic release, each of the 24-hour sessions were divided in three sub-sampling sessions of 8 hours each. Solid phase extraction followed by liquid chromatography/tandem mass spectrometry (LC-MS/MS) was used to determine the antibiotic residues. Six of the eight antibiotics studied were detected in the wastewater samples. Both continuous and grab sampling methods indicated that the highest quantities of fluoroquinolones were released in winter followed by the rainy season and the summer. No temporal pattern in antibiotic release was detected. In general, in a common timeframe, continuous sampling showed less concentration of antibiotics in wastewater as compared to grab sampling. It is suggested that continuous sampling should be the method of choice as grab sampling gives erroneous results, it being indicative of the quantities of antibiotics present in wastewater only at the time of sampling. Based on our studies, calculations indicate that from hospitals in India, an estimated 89, 1 and 25 ng/L/day of fluoroquinolones, metronidazole and sulfamethoxazole respectively, might be getting released into the
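
    A small simulation of why a single grab sample can misstate the daily average that a 24-hour continuous/composite sample tracks; the diurnal concentration profile is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
hours = np.arange(24)
# Hypothetical diurnal antibiotic concentration in hospital effluent (ng/L):
# a morning-round peak superimposed on a baseline, plus measurement noise.
conc = 400 + 900 * np.exp(-0.5 * ((hours - 10) / 2.5) ** 2) + rng.normal(0, 40, 24)

composite_mean = conc.mean()                 # 24-h continuous/composite estimate
grab_09h, grab_22h = conc[9], conc[22]       # two possible grab samples

print(f"24-h composite mean : {composite_mean:6.0f} ng/L")
print(f"grab at 09:00       : {grab_09h:6.0f} ng/L (overestimates the daily mean)")
print(f"grab at 22:00       : {grab_22h:6.0f} ng/L (underestimates the daily mean)")
```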

  10. Protein blotting protocol for beginners.

    Science.gov (United States)

    Petrasovits, Lars A

    2014-01-01

    The transfer and immobilization of biological macromolecules onto solid nitrocellulose or nylon (polyvinylidene difluoride (PVDF)) membranes subsequently followed by specific detection is referred to as blotting. DNA blots are called Southerns after the inventor of the technique, Edwin Southern. By analogy, RNA blots are referred to as northerns and protein blots as westerns (Burnette, Anal Biochem 112:195-203, 1981). With few exceptions, western blotting involves five steps, namely, sample collection, preparation, separation, immobilization, and detection. In this chapter, protocols for the entire process from sample collection to detection are described.

  11. Simulating quantum correlations as a distributed sampling problem

    International Nuclear Information System (INIS)

    Degorre, Julien; Laplante, Sophie; Roland, Jeremie

    2005-01-01

    It is known that quantum correlations exhibited by a maximally entangled qubit pair can be simulated with the help of shared randomness, supplemented with additional resources, such as communication, postselection or nonlocal boxes. For instance, in the case of projective measurements, it is possible to solve this problem with protocols using one bit of communication or making one use of a nonlocal box. We show that this problem reduces to a distributed sampling problem. We give a new method to obtain samples from a biased distribution, starting with shared random variables following a uniform distribution, and use it to build distributed sampling protocols. This approach allows us to derive, in a simpler and unified way, many existing protocols for projective measurements, and extend them to positive operator valued measurements. Moreover, this approach naturally leads to a local hidden variable model for Werner states.

  12. Sampling, feasibility, and priors in Bayesian estimation

    OpenAIRE

    Chorin, Alexandre J.; Lu, Fei; Miller, Robert N.; Morzfeld, Matthias; Tu, Xuemin

    2015-01-01

    Importance sampling algorithms are discussed in detail, with an emphasis on implicit sampling, and applied to data assimilation via particle filters. Implicit sampling makes it possible to use the data to find high-probability samples at relatively low cost, making the assimilation more efficient. A new analysis of the feasibility of data assimilation is presented, showing in detail why feasibility depends on the Frobenius norm of the covariance matrix of the noise and not on the number of va...

  13. Effect of Small Numbers of Test Results on Accuracy of Hoek-Brown Strength Parameter Estimations: A Statistical Simulation Study

    Science.gov (United States)

    Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.

    2017-12-01

    The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimations of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimations. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we discuss that, since the minimum number of required samples depends on rock type, it should correspond to some acceptable level of uncertainty in the estimations. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimations using small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
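
    In the spirit of the study, the sketch below generates small synthetic triaxial data sets from the intact-rock criterion sigma1 = sigma3 + sigma_c*sqrt(m*sigma3/sigma_c + 1), refits them, and reports the spread of the estimates; the "true" parameters, sample size, and noise level are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def hoek_brown(sigma3, sigma_c, m):
    """Intact-rock Hoek-Brown criterion (s = 1, a = 0.5)."""
    return sigma3 + sigma_c * np.sqrt(m * sigma3 / sigma_c + 1.0)

true_sigma_c, true_m = 80.0, 12.0     # assumed "true" rock (MPa, dimensionless)
sigma3 = np.linspace(0.0, 20.0, 5)    # small triaxial programme: 5 specimens

est = []
for _ in range(1000):
    # 8% multiplicative scatter on the measured peak strengths.
    sigma1 = hoek_brown(sigma3, true_sigma_c, true_m) * rng.normal(1.0, 0.08, sigma3.size)
    try:
        popt, _ = curve_fit(hoek_brown, sigma3, sigma1, p0=[60.0, 10.0],
                            bounds=([1.0, 1.0], [500.0, 50.0]))
        est.append(popt)
    except RuntimeError:
        continue   # occasional non-convergence with noisy small samples

est = np.array(est)
p5, p95 = np.percentile(est[:, 0], [5, 95])
print(f"sigma_c: mean {est[:, 0].mean():.1f} MPa, 5-95% range {p5:.1f}-{p95:.1f}")
p5, p95 = np.percentile(est[:, 1], [5, 95])
print(f"m      : mean {est[:, 1].mean():.1f}, 5-95% range {p5:.1f}-{p95:.1f}")
```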

  14. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  15. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-01-01

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  16. Sample Based Unit Liter Dose Estimates

    International Nuclear Information System (INIS)

    JENSEN, L.

    2000-01-01

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision to similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCF) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000)
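
    A minimal sketch of forming a unit liter dose as the sum of concentration times dose conversion factor over radionuclides; the concentrations and DCFs below are hypothetical placeholders, not values developed in the report.

```python
# Unit liter dose: D (Sv/L) = sum_i C_i (uCi/L) * DCF_i (Sv/uCi).
# All concentrations and DCFs are hypothetical placeholders.
concentration_uCi_per_L = {"Cs-137": 210.0, "Sr-90": 95.0, "Am-241": 0.8}
dcf_Sv_per_uCi = {"Cs-137": 3.0e-7, "Sr-90": 1.1e-6, "Am-241": 3.6e-3}

unit_liter_dose = sum(concentration_uCi_per_L[n] * dcf_Sv_per_uCi[n]
                      for n in concentration_uCi_per_L)
print(f"unit liter dose ~ {unit_liter_dose:.2e} Sv/L")
```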

  17. An electronic specimen collection protocol schema (eSCPS). Document architecture for specimen management and the exchange of specimen collection protocols between biobanking information systems.

    Science.gov (United States)

    Eminaga, O; Semjonow, A; Oezguer, E; Herden, J; Akbarov, I; Tok, A; Engelmann, U; Wille, S

    2014-01-01

    The integrity of collection protocols in biobanking is essential for a high-quality sample preparation process. However, there is not currently a well-defined universal method for integrating collection protocols in the biobanking information system (BIMS). Therefore, an electronic schema of the collection protocol that is based on Extensible Markup Language (XML) is required to maintain the integrity and enable the exchange of collection protocols. The development and implementation of an electronic specimen collection protocol schema (eSCPS) was performed at two institutions (Muenster and Cologne) in three stages. First, we analyzed the infrastructure that was already established at both the biorepository and the hospital information systems of these institutions and determined the requirements for the sufficient preparation of specimens and documentation. Second, we designed an eSCPS according to these requirements. Finally, a prospective study was conducted to implement and evaluate the novel schema in the current BIMS. We designed an eSCPS that provides all of the relevant information about collection protocols. Ten electronic collection protocols were generated using the supplementary Protocol Editor tool, and these protocols were successfully implemented in the existing BIMS. Moreover, an electronic list of collection protocols for the current studies being performed at each institution was included, new collection protocols were added, and the existing protocols were redesigned to be modifiable. The documentation time was significantly reduced after implementing the eSCPS (5 ± 2 min vs. 7 ± 3 min; p = 0.0002). The eSCPS improves the integrity and facilitates the exchange of specimen collection protocols in the existing open-source BIMS.

  18. Adaptive phase estimation with squeezed thermal light

    DEFF Research Database (Denmark)

    Berni, A. A.; Madsen, Lars Skovgaard; Lassen, Mikael Østergaard

    2013-01-01

    Summary form only given. The use of quantum states of light in optical interferometry improves the precision in the estimation of a phase shift, paving the way for applications in quantum metrology, computation and cryptography. Sub-shot noise phase sensing can for example be achieved by injecting...... investigate the performances of such protocol under the realistic assumption of thermalization of the probe state. Indeed, adaptive phase estimation schemes with squeezed states and Bayesian processing of homodyne data have been shown to be asymptotically optimal in the pure case, thus approaching the quantum...... Cramér-Rao bound. In our protocol we take advantage of the enhanced sensitivity of homodyne detection in proximity of the optimal phase which maximizes the homodyne Fisher information. A squeezed thermal probe state (signal) undergoes an unknown phase shift. The first estimation step involves...

  19. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to Sampling: Abstract; Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Abstract; Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; A.2 Appendix to Chapter 2; A. More on Equal Probability Sampling; A. Horvitz-Thompson Estimator; A. Sufficiency; A. Likelihood; A. Non-Existence Theorem. More Intricacies: Abstract; Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Abstract; Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...

  20. Mac protocols for wireless sensor network (wsn): a comparative study

    International Nuclear Information System (INIS)

    Arshad, J.; Akram, Q.; Saleem, Y.

    2014-01-01

    Data communication between nodes is carried out under a Medium Access Control (MAC) protocol, which is defined at the data link layer. MAC protocols are responsible for communication and coordination between nodes according to the defined standards in WSN (Wireless Sensor Networks). The design of a MAC protocol should also address the issues of energy efficiency and transmission efficiency. A number of MAC protocols for WSN have been proposed in the literature. In this paper, nine MAC protocols, namely S-MAC, T-MAC, Wise-MAC, Mu-MAC, Z-MAC, A-MAC, D-MAC, B-MAC and B-MAC+, are explored, studied and analyzed. These nine protocols are classified into contention-based and hybrid (combination of contention- and schedule-based) MAC protocols. The goal of this comparative study is to provide a basis for MAC protocols and to highlight the different mechanisms used, with respect to parameters for the evaluation of energy and transmission efficiency in WSN. This study also aims to give the reader a better understanding of the concepts, processes and flow of information used in these MAC protocols for WSN. A comparison with respect to energy reservation scheme, idle listening avoidance, latency, fairness, data synchronization, and throughput maximization is presented. The analysis shows that contention-based MAC protocols are less energy efficient than hybrid MAC protocols. From the analysis of contention-based MAC protocols in terms of energy consumption, it was observed that protocols based on preamble sampling consume less energy than protocols based on static or dynamic sleep schedules. (author)
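
    As a rough illustration of the preamble-sampling result mentioned in this record, the toy model below compares the average radio power of a fixed sleep-schedule node (S-MAC-like) with a low-power-listening node (B-MAC-like). All power and timing figures are invented for illustration and do not correspond to measured hardware or to the protocols' actual parameters.

        # Toy duty-cycle energy model (illustrative numbers only).
        P_RX = 60e-3      # W, radio listening/receiving (assumed)
        P_SLEEP = 3e-6    # W, radio asleep (assumed)

        def scheduled_sleep_power(active_fraction):
            """Fixed sleep schedule (S-MAC style): listen a fixed fraction of the time."""
            return active_fraction * P_RX + (1 - active_fraction) * P_SLEEP

        def preamble_sampling_power(check_interval, sample_time, msgs_per_s, msg_time):
            """Low-power listening (B-MAC style): brief periodic channel samples, plus,
            in the worst case, reception of a full preamble and message per packet."""
            idle = (sample_time / check_interval) * P_RX
            rx = msgs_per_s * (check_interval + msg_time) * P_RX
            awake = sample_time / check_interval + msgs_per_s * (check_interval + msg_time)
            return idle + rx + max(0.0, 1 - awake) * P_SLEEP

        print("S-MAC-like, 10% duty cycle:", scheduled_sleep_power(0.10), "W")
        print("LPL, 100 ms checks, 1 msg/min:",
              preamble_sampling_power(0.1, 0.003, 1 / 60, 0.01), "W")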

  1. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Full Text Available Classical Respondent-Driven Sampling (RDS) estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
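
    A toy simulation in the spirit of the question examined here, under invented assumptions (an Erdős-Rényi network, roughly 30% trait prevalence, a single recruitment chain): a with-replacement random walk, matching the Markov model behind classical RDS, is compared with a without-replacement walk, and the trait proportion is estimated with inverse-degree weights in the style of the RDS-II (Volz-Heckathorn) estimator. This is a sketch, not a reproduction of the paper's simulations or proofs.

        import random
        import networkx as nx

        random.seed(0)
        G = nx.erdos_renyi_graph(2000, 0.005, seed=0)
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
        trait = {v: (random.random() < 0.3) for v in G}   # ~30% prevalence (made up)

        def rds_walk(G, n, with_replacement=True):
            node = random.choice(list(G))
            sample = [node]
            while len(sample) < n:
                nbrs = list(G.neighbors(node))
                if not with_replacement:
                    # Prefer unvisited neighbours; fall back to any neighbour if stuck.
                    nbrs = [u for u in nbrs if u not in sample] or list(G.neighbors(node))
                node = random.choice(nbrs)
                sample.append(node)
            return sample

        def rds2_estimate(G, sample, trait):
            # Inverse-degree weighting corrects for degree-biased inclusion.
            w = [1 / G.degree(v) for v in sample]
            return sum(wi for wi, v in zip(w, sample) if trait[v]) / sum(w)

        for wr in (True, False):
            s = rds_walk(G, 300, with_replacement=wr)
            print("with replacement" if wr else "without replacement",
                  round(rds2_estimate(G, s, trait), 3))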

  2. Two mini-preparation protocols to DNA extraction from plants with ...

    African Journals Online (AJOL)

    AJB SERVER

    2006-10-16

    Oct 16, 2006 ... samples to process and it is also an inexpensive protocol. This method also ... because many of those chemicals inhibit PCR reactions. (Pandey et al., 1996) ... Spin at 15,000 rpm for 15 min and wash the DNA pellet with 1.2 ml ... Protocol: To 200 mg frozen and ground tissue plant material, add 900 µl of.

  3. Better Fire Emissions Estimates for Tricky Species Illustrated with a Simple Empirical Burn-to-Sample Plume Model

    Science.gov (United States)

    Chatfield, R. B.; Andreae, M. O.; Lareau, N.

    2017-12-01

    Methodologies for estimating emission factors (EFs) and broader emission relationships (ERs) (e.g., for O3 production or aerosol absorption) have been difficult to make accurate and convincing; this is largely due to non-fire effects on both CO2 and fire-emitted trace species. We present a new view of these multiple effects as they affect downwind tracer samples observed by aircraft in NASA's ARCTAS and SEAC4RS airborne missions. This view leads to our method for estimating ERs and EFs that allows spatially detailed views focusing on individual samples, a Mixed Effects Emission Ratio Technique (MERET). We concentrate on presenting a generalized viewpoint: a simple idealized model of a fire plume entraining air from near-flames upward and then outward to a sampling point, a view based on observations of typical situations. The actual evolution of a plume can depend intricately on the full history of entrainment, the entrained concentration levels of CO2 and tracer species, and mixing. Observations suggest that our simple plume model, with just two (analyzed) values for entrained CO2 and one or potentially two values for environmental concentrations of each tracer, can serve surprisingly well for mixed-effects regression estimates. Such detail appears imperative for long-lived gases like CH4, CO, and N2O. In particular, it is difficult to distinguish fire-sourced emissions from air entrained near the flames, entrained in a way proportional to fire intensity. These entrained concentrations may vary significantly from those later in plume evolution. In addition, such detail also highlights the behavior of emissions that react on the path to sampling, e.g. fire-sourced or entrained urban NOx. Some caveats regarding poor sampling situations, and some warning signs, based on this empirical plume description and on MERET analyses, are demonstrated. Some information is available when multiple tracers are analyzed. MERET estimates for ERs of short and these long-lived species are
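
    MERET itself relies on mixed-effects regression, which is not reproduced here; the sketch below shows only the simpler underlying idea of an emission-ratio estimate, regressing the tracer excess above an assumed background on the CO2 excess above background. All numbers are synthetic.

        import numpy as np

        rng = np.random.default_rng(42)

        # Synthetic plume samples: CO2 and CO in ppm, with assumed backgrounds.
        co2_bg, co_bg = 400.0, 0.12          # assumed background mixing ratios
        true_er = 0.08                        # mol CO per mol CO2 emitted (made up)
        d_co2 = rng.uniform(5, 80, size=60)   # CO2 enhancement in each sample
        co2 = co2_bg + d_co2
        co = co_bg + true_er * d_co2 + rng.normal(0, 0.2, size=60)

        # Emission ratio = slope of tracer excess vs CO2 excess, forced through the origin.
        x = co2 - co2_bg
        y = co - co_bg
        er_hat = np.sum(x * y) / np.sum(x * x)
        print(f"estimated CO/CO2 emission ratio: {er_hat:.3f} (true {true_er})")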

  4. Global, regional and national levels and trends of preterm birth rates for 1990 to 2014: protocol for development of World Health Organization estimates.

    Science.gov (United States)

    Vogel, Joshua P; Chawanpaiboon, Saifon; Watananirun, Kanokwaroon; Lumbiganon, Pisake; Petzold, Max; Moller, Ann-Beth; Thinkhamrop, Jadsada; Laopaiboon, Malinee; Seuc, Armando H; Hogan, Daniel; Tunçalp, Ozge; Allanson, Emma; Betrán, Ana Pilar; Bonet, Mercedes; Oladapo, Olufemi T; Gülmezoglu, A Metin

    2016-06-17

    The official WHO estimates of preterm birth are an essential global resource for assessing the burden of preterm birth and developing public health programmes and policies. This protocol describes the methods that will be used to identify, critically appraise and analyse all eligible preterm birth data, in order to develop global, regional and national level estimates of levels and trends in preterm birth rates for the period 1990-2014. We will conduct a systematic review of civil registration and vital statistics (CRVS) data on preterm birth for all WHO Member States, via national Ministries of Health and Statistics Offices. For Member States with absent, limited or lower-quality CRVS data, a systematic review of surveys and/or research studies will be conducted. Modelling will be used to develop country, regional and global rates for 2014, with time trends for Member States where sufficient data are available. Member States will be invited to review the methodology and provide additional eligible data via a country consultation before final estimates are developed and disseminated. This research will be used to generate estimates on the burden of preterm birth globally for 1990 to 2014. We invite feedback on the methodology described, and call on the public health community to submit pertinent data for consideration. Registered at PROSPERO: CRD42015027439. Contact: pretermbirth@who.int.

  5. Respondent driven sampling: determinants of recruitment and a method to improve point estimation.

    Directory of Open Access Journals (Sweden)

    Nicky McCreesh

    Full Text Available Respondent-driven sampling (RDS) is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore if biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview. Using data from the total population, and the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon, and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods), and also of presentation for interview if offered a coupon, by age and socioeconomic status group. Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared errors by 19-29%), but had little effect for sexual activity or HIV status. Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.
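
    A minimal sketch of the weighting idea, with invented numbers: each respondent is weighted by the inverse of an assumed probability of being offered a coupon times an assumed probability of presenting for interview (in the study these probabilities were derived from recruiter-reported data), and a weighted proportion is compared with the naive one.

        import numpy as np

        rng = np.random.default_rng(7)

        # Hypothetical sample of 400 respondents: an age-group label (0 = young, 1 = old)
        # and a binary outcome of interest.
        age_group = rng.integers(0, 2, size=400)
        outcome = rng.random(400) < np.where(age_group == 0, 0.5, 0.2)

        # Assumed probabilities, by age group, of being offered a coupon and of
        # presenting for interview if offered one (illustrative values only).
        p_offer = np.where(age_group == 0, 0.4, 0.8)
        p_present = np.where(age_group == 0, 0.7, 0.9)

        weights = 1.0 / (p_offer * p_present)

        naive = outcome.mean()
        weighted = np.sum(weights * outcome) / np.sum(weights)
        print(f"naive proportion: {naive:.3f}, inverse-probability-weighted: {weighted:.3f}")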

  6. Modeling the potential area of occupancy at fine resolution may reduce uncertainty in species range estimates

    DEFF Research Database (Denmark)

    Jiménez-Alfaro, Borja; Draper, David; Nogues, David Bravo

    2012-01-01

    and maximum entropy modeling to assess whether different sampling (expert versus systematic surveys) may affect AOO estimates based on habitat suitability maps, and the differences between such measurements and traditional coarse-grid methods. Fine-scale models performed robustly and were not influenced...... by survey protocols, providing similar habitat suitability outputs with high spatial agreement. Model-based estimates of potential AOO were significantly smaller than AOO measures obtained from coarse-scale grids, even if the first were obtained from conservative thresholds based on the Minimal Predicted...... permit comparable measures among species. We conclude that estimates of AOO based on fine-resolution distribution models are more robust tools for risk assessment than traditional systems, allowing a better understanding of species ranges at habitat level....

  7. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data.

    Science.gov (United States)

    O'Reilly, Joseph E; Donoghue, Philip C J

    2018-03-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarise the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
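
    To make the contrast concrete, here is a bare-bones majority-rule consensus sketch, assuming each sampled tree is represented simply as its set of clades (frozensets of taxon labels); only clades occurring in more than half of the sample are retained. The three toy topologies are invented, and this is not the implementation used by standard phylogenetics packages.

        from collections import Counter

        # Three sampled trees on taxa {A, B, C, D}, each given as its set of
        # non-trivial clades. Topologies invented for illustration.
        posterior_sample = [
            {frozenset("AB"), frozenset("ABC")},
            {frozenset("AB"), frozenset("ABD")},
            {frozenset("AB"), frozenset("ABC")},
        ]

        counts = Counter(clade for tree in posterior_sample for clade in tree)
        n = len(posterior_sample)

        # Majority-rule consensus: keep clades present in more than 50% of sampled trees.
        mrc = {clade: counts[clade] / n for clade in counts if counts[clade] / n > 0.5}
        for clade, support in sorted(mrc.items(), key=lambda kv: -kv[1]):
            print(set(clade), f"support = {support:.2f}")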

  8. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    Science.gov (United States)

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We used Microsoft Windows® as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access® as the database tool to develop this software. We used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations from serum creatinine are performed using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. The software also enables storage and easy retrieval of the raw data, patient information and calculated GFR for further processing and comparison. This is user-friendly software for calculating GFR by various plasma sampling methods and blood parameters. The software is also a good system for storing the raw and processed data for future analysis.
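
    Among the serum-creatinine methods listed, the Cockcroft-Gault formula is simple enough to sketch; the function below estimates creatinine clearance (commonly used as a GFR surrogate) from made-up patient values. This is a generic illustration, not code from the described software, and the plasma-sampling calculation with Russell's formula is not reproduced here.

        def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, female):
            """Estimated creatinine clearance (ml/min) by the Cockcroft-Gault formula."""
            crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
            return crcl * 0.85 if female else crcl

        # Example patient (made-up values).
        print(f"{cockcroft_gault_crcl(60, 70, 1.1, female=False):.1f} ml/min")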

  9. Can a sample of Landsat sensor scenes reliably estimate the global extent of tropical deforestation?

    Science.gov (United States)

    R. L. Czaplewski

    2003-01-01

    Tucker and Townshend (2000) conclude that wall-to-wall coverage is needed to avoid gross errors in estimations of deforestation rates because tropical deforestation is concentrated along roads and rivers. They specifically question the reliability of the 10% sample of Landsat sensor scenes used in the global remote sensing survey conducted by the Food and...

  10. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    Full Text Available The length distribution of catches represents a fundamental source of information for estimating growth and the spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum sample size of individuals at each fishing set in order to obtain a representative sample of the length distribution and the proportion of juveniles in the fishing set. To that end, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust estimates of length distribution and juvenile proportion. Though this work is applied to the anchovy fishery, the procedure can be applied to any fishery, either for on-board or inland biometric measurements.
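
    The trade-off studied here can be sketched by subsampling a large set of lengths at increasing sample sizes and watching the spread of the estimated juvenile proportion shrink; the length distribution and the 12 cm juvenile cut-off below are assumptions made for illustration, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "fishing set": 5000 anchovy lengths in cm (invented distribution).
        lengths = rng.normal(13.0, 1.5, size=5000)
        juvenile_cutoff_cm = 12.0           # assumed juvenile/adult cut-off

        def juvenile_proportion(sample):
            return np.mean(sample < juvenile_cutoff_cm)

        true_p = juvenile_proportion(lengths)
        for n in (10, 30, 60, 120, 240):
            estimates = [juvenile_proportion(rng.choice(lengths, size=n, replace=False))
                         for _ in range(1000)]
            print(f"n={n:4d}  mean={np.mean(estimates):.3f}  sd={np.std(estimates):.3f}"
                  f"  (true p={true_p:.3f})")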

  11. Robust experiment design for estimating myocardial β adrenergic receptor concentration using PET

    International Nuclear Information System (INIS)

    Salinas, Cristian; Muzic, Raymond F. Jr.; Ernsberger, Paul; Saidel, Gerald M.

    2007-01-01

    Myocardial β adrenergic receptor (β-AR) concentration can substantially decrease in congestive heart failure and significantly increase in chronic volume overload, such as in severe aortic valve regurgitation. Positron emission tomography (PET) with an appropriate ligand-receptor model can be used for noninvasive estimation of myocardial β-AR concentration in vivo. An optimal design of the experiment protocol, however, is needed for sufficiently precise estimates of β-AR concentration in a heterogeneous population. Standard methods of optimal design do not account for a heterogeneous population with a wide range of β-AR concentrations and other physiological parameters and consequently are inadequate. To address this, we have developed a methodology to design a robust two-injection protocol that provides reliable estimates of myocardial β-AR concentration in normal and pathologic states. A two-injection protocol of the high-affinity β-AR antagonist [18F]-(S)-fluorocarazolol was designed based on a computer-generated (or synthetic) population incorporating a wide range of β-AR concentrations. Timing and dosage of the ligand injections were optimally designed with a minimax criterion to provide the least bad β-AR estimates for the worst case in the synthetic population. This robust experiment design for PET was applied to experiments with pigs before and after β-AR upregulation by chemical sympathectomy. Estimates of β-AR concentration were found by minimizing the difference between the model-predicted and experimental PET data. With this robust protocol, estimates of β-AR concentration showed high precision in both normal and pathologic states. The increase in β-AR concentration after sympathectomy predicted noninvasively with PET is consistent with the increase shown by in vitro assays in pig myocardium. A robust experiment protocol was designed for PET that yields reliable estimates of β-AR concentration in a population with normal and pathologic

  12. PROTOCOL FOR EXAMINATION OF THE INNER CAN CLOSURE WELD REGION FOR 3013 DE CONTAINERS

    Energy Technology Data Exchange (ETDEWEB)

    Mickalonis, J.

    2014-09-16

    The protocol for the examination of the inner can closure weld region (ICCWR) for 3013 DE containers is presented within this report. The protocol includes sectioning of the inner can lid section, documenting the surface condition, measuring corrosion parameters, and storing of samples. This protocol may change as the investigation develops since findings may necessitate additional steps be taken. Details of the previous analyses, which formed the basis for this protocol, are also presented.

  13. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung......Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid...... is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance...
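
    A small simulation in the spirit of this record, for the one-dimensional case: the mean of an arbitrary test function over [0, 1) is estimated by systematic sampling with a random start, first with an exact periodic grid and then with the grid points perturbed by random placement errors, and the variances of the two estimators are compared. The test function and the error magnitude are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(3)
        f = lambda x: np.sin(2 * np.pi * 3 * x) + x ** 2     # arbitrary test function
        n = 20                                                # sample points per grid

        def systematic_estimate(jitter_sd=0.0):
            start = rng.uniform(0, 1 / n)                     # random start, spacing 1/n
            pts = (start + np.arange(n) / n + rng.normal(0, jitter_sd, n)) % 1.0
            return f(pts).mean()

        reps = 20000
        exact = [systematic_estimate(0.0) for _ in range(reps)]
        jittered = [systematic_estimate(0.01) for _ in range(reps)]
        print("variance, exact grid:    ", np.var(exact))
        print("variance, jittered grid: ", np.var(jittered))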

  14. Evaluating Protocol Lifecycle Time Intervals in HIV/AIDS Clinical Trials

    Science.gov (United States)

    Schouten, Jeffrey T.; Dixon, Dennis; Varghese, Suresh; Cope, Marie T.; Marci, Joe; Kagan, Jonathan M.

    2014-01-01

    Background Identifying efficacious interventions for the prevention and treatment of human diseases depends on the efficient development and implementation of controlled clinical trials. Essential to reducing the time and burden of completing the clinical trial lifecycle is determining which aspects take the longest, delay other stages, and may lead to better resource utilization without diminishing scientific quality, safety, or the protection of human subjects. Purpose In this study we modeled time-to-event data to explore relationships between clinical trial protocol development and implementation times, as well as identify potential correlates of prolonged development and implementation. Methods We obtained time interval and participant accrual data from 111 interventional clinical trials initiated between 2006 and 2011 by NIH’s HIV/AIDS Clinical Trials Networks. We determined the time (in days) required to complete defined phases of clinical trial protocol development and implementation. Kaplan-Meier estimates were used to assess the rates at which protocols reached specified terminal events, stratified by study purpose (therapeutic, prevention) and phase group (pilot/phase I, phase II, and phase III/IV). We also examined several potential correlates of prolonged development and implementation intervals. Results Even though phase grouping did not determine development or implementation times of either therapeutic or prevention studies, overall we observed wide variation in protocol development times. Moreover, we detected a trend toward phase III/IV therapeutic protocols exhibiting longer development (median 2 ½ years) and implementation times (>3 years). We also found that protocols exceeding the median number of days for completing the development interval had significantly longer implementation times. Limitations The use of a relatively small set of protocols may have limited our ability to detect differences across phase groupings. Some timing effects
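
    A pure-Python sketch of the Kaplan-Meier (product-limit) estimator applied to made-up protocol development times in days, with right-censoring for protocols still in development at last follow-up; this only illustrates the kind of time-to-event summary the study used, not its data or code.

        # (time_in_days, event_observed) pairs; event_observed=False means the protocol
        # was still in development at last follow-up (right-censored). Values invented.
        data = [(120, True), (200, True), (200, False), (340, True),
                (400, False), (520, True), (610, True), (900, False)]

        def kaplan_meier(data):
            """Return [(t, S(t))] at each observed event time (product-limit estimator)."""
            data = sorted(data)
            at_risk = len(data)
            surv, curve = 1.0, []
            i = 0
            while i < len(data):
                t = data[i][0]
                events = sum(1 for tt, e in data if tt == t and e)
                ties = sum(1 for tt, _ in data if tt == t)
                if events:
                    surv *= 1 - events / at_risk
                    curve.append((t, surv))
                at_risk -= ties
                i += ties
            return curve

        for t, s in kaplan_meier(data):
            print(f"t = {t:4d} days   S(t) = {s:.3f}")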

  15. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  16. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  17. Wipe sampling - review of the literature

    International Nuclear Information System (INIS)

    Souza, Daiane Cristini Barbosa de; Vicente, Roberto

    2011-01-01

    Methods for characterization of solid, non-compactable radioactive wastes contaminated on the surface are developed aiming at estimating the waste radioisotopic inventory for regulatory compliance and operational purposes. The wastes of interest here are mainly composed of plastic, metallic, or other material parts originated in the decommissioning and maintenance operations of nuclear facilities. One way of measuring surface contamination is the indirect method of wiping the contaminated surface and counting the wipe, a common method of detecting non-fixed contamination in the radiation protection routine. The wipe sampling is an important tool in controlling the quality of the workplace in nuclear and radioactive facilities. Although radioprotection regulations establish quantitative limits, the practice in the radiation protection routine is to use wipe sampling as a qualitative measurement. To produce useful quantitative results for inventorying radioactive wastes, a quantitative approach must be adopted. A previous paper presented by the authors at the last INAC Conference discussed alternative wipe materials and protocols. The method of wipe sampling has undergone only small changes since it came into use, but it is still the object of study, as attested by many recent papers and patents on the subject. This article consists of a literature review. Results of a survey of the literature about wipe sampling techniques that can be applied to waste characterization are presented. (author)

  18. Estimating cigarette tax avoidance and evasion: evidence from a national sample of littered packs.

    Science.gov (United States)

    Barker, Dianne C; Wang, Shu; Merriman, David; Crosby, Andrew; Resnick, Elissa A; Chaloupka, Frank J

    2016-10-01

    A number of recent studies document the proportion of all cigarette packs that are 'contraband' using discarded packs to measure tax avoidance and evasion, which we call tax non-compliance. To date, academic studies using discarded packs have focused on relatively small geographical areas such as a city or a neighbourhood. We visited 160 communities across 38 US states in 2012 and collected data from littered cigarette packs as part of the State and Community Tobacco Control (SCTC) Research Initiative and the Bridging the Gap Community Obesity Measures Project (BTG-COMP). Data collectors were trained in a previously tested littered pack data collection protocol. Field teams collected 2116 packs with cellophane across 132 communities. We estimate a national tax non-compliance rate of 18.5% with considerable variation across regions. Suburban areas had lower non-compliance than urban areas, as did areas with high and low median household incomes compared with middle-income areas. We present the first academic national study of tax non-compliance using littered cigarette packs. We demonstrate the feasibility of meaningful large-scale data collection using this methodology and document considerable variation in tax non-compliance across areas, suggesting that both policy differences and geography may be important in control of illicit tobacco use. Given the geography of open borders among countries with varying tax rates, this simple methodology may be appropriate to estimate tax non-compliance in countries that use tax stamps or other pack markings, such as health warnings.

  19. National protocol framework for the inventory and monitoring of bees

    Science.gov (United States)

    Droege, Sam; Engler, Joseph D.; Sellers, Elizabeth A.; Lee O'Brien,

    2016-01-01

    This national protocol framework is a standardized tool for the inventory and monitoring of the approximately 4,200 species of native and non-native bee species that may be found within the National Wildlife Refuge System (NWRS) administered by the U.S. Fish and Wildlife Service (USFWS). However, this protocol framework may also be used by other organizations and individuals to monitor bees in any given habitat or location. Our goal is to provide USFWS stations within the NWRS (NWRS stations are land units managed by the USFWS such as national wildlife refuges, national fish hatcheries, wetland management districts, conservation areas, leased lands, etc.) with techniques for developing an initial baseline inventory of what bee species are present on their lands and to provide an inexpensive, simple technique for monitoring bees continuously and for monitoring and evaluating long-term population trends and management impacts. The latter long-term monitoring technique requires a minimal time burden for the individual station, yet can provide a good statistical sample of changing populations that can be investigated at the station, regional, and national levels within the USFWS’ jurisdiction, and compared to other sites within the United States and Canada. This protocol framework was developed in cooperation with the United States Geological Survey (USGS), the USFWS, and a worldwide network of bee researchers who have investigated the techniques and methods for capturing bees and tracking population changes. The protocol framework evolved from field and lab-based investigations at the USGS Bee Inventory and Monitoring Laboratory at the Patuxent Wildlife Research Center in Beltsville, Maryland starting in 2002 and was refined by a large number of USFWS, academic, and state groups. It includes a Protocol Introduction and a set of 8 Standard Operating Procedures or SOPs and adheres to national standards of protocol content and organization. The Protocol Narrative

  20. An Anonymous Surveying Protocol via Greenberger-Horne-Zeilinger States

    Science.gov (United States)

    Naseri, Mosayeb; Gong, Li-Hua; Houshmand, Monireh; Matin, Laleh Farhang

    2016-10-01

    A new experimentally feasible anonymous survey protocol with authentication using Greenberger-Horne-Zeilinger (GHZ) entangled states is proposed. In this protocol, a chief executive officer (CEO) of a firm or company is trying to find out the effect of a possible action. In order to prepare a fair vote, the CEO would like to conduct an anonymous survey; he is interested in the total response for the whole company and does not want a partial estimate for each department. In our proposal, there are two voters, Alice and Bob, voting on a question with a response of either "yes" or "no", and a tallyman, whose responsibility is to determine whether or not they have cast the same vote. In the proposed protocol the total response of the voters is calculated without revealing their individual votes.