WorldWideScience

Sample records for random sampling design

  1. Performance of Random Effects Model Estimators under Complex Sampling Designs

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  2. Conflict-cost based random sampling design for parallel MRI with low rank constraints

    Science.gov (United States)

    Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie

    2015-05-01

In compressed sensing MRI, it is very important to design the sampling pattern for random sampling. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high density region, and the sampling therefore becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to a k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.
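
    To make the sampling idea concrete, below is a minimal, hypothetical Python sketch (not the authors' algorithm) of a variable-density 2D undersampling mask: candidate k-space locations are drawn from a generalized-Gaussian density profile, and each new sample is placed at the candidate with the lowest conflict cost, i.e., the fewest already-selected samples nearby. All function names and parameter values are illustrative assumptions.

```python
# Illustrative sketch (not the authors' exact algorithm): variable-density
# 2D k-space undersampling where each new sample is placed at the candidate
# location with the lowest "conflict cost" (count of nearby existing samples),
# drawn preferentially from a generalized-Gaussian density profile.
import numpy as np

def generalized_gaussian_profile(n, beta=2.0, sigma=0.3):
    """Sampling probability vs. normalized distance from the k-space center.
    beta=2 gives a Gaussian-like profile; large beta approaches uniform."""
    r = np.abs(np.linspace(-1, 1, n))
    return np.exp(-(r / sigma) ** beta)

def conflict_cost(mask, y, x, radius=2):
    """Number of already-selected samples within `radius` of (y, x)."""
    y0, y1 = max(0, y - radius), min(mask.shape[0], y + radius + 1)
    x0, x1 = max(0, x - radius), min(mask.shape[1], x + radius + 1)
    return mask[y0:y1, x0:x1].sum()

def vd_conflict_mask(ny, nx, n_samples, beta=2.0, sigma=0.3, n_candidates=20, seed=0):
    rng = np.random.default_rng(seed)
    prob = np.outer(generalized_gaussian_profile(ny, beta, sigma),
                    generalized_gaussian_profile(nx, beta, sigma))
    flat_prob = (prob / prob.sum()).ravel()
    mask = np.zeros((ny, nx), dtype=int)
    for _ in range(n_samples):
        # Draw several candidate locations from the density profile,
        # then keep the one with the least conflict cost.
        cand = rng.choice(ny * nx, size=n_candidates, p=flat_prob)
        ys, xs = np.unravel_index(cand, (ny, nx))
        costs = [conflict_cost(mask, y, x) for y, x in zip(ys, xs)]
        best = int(np.argmin(costs))
        mask[ys[best], xs[best]] = 1
    return mask

mask = vd_conflict_mask(64, 64, n_samples=820)
print("acceleration factor ~", 64 * 64 / mask.sum())
```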

  3. HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA

    Science.gov (United States)

    Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...

  4. Sample size and power for a stratified doubly randomized preference design.

    Science.gov (United States)

    Cameron, Briana; Esserman, Denise A

    2016-11-21

    The two-stage (or doubly) randomized preference trial design is an important tool for researchers seeking to disentangle the role of patient treatment preference on treatment response through estimation of selection and preference effects. Up until now, these designs have been limited by their assumption of equal preference rates and effect sizes across the entire study population. We propose a stratified two-stage randomized trial design that addresses this limitation. We begin by deriving stratified test statistics for the treatment, preference, and selection effects. Next, we develop a sample size formula for the number of patients required to detect each effect. The properties of the model and the efficiency of the design are established using a series of simulation studies. We demonstrate the applicability of the design using a study of Hepatitis C treatment modality, specialty clinic versus mobile medical clinic. In this example, a stratified preference design (stratified by alcohol/drug use) may more closely capture the true distribution of patient preferences and allow for a more efficient design than a design which ignores these differences (unstratified version). © The Author(s) 2016.

  5. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  6. Incorporating covariance estimation uncertainty in spatial sampling design for prediction with trans-Gaussian random fields

    Directory of Open Access Journals (Sweden)

Gunter Spöck

    2015-05-01

Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data are transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.

  7. Rationale, design, methodology and sample characteristics for the Vietnam pre-conceptual micronutrient supplementation trial (PRECONCEPT: a randomized controlled study

    Directory of Open Access Journals (Sweden)

    Nguyen Phuong H

    2012-10-01

Background Low birth weight and maternal anemia remain intractable problems in many developing countries. The adequacy of the current strategy of providing iron-folic acid (IFA) supplements only during pregnancy has been questioned given that many women enter pregnancy with poor iron stores, the substantial micronutrient demand by maternal and fetal tissues, and programmatic issues related to timing and coverage of prenatal care. Weekly IFA supplementation for women of reproductive age (WRA) improves iron status and reduces the burden of anemia in the short term, but few studies have evaluated subsequent pregnancy and birth outcomes. The PRECONCEPT trial aims to determine whether pre-pregnancy weekly IFA or multiple micronutrient (MM) supplementation will improve birth outcomes and maternal and infant iron status compared to the current practice of prenatal IFA supplementation only. This paper provides an overview of the study design, methodology and sample characteristics from baseline survey data, and key lessons learned. Methods/design We have recruited 5011 WRA in a double-blind stratified randomized controlled trial in rural Vietnam and randomly assigned them to receive weekly supplements containing either (1) 2800 μg folic acid, (2) 60 mg iron and 2800 μg folic acid, or (3) MM. Women who become pregnant receive daily IFA, and are being followed through pregnancy, delivery, and up to three months post-partum. Study outcomes include birth outcomes and maternal and infant iron status. Data are being collected on household characteristics, maternal diet and mental health, anthropometry, infant feeding practices, morbidity and compliance. Discussion The study is timely and responds to the WHO Global Expert Consultation which identified the need to evaluate the long term benefits of weekly IFA and MM supplementation in WRA. Findings will generate new information to help guide policy and programs designed to reduce the burden of anemia in women and

  8. Rationale, Design, Samples, and Baseline Sun Protection in a Randomized Trial on a Skin Cancer Prevention Intervention in Resort Environments

    Science.gov (United States)

    Buller, David B.; Andersen, Peter A.; Walkosz, Barbara J.; Scott, Michael D.; Beck, Larry; Cutter, Gary R.

    2016-01-01

Introduction Exposure to solar ultraviolet radiation during recreation is a risk factor for skin cancer. A trial was conducted to evaluate an intervention to promote advanced sun protection (sunscreen pre-application/reapplication; protective hats and clothing; use of shade) during vacations. Materials and Methods Adult visitors to hotels/resorts with outdoor recreation (i.e., vacationers) participated in a group-randomized pretest-posttest controlled quasi-experimental design in 2012–14. Hotels/resorts were pair-matched and randomly assigned to the intervention or untreated control group. Sun protection (e.g., clothing, hats, shade and sunscreen) was measured in cross-sectional samples by observation and a face-to-face intercept survey during two-day visits. Results Initially, 41 hotels/resorts (11%) participated but 4 dropped out before posttest. Hotels/resorts were diverse (employees = 30 to 900; latitude = 24° 78′ N to 50° 52′ N; elevation = 2 ft. to 9,726 ft. above sea level), and had a variety of outdoor venues (beaches/pools, court/lawn games, golf courses, common areas, and chairlifts). At pretest, 4,347 vacationers were observed and 3,531 surveyed. More females were surveyed (61%) than observed (50%). Vacationers were mostly 35–60 years old, highly educated (college education = 68%) and non-Hispanic white (93%), with high-risk skin types (22%). Vacationers reported covering 60% of their skin with clothing. Also, 40% of vacationers used shade; 60% applied sunscreen; and 42% had been sunburned. Conclusions The trial faced challenges recruiting resorts, but results show that the large, multi-state sample of vacationers was at high risk for solar UV exposure. PMID:26593781

  9. From Planning to Implementation: An Examination of Changes in the Research Design, Sample Size, and Precision of Group Randomized Trials Launched by the Institute of Education Sciences

    Science.gov (United States)

    Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica

    2013-01-01

    This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…

  10. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada

    Science.gov (United States)

    Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino

    2012-01-01

    Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...

  11. Comparing attitudes about legal sanctions and teratogenic effects for cocaine, alcohol, tobacco and caffeine: A randomized, independent samples design

    Directory of Open Access Journals (Sweden)

    Alanis Kelly L

    2006-02-01

Background Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance that has long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for the licit substances alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular "crack baby" stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and the risk potential for their developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test illustrated that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure

  12. Comparing attitudes about legal sanctions and teratogenic effects for cocaine, alcohol, tobacco and caffeine: A randomized, independent samples design

    Science.gov (United States)

    Ginsburg, Harvey J; Raffeld, Paul; Alanis, Kelly L; Boyce, Angela S

    2006-01-01

Background Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance that has long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for the licit substances alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular "crack baby" stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and the risk potential for their developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test illustrated that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure than for the non-exposed teen

  13. Estimation of the Box Correction for Degrees of Freedom from Sample Data in Randomized Block and Split-Plot Designs

    Science.gov (United States)

    Huynh, Huynh; Feldt, Leonard S.

    1976-01-01

    When the variance assumptions of a repeated measures ANOVA are not met, the F distribution of the mean square ratio should be adjusted by the sample estimate of the Box correction factor. An alternative is proposed which is shown by Monte Carlo methods to be less biased for a moderately large factor. (RC)
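
    For orientation, the conventional sample estimate of the Box correction factor (the Greenhouse-Geisser epsilon-hat) can be computed from the sample covariance matrix of the repeated measures, as in the hedged sketch below; the less biased alternative proposed by Huynh and Feldt is not reproduced here, and the data are synthetic.

```python
# Minimal sketch: the conventional sample estimate of the Box correction
# factor (Greenhouse-Geisser epsilon-hat) from a subjects-by-conditions
# data matrix. Huynh and Feldt's less biased alternative is not shown here.
import numpy as np

def box_epsilon_hat(data):
    """data: (n_subjects, k_conditions) repeated-measures matrix."""
    k = data.shape[1]
    s = np.cov(data, rowvar=False)            # k x k sample covariance
    center = np.eye(k) - np.ones((k, k)) / k  # double-centering projector
    s_star = center @ s @ center
    num = np.trace(s_star) ** 2
    den = (k - 1) * np.sum(s_star ** 2)
    return num / den                          # 1/(k-1) <= eps_hat <= 1

rng = np.random.default_rng(1)
y = rng.normal(size=(20, 4)) + rng.normal(size=(20, 1))  # correlated repeats
print("epsilon-hat:", round(box_epsilon_hat(y), 3))
```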

  14. A cluster-randomized trial of a middle school gender violence prevention program: Design, rationale, and sample characteristics.

    Science.gov (United States)

    Abebe, Kaleab Z; Jones, Kelley A; Ciaravino, Samantha; Ripper, Lisa; Paglisotti, Taylor; Morrow, Sarah Elizabeth; Grafals, Melanie; Van Dusen, Courtney; Miller, Elizabeth

    2017-11-01

    High rates of adolescent relationship abuse (ARA) and sexual violence (SV) reported among adolescents point to the need for prevention among middle school-age youth. This is a cluster randomized controlled trial to test an athletic coach-delivered ARA/SV prevention program in 41 middle schools (38 clusters). Trained coaches talk to their male athletes about 1) what constitutes harmful vs. respectful relationship behaviors, 2) dispelling myths that glorify male sexual aggression and promoting more gender-equitable attitudes, and 3) positive bystander intervention when aggressive male behaviors toward females are witnessed. A total of 973 male athletes (ages 11-14, grades 6-8) are participating. Athletes complete surveys at the beginning and end of sports season (Time 2), and one year later (Time 3). The primary outcome is an increase in positive bystander behaviors (i.e., intervening in peers' disrespectful or harmful behaviors); secondary outcomes are changes in recognition of what constitutes abusive behavior, intentions to intervene, and gender equitable attitudes (Time 2 and 3) as well as reduction in abuse perpetration (Time 3). Participating schools have a greater proportion of non-White students and students on free/reduced lunch compared to schools that declined participation. Participants' self-reported ethnicities are 54.5% White, 29.0% Black, 1.4% Hispanic and the remainder, multi-racial, other, or not reported. This study will evaluate the effectiveness of a coach-delivered ARA/SV prevention program for middle school male athletes. Findings will add to the evidence base regarding developmentally appropriate violence prevention programs as well as the role of coaches in adolescent health promotion. Clinical Trials #: NCT02331238. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
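
    As a rough illustration of the modeling framework the article describes (not the EQUIGEN analysis code or the published sample-size tables), the sketch below fits a random-effects linear model to simulated log-transformed pharmacokinetic data from a crossover design using statsmodels; all column names, effect sizes and the simple alternating treatment sequence are hypothetical.

```python
# Hedged sketch: fitting a random-effects linear model to log-transformed
# PK data from a crossover design. Column names and data are hypothetical;
# this is not the EQUIGEN analysis or the article's sample-size tables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_period = 24, 4
rows = []
for subj in range(n_subj):
    u = rng.normal(scale=0.15)                      # subject random effect
    for period in range(n_period):
        form = "T" if (subj + period) % 2 else "R"  # alternating formulations
        log_auc = 4.0 + (0.05 if form == "T" else 0.0) + u + rng.normal(scale=0.2)
        rows.append({"subject": subj, "period": period,
                     "formulation": form, "log_auc": log_auc})
df = pd.DataFrame(rows)

# Random intercept per subject; fixed effects for formulation and period.
model = smf.mixedlm("log_auc ~ C(formulation) + C(period)", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```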

  16. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

k-Means: Random Sampling Procedure. Optimal 1-Mean via approximation of the centroid (Inaba et al.): let S be a random sample of size O(1/ε); the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
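
    The record above has lost its epsilon symbols; assuming the standard statement of the Inaba et al. result, the sketch below checks empirically that the centroid of a random sample of roughly 1/ε points is a (1+ε)-approximate 1-mean of P with constant probability. The data and sample sizes are illustrative.

```python
# Sketch of the slide's claim (epsilon restored by assumption): the centroid
# of a random sample of size ~1/eps approximates the optimal 1-mean (the
# centroid of P) to within a (1 + eps) factor of the cost, with constant
# probability (Inaba et al.).
import numpy as np

def one_mean_cost(P, c):
    return np.sum((P - c) ** 2)

rng = np.random.default_rng(0)
P = rng.normal(size=(10000, 2)) * [3.0, 1.0] + [5.0, -2.0]
eps = 0.1
opt_cost = one_mean_cost(P, P.mean(axis=0))        # optimal 1-mean cost

trials = 1000
sample_size = max(1, int(np.ceil(1 / eps)))
hits = 0
for _ in range(trials):
    S = P[rng.choice(len(P), size=sample_size, replace=False)]
    if one_mean_cost(P, S.mean(axis=0)) <= (1 + eps) * opt_cost:
        hits += 1
print(f"P(cost <= (1+eps)*opt) ~ {hits / trials:.2f} with |S| = {sample_size}")
```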

  17. Methods for sample size determination in cluster randomized trials.

    Science.gov (United States)

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-06-01

    The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.
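
    The simplest approach mentioned in the abstract, computing the individually randomized sample size and inflating it by a design effect, can be sketched as follows; the formula shown assumes equal cluster sizes and a continuous outcome, and does not cover the paper's more elaborate methods.

```python
# Sketch of the simplest approach the review describes: compute the sample
# size for individual randomization, then inflate it by the design effect
# 1 + (m - 1) * icc for randomization by clusters of size m.
from math import ceil
from statistics import NormalDist

def n_per_arm_individual(delta, sd, alpha=0.05, power=0.8):
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return 2 * (sd * (z_a + z_b) / delta) ** 2

def n_per_arm_cluster(delta, sd, m, icc, alpha=0.05, power=0.8):
    deff = 1 + (m - 1) * icc            # design effect for equal cluster sizes
    n = n_per_arm_individual(delta, sd, alpha, power) * deff
    return ceil(n), ceil(n / m)         # individuals and clusters per arm

n_ind, n_clus = n_per_arm_cluster(delta=0.3, sd=1.0, m=20, icc=0.05)
print(f"{n_ind} participants in {n_clus} clusters per arm")
```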

  18. Sample design for Understanding Society

    OpenAIRE

    Lynn, Peter

    2009-01-01

This paper describes the design of the sample for "Understanding Society". The sample consists of five components. The largest component is a newly-selected general population sample. The other four components are an ethnic minority 'boost' sample, a general population comparison sample, the ex-BHPS (British Household Panel Survey) sample, and the innovation panel sample. For each component, the paper outlines the design and explains the rationale behind the main features of the desig...

  19. A description of the demographic characteristics of the New Zealand non-commercial horse population with data collected using a generalised random-tessellation stratified sampling design.

    Science.gov (United States)

    Rosanowski, S M; Cogger, N; Rogers, C W; Benschop, J; Stevenson, M A

    2012-12-01

    We conducted a cross-sectional survey to determine the demographic characteristics of non-commercial horses in New Zealand. A sampling frame of properties with non-commercial horses was derived from the national farms database, AgriBase™. Horse properties were stratified by property size and a generalised random-tessellated stratified (GRTS) sampling strategy was used to select properties (n=2912) to take part in the survey. The GRTS sampling design allowed for the selection of properties that were spatially balanced relative to the distribution of horse properties throughout the country. The registered decision maker of the property, as identified in AgriBase™, was sent a questionnaire asking them to describe the demographic characteristics of horses on the property, including the number and reason for keeping horses, as well as information about other animals kept on the property and the proximity of boundary neighbours with horses. The response rate to the survey was 38% (1044/2912) and the response rate was not associated with property size or region. A total of 5322 horses were kept for recreation, competition, racing, breeding, stock work, or as pets. The reasons for keeping horses and the number and class of horses varied significantly between regions and by property size. Of the properties sampled, less than half kept horses that could have been registered with Equestrian Sports New Zealand or either of the racing codes. Of the respondents that reported knowing whether their neighbours had horses, 58.6% (455/776) of properties had at least one boundary neighbour that kept horses. The results of this study have important implications for New Zealand, which has an equine population that is naïve to many equine diseases considered endemic worldwide. The ability to identify, and apply accurate knowledge of the population at risk to infectious disease control strategies would lead to more effective strategies to control and prevent disease spread during an

  20. K-Median: Random Sampling Procedure

    Indian Academy of Sciences (India)

K-Median: Random Sampling Procedure. Sample a set of 1/ε + 1 points from P. Let Q = first 1/ε points, p = last point. Let T = Avg. 1-Median cost of P, c = 1-Median. Let B1 = B(c, T/2), B2 = B(p, T). Let P' = points in B1.

  1. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    Science.gov (United States)

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  2. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that the required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
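
    As a simplified, single-group stand-in for the inference described (not the authors' two-group judgmental/random Bayesian model), the sketch below uses a beta-binomial posterior to estimate the probability that a large fraction of the unsampled items is acceptable when every sampled item was acceptable; the prior, population size and acceptance target are assumptions.

```python
# Simplified sketch (single group, beta-binomial; NOT the authors' two-group
# judgmental/random model): given that all n sampled items from a finite
# population of N were acceptable, estimate the posterior probability that
# at least a fraction `target` of the N - n unsampled items is acceptable.
import numpy as np

def prob_unsampled_acceptable(N, n, target=0.99, a=1.0, b=1.0, draws=200_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.beta(a + n, b, size=draws)            # posterior, 0 defects in n
    unacceptable = rng.binomial(N - n, 1 - theta)     # defects among unsampled
    ok = (N - n - unacceptable) >= target * (N - n)
    return ok.mean()

print(prob_unsampled_acceptable(N=1000, n=59, target=0.99))
```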

  3. GSAMPLE: Stata module to draw a random sample

    OpenAIRE

    Jann, Ben

    2006-01-01

gsample draws a random sample from the data in memory. Simple random sampling (SRS) is supported, as well as unequal probability sampling (UPS), of which sampling with probabilities proportional to size (PPS) is a special case. Both methods, SRS and UPS/PPS, provide sampling with replacement and sampling without replacement. Furthermore, stratified sampling and cluster sampling are supported.

  4. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

    Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, whi...

  5. [Variance estimation considering multistage sampling design in multistage complex sample analysis].

    Science.gov (United States)

    Li, Yichong; Zhao, Yinjun; Wang, Limin; Zhang, Mei; Zhou, Maigeng

    2016-03-01

Multistage sampling is a frequently used method in random sampling surveys in public health. Clustering, or non-independence between observations, often exists in samples generated by multistage sampling; such samples are often called complex samples. Sampling error may be underestimated and the probability of type I error may be increased if the multistage sample design is not taken into consideration in the analysis. Because the variance (error) estimator for a complex sample is often complicated, statistical software usually adopts the ultimate cluster variance estimate (UCVE) to approximate it, which simply assumes that the sample comes from one-stage sampling. However, as the sampling fraction of primary sampling units increases, the contribution from subsequent sampling stages is no longer trivial, and the ultimate cluster variance estimate may, therefore, lead to invalid variance estimation. This paper summarizes a method of variance estimation that takes the multistage sampling design into account. Its performance is compared with that of UCVE by simulating random sampling under different sampling schemes using real-world data. Simulation showed that as the primary sampling unit (PSU) sampling fraction increased, UCVE tended to generate increasingly biased estimates, whereas accurate estimates were obtained by using the method that considers the multistage sampling design.
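
    For reference, a minimal sketch of the ultimate cluster variance estimate (UCVE) for a weighted mean is shown below: the weighted PSU totals are treated as if the PSUs had been drawn with replacement in a single stage. The paper's correction for later sampling stages, which matters when the PSU sampling fraction is large, is not implemented, and the data and weights are synthetic.

```python
# Sketch of the ultimate cluster variance estimate (UCVE) for a weighted
# mean: treat the weighted PSU totals as if the PSUs had been sampled
# with replacement in a single stage. The paper's multistage correction,
# which matters when the PSU sampling fraction is large, is not shown.
import numpy as np

def ucve_mean_variance(values_by_psu, weights_by_psu):
    """values_by_psu / weights_by_psu: lists of per-PSU arrays."""
    y = np.array([np.sum(w * v) for v, w in zip(values_by_psu, weights_by_psu)])
    w = np.array([np.sum(wi) for wi in weights_by_psu])
    n = len(y)
    mean = y.sum() / w.sum()
    z = y - mean * w                      # linearized weighted PSU residuals
    return n / (n - 1) * np.sum(z ** 2) / w.sum() ** 2

rng = np.random.default_rng(3)
vals = [rng.normal(loc=rng.normal(), size=30) for _ in range(10)]   # 10 PSUs
wts = [np.full(30, 1.0) for _ in range(10)]
print("UCVE variance of the mean:", ucve_mean_variance(vals, wts))
```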

  6. [A comparative study of different sampling designs in fish community estimation].

    Science.gov (United States)

    Zhao, Jing; Zhang, Shou-Yu; Lin, Jun; Zhou, Xi-Jie

    2014-04-01

The study of fishery community ecology depends on the quality and quantity of data collected from well-designed sampling programs. The optimal sampling design must be cost-efficient, and sampling results have been recognized as a significant factor affecting resource management. In this paper, the performances of stationary sampling, simple random sampling and stratified random sampling in estimating fish community structure were compared by computer simulation in terms of design effect (De), relative estimation error (REE) and relative bias (RB). The results showed that the De of stationary sampling (average De = 3.37) was worse than that of simple random sampling and stratified random sampling (average De = 0.961). Stratified random sampling performed best among the three designs in terms of De, REE and RB. As the sample size increased, the design effect of stratified random sampling decreased while its precision and accuracy increased.
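
    A toy version of the kind of simulation comparison described, repeated simple random versus stratified random samples from a synthetic skewed population scored by relative bias and relative estimation error of the mean, is sketched below; the fish survey data, the stationary design and the paper's design-effect calculations are not reproduced.

```python
# Minimal sketch of a simulation comparison like the one described: repeated
# simple random vs. stratified random samples from a synthetic "population",
# compared by relative bias and relative estimation error of the mean.
import numpy as np

rng = np.random.default_rng(4)
strata = [rng.lognormal(mean=m, sigma=0.4, size=5000) for m in (0.0, 1.0, 2.0)]
pop = np.concatenate(strata)
true_mean = pop.mean()

def srs_mean(n):
    return rng.choice(pop, size=n, replace=False).mean()

def stratified_mean(n):
    # proportional allocation across the three strata
    parts = [rng.choice(s, size=n // 3, replace=False).mean() * len(s) for s in strata]
    return sum(parts) / len(pop)

def performance(estimator, n, reps=2000):
    est = np.array([estimator(n) for _ in range(reps)])
    rb = (est.mean() - true_mean) / true_mean                    # relative bias
    ree = np.sqrt(np.mean((est - true_mean) ** 2)) / true_mean   # relative error
    return rb, ree

for name, f in [("SRS", srs_mean), ("stratified", stratified_mean)]:
    rb, ree = performance(f, n=90)
    print(f"{name:>10}: RB = {rb:+.4f}, REE = {ree:.4f}")
```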

  7. The design of cluster randomized crossover trials

    NARCIS (Netherlands)

    Rietbergen, C.; Moerbeek, M.

    2011-01-01

    The inefficiency induced by between-cluster variation in cluster randomized (CR) trials can be reduced by implementing a crossover (CO) design. In a simple CO trial, each subject receives each treatment in random order. A powerful characteristic of this design is that each subject serves as its own

  8. Sampling designs dependent on sample parameters of auxiliary variables

    CERN Document Server

    Wywiał, Janusz L

    2015-01-01

The book offers a valuable resource for students and statisticians whose work involves survey sampling. Estimation of population parameters in finite, fixed populations assisted by auxiliary variables is considered. New sampling designs dependent on moments or quantiles of auxiliary variables are presented against the background of the classical methods. The accuracies of estimators based on the original sampling designs are compared with those of classical estimation procedures. Specific conditional sampling designs are applied to problems of small area estimation as well as to estimation of quantiles of the variables under study.

  9. Basic design of sample container for transport of extraterrestrial samples

    Science.gov (United States)

    Dirri, F.; Longobardo, A.; Palomba, E.; Hutzler, A.; Ferrière, L.

    2017-09-01

    The aim of this work is to provide, in the framework of the EURO-CARES (European Curation of Astromaterials Returned from Exploration of Space) project, a technical overview based on the sample container used in previous sample return missions (e.g., Hayabusa1, Stardust, etc.) and to define a basic design of a sample container aimed at transporting the extraterrestrial returned samples within a Sample Curation Facility (SCF) or from a SCF to another laboratory (and vice versa). The sample container structure and the transportation criticalities (such as contamination and mechanical stress) are discussed in detail in each scenario.

  10. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  11. Assessment of efficient sampling designs for urban stormwater monitoring.

    Science.gov (United States)

    Leecaster, Molly K; Schiff, Kenneth; Tiefenthaler, Liesl L

    2002-03-01

    Monitoring programs for urban runoff have not been assessed for effectiveness or efficiency in estimating mass emissions. In order to determine appropriate designs for stormwater, total suspended solids (TSS) and flow information from the Santa Ana River was collected nearly every 15 min for every storm of the 1998 water year. All samples were used to calculate the "true load" and then three within-storm sampling designs (flow-interval, time-interval, and simple random) and five among-storm sampling designs (stratified by size, stratified by season, simple random, simple random of medium and large storms, and the first m storms of the season) were simulated. Using these designs, we evaluated three estimators for storm mass emissions (mean, volume-weighted, and ratio) and three estimators for annual mass emissions (median, ratio, and regular). Designs and estimators were evaluated with respect to accuracy and precision. The optimal strategy was used to determine the appropriate number of storms to sample annually based upon confidence interval width for estimates of annual mass emissions and concentration. The amount of detectable trend in mass emissions and concentration was determined for sample sizes 3 and 7. Single storms were most efficiently characterized (small bias and standard error) by taking 12 samples following a flow-interval schedule and using a volume-weighted estimator of mass emissions. The ratio estimator, when coupled with the simple random sample of medium and large storms within a season, most accurately estimated concentration and mass emissions; and had low bias over all of the designs. Sampling seven storms is the most efficient method for attaining small confidence interval width for annual concentration. Sampling three storms per year allows a 20% trend to be detected in mass emissions or concentration over five years. These results are decreased by 10% by sampling seven storms per year.
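
    The volume-weighted (flow-interval) idea can be sketched as follows: each within-storm concentration is weighted by the runoff volume its sampling interval represents, yielding a storm load and a volume-weighted mean concentration. The numbers and the fixed 15-minute interval are illustrative, and the paper's ratio estimator across storms is not shown.

```python
# Sketch of a volume-weighted storm load estimate: each within-storm sample's
# concentration is weighted by the runoff volume it represents. This mirrors
# the flow-interval / volume-weighted idea; it is not the paper's exact code.
import numpy as np

def storm_load(conc_mg_per_l, flow_l_per_s, dt_s):
    """conc and flow measured at the same instants; dt_s = interval each
    sample represents (seconds). Returns (load in kg, volume-weighted mean)."""
    conc = np.asarray(conc_mg_per_l, float)
    flow = np.asarray(flow_l_per_s, float)
    vol = flow * dt_s                        # liters represented by each sample
    load_mg = np.sum(conc * vol)
    vw_mean = load_mg / vol.sum()            # volume-weighted mean concentration
    return load_mg / 1e6, vw_mean

load_kg, vw_mean = storm_load([120, 300, 220, 90], [400, 1500, 900, 300], dt_s=900)
print(f"storm load ~ {load_kg:.1f} kg TSS, VW mean ~ {vw_mean:.0f} mg/L")
```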

  12. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, Clara M.; Buchhave, Preben; K. George, William

    2014-01-01

with high data rate and low inherent bias, respectively, while residence time weighting provides non-biased estimates regardless of setting. The free-running processor was also tested and compared to residence time weighting using actual LDA measurements in a turbulent round jet. Power spectra from ... of alternative methods attempting to produce correct power spectra have been invented and tested. The objective of the current study is to create a simple computer generated signal for baseline testing of residence time weighting and some of the most commonly proposed algorithms (or algorithms which most ... modern algorithms ultimately are based on), sample-and-hold and the direct spectral estimator without residence time weighting, and compare how they perform in relation to power spectra based on the equidistantly sampled reference signal. The computer generated signal is a Poisson process with a sample rate...

  13. Random constraint sampling and duality for convex optimization

    OpenAIRE

    Haskell, William B.; Pengqian, Yu

    2016-01-01

    We are interested in solving convex optimization problems with large numbers of constraints. Randomized algorithms, such as random constraint sampling, have been very successful in giving nearly optimal solutions to such problems. In this paper, we combine random constraint sampling with the classical primal-dual algorithm for convex optimization problems with large numbers of constraints, and we give a convergence rate analysis. We then report numerical experiments that verify the effectiven...

  14. The Vermont Diabetes Information System (VDIS): Study Design and Subject Recruitment for a Cluster Randomized Trial of a Decision Support System in a Regional Sample of Primary Care Practices

    Science.gov (United States)

    MacLean, Charles D.; Littenberg, Benjamin; Gagnon, Michael; Reardon, Mimi; Turner, Paul D.; Jordan, Cy

    2008-01-01

Background Despite evidence that optimal care for diabetes can result in reduced complications and improved economic outcomes, such care is often not achieved. The Vermont Diabetes Information System (VDIS) is a registry-based decision support and reminder system based on the Chronic Care Model and targeted to primary care physicians and their patients with diabetes. Purpose To develop and evaluate a regional decision support system for patients with diabetes. Methods A randomized trial of an information system with clustering at the practice level; a ten percent random subsample of patients was selected for a home interview. Subjects and setting include 10 hospitals, 121 primary care providers, and 7,348 patients in 55 Vermont and New York primary care practices. Results We report on the study design and baseline characteristics of the population. Patients have a mean age of 63 years and a mean glycosylated hemoglobin A1C of 7.1%. Sixty percent of the population has excellent glycemic control (A1C < 7%); 45% have excellent lipid control (serum LDL-cholesterol < 100 mg/dl and serum triglycerides < 400 mg/dl). Twenty-five percent have excellent blood pressure control (<130/80 mm Hg). These results compare favorably to recent national reports. However, only 8% are in optimal control for all three of hyperglycemia, lipids and blood pressure. Conclusions Our experience to date indicates that a low cost decision support and information system based on the chronic care model is feasible in primary care practices that lack sophisticated electronic information systems. VDIS is well accepted by patients, providers, and laboratory staff. If proven beneficial in a rigorous, randomized, controlled evaluation, the intervention could be widely disseminated to practices across America and the world with a substantial impact on the outcomes and costs of diabetes. It could also be adapted to other chronic conditions. We anticipate the results of the study will be available in 2006. PMID

  15. Random number datasets generated from statistical analysis of randomly sampled GSM recharge cards.

    Science.gov (United States)

    Okagbue, Hilary I; Opanuga, Abiodun A; Oguntunde, Pelumi E; Ugwoke, Paulinus O

    2017-02-01

In this article, random number datasets were generated from random samples of used GSM (Global System for Mobile Communications) recharge cards. Statistical analyses were performed to refine the raw data into random number datasets arranged in tables. A detailed description of the method and relevant tests of randomness are also discussed.

  16. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, C. M.; Buchhave, P.; K. George, W.

Residence time weighting provides non-biased estimates regardless of setting. The free-running processor was also tested and compared to residence time weighting using actual LDA measurements in a turbulent round jet. Power spectra from measurements on the jet centerline and the outer part of the jet ... sine waves. The primary signal and the corresponding power spectrum are shown in Figure 1. The conventional spectrum shows multiple erroneous mixing frequencies and the peak values are too low. The residence time weighted spectrum is correct. The sample-and-hold spectrum has lower power than ... the correct spectrum, and the f^-2 filtering effect appearing for low data densities is evident (Adrian and Yao 1987). The remaining tests also show that sample-and-hold and the free-running processor perform well only under very particular circumstances with high data rate and low inherent bias, respectively...

  17. Randomized Clinical Trials With Biomarkers: Design Issues

    Science.gov (United States)

    McShane, Lisa M.; Korn, Edward L.

    2010-01-01

    Clinical biomarker tests that aid in making treatment decisions will play an important role in achieving personalized medicine for cancer patients. Definitive evaluation of the clinical utility of these biomarkers requires conducting large randomized clinical trials (RCTs). Efficient RCT design is therefore crucial for timely introduction of these medical advances into clinical practice, and a variety of designs have been proposed for this purpose. To guide design and interpretation of RCTs evaluating biomarkers, we present an in-depth comparison of advantages and disadvantages of the commonly used designs. Key aspects of the discussion include efficiency comparisons and special interim monitoring issues that arise because of the complexity of these RCTs. Important ongoing and completed trials are used as examples. We conclude that, in most settings, randomized biomarker-stratified designs (ie, designs that use the biomarker to guide analysis but not treatment assignment) should be used to obtain a rigorous assessment of biomarker clinical utility. PMID:20075367

  18. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two-times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  19. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
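
    The core adjustment described, inflating a simple-random-sample LQAS size by a design effect to account for two-stage cluster sampling, can be sketched as below; the intraclass correlation, cluster size and SRS sample size are placeholders, and the paper's nonparametric risk calculations for the decision rule are not reproduced.

```python
# Sketch of the core idea: inflate a simple-random-sample LQAS sample size
# by a design effect for two-stage cluster sampling. The decision-rule risk
# calculations in the paper's nonparametric procedure are not reproduced.
from math import ceil

def lqas_cluster_size(n_srs, cluster_size, icc):
    """Inflate an SRS-based LQAS sample size for cluster sampling."""
    deff = 1 + (cluster_size - 1) * icc
    n = ceil(n_srs * deff)
    return n, ceil(n / cluster_size)

n, clusters = lqas_cluster_size(n_srs=33, cluster_size=10, icc=0.1)
print(f"sample {n} children in {clusters} clusters (vs. 33 under SRS)")
```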

  20. Randomness as a resource for design

    DEFF Research Database (Denmark)

    Leong, T.; Vetere, F.; Howard, Steve

    2006-01-01

    is used to engender certain affective responses (such as feeling refreshed) by using various constraining techniques (such as playlists) whilst engaging in everyday activities (such as driving a car). The paper argues that randomness can be used as an innovative design resource for supporting rich...... examining the influence of randomness on the user experience of music listening. 113 instances of self-reporting were collected and analysed according to four themes: listening mode, content organisation, activities during listening, and affective outcomes. The analysis provides insights into how randomness...

  1. Hermetic Seal Designs for Sample Return Sample Tubes

    Science.gov (United States)

    Younse, Paulo J.

    2013-01-01

Prototypes have been developed of potential hermetic sample sealing techniques for encapsulating samples in a ~1-cm-diameter thin-walled sample tube that are compatible with IMSAH (Integrated Mars Sample Acquisition and Handling) architecture. Techniques include a heat-activated, finned, shape memory alloy plug; a contracting shape memory alloy activated cap; an expanding shape memory alloy plug; and an expanding torque plug. Initial helium leak testing of the shape memory alloy cap and finned shape memory alloy plug seals showed hermetic-seal capability compared against an industry standard of seal integrity after Martian diurnal cycles. Developmental testing is currently being done on the expanding torque plug and expanding shape memory alloy plug seal designs. The finned shape memory alloy (SMA) plug currently shows hermetic sealing capability based on preliminary tests.

  2. Accuracy assessment with complex sampling designs

    Science.gov (United States)

    Raymond L. Czaplewski

    2010-01-01

    A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...

  3. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2010-09-01

In this paper an attempt is made to extend linear systematic sampling using multiple random starts, due to Gautschi (1957), to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's Linear Systematic Sampling (LSS) with two random starts, using appropriate superpopulation models with the help of the R package for statistical computing.
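
    For context, plain linear systematic sampling (LSS) with multiple random starts in the sense of Gautschi (1957) can be sketched as follows; the balanced (BSS) and modified (MSS) extensions studied in the paper, and the Yates corrected estimator, are not implemented, and the population size is assumed to be a multiple of the sampling interval.

```python
# Sketch of linear systematic sampling (LSS) with multiple random starts in
# the sense of Gautschi (1957). The balanced (BSS) and modified (MSS)
# extensions and the Yates corrected estimator are not implemented here.
import numpy as np

def lss_multiple_starts(N, n, starts=2, seed=None):
    """Draw n units from 0..N-1 using `starts` independent random starts.
    Assumes n is a multiple of `starts` and N a multiple of the interval."""
    rng = np.random.default_rng(seed)
    m = n // starts                 # units contributed by each start
    k = N // m                      # sampling interval
    picks = []
    for r in rng.choice(k, size=starts, replace=False):
        picks.extend(range(r, N, k))
    return np.sort(np.array(picks))

sample = lss_multiple_starts(N=100, n=10, starts=2, seed=6)
print(sample)
```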

  4. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

We present two novel random walk strategies, the choosing-seed-node (CSN) random walk and the no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. The major properties of the sampled subnets, such as sampling efficiency, degree distribution, average degree and average clustering coefficient, are then studied. Similar conclusions are reached with all three random walk strategies. First, networks with small scale and simple structure are conducive to sampling. Second, the average degree and average clustering coefficient of the sampled subnet tend toward the corresponding values of the original network within a limited number of steps. Third, all the subnet degree distributions are slightly biased toward the high-degree side. The NR strategy, however, performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, salient characteristics such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
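
    A minimal sketch of random walk sampling on a toy graph is given below, including a no-retracing variant that never steps straight back to the previous node; the choosing-seed-node strategy, the network models and the weighted USAir data from the paper are not reproduced, and the graph is a made-up example.

```python
# Sketch of random walk sampling on an adjacency-dict graph: a classical walk
# and a "no-retracing" (NR) variant that never immediately steps back to the
# node it just came from. The paper's CSN strategy and network models are
# not reproduced here.
import random

def random_walk(adj, start, steps, no_retracing=False, seed=0):
    rng = random.Random(seed)
    visited, node, prev = {start}, start, None
    for _ in range(steps):
        neighbors = adj[node]
        if no_retracing and prev is not None and len(neighbors) > 1:
            neighbors = [v for v in neighbors if v != prev]
        prev, node = node, rng.choice(neighbors)
        visited.add(node)
    return visited

# Small toy graph: a ring of 8 nodes with one chord.
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
adj[0].append(4); adj[4].append(0)

print("classical:", sorted(random_walk(adj, 0, 12)))
print("no-retracing:", sorted(random_walk(adj, 0, 12, no_retracing=True)))
```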

  5. Dealing with daily challenges in dementia (deal-id study): effectiveness of the experience sampling method intervention 'Partner in Sight' for spousal caregivers of people with dementia: design of a randomized controlled trial.

    Science.gov (United States)

    van Knippenberg, Rosalia J M; de Vugt, Marjolein E; Ponds, Rudolf W; Myin-Germeys, Inez; Verhey, Frans R J

    2016-05-11

    There is an urgent need for psychosocial interventions that effectively support dementia caregivers in daily life. The Experience Sampling Methodology (ESM) offers the possibility to provide a more dynamic view of caregiver functioning. ESM-derived feedback may help to redirect caregivers' behavior towards situations that elicit positive emotions and to increase their feelings of competence in the caretaking process. This paper presents the design of a study that evaluates the process characteristics and effects of the ESM-based intervention 'Partner in Sight'. A randomized controlled trial with 90 spousal caregivers of people with dementia will be conducted. Participants will be randomly assigned to the experimental (6-week ESM intervention including feedback), pseudo-experimental (6-week ESM intervention without feedback), or control group (care as usual). Assessments will be performed pre- and post-intervention and at 2-, and 6-month follow-up. Main outcomes will be sense of competence, perceived control, momentary positive affect, and psychological complaints (depressive symptoms, perceived stress, anxiety, momentary negative affect). In addition to the effect evaluation, a process and economic evaluation will be conducted to investigate the credibility and generalizability of the intervention, and its cost-effectiveness. The potential effects of the ESM intervention may help caregivers to endure their care responsibilities and prevent them from becoming overburdened. This is the first ESM intervention for caregivers of people with dementia. The results of this study, therefore, provide a valuable contribution to the growing knowledge on m-health interventions for dementia caregivers. Dutch Trial Register NTR4847 ; date registered Oct 9, 2014.

  6. Spatial Random Sampling: A Structure-Preserving Data Sketching Tool

    Science.gov (United States)

    Rahmani, Mostafa; Atia, George K.

    2017-09-01

    Random column sampling is not guaranteed to yield data sketches that preserve the underlying structures of the data and may not sample sufficiently from less-populated data clusters. Also, adaptive sampling can often provide accurate low rank approximations, yet may fall short of producing descriptive data sketches, especially when the cluster centers are linearly dependent. Motivated by that, this paper introduces a novel randomized column sampling tool dubbed Spatial Random Sampling (SRS), in which data points are sampled based on their proximity to randomly sampled points on the unit sphere. The most compelling feature of SRS is that the corresponding probability of sampling from a given data cluster is proportional to the surface area the cluster occupies on the unit sphere, independently from the size of the cluster population. Although it is fully randomized, SRS is shown to provide descriptive and balanced data representations. The proposed idea addresses a pressing need in data science and holds potential to inspire many novel approaches for analysis of big data.
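
    Following the verbal description above, a hedged sketch of Spatial Random Sampling is given below: data points are projected onto the unit sphere, random directions are drawn, and for each direction the closest data point (largest inner product) is kept. The toy data illustrate how a small cluster can still be sampled; this is not the authors' implementation.

```python
# Sketch following the description of Spatial Random Sampling (SRS): project
# data points onto the unit sphere, draw random directions, and for each
# direction keep the data point with the largest inner product (i.e., the
# point closest to that random spot on the sphere). Not the authors' code.
import numpy as np

def spatial_random_sampling(X, n_samples, seed=0):
    """X: (n_points, dim) data matrix. Returns indices of sampled points."""
    rng = np.random.default_rng(seed)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)      # points on sphere
    D = rng.normal(size=(n_samples, X.shape[1]))
    D /= np.linalg.norm(D, axis=1, keepdims=True)          # random directions
    return np.unique(np.argmax(D @ Xn.T, axis=1))

rng = np.random.default_rng(7)
big = rng.normal(loc=(5, 0), scale=0.5, size=(5000, 2))    # dominant cluster
small = rng.normal(loc=(0, 5), scale=0.5, size=(50, 2))    # rare cluster
X = np.vstack([big, small])
idx = spatial_random_sampling(X, n_samples=40)
print("sampled from rare cluster:", np.sum(idx >= 5000), "of", len(idx))
```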

  7. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

In this paper an attempt is made to extend linear systematic sampling using multiple random starts, due to Gautschi (1957), to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's Linear systematic samplin...

  8. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^(n^2)). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  9. Latent spatial models and sampling design for landscape genetics

    Science.gov (United States)

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.

  10. Sequential time interleaved random equivalent sampling for repetitive signal

    Science.gov (United States)

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction, such as random equivalent sampling (RES), to improve efficiency. However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and a longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose ADC cores are time interleaved. A prototype of the proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.

  11. Sequential multiple assignment randomization trials with enrichment design.

    Science.gov (United States)

    Liu, Ying; Wang, Yuanjia; Zeng, Donglin

    2017-06-01

    A sequential multiple assignment randomization trial (SMART) is a powerful design to study Dynamic Treatment Regimes (DTRs) and allows causal comparisons of DTRs. To handle practical challenges of SMART, we propose a SMART with Enrichment (SMARTER) design, which performs stage-wise enrichment for SMART. SMARTER can improve design efficiency, shorten the recruitment period, and partially reduce trial duration to make SMART more practical with limited time and resources. Specifically, at each subsequent stage of a SMART, we enrich the study sample with new patients who have received previous stages' treatments in a naturalistic fashion without randomization, and only randomize them among the current stage treatment options. One extreme case of the SMARTER is to synthesize separate independent single-stage randomized trials with patients who have received previous stage treatments. We show that data from SMARTER allow for unbiased estimation of DTRs, as SMART does, under certain assumptions. Furthermore, we show analytically that the efficiency gain of the new design over SMART can be significant, especially when the dropout rate is high. Lastly, extensive simulation studies are performed to demonstrate the performance of the SMARTER design, and sample size estimation in a scenario informed by real data from a SMART study is presented. © 2016, The International Biometric Society.

  12. Optimum allocation in multivariate stratified random sampling: Stochastic matrix optimisation

    OpenAIRE

    Diaz-Garcia, Jose A.; Ramos-Quiroga, Rogelio

    2011-01-01

    The allocation problem for multivariate stratified random sampling is considered as a problem of stochastic matrix integer mathematical programming. To this end, the asymptotic normality of sample covariance matrices for each stratum is established. Some alternative approaches are suggested for its solution. An example is solved by applying the proposed techniques.

  13. Thermal probe design for Europa sample acquisition

    Science.gov (United States)

    Horne, Mera F.

    2018-01-01

    The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to a depth of 10 cm and to collect five ice samples of approximately 7 cm3 each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and the energy to melt 7 cm3 of ice is approximately 1.2 Wh. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same length of 16 cm could be utilized; it would require approximately 6.4 Wh to penetrate the top 10 cm of ice and 0.02 Wh to collect 0.1 g of sample. The thermal drill has the advantage of simplicity of design and operation and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.
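
    As a rough plausibility check of the quoted ~1.2 Wh figure, the short calculation below heats and melts 7 cm3 of ice using textbook properties of water ice and an assumed starting temperature of about 100 K (the starting temperature is an assumption, not stated in the abstract).

```python
# Rough check of the ~1.2 Wh figure for melting 7 cm^3 of ice,
# assuming a starting temperature of ~100 K (assumption, not from the report).
rho_ice = 0.92          # g/cm^3
c_ice = 2.1             # J/(g*K), approximate specific heat of ice
L_fusion = 334.0        # J/g, latent heat of fusion
volume = 7.0            # cm^3
dT = 273.0 - 100.0      # heat from ~100 K up to the melting point

mass = rho_ice * volume                      # ~6.4 g
energy_J = mass * (c_ice * dT + L_fusion)    # sensible + latent heat
print(f"{energy_J:.0f} J = {energy_J / 3600:.2f} Wh")   # ~1.2 Wh
```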

  14. On the Robustness and Efficiency of Some Randomized Designs

    OpenAIRE

    Wu, Chien-Fu

    1981-01-01

    A concept of model-robustness is defined in terms of the performance of the design in the presence of model violations. The robustness problem is discussed for several randomization procedures commonly used in experimental design situations. Among them, the balanced completely randomized design, the randomized complete block design and the randomized Latin square design are shown to be model-robust in their own settings. To compare different randomization procedures, we define a concept of ef...

  15. Fast Moving Sampling Designs in Temporal Networks

    CERN Document Server

    Thompson, Steven K

    2015-01-01

    In a study related to this one I set up a temporal network simulation environment for evaluating network intervention strategies. A network intervention strategy consists of a sampling design to select nodes in the network. An intervention is applied to nodes in the sample for the purpose of changing the wider network in some desired way. The network intervention strategies can represent natural agents such as viruses that spread in the network, programs to prevent or reduce the virus spread, and the agency of individual nodes, such as people, in forming and dissolving the links that create, maintain or change the network. The present paper examines idealized versions of the sampling designs used in that study. The purpose is to better understand the natural and human network designs in real situations and to provide a simple inference of design-based properties that in turn measure properties of the time-changing network. The designs use link tracing and sometimes other probabilistic procedures to add units ...

  16. Sampling design for spatially distributed hydrogeologic and environmental processes

    Science.gov (United States)

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related

  17. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination" (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling

  18. A random spatial sampling method in a rural developing nation.

    Science.gov (United States)

    Kondo, Michelle C; Bream, Kent D W; Barg, Frances K; Branas, Charles C

    2014-04-10

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available.

  19. Designing an enhanced groundwater sample collection system

    Energy Technology Data Exchange (ETDEWEB)

    Schalla, R.

    1994-10-01

    As part of an ongoing technical support mission to achieve excellence and efficiency in environmental restoration activities at the Laboratory for Energy and Health-Related Research (LEHR), Pacific Northwest Laboratory (PNL) provided guidance on the design and construction of monitoring wells and identified the most suitable type of groundwater sampling pump and accessories for monitoring wells. The goal was to utilize a monitoring well design that would allow for hydrologic testing and reduce turbidity to minimize the impact of sampling. The sampling results of the newly designed monitoring wells were clearly superior to those of the previously installed monitoring wells. The new wells exhibited reduced turbidity, in addition to improved access for instrumentation and hydrologic testing. The variable frequency submersible pump was selected as the best choice for obtaining groundwater samples. The literature references are listed at the end of this report. Despite some initial difficulties, the actual performance of the variable frequency submersible pump and its accessories was effective in reducing sampling time and labor costs, and its ease of use was preferred over the previously used bladder pumps. The surface seal system, called the Dedicator, proved to be a useful accessory that prevents surface contamination while providing easy access for water-level measurements and for connecting the pump. Cost savings resulted from the use of the pre-production pumps (beta units) donated by the manufacturer for the demonstration. However, larger savings resulted from the shortened field time due to the ease of using the submersible pumps and the surface seal access system. Proper deployment of the monitoring wells also resulted in cost savings and ensured representative samples.

  20. Maximin Optimal Designs for Cluster Randomized Trials

    Science.gov (United States)

    Wu, Sheng; Wong, Weng Kee; Crespi, Catherine M.

    2017-01-01

    We consider design issues for cluster randomized trials (CRTs) with a binary outcome where both unit costs and intraclass correlation coefficients (ICCs) in the two arms may be unequal. We first propose a design that maximizes cost efficiency (CE), defined as the ratio of the precision of the efficacy measure to the study cost. Because such designs can be highly sensitive to the unknown ICCs and the anticipated success rates in the two arms, a local strategy based on a single set of best guesses for the ICCs and success rates can be risky. To mitigate this issue, we propose a maximin optimal design that permits ranges of values to be specified for the success rate and the ICC in each arm. We derive maximin optimal designs for three common measures of the efficacy of the intervention, risk difference, relative risk and odds ratio, and study their properties. Using a real cancer control and prevention trial example, we ascertain the efficiency of the widely used balanced design relative to the maximin optimal design and show that the former can be quite inefficient and less robust to mis-specifications of the ICCs and the success rates in the two arms. PMID:28182835
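
    A minimal sketch of the cost-efficiency criterion described above, assuming the usual design effect 1 + (m - 1) ICC per arm and a risk-difference efficacy measure; the function and all numbers are illustrative and not taken from the paper.

```python
def cost_efficiency_rd(p1, p2, m1, m2, k1, k2, icc1, icc2, c1, c2):
    """Cost efficiency for a two-arm cluster randomized trial (risk difference).

    p  : anticipated success rates, m : cluster sizes, k : numbers of clusters,
    icc: intraclass correlations,   c : cost per subject, for arms 1 and 2.
    CE = precision of the risk-difference estimate / total study cost.
    """
    def arm_variance(p, m, k, icc):
        deff = 1.0 + (m - 1.0) * icc          # design effect for one arm
        return p * (1.0 - p) * deff / (m * k)
    var_rd = arm_variance(p1, m1, k1, icc1) + arm_variance(p2, m2, k2, icc2)
    cost = c1 * m1 * k1 + c2 * m2 * k2
    return (1.0 / var_rd) / cost

# Example: unequal ICCs and unit costs in the two arms.
print(cost_efficiency_rd(p1=0.30, p2=0.45, m1=20, m2=20, k1=15, k2=15,
                         icc1=0.02, icc2=0.05, c1=40.0, c2=60.0))
```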

  1. Implications of sampling design and sample size for national carbon accounting systems.

    Science.gov (United States)

    Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel

    2011-11-08

    Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. For different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability, the percent standard error over total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
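
    To illustrate the kind of comparison reported above, the sketch below contrasts the percent standard error attainable for a fixed budget under simple random sampling and under 2-phase (double) sampling with a regression estimator, using the standard textbook variance approximation for the latter; every number is illustrative and not from the study.

```python
import math

def pse_srs(S, mean, n):
    """Percent standard error under simple random sampling."""
    return 100.0 * math.sqrt(S**2 / n) / mean

def pse_two_phase_regression(S, mean, rho, n_field, n_rs):
    """Percent standard error of the two-phase regression estimator,
    textbook approximation: S^2(1-rho^2)/n_field + S^2 rho^2 / n_rs."""
    var = S**2 * (1 - rho**2) / n_field + S**2 * rho**2 / n_rs
    return 100.0 * math.sqrt(var) / mean

# Illustrative numbers (not from the paper): carbon stock mean 120 t/ha,
# SD 60 t/ha, correlation 0.8 between remote-sensing and field attributes.
c_field, c_rs = 500.0, 5.0          # cost per field plot / per RS sample unit
budget = 100_000.0

n_srs = int(budget / c_field)
n_rs = 4000
n_field = int((budget - n_rs * c_rs) / c_field)

print("SRS:      ", round(pse_srs(60, 120, n_srs), 2), "% SE")
print("Two-phase:", round(pse_two_phase_regression(60, 120, 0.8, n_field, n_rs), 2), "% SE")
```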

  2. Random sampling and validation of covariance matrices of resonance parameters

    Science.gov (United States)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a nuclear data library file of a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files containing random samples of resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to the sampling and validation of any nuclear data.
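
    A hedged sketch of the two ingredients described above, consistency checking of the covariance matrix and correlated sampling, is given below. Clipping negative eigenvalues and sampling inherently positive parameters on a log scale are assumptions about one reasonable implementation, not necessarily the exact ENDSAM procedure.

```python
import numpy as np

def nearest_psd(cov, eps=0.0):
    """Clip negative eigenvalues so the covariance matrix is positive semi-definite."""
    w, V = np.linalg.eigh(cov)
    if w.min() >= eps:
        return cov, False                     # already consistent
    w_clipped = np.clip(w, eps, None)
    return (V * w_clipped) @ V.T, True        # repaired matrix, flag

def sample_correlated(mean, cov, n, positive=False, seed=None):
    """Draw n correlated samples; optionally sample on a log scale so that
    inherently positive parameters stay positive (an assumption)."""
    rng = np.random.default_rng(seed)
    cov_psd, repaired = nearest_psd(np.asarray(cov, float))
    if repaired:
        print("warning: covariance was not positive semi-definite; eigenvalues clipped")
    mean = np.asarray(mean, float)
    if positive:
        # Interpret cov as relative (log-normal-like) uncertainty: an assumption.
        z = rng.multivariate_normal(np.zeros(len(mean)), cov_psd, size=n)
        return mean * np.exp(z)
    return rng.multivariate_normal(mean, cov_psd, size=n)

# Example: two resonance parameters with a slightly inconsistent covariance.
mean = [1.2, 0.35]
cov = [[0.04, 0.05],
       [0.05, 0.04]]                          # implied correlation > 1 -> not PSD
print(sample_correlated(mean, cov, n=3, positive=True, seed=0))
```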

  3. Generalized and synthetic regression estimators for randomized branch sampling

    Science.gov (United States)

    David L. R. Affleck; Timothy G. Gregoire

    2015-01-01

    In felled-tree studies, ratio and regression estimators are commonly used to convert more readily measured branch characteristics to dry crown mass estimates. In some cases, data from multiple trees are pooled to form these estimates. This research evaluates the utility of both tactics in the estimation of crown biomass following randomized branch sampling (...

  4. Effective sampling of random surfaces by baby universe surgery

    NARCIS (Netherlands)

    Ambjørn, J.; Białas, P.; Jurkiewicz, J.; Burda, Z.; Petersson, B.

    1994-01-01

    We propose a new, very efficient algorithm for the sampling of random surfaces in Monte Carlo simulations, based on so-called baby universe surgery, i.e. cutting and pasting of baby universes. It drastically reduces slowing down as compared to the standard local flip algorithm, thereby allowing

  5. Stratified random sampling plan for an irrigation customer telephone survey

    Energy Technology Data Exchange (ETDEWEB)

    Johnston, J.W.; Davis, L.J.

    1986-05-01

    This report describes the procedures used to design and select a sample for a telephone survey of individuals who use electricity in irrigating agricultural cropland in the Pacific Northwest. The survey is intended to gather information on the irrigated agricultural sector that will be useful for conservation assessment, load forecasting, rate design, and other regional power planning activities.

  6. Permutation Test Approach for Ordered Alternatives in Randomized Complete Block Design: A Comparative Study

    OpenAIRE

    GOKPINAR, Esra; GUL, Hasan; GOKPINAR, Fikri; BAYRAK, Hülya; OZONUR, Deniz

    2013-01-01

    The randomized complete block design is one of the most widely used experimental designs in statistical analysis. For testing ordered alternatives in a randomized complete block design, parametric tests are used if the random samples are drawn from a normal distribution. If the normality assumption does not hold, nonparametric methods are used. In this study, we focus on nonparametric tests and briefly introduce the Page, modified Page and Hollander tests. We also give Permutat...

  7. Random sampling and validation of covariance matrices of resonance parameters

    Directory of Open Access Journals (Sweden)

    Plevnik Lucijan

    2017-01-01

    Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a nuclear data library file of a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files containing random samples of resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to the sampling and validation of any nuclear data.

  8. Variance estimation, design effects, and sample size calculations for respondent-driven sampling.

    Science.gov (United States)

    Salganik, Matthew J

    2006-11-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling.
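
    The closing recommendation can be turned into a small calculation: compute the simple-random-sampling sample size for estimating a proportion and inflate it by a design effect of roughly two. The sketch below does exactly that; the target proportion and margin of error are illustrative.

```python
import math
from statistics import NormalDist

def srs_sample_size(p, margin, conf=0.95):
    """Sample size to estimate a proportion p within +/- margin under SRS."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def rds_sample_size(p, margin, conf=0.95, design_effect=2.0):
    """Inflate the SRS sample size by the RDS design effect (~2 as a rule of thumb)."""
    return math.ceil(design_effect * srs_sample_size(p, margin, conf))

print(srs_sample_size(0.20, 0.05))   # under simple random sampling
print(rds_sample_size(0.20, 0.05))   # under respondent-driven sampling
```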

  9. Misrepresenting random sampling? A systematic review of research papers in the Journal of Advanced Nursing.

    Science.gov (United States)

    Williamson, Graham R

    2003-11-01

    This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.

  10. Accuracy Sampling Design Bias on Coarse Spatial Resolution Land Cover Data in the Great Lakes Region (United States and Canada)

    Science.gov (United States)

    A number of articles have investigated the impact of sampling design on remotely sensed land-cover accuracy estimates. Gong and Howarth (1990) found significant differences in Kappa accuracy values when comparing pure-pixel sampling, stratified random sampling, and stratified sys...

  11. Sample size calculations for micro-randomized trials in mHealth.

    Science.gov (United States)

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A

    2016-05-30

    The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near-future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized on hundreds or thousands of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Sampling Polymorphs of Ionic Solids using Random Superlattices.

    Science.gov (United States)

    Stevanović, Vladan

    2016-02-19

    Polymorphism offers a rich and virtually unexplored space for discovering novel functional materials. To harness this potential, approaches capable of both exploring the space of polymorphs and assessing their realizability are needed. One such approach, devised for partially ionic solids, is presented. The structure prediction part is carried out by performing local density functional theory relaxations on a large set of random superlattices (RSLs) with atoms distributed randomly over different planes in a way that favors cation-anion coordination. Applying the RSL sampling to MgO, ZnO, and SnO2 reveals that the resulting probability of occurrence of a given structure offers a measure of its realizability, fully explaining the experimentally observed, metastable polymorphs in these three systems.

  13. Stratified random sampling for estimating billing accuracy in health care systems.

    Science.gov (United States)

    Buddhakulsomsiri, Jirachai; Parthanadee, Parthana

    2008-03-01

    This paper presents a stratified random sampling plan for estimating the accuracy of bill processing performance for health care bills submitted to third-party payers in health care systems. Bill processing accuracy is estimated with two measures: percent accuracy and total dollar accuracy. Difficulties in constructing a sampling plan arise when the population strata structure is unknown, and when the two measures require different sampling schemes. To efficiently utilize sample resources, the sampling plan is designed to effectively estimate both measures from the same sample. The sampling plan features a simple but efficient strata construction method, called the rectangular method, and two accuracy estimation methods, one for each measure. The sampling plan is tested on actual populations from an insurance company. Accuracy estimates obtained are then used to compare the rectangular method to other potential clustering methods for strata construction, and to compare the accuracy estimation methods to other eligible methods. Computational study results show the effectiveness of the proposed sampling plan.
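
    A minimal sketch of how per-stratum estimates might be combined into the two accuracy measures, weighting percent accuracy by stratum bill counts and dollar accuracy by stratum dollar totals; the rectangular strata-construction method itself is not reproduced, and all figures are illustrative.

```python
def stratified_accuracy(strata):
    """Combine per-stratum estimates of billing accuracy.

    strata: list of dicts with
      n_bills, dollars  -- stratum sizes (known for the whole population)
      pct_acc           -- sample estimate of percent accuracy in the stratum
      dollar_acc        -- sample estimate of dollar accuracy in the stratum
    """
    total_bills = sum(s["n_bills"] for s in strata)
    total_dollars = sum(s["dollars"] for s in strata)
    pct = sum(s["n_bills"] / total_bills * s["pct_acc"] for s in strata)
    dollar = sum(s["dollars"] / total_dollars * s["dollar_acc"] for s in strata)
    return pct, dollar

strata = [
    {"n_bills": 8000, "dollars": 1.2e6, "pct_acc": 0.97, "dollar_acc": 0.95},
    {"n_bills": 1500, "dollars": 2.5e6, "pct_acc": 0.92, "dollar_acc": 0.90},
]
print(stratified_accuracy(strata))
```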

  14. Analysis of a global random stratified sample of nurse legislation.

    Science.gov (United States)

    Benton, D C; Fernández-Fernández, M P; González-Jurado, M A; Beneit-Montesinos, J V

    2015-06-01

    To identify, compare and contrast the major component parts of a heterogeneous stratified sample of nursing legislation. Nursing legislation varies from one jurisdiction to another. Until now, no research has examined whether the variations in such legislation are random or related to a set of key attributes. This mixed-method study used a random stratified sample of legislation to map, through documentary analysis, the content of 14 nursing acts and then explored, using quantitative techniques, whether the material contained relates to a number of key attributes. These attributes include: legal tradition of the jurisdiction; model of regulation; administrative approach; area of the world; and the economic status of the jurisdiction. Twelve component parts of nursing legislation were identified. These were remarkably similar irrespective of the attributes of interest. However, not all component parts were specified in the same level of detail, and the manner by which the elements were addressed did vary. A number of potential relationships between the structure of the legislation and the key attributes of interest were identified. This study generated a comprehensive and integrated map of a global sample of nursing legislation. It provides a set of descriptors to be used to undertake further quantitative work and provides an important policy tool to facilitate dialogue between regulatory bodies. At the individual nurse level it offers insights that can help nurses pursue recognition of credentials across jurisdictions. © 2015 International Council of Nurses.

  15. Determining optimal sample sizes for multi-stage randomized clinical trials using value of information methods.

    Science.gov (United States)

    Willan, Andrew; Kowgier, Matthew

    2008-01-01

    Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as Type I and II errors. An effectiveness trial (otherwise known as a pragmatic trial or management trial) is essentially an effort to inform decision-making, i.e., should the treatment be adopted over standard? Taking a societal perspective and using Bayesian decision theory, Willan and Pinto (Stat. Med. 2005; 24:1791-1806 and Stat. Med. 2006; 25:720) show how to determine the sample size that maximizes the expected net gain, i.e., the difference between the value of the information gained from the results and the cost of doing the trial. These methods are extended to include multi-stage adaptive designs, with a solution given for a two-stage design. The methods are applied to two examples. As demonstrated by the two examples, substantial increases in the expected net gain (ENG) can be realized by using multi-stage adaptive designs based on expected value of information methods. In addition, the expected sample size and total cost may be reduced. Exact solutions have been provided for the two-stage design. Solutions for higher-order designs may prove to be prohibitively complex and approximate solutions may be required. The use of multi-stage adaptive designs for randomized clinical trials based on expected value of sample information methods leads to substantial gains in the ENG and reductions in the expected sample size and total cost.

  16. Experimental Design and Validation of an Accelerated Random Vibration Fatigue Testing Methodology

    OpenAIRE

    Yu Jiang (Center for Statistical and Theoretical Condensed Matter Physics, Zhejiang Normal University, Jinhua City, Zhejiang Province 321004, China); Gun Jin Yun; Li Zhao; Junyong Tao

    2015-01-01

    A novel accelerated random vibration fatigue test methodology and strategy are proposed, which generate an experimental test plan that significantly reduces the test time and the sample size. Based on theoretical analysis and a fatigue damage model, several groups of random vibration fatigue tests were designed and conducted with the aim of investigating the effects of both Gaussian and non-Gaussian random excitation on vibration fatigue. First, stress responses at a weak point of a ...

  17. Sample size determination for longitudinal designs with binary response.

    Science.gov (United States)

    Kapur, Kush; Bhaumik, Runa; Tang, X Charlene; Hur, Kwan; Reda, Domenic J; Bhaumik, Dulal K

    2014-09-28

    In this article, we develop appropriate statistical methods for determining the required sample size while comparing the efficacy of an intervention to a control with repeated binary response outcomes. Our proposed methodology incorporates the complexity of the hierarchical nature of underlying designs and provides solutions when varying attrition rates are present over time. We explore how the between-subject variability and attrition rates jointly influence the computation of sample size formula. Our procedure also shows how efficient estimation methods play a crucial role in power analysis. A practical guideline is provided when information regarding individual variance component is unavailable. The validity of our methods is established by extensive simulation studies. Results are illustrated with the help of two randomized clinical trials in the areas of contraception and insomnia. Copyright © 2014 John Wiley & Sons, Ltd.

  18. The Study on Mental Health at Work: Design and sampling.

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-08-01

    The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regressions analyses were used for comparing population, gross sample and respondents. In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.

  19. Optimal Design in Geostatistics under Preferential Sampling

    OpenAIRE

    Ferreira, Gustavo da Silva; Gamerman, Dani

    2015-01-01

    This paper analyses the effect of preferential sampling in geostatistics when the choice of new sampling locations is the main interest of the researcher. A Bayesian criterion based on maximizing utility functions is used. Simulated studies are presented and highlight the strong influence of preferential sampling on the decisions. The computational complexity is addressed by treating the new local sampling locations as a model parameter, and the optimal choice is then made by analysing its posteri...

  20. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

    In this paper, three ratio estimators for the double sampling design are proposed, with the intention of finding an alternative to the conventional ratio estimator in double sampling discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).

  1. A comparison of two sampling designs for fish assemblage assessment in a large river

    Science.gov (United States)

    Kiraly, Ian A.; Coghlan, Stephen M.; Zydlewski, Joseph; Hayes, Daniel

    2014-01-01

    We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design due to less potential for bias caused by varying sampling effort, such as what occurred in the fall 2011 fixed-station sample or due to purposeful site selection.

  2. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    Science.gov (United States)

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

    For high-dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and get rid of the vast number of non-informative SNPs. However, it is too time-consuming and not favorable in GWA for high-dimensional data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. The advantage of this stratified sampling procedure is that each subspace is guaranteed to contain enough useful SNPs, while avoiding the very high computational cost of an exhaustive search for an optimal mtry and maintaining the randomness of the random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate a better random forest with higher accuracy and a lower error bound than Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and merit further biological investigation.
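
    A hedged sketch of the subspace-selection step described above: bin SNPs into equal-width groups by an informativeness score and draw the same number of SNPs from every group to form one tree's feature subspace. Function names and sizes are illustrative.

```python
import numpy as np

def stratified_subspace(informativeness, n_groups, per_group, rng=None):
    """Pick a feature subspace by stratified sampling over informativeness bins.

    informativeness : array of per-SNP scores (e.g. single-SNP test statistics)
    n_groups        : number of equal-width bins
    per_group       : SNPs drawn from each bin
    Returns SNP indices forming the subspace for one tree.
    """
    rng = np.random.default_rng(rng)
    scores = np.asarray(informativeness, float)
    edges = np.linspace(scores.min(), scores.max(), n_groups + 1)
    groups = np.clip(np.digitize(scores, edges[1:-1]), 0, n_groups - 1)
    subspace = []
    for g in range(n_groups):
        members = np.flatnonzero(groups == g)
        if members.size:
            take = min(per_group, members.size)
            subspace.extend(rng.choice(members, size=take, replace=False))
    return np.array(subspace)

# Example: 10,000 SNPs, 5 informativeness bins, 20 SNPs per bin per tree.
scores = np.random.default_rng(0).exponential(size=10_000)
print(stratified_subspace(scores, n_groups=5, per_group=20, rng=1)[:10])
```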

  3. Analysing designed experiments in distance sampling

    Science.gov (United States)

    Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block

    2009-01-01

    Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...

  4. Impact of spatial variability and sampling design on model performance

    Science.gov (United States)

    Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Many environmental physical and chemical parameters as well as species distributions display spatial variability at different scales. When measurements are very costly in labour time or money, a choice has to be made between a high sampling resolution at small scales with low spatial cover of the study area, and a lower sampling resolution at small scales, which gives better spatial cover of the whole area at the price of local data uncertainties. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. To this end, we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements divided in a spatially nested sampling design over these fields, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results of the models show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: the models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. With

  5. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    Science.gov (United States)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter. It can also model the stochastic information exchange among decentralized controllers, to name just a few examples. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite-horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.

  6. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    As a popular simulation of photon propagation in turbid media, the main problem of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify the multistep scattering process to a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast alternative to the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. Also, we present a reconstruction approach to estimate the position of the fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.

  7. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  8. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
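
    A toy version of the systematic random sampling (SYS) design evaluated above: split the fishing day into equal time blocks, pick one random start, count anglers in every k-th block, and expand the mean instantaneous count to angler-hours. The counts and interval are illustrative, not from the study.

```python
import random

def sys_effort_estimate(block_counts, k, block_hours, seed=None):
    """Systematic random sampling of time blocks to estimate angler effort.

    block_counts : instantaneous angler counts for every block of the day
                   (known here only so the toy example can be 'observed')
    k            : sampling interval in blocks
    block_hours  : duration of one block in hours
    """
    rng = random.Random(seed)
    start = rng.randrange(k)                       # single random start
    sampled = block_counts[start::k]               # blocks actually counted
    mean_count = sum(sampled) / len(sampled)
    # Expand the mean instantaneous count over the whole fishing day.
    return mean_count * len(block_counts) * block_hours

# Toy 12-hour day split into 24 half-hour blocks, sampled every 4th block.
day = [0, 1, 2, 4, 6, 9, 12, 14, 15, 16, 15, 13, 12, 11, 10, 9, 8, 7, 5, 4, 3, 2, 1, 0]
print(round(sys_effort_estimate(day, k=4, block_hours=0.5, seed=7), 1), "angler-hours")
```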

  9. A comparison of methods for representing sparsely sampled random quantities.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua

    2013-09-01

    This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical tolerance intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.

  10. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
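
    The filtering step described above amounts to uniform subsampling of the candidate modes produced at each iteration. A minimal sketch of that idea (not the emsampler implementation; the function name and cap are hypothetical) is:

    ```python
    # Minimal sketch of the filtering idea described above: at each iteration,
    # keep only a uniformly random subset of the newly generated candidate modes
    # so the working set cannot grow combinatorially.
    import random

    def filter_candidates(candidates, max_keep, rng=random):
        """Return a uniform random subset of the candidate modes (hypothetical helper)."""
        if len(candidates) <= max_keep:
            return list(candidates)
        return rng.sample(list(candidates), max_keep)   # every candidate equally likely to be kept

    # usage: new_modes = filter_candidates(new_combinations, max_keep=1000)
    ```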

  11. Software package for analysis of completely randomized block design

    African Journals Online (AJOL)

    This study is to design and develop statistical software (package), OYSP1.0 which conveniently accommodates and analyzes large mass of data emanating from experimental designs, in particular, completely Randomized Block design. Visual Basic programming is used in the design. The statistical package OYSP 1.0 ...

  12. Power and sample size calculations for Mendelian randomization studies using one genetic instrument.

    Science.gov (United States)

    Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary

    2013-08-01

    Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size.
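
    The proportionalities stated in the last sentence correspond, for a standardized exposure, to a sample size formula of roughly the following form (a reconstruction consistent with the abstract, not a quotation of the paper's exact expression):

    ```latex
    % N: required sample size; z terms: standard normal quantiles for the chosen
    % significance level and power; sigma^2_{Y|X}: residual variance of the outcome
    % after removing the effect of the exposure; beta_{XY}: causal effect of the
    % exposure on the outcome; rho_{GX}: instrument-exposure correlation.
    N \;\approx\; \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}\,\sigma^{2}_{Y\mid X}}
                       {\beta_{XY}^{2}\,\rho_{GX}^{2}}
    ```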

  13. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    Science.gov (United States)

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to attribute newly recruited patients to the treatment arms more efficiently. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check how changes in the prior distributions influence the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can achieve greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
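
    For orientation, the fixed-design (non-Bayesian) answer to "allocate so as to minimize the variance of the test statistic" is the classical Neyman allocation shown below; the paper's adaptive rate is computed iteratively from posterior quantities rather than from this closed form.

    ```latex
    % Fraction of patients assigned to arm 1 that minimizes Var(\bar{Y}_1 - \bar{Y}_2)
    % for a fixed total sample size, where \sigma_1, \sigma_2 are the arm standard deviations.
    r_1 \;=\; \frac{\sigma_1}{\sigma_1 + \sigma_2}, \qquad r_2 \;=\; 1 - r_1
    ```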

  14. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    Science.gov (United States)

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 x 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators (v) that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with

  15. Randomized controlled trials 5: Determining the sample size and power for clinical trials and cohort studies.

    Science.gov (United States)

    Greene, Tom

    2015-01-01

    Performing well-powered randomized controlled trials is of fundamental importance in clinical research. The goal of sample size calculations is to assure that statistical power is acceptable while maintaining a small probability of a type I error. This chapter overviews the fundamentals of sample size calculation for standard types of outcomes for two-group studies. It considers (1) the problems of determining the size of the treatment effect that the studies will be designed to detect, (2) the modifications to sample size calculations to account for loss to follow-up and nonadherence, (3) the options when initial calculations indicate that the feasible sample size is insufficient to provide adequate power, and (4) the implication of using multiple primary endpoints. Sample size estimates for longitudinal cohort studies must take account of confounding by baseline factors.
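
    As a concrete reference point for the overview above, the textbook per-group sample size for comparing two means, together with the usual inflation for loss to follow-up, is (standard formula, not quoted from the chapter):

    ```latex
    % n per group for a two-sided test at level alpha with power 1 - beta,
    % common standard deviation sigma, detectable difference Delta,
    % and anticipated dropout/nonadherence fraction d.
    n \;=\; \frac{2\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}\sigma^{2}}{\Delta^{2}},
    \qquad
    n_{\text{adjusted}} \;=\; \frac{n}{1-d}
    ```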

  16. Optimisation of sampling windows design for population pharmacokinetic experiments.

    Science.gov (United States)

    Ogungbenro, Kayode; Aarons, Leon

    2008-08-01

    This paper describes an approach for optimising sampling windows for population pharmacokinetic experiments. Sampling windows designs are more practical in late phase drug development where patients are enrolled in many centres and in out-patient clinic settings. Collection of samples under the uncontrolled environment at these centres at fixed times may be problematic and can result in uninformative data. Population pharmacokinetic sampling windows design provides an opportunity to control when samples are collected by allowing some flexibility and yet provide satisfactory parameter estimation. This approach uses information obtained from previous experiments about the model and parameter estimates to optimise sampling windows for population pharmacokinetic experiments within a space of admissible sampling windows sequences. The optimisation is based on a continuous design and in addition to sampling windows the structure of the population design in terms of the proportion of subjects in elementary designs, number of elementary designs in the population design and number of sampling windows per elementary design is also optimised. The results obtained showed that optimal sampling windows designs obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in terms of when samples are collected. The results obtained also showed that the generalized equivalence theorem holds for this approach.

  17. Description of sampling designs using a comprehensive data structure

    Science.gov (United States)

    John C. Byrne; Albert R. Stage

    1988-01-01

    Maintaining permanent plot data with different sampling designs over long periods within an organization, as well as sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is based on the definition of subpopulations of trees sampled, the rules...

  18. On analysis-based two-step interpolation methods for randomly sampled seismic data

    Science.gov (United States)

    Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao

    2013-02-01

    Interpolating the missing traces of regularly or irregularly sampled seismic record is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform domain sparsity of the few randomly recorded but informative seismic data using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms, the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm from the IST (iterative shrinkage-thresholding) algorithm and the POCS (projection onto convex sets) algorithm. A MATLAB package is developed for the implementation of these thresholding-related interpolation methods. Based on this package, we compare the reconstruction performance of these algorithms, using synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
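
    The thresholding idea behind these methods can be illustrated with a deliberately simple synthesis-form IST loop on a 1-D signal with Fourier-domain sparsity. This sketch is not the authors' analysis-based FISTA/FPOCS package; the signal, sampling mask, and threshold schedule are arbitrary choices for illustration.

    ```python
    # Plain iterative soft-thresholding (IST) interpolation of a randomly sampled
    # 1-D signal using Fourier-domain sparsity; illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 256
    t = np.arange(n)
    signal = np.cos(2 * np.pi * 0.05 * t) + 0.5 * np.cos(2 * np.pi * 0.12 * t)  # synthetic "trace"
    mask = rng.random(n) < 0.4                      # ~40% of samples randomly recorded
    observed = signal * mask

    coeffs = np.zeros(n, dtype=complex)             # Fourier coefficients of the estimate
    for k in range(200):
        estimate = np.fft.ifft(coeffs).real
        residual = mask * (observed - estimate)     # misfit on the recorded samples only
        coeffs = coeffs + np.fft.fft(residual)      # gradient step (unit step is stable here)
        lam = (1 - k / 200) * np.quantile(np.abs(coeffs), 0.95)   # linearly decreasing threshold
        mag = np.maximum(np.abs(coeffs), 1e-12)
        coeffs = coeffs * np.maximum(0.0, 1.0 - lam / mag)        # complex soft threshold

    recon = np.fft.ifft(coeffs).real
    print("relative error:", np.linalg.norm(recon - signal) / np.linalg.norm(signal))
    ```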

  19. Hemodynamic and glucometabolic factors fail to predict renal function in a random population sample

    DEFF Research Database (Denmark)

    Pareek, M.; Nielsen, M.; Olesen, Thomas Bastholm

    2015-01-01

    Objective: To determine whether baseline hemodynamic and/or glucometabolic risk factors could predict renal function at follow-up, independently of baseline serum creatinine, in survivors from a random population sample. Design and method: We examined associations between baseline serum creatinine...... indices of beta-cell function (HOMA-2B), insulin sensitivity (HOMA-2S), and insulin resistance (HOMA-2IR)), traditional cardiovascular risk factors (age, sex, smoking status, body mass index, diabetes mellitus, total serum cholesterol), and later renal function determined as serum cystatin C in 238 men...... and 7 women aged 38 to 49 years at the time of inclusion, using multivariable linear regression analysis (p-entry 0.05, p-removal 0.20). Study subjects came from a random population based sample and were included 1974-1992, whilst the follow-up with cystatin C measurement was performed 2002...

  20. Random sampling for a mental health survey in a deprived multi-ethnic area of Berlin.

    Science.gov (United States)

    Mundt, Adrian P; Aichberger, Marion C; Kliewe, Thomas; Ignatyev, Yuriy; Yayla, Seda; Heimann, Hannah; Schouler-Ocak, Meryam; Busch, Markus; Rapp, Michael; Heinz, Andreas; Ströhle, Andreas

    2012-12-01

    The aim of the study was to assess the response to random sampling for a mental health survey in a deprived multi-ethnic area of Berlin, Germany, with a large Turkish-speaking population. A random list from the registration office with 1,000 persons stratified by age and gender was retrieved from the population registry and these persons were contacted using a three-stage design including written information, telephone calls and personal contact at home. A female bilingual interviewer contacted persons with Turkish names. Of the persons on the list, 202 were not living in the area, one was deceased, 502 did not respond. Of the 295 responders, 152 explicitly refused (51.5%) to participate. We retained a sample of 143 participants (48.5%) representing the rate of multi-ethnicity in the area (52.1% migrants in the sample vs. 53.5% in the population). Turkish migrants were over-represented (28.9% in the sample vs. 18.6% in the population). Polish migrants (2.1 vs. 5.3% in the population) and persons from the former Yugoslavia (1.4 vs. 4.8% in the population) were under-represented. Bilingual contact procedures can improve the response rates of the most common migrant populations to random sampling if migrants of the same origin gate the contact. High non-contact and non-response rates for migrant and non-migrant populations in deprived urban areas remain a challenge for obtaining representative random samples.

  1. Optimal flexible sample size design with robust power.

    Science.gov (United States)

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  2. A spatially balanced design with probability function proportional to the within sample distance.

    Science.gov (United States)

    Benedetti, Roberto; Piersimoni, Federica

    2017-09-01

    The units observed in a biological, agricultural, or environmental survey are often randomly selected from a finite population whose main feature is that it is geo-referenced; its spatial distribution should therefore be used as essential information in designing the sample. In particular, our interest is focused on probability samples that are well spread over the population in every dimension, which the recent literature defines as spatially balanced samples. To approach the problem we used the within-sample distance as the summary index of the spatial distribution of a random selection criterion. Moreover, numerical comparisons are made of the relative efficiency, measured with respect to simple random sampling, of the suggested design and some classical alternatives such as the Generalized Random Tessellation Stratified (GRTS) design used by the US Environmental Protection Agency (EPA) and other balanced or spatially balanced selection procedures such as Spatially Correlated Poisson Sampling (SCPS), balanced sampling (CUBE), and the Local Pivotal Method (LPM). These experiments on real and simulated data show that the design based on the within-sample distance selects samples with better spatial balance and thus gives estimates with lower sampling error than those obtained with the other methods. The suggested method is very flexible with respect to the introduction of stratification and sample coordination and, although computationally intensive by nature, is shown to be a suitable solution even when dealing with high sampling rates and large population frames, where the main problem arises from the size of the distance matrix. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
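
    A naive way to see how a within-sample distance index can drive selection is to draw many candidate samples and pick one with probability proportional to that index. This is only a brute-force caricature of the design described above, not the authors' algorithm, and the population, sample size, and number of candidates below are made up for illustration.

    ```python
    # Brute-force sketch: favour spatially spread samples by selecting among random
    # candidate samples with probability proportional to the within-sample
    # (summed pairwise) distance.
    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(2)
    coords = rng.random((500, 2))              # geo-referenced population of 500 units
    n, n_candidates = 20, 2000

    candidates = [rng.choice(len(coords), n, replace=False) for _ in range(n_candidates)]
    index = np.array([pdist(coords[idx]).sum() for idx in candidates])   # within-sample distance
    probs = index / index.sum()
    chosen = candidates[rng.choice(n_candidates, p=probs)]
    print("selected units:", sorted(chosen))
    ```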

  3. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

    A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfy specific design criteria. The new design method will graphically show how the discrete...

  4. The design effect and cluster samples: optimising tuberculosis prevalence surveys

    NARCIS (Netherlands)

    Williams, B.; Gopi, P. G.; Borgdorff, M. W.; Yamada, N.; Dye, C.

    2008-01-01

    Cross-sectional surveys of disease prevalence, including for tuberculosis (TB), often use a two (or more) stage sampling procedure. By choosing clusters of people randomly from all possible clusters, the logistic costs of doing the survey can be reduced. However, this increases the statistical
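
    The record is cut off, but the quantity at issue, the design effect of cluster sampling relative to simple random sampling, is commonly approximated by the standard expression below (a general textbook result, not a formula quoted from this paper):

    ```latex
    % DEFF: variance inflation of a cluster design relative to simple random
    % sampling of the same total size; \bar{m}: average cluster size;
    % \rho: intra-cluster correlation coefficient.
    \mathrm{DEFF} \;=\; \frac{\operatorname{Var}_{\text{cluster}}(\hat{\theta})}
                             {\operatorname{Var}_{\text{SRS}}(\hat{\theta})}
    \;\approx\; 1 + (\bar{m} - 1)\,\rho
    ```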

  5. Sample size calculations for 3-level cluster randomized trials

    NARCIS (Netherlands)

    Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.

    2008-01-01

    BACKGROUND: The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health

  7. [Comparison of the designing effects (DE) among different designs related to complex sampling methods].

    Science.gov (United States)

    Wang, Jian-Sheng; Feng, Guo-Shuang; Yu, Shi-Cheng; Ma, Lin-Mao; Zhou, Mai-Geng; Liu, Shi-Yao

    2012-10-01

    To compare the design effects (DE) of different complex sampling design programs, data from the 2002 Chinese Nutrition and Health Survey were used as an example to generate the sampling population, and statistical simulation was used to estimate the DE values of six complex sampling design programs. The DE values varied among the six programs: they increased with larger sample sizes, more sampling stages, and fewer stratification categories. Reducing the number of sampling stages and refining the stratification categories decreased the DE values and thereby improved the design efficiency.

  8. Complex sampling designs for the Customer Satisfaction Index estimation

    Directory of Open Access Journals (Sweden)

    Tonio Di Battista

    2013-05-01

    Full Text Available In this paper we focus on sampling designs best suited to meeting the needs of Customer Satisfaction (CS) assessment, with particular attention being paid to adaptive sampling, which may be useful. Complex sampling designs are illustrated in order to build CS indices that may be used for inference purposes. When the phenomenon of satisfaction is rare, adaptive designs can produce gains in efficiency, relative to conventional designs, for estimating the population parameters. For such sampling designs, nonlinear estimators may be used to estimate customer satisfaction indices; these estimators are generally biased and their variance may not be obtainable in closed form. Delta, jackknife, and bootstrap procedures are introduced in order to reduce bias and estimate variance. The paper ends with a simulation study to estimate the variance of the proposed estimator.
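
    A delete-one jackknife of the kind mentioned above can be sketched for a simple nonlinear (ratio-type) index. The data, weights, and index definition below are hypothetical placeholders, not the paper's CS index or its complex-design weighting.

    ```python
    # Minimal delete-1 jackknife for a nonlinear (ratio-type) index: bias and
    # variance estimation; illustrative placeholder data only.
    import numpy as np

    rng = np.random.default_rng(3)
    scores = rng.integers(1, 6, size=80).astype(float)      # hypothetical satisfaction scores (1-5)
    weights = rng.random(80) + 0.5                          # hypothetical survey weights

    def cs_index(s, w):
        return np.sum(w * (s >= 4)) / np.sum(w)             # weighted share of satisfied customers

    theta_hat = cs_index(scores, weights)
    theta_i = np.array([cs_index(np.delete(scores, i), np.delete(weights, i))
                        for i in range(len(scores))])
    n = len(scores)
    jack_var = (n - 1) / n * np.sum((theta_i - theta_i.mean()) ** 2)
    jack_bias = (n - 1) * (theta_i.mean() - theta_hat)
    print("estimate:", theta_hat, "jackknife SE:", jack_var ** 0.5, "bias estimate:", jack_bias)
    ```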

  9. Improved estimator of finite population mean using auxiliary attribute in stratified random sampling

    OpenAIRE

    Verma, Hemant K.; Sharma, Prayas; Singh, Rajesh

    2014-01-01

    The present study discuss the problem of estimating the finite population mean using auxiliary attribute in stratified random sampling. In this paper taking the advantage of point bi-serial correlation between the study variable and auxiliary attribute, we have improved the estimation of population mean in stratified random sampling. The expressions for Bias and Mean square error have been derived under stratified random sampling. In addition, an empirical study has been carried out to examin...

  10. A Family of Estimators of a Sensitive Variable Using Auxiliary Information in Stratified Random Sampling

    Directory of Open Access Journals (Sweden)

    Nadia Mushtaq

    2017-03-01

    Full Text Available In this article, a combined general family of estimators is proposed for estimating the finite population mean of a sensitive variable in stratified random sampling with a non-sensitive auxiliary variable, based on the randomized response technique. Under the stratified random sampling without replacement scheme, the expressions for bias and mean square error (MSE) up to the first-order approximation are derived. Theoretical and empirical results through a simulation study show that the proposed class of estimators is more efficient than the existing estimators, i.e., the usual stratified random sample mean estimator and the Sousa et al. (2014) ratio and regression estimators of the sensitive variable in stratified sampling.

  11. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
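
    For reference, the method-of-moments (Matheron) variogram estimator that the study benchmarks can be written in a few lines. The locations and skewed values below are simulated placeholders; the robust estimators and residual maximum likelihood fitting compared in the paper are not reproduced.

    ```python
    # Method-of-moments (Matheron) empirical semivariogram with distance binning;
    # illustrative placeholder data only.
    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(4)
    xy = rng.random((150, 2)) * 50.0                    # hypothetical sampling locations (m)
    z = rng.lognormal(mean=1.0, sigma=0.6, size=150)    # skewed, outlier-prone values

    h = pdist(xy)                                       # pairwise distances
    g = 0.5 * pdist(z[:, None], metric="sqeuclidean")   # 0.5 * (z_i - z_j)^2 per pair
    bins = np.linspace(0, 25, 11)
    which = np.digitize(h, bins)
    for b in range(1, len(bins)):
        sel = which == b
        if sel.any():
            print(f"lag {bins[b-1]:5.1f}-{bins[b]:5.1f} m  "
                  f"pairs={sel.sum():4d}  semivariance={g[sel].mean():.3f}")
    ```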

  12. Design of a gravity corer for near shore sediment sampling

    Digital Repository Service at National Institute of Oceanography (India)

    Bhat, S.T.; Sonawane, A.V.; Nayak, B.U.

    For the purpose of geotechnical investigation a gravity corer has been designed and fabricated to obtain undisturbed sediment core samples from near shore waters. The corer was successfully operated at 75 stations up to water depth 30 m. Simplicity...

  13. Precision and cost considerations for two-stage sampling in a panelized forest inventory design.

    Science.gov (United States)

    Westfall, James A; Lister, Andrew J; Scott, Charles T

    2016-01-01

    Due to the relatively high cost of measuring sample plots in forest inventories, considerable attention is given to sampling and plot designs during the forest inventory planning phase. A two-stage design can be efficient from a field work perspective as spatially proximate plots are grouped into work zones. A comparison between subsampling with units of unequal size (SUUS) and a simple random sample (SRS) design in a panelized framework assessed the statistical and economic implications of using the SUUS design for a case study in the Northeastern USA. The sampling errors for estimates of forest land area and biomass were approximately 1.5-2.2 times larger with SUUS prior to completion of the inventory cycle. Considerable sampling error reductions were realized by using the zones within a post-stratified sampling paradigm; however, post-stratification of plots in the SRS design always provided smaller sampling errors in comparison. Cost differences between the two designs indicated the SUUS design could reduce the field work expense by 2-7 %. The results also suggest the SUUS design may provide substantial economic advantage for tropical forest inventories, where remote areas, poor access, and lower wages are typically encountered.

  14. Computer Corner: A Note on Pascal's Triangle and Simple Random Sampling.

    Science.gov (United States)

    Wright, Tommy

    1989-01-01

    Describes the algorithm used to select a simple random sample of certain size without having to list all possible samples and a justification based on Pascal's triangle. Provides testing results by various computers. (YP)
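
    The kind of algorithm the note describes, drawing a simple random sample of size n without enumerating all C(N, n) possible samples, can be sketched as sequential selection sampling (the classic approach; the note's Pascal's-triangle justification itself is not reproduced here).

    ```python
    # Sequential selection sampling: each unit is included with probability
    # (units still needed) / (units still available), which yields a simple
    # random sample without replacement.
    import random

    def selection_sample(population, n, rng=random):
        sample, remaining_to_pick = [], n
        remaining_units = len(population)
        for unit in population:
            if rng.random() < remaining_to_pick / remaining_units:
                sample.append(unit)
                remaining_to_pick -= 1
            remaining_units -= 1
            if remaining_to_pick == 0:
                break
        return sample

    print(selection_sample(list(range(1, 101)), 10))
    ```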

  15. Spatial Sampling Design for Estimating Regional GPP With Spatial Heterogeneities

    NARCIS (Netherlands)

    Wang, J.H.; Ge, Y.; Heuvelink, G.B.M.; Zhou, C.H.

    2014-01-01

    The estimation of regional gross primary production (GPP) is a crucial issue in carbon cycle studies. One commonly used way to estimate the characteristics of GPP is to infer the total amount of GPP by collecting field samples. In this process, the spatial sampling design will affect the error

  16. Designing and testing the representative samplers for sampling a ...

    African Journals Online (AJOL)

    establishing the optimum mesh of grind for the various ores, to achieve effective separation of the cobalt minerals from those of copper. This prompted the design and testing of representative samplers for sampling the milling circuit at Nkana Concentrator. In the design of the samplers, use was made of Gy's formula to ...

  17. Nicotine therapy sampling to induce quit attempts among smokers unmotivated to quit: a randomized clinical trial.

    Science.gov (United States)

    Carpenter, Matthew J; Hughes, John R; Gray, Kevin M; Wahlquist, Amy E; Saladin, Michael E; Alberg, Anthony J

    2011-11-28

    Rates of smoking cessation have not changed in a decade, accentuating the need for novel approaches to prompt quit attempts. Within a nationwide randomized clinical trial (N = 849) to induce further quit attempts and cessation, smokers currently unmotivated to quit were randomized to a practice quit attempt (PQA) alone or to nicotine replacement therapy (hereafter referred to as nicotine therapy), sampling within the context of a PQA. Following a 6-week intervention period, participants were followed up for 6 months to assess outcomes. The PQA intervention was designed to increase motivation, confidence, and coping skills. The combination of a PQA plus nicotine therapy sampling added samples of nicotine lozenges to enhance attitudes toward pharmacotherapy and to promote the use of additional cessation resources. Primary outcomes included the incidence of any ever occurring self-defined quit attempt and 24-hour quit attempt. Secondary measures included 7-day point prevalence abstinence at any time during the study (ie, floating abstinence) and at the final follow-up assessment. Compared with PQA intervention, nicotine therapy sampling was associated with a significantly higher incidence of any quit attempt (49% vs 40%; relative risk [RR], 1.2; 95% CI, 1.1-1.4) and any 24-hour quit attempt (43% vs 34%; 1.3; 1.1-1.5). Nicotine therapy sampling was marginally more likely to promote floating abstinence (19% vs 15%; RR, 1.3; 95% CI, 1.0-1.7); 6-month point prevalence abstinence rates were no different between groups (16% vs 14%; 1.2; 0.9-1.6). Nicotine therapy sampling during a PQA represents a novel strategy to motivate smokers to make a quit attempt. clinicaltrials.gov Identifier: NCT00706979.

  18. Modal generation of the design parameters of an elastic spacecraft by the random search method

    Science.gov (United States)

    Titov, B. A.

    A method for the modal generation of the dynamic properties of an elastic spacecraft is proposed which is based on algorithms of random search in the space of design parameters. A practical implementation of this approach is illustrated by an example. It is shown that the modal parameter generation procedure based on the random search solves the problem of parameter selection. However, as in any other method, the accuracy of the computation of matrix elements is largely determined by the initial set of permissible values and the number of random samples in determining the subgradient of the objective function.

  19. Exponential ratio-product type estimators under second order approximation in stratified random sampling

    OpenAIRE

    Singh, Rajesh; Sharma, Prayas; Smarandache, Florentin

    2014-01-01

    Singh et al. (2009) introduced a family of exponential ratio- and product-type estimators in stratified random sampling. Under the stratified random sampling without replacement scheme, the expressions for the bias and mean square error (MSE) of the Singh et al. (2009) estimators and some other estimators are derived up to the first- and second-order approximations. The theoretical findings are also supported by a numerical example.

  20. Synchronized Lunar Pole Impact Plume Sample Return Trajectory Design

    Science.gov (United States)

    Genova, Anthony L.; Foster, Cyrus; Colaprete, Tony

    2016-01-01

    The presented trajectory design enables two maneuverable spacecraft launched onto the same trans-lunar injection trajectory to coordinate a steep impact of a lunar pole and subsequent sample return of the ejecta plume to Earth. To demonstrate this concept, the impactor is assumed to use the LCROSS mission's trajectory and spacecraft architecture; thus the permanently shadowed Cabeus crater on the lunar south pole is assumed as the impact site. The sample-return spacecraft is assumed to be a CubeSat that requires a complementary trajectory design that avoids lunar impact after passing through the ejecta plume to enable sample return to Earth via atmospheric entry.

  1. Sample design for the residential energy consumption survey

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-01

    The purpose of this report is to provide detailed information about the multistage area-probability sample design used for the Residential Energy Consumption Survey (RECS). It is intended as a technical report, for use by statisticians, to better understand the theory and procedures followed in the creation of the RECS sample frame. For a more cursory overview of the RECS sample design, refer to the appendix entitled "How the Survey was Conducted," which is included in the statistical reports produced for each RECS survey year.

  2. Analysis of Covariance and Randomized Block Design with Heterogeneous Slopes.

    Science.gov (United States)

    Klockars, Alan J.; Beretvas, S. Natasha

    2001-01-01

    Compared the Type I error rate and the power to detect differences in slopes and additive treatment effects of analysis of covariance (ANCOVA) and randomized block designs through a Monte Carlo simulation. Results show that the more powerful option in almost all simulations for tests of both slope and means was ANCOVA. (SLD)

  3. Query-Based Sampling: Can we do Better than Random?

    NARCIS (Netherlands)

    Tigelaar, A.S.; Hiemstra, Djoerd

    2010-01-01

    Many servers on the web offer content that is only accessible via a search interface. These are part of the deep web. Using conventional crawling to index the content of these remote servers is impossible without some form of cooperation. Query-based sampling provides an alternative to crawling

  4. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  5. Experimental Design for the INL Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated. The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence

  6. Statistical inferences for data from studies conducted with an aggregated multivariate outcome-dependent sample design.

    Science.gov (United States)

    Lu, Tsui-Shan; Longnecker, Matthew P; Zhou, Haibo

    2017-03-15

    An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known examples are the case-control design for a binary response, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate response case, statistical inference and design for ODS with multivariate responses remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric, with all the underlying distributions of covariates modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample of the same size. The multivariate-ODS design together with the proposed estimator provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association between polychlorinated biphenyl exposure and hearing loss in children born into the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster

    DEFF Research Database (Denmark)

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column and dimini...... and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila....

  8. Estimating the prevalence of sensitive behaviour and cheating with a dual design for direct questioning and randomized response.

    Science.gov (United States)

    van den Hout, Ardo; Böckenholt, Ulf; van der Heijden, Peter G M

    2010-08-01

    Randomized response is a misclassification design to estimate the prevalence of sensitive behaviour. Respondents who do not follow the instructions of the design are considered to be cheating. A mixture model is proposed to estimate the prevalence of sensitive behaviour and cheating in the case of a dual sampling scheme with direct questioning and randomized response. The mixing weight is the probability of cheating, where cheating is modelled separately for direct questioning and randomized response. For Bayesian inference, Markov chain Monte Carlo sampling is applied to sample parameter values from the posterior. The model makes it possible to analyse dual sample scheme data in a unified way and to assess cheating for direct questions as well as for randomized response questions. The research is illustrated with randomized response data concerning violations of regulations for social benefit.
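
    For orientation, the basic Warner randomized-response estimator that underlies designs of this kind is given below; the paper's mixture model additionally estimates a cheating probability and combines direct questioning with randomized response, which this simple form does not capture.

    ```latex
    % Warner's model: each respondent answers the sensitive statement with design
    % probability p and its complement otherwise; \hat{\lambda} is the observed
    % proportion of "yes" answers among n respondents, and \pi the prevalence.
    \hat{\pi} \;=\; \frac{\hat{\lambda} - (1 - p)}{2p - 1},
    \qquad
    \operatorname{Var}(\hat{\pi}) \;=\; \frac{\pi(1-\pi)}{n} \;+\; \frac{p(1-p)}{n\,(2p-1)^{2}}
    ```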

  9. Concurrent design of quasi-random photonic nanostructures

    Science.gov (United States)

    Lee, Won-Kyu; Yu, Shuangcheng; Engel, Clifford J.; Reese, Thaddeus; Rhee, Dongjoon; Chen, Wei; Odom, Teri W.

    2017-08-01

    Nanostructured surfaces with quasi-random geometries can manipulate light over broadband wavelengths and wide ranges of angles. Optimization and realization of stochastic patterns have typically relied on serial, direct-write fabrication methods combined with real-space design. However, this approach is not suitable for customizable features or scalable nanomanufacturing. Moreover, trial-and-error processing cannot guarantee fabrication feasibility because processing-structure relations are not included in conventional designs. Here, we report wrinkle lithography integrated with concurrent design to produce quasi-random nanostructures in amorphous silicon at wafer scales that achieved over 160% light absorption enhancement from 800 to 1,200 nm. The quasi-periodicity of patterns, materials filling ratio, and feature depths could be independently controlled. We statistically represented the quasi-random patterns by Fourier spectral density functions (SDFs) that could bridge the processing-structure and structure-performance relations. Iterative search of the optimal structure via the SDF representation enabled concurrent design of nanostructures and processing.

  10. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  11. A descriptive analysis of a representative sample of pediatric randomized controlled trials published in 2007

    Directory of Open Access Journals (Sweden)

    Thomson Denise

    2010-12-01

    Full Text Available Abstract Background Randomized controlled trials (RCTs) are the gold standard for trials assessing the effects of therapeutic interventions; therefore it is important to understand how they are conducted. Our objectives were to provide an overview of a representative sample of pediatric RCTs published in 2007 and assess the validity of their results. Methods We searched Cochrane Central Register of Controlled Trials using a pediatric filter and randomly selected 300 RCTs published in 2007. We extracted data on trial characteristics; outcomes; methodological quality; reporting; and registration and protocol characteristics. Trial registration and protocol availability were determined for each study based on the publication, an Internet search and an author survey. Results Most studies (83%) were efficacy trials, 40% evaluated drugs, and 30% were placebo-controlled. Primary outcomes were specified in 41%; 43% reported on adverse events. At least one statistically significant outcome was reported in 77% of trials; 63% favored the treatment group. Trial registration was declared in 12% of publications and 23% were found through an Internet search. Risk of bias (ROB) was high in 59% of trials, unclear in 33%, and low in 8%. Registered trials were more likely to have low ROB than non-registered trials (16% vs. 5%; p = 0.008). Effect sizes tended to be larger for trials at high vs. low ROB (0.28, 95% CI 0.21, 0.35 vs. 0.16, 95% CI 0.07, 0.25). Among survey respondents (50% response rate), the most common reason for trial registration was a publication requirement and for non-registration, a lack of familiarity with the process. Conclusions More than half of this random sample of pediatric RCTs published in 2007 was at high ROB and three quarters of trials were not registered. There is an urgent need to improve the design, conduct, and reporting of child health research.

  12. Parameter Estimation in Stratified Cluster Sampling under Randomized Response Models for Sensitive Question Survey.

    Science.gov (United States)

    Pu, Xiangke; Gao, Ge; Fan, Yubo; Wang, Mian

    2016-01-01

    Randomized response is a research method for obtaining accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large target populations. On the other hand, more sophisticated sampling regimes and the corresponding formulas are seldom employed in sensitive-question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models by using classic sampling theories and total probability formulas. The performance of the sampling methods and formulas in a survey of premarital sex and cheating on exams at Soochow University is also reported. The reliability of the survey methods and formulas for sensitive-question surveys was found to be high.

  13. Parameter Estimation in Stratified Cluster Sampling under Randomized Response Models for Sensitive Question Survey.

    Directory of Open Access Journals (Sweden)

    Xiangke Pu

    Full Text Available Randomized response is a research method for obtaining accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large target populations. On the other hand, more sophisticated sampling regimes and the corresponding formulas are seldom employed in sensitive-question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models by using classic sampling theories and total probability formulas. The performance of the sampling methods and formulas in a survey of premarital sex and cheating on exams at Soochow University is also reported. The reliability of the survey methods and formulas for sensitive-question surveys was found to be high.

  14. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.

    Science.gov (United States)

    Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly

    2015-09-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.

  15. The OSIRIS-REx Asteroid Sample Return Mission Operations Design

    Science.gov (United States)

    Gal-Edd, Jonathan S.; Cheuvront, Allan

    2015-01-01

    OSIRIS-REx is an acronym that captures the scientific objectives: Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer. OSIRIS-REx will thoroughly characterize near-Earth asteroid Bennu (previously known as 101955 (1999 RQ36)). The OSIRIS-REx Asteroid Sample Return Mission delivers its science using five instruments and radio science along with the Touch-And-Go Sample Acquisition Mechanism (TAGSAM). All of the instruments and data analysis techniques have direct heritage from flown planetary missions. The OSIRIS-REx mission employs a methodical, phased approach to ensure success in meeting the mission's science requirements. OSIRIS-REx launches in September 2016, with a backup launch period occurring one year later. Sampling occurs in 2019. The departure burn from Bennu occurs in March 2021. On September 24, 2023, the Sample Return Capsule (SRC) lands at the Utah Test and Training Range (UTTR). Stardust heritage procedures are followed to transport the SRC to Johnson Space Center, where the samples are removed and delivered to the OSIRIS-REx curation facility. After a six-month preliminary examination period the mission will produce a catalog of the returned sample, allowing the worldwide community to request samples for detailed analysis. Traveling to and returning a sample from an asteroid that has not been explored before requires unique operations considerations. The Design Reference Mission (DRM) ties together spacecraft, instrument and operations scenarios. Asteroid Touch and Go (TAG) has various options ranging from ground-only to fully automated (natural feature tracking). Spacecraft constraints such as thermal limits and high-gain antenna pointing impact the timeline. The mission is sensitive to navigation errors, so a late command update has been implemented. The project implemented lessons learned from other "small body" missions. The key lesson learned was 'expect the unexpected' and implement planning tools early in the lifecycle

  16. Latent spatial models and sampling design for landscape genetics

    Science.gov (United States)

    Ephraim M. Hanks; Melvin B. Hooten; Steven T. Knick; Sara J. Oyler-McCance; Jennifer A. Fike; Todd B. Cross; Michael K. Schwartz

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial...

  17. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    Science.gov (United States)

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  18. A Sample Handling System for Mars Sample Return - Design and Status

    Science.gov (United States)

    Allouis, E.; Renouf, I.; Deridder, M.; Vrancken, D.; Gelmi, R.; Re, E.

    2009-04-01

    A mission to return atmosphere and soil samples from Mars is highly desired by planetary scientists from around the world, and space agencies are starting preparations for the launch of a sample return mission in the 2020 timeframe. Such a mission would return approximately 500 grams of atmosphere, rock and soil samples to Earth by 2025. Development of a wide range of new technology will be critical to the successful implementation of such a challenging mission. Technical developments required to realise the mission include guided atmospheric entry, soft landing, sample handling robotics, biological sealing, Mars atmospheric ascent, sample rendezvous & capture, and Earth return. The European Space Agency has been performing system definition studies along with numerous technology development studies under the framework of the Aurora programme. Within the scope of these activities Astrium has been responsible for defining an overall sample handling architecture in collaboration with European partners (sample acquisition and sample capture, Galileo Avionica; sample containment and automated bio-sealing, Verhaert). Our work has focused on the definition and development of the robotic systems required to move the sample through the transfer chain. This paper presents the Astrium team's high level design for the surface transfer system and the orbiter transfer system. The surface transfer system is envisaged to use two robotic arms of different sizes to allow flexible operations and to enable sample transfer over relatively large distances (~2 to 3 metres): the first to deploy/retract the Drill Assembly used for sample collection, the second for the transfer of the Sample Container (the vessel containing all the collected samples) from the Drill Assembly to the Mars Ascent Vehicle (MAV). The sample transfer actuator also features a complex end-effector for handling the Sample Container. The orbiter transfer system will transfer the Sample Container from the capture

  19. Spatiotemporally Representative and Cost-Efficient Sampling Design for Validation Activities in Wanglang Experimental Site

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2017-11-01

    Full Text Available Spatiotemporally representative Elementary Sampling Units (ESUs) are required for capturing the temporal variations in surface spatial heterogeneity through field measurements. Since inaccessibility often coexists with heterogeneity, a cost-efficient sampling design is mandatory. We proposed a sampling strategy to generate spatiotemporally representative and cost-efficient ESUs based on the conditioned Latin hypercube sampling scheme. The proposed strategy was constrained by multi-temporal Normalized Difference Vegetation Index (NDVI) imagery, and the ESUs were limited to a sampling-feasible region established based on accessibility criteria. A novel criterion based on the Overlapping Area (OA) between the NDVI frequency distribution histogram from the sampled ESUs and that from the entire study area was used to assess the sampling efficiency. A case study in Wanglang National Nature Reserve in China showed that the proposed strategy improves the spatiotemporal representativeness of sampling (mean annual OA = 74.7%) compared to the single-temporally constrained (OA = 68.7%) and the random sampling (OA = 63.1%) strategies. The introduction of the feasible-region constraint significantly reduces labour-intensive in-situ characterization at the expense of about a 9% loss in the spatiotemporal representativeness of the sampling. Our study will support the validation activities in the Wanglang experimental site, providing a benchmark for locating the nodes of automatic observation systems (e.g., LAINet) which need a spatially distributed and temporally fixed sampling design.
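
    The Overlapping Area criterion used above can be computed directly from two relative-frequency histograms. The sketch below is a minimal illustration, not the authors' code; the beta-distributed stand-in NDVI values, bin count and sample size are placeholder assumptions.

```python
import numpy as np

def overlapping_area(sample_values, population_values, bins=20, value_range=(0.0, 1.0)):
    """Overlap between two relative-frequency histograms, expressed on a 0..1 scale."""
    s_hist, _ = np.histogram(sample_values, bins=bins, range=value_range)
    p_hist, _ = np.histogram(population_values, bins=bins, range=value_range)
    s_freq = s_hist / s_hist.sum()
    p_freq = p_hist / p_hist.sum()
    # OA is the summed bin-wise minimum of the two relative-frequency histograms
    return float(np.minimum(s_freq, p_freq).sum())

rng = np.random.default_rng(0)
ndvi_area = rng.beta(5, 2, size=10_000)      # stand-in for NDVI over the whole study area
ndvi_esus = rng.choice(ndvi_area, size=30)   # stand-in for NDVI at the sampled ESUs
print(f"OA = {overlapping_area(ndvi_esus, ndvi_area):.1%}")
```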

  20. A Family of Estimators of a Sensitive Variable Using Auxiliary Information in Stratified Random Sampling

    National Research Council Canada - National Science Library

    Nadia Mushtaq; Noor Ul Amin; Muhammad Hanif

    2017-01-01

    In this article, a combined general family of estimators is proposed for estimating finite population mean of a sensitive variable in stratified random sampling with non-sensitive auxiliary variable...

  1. Sampling designs matching species biology produce accurate and affordable abundance indices

    Science.gov (United States)

    Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff

    2013-01-01

    Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps by sampling only where resources attract animals (i.e., targeted sampling), it would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on global positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km2 cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which

  2. Sampling designs matching species biology produce accurate and affordable abundance indices

    Directory of Open Access Journals (Sweden)

    Grant Harris

    2013-12-01

    Full Text Available Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps by sampling only where resources attract animals (i.e., targeted sampling), it would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on global positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km2 cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture

  3. A New Estimator For Population Mean Using Two Auxiliary Variables in Stratified random Sampling

    OpenAIRE

    Singh, Rajesh; Malik, Sachin

    2014-01-01

    In this paper, we suggest an estimator using two auxiliary variables in stratified random sampling. The proposed estimator has an improvement over the mean per unit estimator as well as some other considered estimators. Expressions for the bias and MSE of the estimator are derived up to the first degree of approximation. Moreover, these theoretical findings are supported by a numerical example with original data. Key words: Study variable, auxiliary variable, stratified random sampling, bias and mean squa...

  4. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and used in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets, respectively; both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
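
    The effect described above, a stratified (balanced) validation set pulling the accuracy estimate away from the population value when classes differ in abundance, can be reproduced with a small simulation; the class proportions, per-class accuracies and sample sizes below are placeholder assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-class population with unequal abundance (90% class 0, 10% class 1)
labels = rng.choice([0, 1], size=50_000, p=[0.9, 0.1])
# Hypothetical classifier: 95% correct on the abundant class, 70% on the rare class
per_class_acc = {0: 0.95, 1: 0.70}
correct = np.array([rng.random() < per_class_acc[y] for y in labels])

def accuracy(idx):
    return correct[idx].mean()

# Simple random validation sample: reflects class abundance
srs_idx = rng.choice(len(labels), size=1_000, replace=False)

# Stratified validation sample: balanced classes (500 cases per class)
strat_idx = np.concatenate([
    rng.choice(np.flatnonzero(labels == c), size=500, replace=False) for c in (0, 1)
])

print(f"population accuracy  : {correct.mean():.3f}")
print(f"random validation    : {accuracy(srs_idx):.3f}")
print(f"stratified validation: {accuracy(strat_idx):.3f}")  # pulled toward the rare-class accuracy
```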

  5. Design Issues in Randomized Clinical Trials of Maintenance Therapies.

    Science.gov (United States)

    Freidlin, Boris; Little, Richard F; Korn, Edward L

    2015-11-01

    A potential therapeutic strategy for patients who respond (or have stable disease) on a fixed-duration induction therapy is to receive maintenance therapy, typically given for a prolonged period of time. To enable patients and clinicians to make informed treatment decisions, the designs of phase III randomized clinical trials (RCTs) assessing maintenance strategies need to be such that their results will provide clear assessment of the relevant risks and benefits of these strategies. We review the key aspects of maintenance RCT designs. Important design considerations include choice of first-line and second-line therapies, minimizing between-arm differences in follow-up schedules, and choice of the primary endpoint. In order to change clinical practice, RCTs should be designed to accurately isolate and quantify the clinical benefit of maintenance as compared with the standard approach of fixed-duration induction followed by the second-line treatment at progression. To accomplish this, RCTs need to utilize an overall survival (or quality of life) endpoint or, in settings where this is not feasible, endpoints that incorporate the effects of the subsequent line of therapy (eg, time from randomization to second progression or death). Toxicity and symptom information over both the study treatment (maintenance) and the second-line treatment should also be collected and reported. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  6. ACS sampling system: design, implementation, and performance evaluation

    Science.gov (United States)

    Di Marcantonio, Paolo; Cirami, Roberto; Chiozzi, Gianluca

    2004-09-01

    By means of the ACS (ALMA Common Software) framework, we designed and implemented a sampling system which allows sampling of every Characteristic Component Property with a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets with a lower and user-defined frequency to keep network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues we present the performance of the sampling system evaluated on two different platforms: on a VME-based system using the VxWorks RTOS (currently adopted by ALMA) and on a PC/104+ embedded platform using the Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low-cost PC-compatible hardware environment with a free and open operating system.

  7. The OSIRIS-Rex Asteroid Sample Return: Mission Operations Design

    Science.gov (United States)

    Gal-Edd, Jonathan; Cheuvront, Allan

    2014-01-01

    The OSIRIS-REx mission employs a methodical, phased approach to ensure success in meeting the mission's science requirements. OSIRIS-REx launches in September 2016, with a backup launch period occurring one year later. Sampling occurs in 2019. The departure burn from Bennu occurs in March 2021. On September 24, 2023, the SRC lands at the Utah Test and Training Range (UTTR). Stardust heritage procedures are followed to transport the SRC to Johnson Space Center, where the samples are removed and delivered to the OSIRIS-REx curation facility. After a six-month preliminary examination period the mission will produce a catalog of the returned sample, allowing the worldwide community to request samples for detailed analysis. Traveling to and returning a sample from an asteroid that has not been explored before requires unique operations consideration. The Design Reference Mission (DRM) ties together spacecraft, instrument and operations scenarios. The project implemented lessons learned from other small-body missions: APL/NEAR, JPL/Dawn and ESA/Rosetta. The key lesson learned was 'expect the unexpected' and implement planning tools early in the lifecycle. In preparation for PDR, the project changed the asteroid arrival date to arrive one year earlier, providing additional time margin. STK is used for mission design and STK Scheduler for instrument coverage analysis.

  8. Inflammatory Biomarkers and Risk of Schizophrenia: A 2-Sample Mendelian Randomization Study.

    Science.gov (United States)

    Hartwig, Fernando Pires; Borges, Maria Carolina; Horta, Bernardo Lessa; Bowden, Jack; Davey Smith, George

    2017-12-01

    Positive associations between inflammatory biomarkers and risk of psychiatric disorders, including schizophrenia, have been reported in observational studies. However, conventional observational studies are prone to bias, such as reverse causation and residual confounding, thus limiting our understanding of the effect (if any) of inflammatory biomarkers on schizophrenia risk. To evaluate whether inflammatory biomarkers have an effect on the risk of developing schizophrenia. Two-sample mendelian randomization study using genetic variants associated with inflammatory biomarkers as instrumental variables to improve inference. Summary association results from large consortia of candidate gene or genome-wide association studies, including several epidemiologic studies with different designs, were used. Gene-inflammatory biomarker associations were estimated in pooled samples ranging from 1645 to more than 80 000 individuals, while gene-schizophrenia associations were estimated in more than 30 000 cases and more than 45 000 ancestry-matched controls. In most studies included in the consortia, participants were of European ancestry, and the prevalence of men was approximately 50%. All studies were conducted in adults, with a wide age range (18 to 80 years). Genetically elevated circulating levels of C-reactive protein (CRP), interleukin-1 receptor antagonist (IL-1Ra), and soluble interleukin-6 receptor (sIL-6R). Risk of developing schizophrenia. Individuals with schizophrenia or schizoaffective disorders were included as cases. Given that many studies contributed to the analyses, different diagnostic procedures were used. The pooled odds ratio estimate using 18 CRP genetic instruments was 0.90 (random effects 95% CI, 0.84-0.97; P = .005) per 2-fold increment in CRP levels; consistent results were obtained using different mendelian randomization methods and a more conservative set of instruments. The odds ratio for sIL-6R was 1.06 (95% CI, 1.01-1.12; P = .02
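
    A basic building block of such analyses is the inverse-variance weighted (IVW) combination of per-variant Wald ratios computed from summary statistics. The sketch below is a generic fixed-effect IVW estimator with placeholder summary data; it is not the authors' pipeline, which also applied alternative mendelian randomization methods and instrument sets.

```python
import numpy as np

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect inverse-variance weighted Mendelian randomization estimate.

    Per-variant Wald ratios beta_outcome / beta_exposure are combined with weights
    proportional to 1 / se_ratio**2; this first-order approximation ignores the
    uncertainty in the gene-exposure estimates."""
    beta_exposure = np.asarray(beta_exposure, dtype=float)
    ratio = np.asarray(beta_outcome, dtype=float) / beta_exposure
    se_ratio = np.asarray(se_outcome, dtype=float) / np.abs(beta_exposure)
    w = 1.0 / se_ratio**2
    est = np.sum(w * ratio) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# Hypothetical summary statistics for a handful of biomarker-associated variants
est, se = ivw_mr(beta_exposure=[0.15, 0.22, 0.10, 0.30],
                 beta_outcome=[-0.02, -0.03, -0.01, -0.04],
                 se_outcome=[0.010, 0.012, 0.009, 0.015])
print(f"log-OR per unit increase in exposure: {est:.3f} (SE {se:.3f}), OR = {np.exp(est):.2f}")
```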

  9. Validation of the 2008 Landsat Burned Area Ecv Product for North America Using Stratified Random Sampling

    Science.gov (United States)

    Brunner, N. M.; Mladinich, C. S.; Caldwell, M. K.; Beal, Y. J. G.

    2014-12-01

    The U.S. Geological Survey is generating a suite of Essential Climate Variables (ECVs) products, as defined by the Global Climate Observing System, from the Landsat data archive. Validation protocols for these products are being established, incorporating the Committee on Earth Observing Satellites Land Product Validation Subgroup's best practice guidelines and validation hierarchy stages. The sampling design and accuracy measures follow the methodology developed by the European Space Agency's Climate Change Initiative Fire Disturbance (fire_cci) project (Padilla and others, 2014). A rigorous validation was performed on the 2008 Burned Area ECV (BAECV) prototype product, using a stratified random sample of 48 Thiessen scene areas overlaying Landsat path/rows distributed across several terrestrial biomes throughout North America. The validation reference data consisted of fourteen sample sites acquired from the fire_cci project and the remaining new sample sites generated from a densification of the stratified sampling for North America. The reference burned area polygons were generated using the ABAMS (Automatic Burned Area Mapping) software (Bastarrika and others, 2011; Izagirre, 2014). Accuracy results will be presented indicating strengths and weaknesses of the BAECV algorithm. References: Bastarrika, A., Chuvieco, E., and Martín, M.P., 2011, Mapping burned areas from Landsat TM/ETM+ data with a two-phase algorithm: Balancing omission and commission errors: Remote Sensing of Environment, v. 115, no. 4, p. 1003-1012. Izagirre, A.B., 2014, Automatic Burned Area Mapping Software (ABAMS), Preliminary Documentation, Version 10 v4: Vitoria-Gasteiz, Spain, University of Basque Country, p. 27. Padilla, M., Chuvieco, E., Hantson, S., Theis, R., and Sandow, C., 2014, D2.1 - Product Validation Plan: UAH - University of Alcalá de Henares (Spain), 37 p.

  10. Assessment of wadeable stream resources in the driftless area ecoregion in Western Wisconsin using a probabilistic sampling design.

    Science.gov (United States)

    Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen

    2009-03-01

    The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly-selected stream sites (n = 60) equally distributed among stream orders 1-4 were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and an associated "modified-random" site on each stream that was accessed via a road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites evaluated with reference condition thresholds, indicate that high proportions of the random sites (and by inference the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly-selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.

  11. Minimal sufficient balance randomization for sequential randomized controlled trial designs: results from the ESCAPE trial.

    Science.gov (United States)

    Sajobi, Tolulope T; Singh, Gurbakhshash; Lowerison, Mark W; Engbers, Jordan; Menon, Bijoy K; Demchuk, Andrew M; Goyal, Mayank; Hill, Michael D

    2017-11-02

    We describe the implementation of minimal sufficient balance randomization, a covariate-adaptive randomization technique, used for the "Endovascular treatment for Small Core and Anterior circulation Proximal occlusion with Emphasis on minimizing CT to recanalization times" (ESCAPE) trial. The ESCAPE trial is a prospective, multicenter, randomized clinical trial that enrolled subjects with the following main inclusion criteria: less than 12 h from symptom onset, age 18 years or older, baseline NIHSS score > 5, ASPECTS score > 5 and computed tomography angiography (CTA) evidence of carotid T/L or M1-segment middle cerebral artery (MCA) occlusion, and at least moderate collaterals by CTA. Patients were randomized using a real-time, dynamic, Internet-based, minimal sufficient balance randomization method that balanced the study arms with respect to baseline covariates including age, sex, baseline NIHSS score, site of arterial occlusion, baseline ASPECTS score and treatment with intravenously administered alteplase. Permutation-based tests of group differences confirmed group balance across several baseline covariates including sex (p = 1.00), baseline NIHSS score (p = 0.95), site of arterial occlusion (p = 1.00), baseline ASPECTS score (p = 0.28), treatment with intravenously administered alteplase (p = 0.31), and age (p = 0.67). Results from the ESCAPE trial demonstrate the feasibility and the benefit of this covariate adaptive randomization scheme in small-sample trials and for data monitoring endeavors. ESCAPE trial - NCT01778335 - at www.clinicaltrials.gov . Registered on 29 January 2013.
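
    The following is a simplified, schematic sketch of covariate-adaptive assignment in the spirit described above. It is not the ESCAPE implementation (which applied significance-based imbalance tests per covariate and real-time Internet randomization); the covariate choices, the majority-vote rule and the 0.7 favoring probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def assign(new_covariates, assigned_covariates, assigned_arms, p_favor=0.7):
    """Covariate-adaptive assignment sketch in the spirit of minimal sufficient balance.

    For each covariate, check which arm the new subject would bring closer to balance;
    if the covariates mostly favor one arm, assign it with probability p_favor,
    otherwise randomize 50:50."""
    if len(assigned_arms) == 0:
        return int(rng.random() < 0.5)
    X = np.asarray(assigned_covariates, dtype=float)
    arms = np.asarray(assigned_arms)
    votes = 0
    for j, x_new in enumerate(new_covariates):
        mean_a = X[arms == 0, j].mean() if (arms == 0).any() else 0.0
        mean_b = X[arms == 1, j].mean() if (arms == 1).any() else 0.0
        # vote for the arm whose current covariate mean is farther from the new value
        votes += 1 if abs(x_new - mean_a) > abs(x_new - mean_b) else -1
    if votes > 0:                                # covariates mostly favor arm 0
        return int(rng.random() > p_favor)       # arm 0 with probability p_favor
    if votes < 0:                                # covariates mostly favor arm 1
        return int(rng.random() < p_favor)       # arm 1 with probability p_favor
    return int(rng.random() < 0.5)

# Simulate sequential enrolment with age, sex and an NIHSS-like score (all hypothetical)
covs, arms = [], []
for _ in range(40):
    x = [rng.integers(18, 80), rng.random() < 0.5, rng.integers(6, 25)]
    a = assign(x, covs, arms)
    covs.append([float(v) for v in x])
    arms.append(a)
print("arm sizes:", np.bincount(arms))
```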

  12. Comparison of kriging interpolation precision between grid sampling scheme and simple random sampling scheme for precision agriculture

    Directory of Open Access Journals (Sweden)

    Jiang Houlong

    2016-01-01

    Full Text Available Sampling methods are important factors that can potentially limit the accuracy of predictions of spatial distribution patterns. A 10 ha tobacco-planted field was selected to compare the accuracy in predicting the spatial distribution of soil properties, using ordinary kriging and cross-validation methods, between a grid sampling scheme and a simple random sampling scheme (SRS). To achieve this objective, we collected soil samples from the topsoil (0-20 cm) in March 2012. Sample numbers for grid sampling and SRS were both 115 points. Accuracies of spatial interpolation using the two sampling schemes were then evaluated based on validation samples (36 points) and deviations of the estimates. The results suggested that soil pH and nitrate-N (NO3-N) had low variation, whereas all other soil properties exhibited medium variation. Soil pH, organic matter (OM), total nitrogen (TN), cation exchange capacity (CEC), total phosphorus (TP) and available phosphorus (AP) matched the spherical model, whereas the remaining variables fit an exponential model with both sampling methods. The interpolation error of soil pH, TP, and AP was the lowest in SRS. The errors of interpolation for OM, CEC, TN, available potassium (AK) and total potassium (TK) were the lowest for grid sampling. The interpolation precision of soil NO3-N showed no significant differences between the two sampling schemes. Considering our data on interpolation precision and the importance of minerals for cultivation of flue-cured tobacco, the grid-sampling scheme should be used in tobacco-planted fields to determine the spatial distribution of soil properties. The grid-sampling method can be applied in a practical and cost-effective manner to facilitate soil sampling in tobacco-planted fields.
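
    A leave-one-out comparison of interpolation error under the two sampling schemes can be sketched as follows. For brevity it uses a Gaussian-process regressor as a stand-in for ordinary kriging and cross validation; the synthetic soil-property field, sample sizes and kernel settings are placeholder assumptions, not the study's data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(3)

def field(xy):
    """Hypothetical smooth soil-property surface over the field (arbitrary units)."""
    return 6.0 + np.sin(xy[:, 0] / 40.0) + np.cos(xy[:, 1] / 60.0)

def loo_rmse(xy, z):
    """Leave-one-out RMSE of a GP interpolator fitted to one sampling scheme."""
    errs = []
    for train, test in LeaveOneOut().split(xy):
        gp = GaussianProcessRegressor(kernel=RBF(50.0) + WhiteKernel(1e-3), normalize_y=True)
        gp.fit(xy[train], z[train])
        errs.append(float((z[test] - gp.predict(xy[test]))[0]))
    return float(np.sqrt(np.mean(np.square(errs))))

# Grid sampling: regular lattice over a 320 m x 320 m field
gx, gy = np.meshgrid(np.linspace(0, 320, 8), np.linspace(0, 320, 8))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
# Simple random sampling: same number of points, uniform locations
srs_xy = rng.uniform(0, 320, size=(grid_xy.shape[0], 2))

for name, xy in [("grid", grid_xy), ("SRS", srs_xy)]:
    z = field(xy) + rng.normal(scale=0.1, size=len(xy))   # add measurement noise
    print(f"{name:>4} LOO RMSE: {loo_rmse(xy, z):.3f}")
```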

  13. The Statistical Power of the Cluster Randomized Block Design with Matched Pairs--A Simulation Study

    Science.gov (United States)

    Dong, Nianbo; Lipsey, Mark

    2010-01-01

    This study uses simulation techniques to examine the statistical power of the group-randomized design and the matched-pair (MP) randomized block design under various parameter combinations. Both nearest neighbor matching and random matching are used for the MP design. The power of each design for any parameter combination was calculated from…

  14. Identifying the origin of groundwater samples in a multi-layer aquifer system with Random Forest classification

    Science.gov (United States)

    Baudron, Paul; Alonso-Sarría, Francisco; García-Aróstegui, José Luís; Cánovas-García, Fulgencio; Martínez-Vicente, David; Moreno-Brotóns, Jesús

    2013-08-01

    Accurate identification of the origin of groundwater samples is not always possible in complex multilayered aquifers. This poses a major difficulty for a reliable interpretation of geochemical results. The problem is especially severe when the information on the tubewell design is hard to obtain. This paper shows a supervised classification method based on the Random Forest (RF) machine learning technique to identify the layer from where groundwater samples were extracted. The classification rules were based on the major ion composition of the samples. We applied this method to the Campo de Cartagena multi-layer aquifer system, in southeastern Spain. A large amount of hydrogeochemical data was available, but only a limited fraction of the sampled tubewells included a reliable determination of the borehole design and, consequently, of the aquifer layer being exploited. An added difficulty was the very similar composition of water samples extracted from different aquifer layers. Moreover, not all groundwater samples included the same geochemical variables. Despite such a difficult background, the Random Forest classification reached accuracies over 90%. These results were much better than those of the Linear Discriminant Analysis (LDA) and Decision Tree (CART) supervised classification methods. From a total of 1549 samples, 805 proceeded from one unique identified aquifer, 409 proceeded from a possible blend of waters from several aquifers and 335 were of unknown origin. Only 468 of the 805 unique-aquifer samples included all the chemical variables needed to calibrate and validate the models. Finally, 107 of the groundwater samples of unknown origin could be classified; most of the samples that could not be classified lacked a complete dataset. The uncertainty in the identification of training samples was taken into account to enhance the model.
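
    A minimal version of this supervised classification step, training a Random Forest on major-ion compositions of samples with a known aquifer layer and applying it to samples of unknown origin, might look as follows; the ion means, sample sizes and hyperparameters are placeholder assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Hypothetical major-ion concentrations (mg/L: Cl, SO4, HCO3) for samples of known layer
n = 400
layer = rng.integers(0, 3, size=n)                                       # three aquifer layers
means = np.array([[120, 40, 300], [150, 60, 250], [90, 80, 350]], dtype=float)
X = means[layer] + rng.normal(scale=25, size=(n, 3))                     # overlapping compositions

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, layer)
print("out-of-bag accuracy:", round(rf.oob_score_, 3))
print("5-fold CV accuracy :", round(cross_val_score(rf, X, layer, cv=5).mean(), 3))

# Classify samples of unknown origin (only those with a complete ion record)
unknown = rng.normal(loc=[130, 55, 280], scale=25, size=(5, 3))
print("predicted layers   :", rf.predict(unknown))
```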

  15. Randomized Controlled Trial of Attention Bias Modification in a Racially Diverse, Socially Anxious, Alcohol Dependent Sample

    Science.gov (United States)

    Clerkin, Elise M.; Magee, Joshua C.; Wells, Tony T.; Beard, Courtney; Barnett, Nancy P.

    2016-01-01

    Objective Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Method Adult participants (N=86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Results Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. Conclusions These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. PMID:27591918

  16. Spatial distribution of ochratoxin A in vineyard and sampling design to assess must contamination.

    Science.gov (United States)

    Battilani, Paola; Barbano, Carlo; Rossi, Vittorio; Bertuzzi, Terenzio; Pietri, Amedeo

    2006-04-01

    The aim of this work was to study the spatial variability of ochratoxin A (OTA) in vineyards and to define a reliable sampling protocol for bunches in order to assess OTA content before harvesting. In 2002, two vineyards with 'Negroamaro' and 'Sangiovese' grape varieties were chosen in Southern Italy. The same sampling design was applied to both vineyards. Ten plants were collected from the X-shaped path of the whole vineyard (plants 1 through 10) and 10 plants from the two central cross-perpendicular lines (plants 11 through 20). Bunches harvested from plants 1 through 10 were numbered progressively, weighed, and crushed separately, and bunches obtained from plants 11 through 20 were crushed plant by plant. Juices obtained were analyzed for OTA content by high-performance liquid chromatography. Then, a simulated approach for sampling was applied, following randomized and systematic designs. High and random variability was observed in OTA content both among bunches and among plants in the two vineyards, independent of contamination level. The simulated sampling designs markedly influenced the assessment of must contamination. The best results were obtained when sampling involved one bunch per plant, in a predefined position, from at least 10 plants. Estimated means, obtained with different sampling designs, did not differ significantly from the calculated mean OTA content. If the true contamination in a vineyard is 2 microg kg(-1), the limit fixed in Europe for OTA content in must and wine, the range of OTA content assessed by sampling one bunch in the central position of 10 plants should lie between 1.4 and 2.9. The accuracy could be considered acceptable.

  17. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    Science.gov (United States)

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient !

    NARCIS (Netherlands)

    van Breukelen, Gerard J.P.; Candel, Math J.J.M.

    2012-01-01

    Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
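
    The guidelines summarized above build on the familiar design-effect inflation of a two-arm sample size. The sketch below is a minimal, textbook version assuming a standardized effect size, equal cluster sizes and a known ICC; the cited paper goes further and handles varying cluster sizes and an unknown intraclass correlation.

```python
import math

def clusters_per_arm(delta, icc, cluster_size, alpha=0.05, power=0.80):
    """Clusters per arm for a two-arm cluster randomized trial (normal approximation).

    The individually randomized per-arm size 2*(z_a + z_b)^2 / delta^2 is inflated by
    the design effect 1 + (m - 1)*ICC and divided by the cluster size m."""
    z_a, z_b = 1.959964, 0.841621        # two-sided alpha = 0.05, power = 0.80
    n_individual = 2 * (z_a + z_b) ** 2 / delta ** 2
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_individual * deff / cluster_size)

# e.g. standardized effect 0.3, ICC 0.05, 20 subjects per cluster
print(clusters_per_arm(delta=0.3, icc=0.05, cluster_size=20), "clusters per arm")
```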

  19. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality

    Science.gov (United States)

    Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

    Introduction Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in
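
    The outcome rates referred to above follow AAPOR-style definitions. The sketch below is a simplified abbreviation of the RR2/COOP2/REF2/CON2 formulas, not the full AAPOR standard, and the call-outcome counts are placeholders rather than the survey's actual disposition data.

```python
def outcome_rates(complete, partial, refusal, noncontact, other, unknown_eligibility, e=1.0):
    """Simplified AAPOR-style response, cooperation, refusal and contact rates.

    e is the estimated fraction of unknown-eligibility numbers that are eligible."""
    interviews = complete + partial
    denom = interviews + refusal + noncontact + other + e * unknown_eligibility
    response = interviews / denom
    cooperation = interviews / (interviews + refusal + other)
    refusal_rate = refusal / denom
    contact = (interviews + refusal + other) / denom
    return response, cooperation, refusal_rate, contact

# Placeholder call dispositions for illustration only
rr, coop, ref, con = outcome_rates(complete=9000, partial=3500, refusal=3000,
                                   noncontact=22000, other=1500, unknown_eligibility=4000)
print(f"RR {rr:.0%}  COOP {coop:.0%}  REF {ref:.0%}  CON {con:.0%}")
```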

  20. Sampling and energy evaluation challenges in ligand binding protein design

    Science.gov (United States)

    Dou, Jiayi; Doyle, Lindsey; Jr. Greisen, Per; Schena, Alberto; Park, Hahnbeom; Johnsson, Kai; Stoddard, Barry L.

    2017-01-01

    Abstract The steroid hormone 17α‐hydroxylprogesterone (17‐OHP) is a biomarker for congenital adrenal hyperplasia and hence there is considerable interest in development of sensors for this compound. We used computational protein design to generate protein models with binding sites for 17‐OHP containing an extended, nonpolar, shape‐complementary binding pocket for the four‐ring core of the compound, and hydrogen bonding residues at the base of the pocket to interact with carbonyl and hydroxyl groups at the more polar end of the ligand. Eight of 16 designed proteins experimentally tested bind 17‐OHP with micromolar affinity. A co‐crystal structure of one of the designs revealed that 17‐OHP is rotated 180° around a pseudo‐two‐fold axis in the compound and displays multiple binding modes within the pocket, while still interacting with all of the designed residues in the engineered site. Subsequent rounds of mutagenesis and binding selection improved the ligand affinity to nanomolar range, while appearing to constrain the ligand to a single bound conformation that maintains the same “flipped” orientation relative to the original design. We trace the discrepancy in the design calculations to two sources: first, a failure to model subtle backbone changes which alter the distribution of sidechain rotameric states and second, an underestimation of the energetic cost of desolvating the carbonyl and hydroxyl groups of the ligand. The difference between design model and crystal structure thus arises from both sampling limitations and energy function inaccuracies that are exacerbated by the near two‐fold symmetry of the molecule. PMID:28980354

  1. Sampling and energy evaluation challenges in ligand binding protein design.

    Science.gov (United States)

    Dou, Jiayi; Doyle, Lindsey; Jr Greisen, Per; Schena, Alberto; Park, Hahnbeom; Johnsson, Kai; Stoddard, Barry L; Baker, David

    2017-12-01

    The steroid hormone 17α-hydroxylprogesterone (17-OHP) is a biomarker for congenital adrenal hyperplasia and hence there is considerable interest in development of sensors for this compound. We used computational protein design to generate protein models with binding sites for 17-OHP containing an extended, nonpolar, shape-complementary binding pocket for the four-ring core of the compound, and hydrogen bonding residues at the base of the pocket to interact with carbonyl and hydroxyl groups at the more polar end of the ligand. Eight of 16 designed proteins experimentally tested bind 17-OHP with micromolar affinity. A co-crystal structure of one of the designs revealed that 17-OHP is rotated 180° around a pseudo-two-fold axis in the compound and displays multiple binding modes within the pocket, while still interacting with all of the designed residues in the engineered site. Subsequent rounds of mutagenesis and binding selection improved the ligand affinity to nanomolar range, while appearing to constrain the ligand to a single bound conformation that maintains the same "flipped" orientation relative to the original design. We trace the discrepancy in the design calculations to two sources: first, a failure to model subtle backbone changes which alter the distribution of sidechain rotameric states and second, an underestimation of the energetic cost of desolvating the carbonyl and hydroxyl groups of the ligand. The difference between design model and crystal structure thus arises from both sampling limitations and energy function inaccuracies that are exacerbated by the near two-fold symmetry of the molecule. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.

  2. Optimal Design in Three-Level Block Randomized Designs with Two Levels of Nesting: An ANOVA Framework with Random Effects

    Science.gov (United States)

    Konstantopoulos, Spyros

    2013-01-01

    Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…

  3. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response, and the 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  4. [Simulation on design-based and model-based methods in descriptive analysis of complex samples].

    Science.gov (United States)

    Li, Yichong; Yu, Shicheng; Zhao, Yinjun; Jiang, Yong; Wang, Limin; Zhang, Mei; Jiang, Wei; Bao, Heling; Zhou, Maigeng; Jiang, Bo

    2015-01-01

    To compare design-based and model-based methods in descriptive analysis of complex samples. A total of 1 000 samples were selected using a multistage random sampling design in the analysis of the 2010 China chronic disease and risk factor surveillance. For each simulated sample, cases were randomly deleted with probability proportional to age so that the sample age structure deviated systematically from that of the target population. Mean systolic blood pressure (SBP) and prevalence of raised blood pressure, as well as their 95% confidence intervals (95%CI), were determined using design-based and model-based methods (routine method and multi-level model). For estimators generated from those 3 methods, mean squared error (MSE) was computed to evaluate their validity. To compare the performance of statistical inference of these methods, the probability of the 95%CI covering the true parameter (mean SBP and raised blood pressure prevalence of the population) was used. MSE of the mean estimator for the routine method, design-based analysis and multilevel model was 6.41, 1.38, and 5.86, respectively; and the probability of the 95%CI covering the true parameter was 24.7%, 97.5% and 84.3%, respectively. The routine method and multi-level model probably led to an increased probability of type I error in statistical inference. MSE of the prevalence estimator was 4.80 for the design-based method, which was far lower than those for the routine method (20.9) and multilevel model (17.2). Probability of the 95%CI covering the true prevalence for the routine method was only 29.4%, and 86.4% for the multilevel model, both of which were lower than that for the design-based method (97.3%). Compared to the routine method and multi-level model, the design-based method had the best performance both in point estimation and confidence interval construction. The design-based method should be the first choice when doing statistical description of complex samples with a systematically biased sample structure.
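
    The contrast between a routine (unweighted) estimate and a design-based (weighted) estimate on a sample with a systematically biased age structure can be illustrated with a small simulation; the population values, selection probabilities and Hajek-type weighted mean below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical target population: SBP rises with age group
age_group = rng.choice([0, 1, 2], size=200_000, p=[0.4, 0.4, 0.2])
sbp = 115 + 10 * age_group + rng.normal(scale=12, size=age_group.size)
true_mean = sbp.mean()

# Biased sample: selection probability decreases with age (systematic age-structure bias)
p_select = np.array([0.010, 0.006, 0.003])[age_group]
in_sample = rng.random(age_group.size) < p_select
y, w = sbp[in_sample], 1.0 / p_select[in_sample]

unweighted = y.mean()                          # "routine" unweighted estimate
design_based = np.sum(w * y) / np.sum(w)       # inverse-probability (Hajek) weighted estimate
print(f"true {true_mean:.1f}  unweighted {unweighted:.1f}  design-based {design_based:.1f}")
```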

  5. A Hardy-Weinberg equilibrium test for analyzing population genetic surveys with complex sample designs.

    Science.gov (United States)

    Moonesinghe, Ramal; Yesupriya, Ajay; Chang, Man-Huei; Dowling, Nicole F; Khoury, Muin J; Scott, Alastair J

    2010-04-15

    Testing for deviations from Hardy-Weinberg equilibrium is a widely recommended practice for population-based genetic association studies. However, current methods for this test assume a simple random sample and may not be appropriate for sample surveys with complex survey designs. In this paper, the authors present a test for Hardy-Weinberg equilibrium that adjusts for the sample weights and correlation of data collected in complex surveys. The authors perform this test by using a simple adjustment to procedures developed to analyze data from complex survey designs available within the SAS statistical software package (SAS Institute, Inc., Cary, North Carolina). Using 90 genetic markers from the Third National Health and Nutrition Examination Survey, the authors found that survey-adjusted and -unadjusted estimates of the disequilibrium coefficient were generally similar within self-reported races/ethnicities. However, estimates of the variance of the disequilibrium coefficient were significantly different between the 2 methods. Because the results of the survey-adjusted tests account for correlation among participants sampled within the same cluster, and the possibility of having related individuals sampled from the same household, the authors recommend use of this test when analyzing genetic data originating from sample surveys with complex survey designs to assess deviations from Hardy-Weinberg equilibrium.
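
    For reference, the classic simple-random-sample version of the Hardy-Weinberg test that the paper's survey-adjusted procedure generalizes can be written in a few lines; the genotype counts below are placeholders, and the survey-adjusted version replaces the simple counts and variance with weighted, design-based estimates.

```python
import numpy as np
from scipy import stats

def hwe_chisq(n_aa, n_ab, n_bb):
    """Classic 1-df chi-square test of Hardy-Weinberg equilibrium for a simple random sample."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                       # allele frequency estimate
    expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
    observed = np.array([n_aa, n_ab, n_bb], dtype=float)
    chi2 = float(np.sum((observed - expected) ** 2 / expected))
    return chi2, float(stats.chi2.sf(chi2, df=1))

chi2, p_value = hwe_chisq(n_aa=530, n_ab=410, n_bb=60)    # placeholder genotype counts
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```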

  6. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    Science.gov (United States)

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.

  7. Random Walks on Directed Networks: Inference and Respondent-driven Sampling

    CERN Document Server

    Malmros, Jens; Britton, Tom

    2013-01-01

    Respondent driven sampling (RDS) is a method often used to estimate population properties (e.g. sexual risk behavior) in hard-to-reach populations. It combines an effective modified snowball sampling methodology with an estimation procedure that yields unbiased population estimates under the assumption that the sampling process behaves like a random walk on the social network of the population. Current RDS estimation methodology assumes that the social network is undirected, i.e. that all edges are reciprocal. However, empirical social networks in general also have non-reciprocated edges. To account for this fact, we develop a new estimation method for RDS in the presence of directed edges on the basis of random walks on directed networks. We distinguish directed and undirected edges and consider the possibility that the random walk returns to its current position in two steps through an undirected edge. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing...
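
    A common estimator in the undirected-network setting that this work starts from is the RDS-II (Volz-Heckathorn) inverse-degree-weighted estimator; the sketch below illustrates it with made-up data, while the paper's contribution is to replace the degree weighting with selection probabilities appropriate for directed networks.

```python
import numpy as np

def rds_ii_estimate(trait, degree):
    """RDS-II estimator of a population proportion.

    Under the random-walk model on an undirected network, inclusion probability is
    proportional to degree, so respondents are weighted by 1/degree."""
    trait = np.asarray(trait, dtype=float)
    w = 1.0 / np.asarray(degree, dtype=float)
    return float(np.sum(w * trait) / np.sum(w))

# Hypothetical RDS sample: indicator of a risk behavior and self-reported network degree
trait = [1, 0, 1, 1, 0, 0, 1, 0]
degree = [12, 3, 25, 8, 4, 5, 30, 6]
print(f"naive proportion: {np.mean(trait):.2f}  RDS-II: {rds_ii_estimate(trait, degree):.2f}")
```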

  8. Logistic Bayesian LASSO for genetic association analysis of data from complex sampling designs.

    Science.gov (United States)

    Zhang, Yuan; Hofmann, Jonathan N; Purdue, Mark P; Lin, Shili; Biswas, Swati

    2017-09-01

    Detecting gene-environment interactions with rare variants is critical in dissecting the etiology of common diseases. Interactions with rare haplotype variants (rHTVs) are of particular interest. At the same time, complex sampling designs, such as stratified random sampling, are becoming increasingly popular for designing case-control studies, especially for recruiting controls. The US Kidney Cancer Study (KCS) is an example, wherein all available cases were included while the controls at each site were randomly selected from the population by frequency matching with cases based on age, sex and race. There is currently no rHTV association method that can account for such a complex sampling design. To fill this gap, we consider logistic Bayesian LASSO (LBL), an existing rHTV approach for case-control data, and show that its model can easily accommodate the complex sampling design. We study two extensions that include stratifying variables either as main effects only or with additional modeling of their interactions with haplotypes. We conduct extensive simulation studies to compare the complex sampling methods with the original LBL methods. We find that, when there is no interaction between haplotype and stratifying variables, both extensions perform well while the original LBL methods lead to inflated type I error rates. However, when such an interaction exists, it is necessary to include the interaction effect in the model to control the type I error rate. Finally, we analyze the KCS data and find a significant interaction between (current) smoking and a specific rHTV in the N-acetyltransferase 2 gene.

  9. Stemflow estimation in a redwood forest using model-based stratified random sampling

    Science.gov (United States)

    Jack Lewis

    2003-01-01

    Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...

  10. Randomized controlled trials – a matter of design

    Directory of Open Access Journals (Sweden)

    Spieth PM

    2016-06-01

    Full Text Available Peter Markus Spieth,1,2 Anne Sophie Kubasch,3 Ana Isabel Penzlin,4 Ben Min-Woo Illigens,2,5 Kristian Barlinn,6 Timo Siepmann2,6,7 1Department of Anesthesiology and Critical Care Medicine, University Hospital Carl Gustav Carus, Technische Universität Dresden, 2Center for Clinical Research and Management Education, Division of Health Care Sciences, Dresden International University, 3Pediatric Rheumatology and Immunology, Children’s Hospital, University Hospital Carl Gustav Carus, Technische Universität Dresden, 4Institute of Clinical Pharmacology, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Saxony, Germany; 5Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA; 6Department of Neurology, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Saxony, Germany; 7Radcliffe Department of Medicine, John Radcliffe Hospital, University of Oxford, Oxford, Oxfordshire, UK Abstract: Randomized controlled trials (RCTs) are the hallmark of evidence-based medicine and form the basis for translating research data into clinical practice. This review summarizes commonly applied designs and quality indicators of RCTs to provide guidance in interpreting and critically evaluating clinical research data. It further reflects on the principle of equipoise and its practical applicability to clinical science with an emphasis on critical care and neurological research. We performed a review of educational material, review articles, methodological studies, and published clinical trials using the databases MEDLINE, PubMed, and ClinicalTrials.gov. The most relevant recommendations regarding design, conduct, and reporting of RCTs may include the following: 1) clinically relevant end points should be defined a priori, and an unbiased analysis and report of the study results should be warranted, 2) both significant and nonsignificant results should be objectively

  11. Design of a cluster-randomized minority recruitment trial: RECRUIT.

    Science.gov (United States)

    Tilley, Barbara C; Mainous, Arch G; Smith, Daniel W; McKee, M Diane; Amorrortu, Rossybelle P; Alvidrez, Jennifer; Diaz, Vanessa; Ford, Marvella E; Fernandez, Maria E; Hauser, Robert A; Singer, Carlos; Landa, Veronica; Trevino, Aron; DeSantis, Stacia M; Zhang, Yefei; Daniels, Elvan; Tabor, Derrick; Vernon, Sally W

    2017-06-01

    Racial/ethnic minority groups remain underrepresented in clinical trials. Many strategies to increase minority recruitment focus on minority communities and emphasize common diseases such as hypertension. Scant literature focuses on minority recruitment to trials of less common conditions, often conducted in specialty clinics and dependent on physician referrals. We identified trust/mistrust of specialist physician investigators and institutions conducting medical research and consequent participant reluctance to participate in clinical trials as key-shared barriers across racial/ethnic groups. We developed a trust-based continuous quality improvement intervention to build trust between specialist physician investigators and community minority-serving physicians and ultimately potential trial participants. To avoid the inherent biases of non-randomized studies, we evaluated the intervention in the national Randomized Recruitment Intervention Trial (RECRUIT). This report presents the design of RECRUIT. Specialty clinic follow-up continues through April 2017. We hypothesized that specialist physician investigators and coordinators trained in the trust-based continuous quality improvement intervention would enroll a greater proportion of minority participants in their specialty clinics than specialist physician investigators in control specialty clinics. Specialty clinic was the unit of randomization. Using continuous quality improvement, the specialist physician investigators and coordinators tailored recruitment approaches to their specialty clinic characteristics and populations. Primary analyses were adjusted for clustering by specialty clinic within parent trial and matching covariates. RECRUIT was implemented in four multi-site clinical trials (parent trials) supported by three National Institutes of Health institutes and included 50 associated specialty clinics from these parent trials. Using current data, we have 88% power or greater to detect a 0.15 or

  12. Experiments with central-limit properties of spatial samples from locally covariant random fields

    Science.gov (United States)

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.

  13. Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling

    Directory of Open Access Journals (Sweden)

    Bo Yu

    2015-01-01

Full Text Available This paper considers the problem of estimating binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing response bias while eliciting information on sensitive attributes. In many sensitive-question surveys, the same population is sampled repeatedly on each occasion. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
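The paper's successive-sampling estimator is not detailed in the record, so the sketch below only illustrates the classical single-occasion building block, Warner's randomized response estimator of a sensitive proportion; the design probability and the true proportion are made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)

def warner_simulate(n, pi_true, p_design, rng):
    """Simulate Warner's randomized response: with probability p_design the respondent
    answers the sensitive question, otherwise its complement."""
    sensitive = rng.random(n) < pi_true          # true (hidden) status
    ask_direct = rng.random(n) < p_design        # outcome of the randomizing device
    yes = np.where(ask_direct, sensitive, ~sensitive)
    return yes

def warner_estimate(yes, p_design):
    """Unbiased estimator of the sensitive proportion and its estimated standard error."""
    n = len(yes)
    lam_hat = yes.mean()                          # observed proportion of "yes" answers
    pi_hat = (lam_hat - (1 - p_design)) / (2 * p_design - 1)
    var_hat = lam_hat * (1 - lam_hat) / (n * (2 * p_design - 1) ** 2)
    return pi_hat, np.sqrt(var_hat)

yes = warner_simulate(n=2000, pi_true=0.15, p_design=0.7, rng=rng)
pi_hat, se = warner_estimate(yes, p_design=0.7)
print(f"estimated sensitive proportion: {pi_hat:.3f} (SE {se:.3f})")
```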

  14. Determining optimal sample sizes for multistage adaptive randomized clinical trials from an industry perspective using value of information methods.

    Science.gov (United States)

    Chen, Maggie H; Willan, Andrew R

    2013-02-01

    Most often, sample size determinations for randomized clinical trials are based on frequentist approaches that depend on somewhat arbitrarily chosen factors, such as type I and II error probabilities and the smallest clinically important difference. As an alternative, many authors have proposed decision-theoretic (full Bayesian) approaches, often referred to as value of information methods that attempt to determine the sample size that maximizes the difference between the trial's expected utility and its expected cost, referred to as the expected net gain. Taking an industry perspective, Willan proposes a solution in which the trial's utility is the increase in expected profit. Furthermore, Willan and Kowgier, taking a societal perspective, show that multistage designs can increase expected net gain. The purpose of this article is to determine the optimal sample size using value of information methods for industry-based, multistage adaptive randomized clinical trials, and to demonstrate the increase in expected net gain realized. At the end of each stage, the trial's sponsor must decide between three actions: continue to the next stage, stop the trial and seek regulatory approval, or stop the trial and abandon the drug. A model for expected total profit is proposed that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, and the relationship between trial results and probability of regulatory approval. The proposed method is extended to include multistage designs with a solution provided for a two-stage design. An example is given. Significant increases in the expected net gain are realized by using multistage designs. The complexity of the solutions increases with the number of stages, although far simpler near-optimal solutions exist. The method relies on the central limit theorem, assuming that the sample size is sufficiently large so that the relevant statistics are normally distributed. From a value of

  15. Sample size calculations for evaluating treatment policies in multi-stage designs.

    Science.gov (United States)

    Dawson, Ree; Lavori, Philip W

    2010-12-01

    Sequential multiple assignment randomized (SMAR) designs are used to evaluate treatment policies, also known as adaptive treatment strategies (ATS). The determination of SMAR sample sizes is challenging because of the sequential and adaptive nature of ATS, and the multi-stage randomized assignment used to evaluate them. We derive sample size formulae appropriate for the nested structure of successive SMAR randomizations. This nesting gives rise to ATS that have overlapping data, and hence between-strategy covariance. We focus on the case when covariance is substantial enough to reduce sample size through improved inferential efficiency. Our design calculations draw upon two distinct methodologies for SMAR trials, using the equality of the optimal semi-parametric and Bayesian predictive estimators of standard error. This 'hybrid' approach produces a generalization of the t-test power calculation that is carried out in terms of effect size and regression quantities familiar to the trialist. Simulation studies support the reasonableness of underlying assumptions as well as the adequacy of the approximation to between-strategy covariance when it is substantial. Investigation of the sensitivity of formulae to misspecification shows that the greatest influence is due to changes in effect size, which is an a priori clinical judgment on the part of the trialist. We have restricted simulation investigation to SMAR studies of two and three stages, although the methods are fully general in that they apply to 'K-stage' trials. Practical guidance is needed to allow the trialist to size a SMAR design using the derived methods. To this end, we define ATS to be 'distinct' when they differ by at least the (minimal) size of effect deemed to be clinically relevant. Simulation results suggest that the number of subjects needed to distinguish distinct strategies will be significantly reduced by adjustment for covariance only when small effects are of interest.

  16. Using variance components to estimate power in a hierarchically nested sampling design improving monitoring of larval Devils Hole pupfish

    Science.gov (United States)

    Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsomore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey

    2013-01-01

    We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). Sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey was comprised of three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.

  17. Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source

    DEFF Research Database (Denmark)

    Mashiach, Adam; Østergaard, Jan; Zamir, Ram

    2013-01-01

    Random binning is an efficient, yet complex, coding technique for the symmetric L-description source coding problem. We propose an alternative approach, that uses the quantized samples of a bandlimited source as "descriptions". By the Nyquist condition, the source can be reconstructed if enough s...

  18. Recidivism among Child Sexual Abusers: Initial Results of a 13-Year Longitudinal Random Sample

    Science.gov (United States)

    Patrick, Steven; Marsh, Robert

    2009-01-01

    In the initial analysis of data from a random sample of all those charged with child sexual abuse in Idaho over a 13-year period, only one predictive variable was found that related to recidivism of those convicted. Variables such as ethnicity, relationship, gender, and age differences did not show a significant or even large association with…

  19. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  20. Effects-Driven Participatory Design: Learning from Sampling Interruptions

    DEFF Research Database (Denmark)

    Brandrup, Morten; Østergaard, Kija Lin; Hertzum, Morten

    2017-01-01

Participatory design (PD) can play an important role in obtaining benefits from healthcare information technologies, but we contend that to fulfil this role PD must incorporate feedback from real use of the technologies. In this chapter we describe an effects-driven PD approach that revolves around a sustained focus on pursued effects and uses the experience sampling method (ESM) to collect real-use feedback. To illustrate the use of the method we analyze a case that involves the organizational implementation of electronic whiteboards at a Danish hospital to support the clinicians' intra- and interdepartmental coordination. The investigation showed that the electronic whiteboards had little potential for reducing the number of phone calls at the operating ward. The combination of quantitative data and qualitative feedback worked both as a basis for aligning assumptions to data and showed ESM as an instrument for triggering in-situ reflection.

  1. Assessing usual dietary intake in complex sample design surveys: the National Dietary Survey

    Directory of Open Access Journals (Sweden)

    Flávia dos Santos Barbosa

    2013-02-01

Full Text Available The National Cancer Institute (NCI) method allows the distributions of usual intake of nutrients and foods to be estimated. This method can be used in complex surveys. However, the user must perform additional calculations, such as balanced repeated replication (BRR), in order to obtain standard errors and confidence intervals for the percentiles and mean of the usual intake distribution. The objective is to highlight adaptations of the NCI method using data from the National Dietary Survey. The application of the NCI method is exemplified by analyzing total energy (kcal) and fruit (g) intake, comparing estimates of the mean and standard deviation based on the complex design of the Brazilian survey with those assuming a simple random sample. Although point estimates of the means were similar, estimates of the standard error that accounted for the complex design were up to 60% larger than those assuming a simple random sample. Thus, for valid estimates of food and energy intake for the population, all of the sampling characteristics of the surveys should be taken into account, because when these characteristics are neglected, statistical analysis may produce underestimated standard errors that would compromise the results and the conclusions of the survey.
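The record's key point is that ignoring weights and clustering understates standard errors. The toy simulation below, which uses invented data rather than the Brazilian survey and an ultimate-cluster (first-stage) approximation rather than BRR, contrasts a naive SRS standard error with a design-based one for a weighted mean.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Toy survey: 30 clusters (PSUs); cluster effects induce intra-cluster correlation,
# and unequal selection probabilities give unequal weights.
rows = []
for psu in range(30):
    cluster_effect = rng.normal(0, 300)
    for _ in range(rng.integers(15, 30)):
        energy = 2000 + cluster_effect + rng.normal(0, 500)   # kcal
        weight = rng.uniform(0.5, 3.0)                        # sampling weight
        rows.append((psu, energy, weight))
df = pd.DataFrame(rows, columns=["psu", "energy", "weight"])

# Weighted (Hajek) mean
w, y = df["weight"].to_numpy(), df["energy"].to_numpy()
mean_w = np.sum(w * y) / np.sum(w)

# Naive SE assuming simple random sampling (ignores weights and clustering)
se_srs = y.std(ddof=1) / np.sqrt(len(y))

# Ultimate-cluster approximation: linearize the ratio mean and use the
# between-PSU variance of the PSU-level score totals.
df["score"] = df["weight"] * (df["energy"] - mean_w)
z = df.groupby("psu")["score"].sum().to_numpy()
n_psu = len(z)
var_total = n_psu / (n_psu - 1) * np.sum((z - z.mean()) ** 2)
se_design = np.sqrt(var_total) / np.sum(w)

print(f"weighted mean energy: {mean_w:.1f} kcal")
print(f"SE assuming SRS: {se_srs:.1f};  design-based SE: {se_design:.1f}")
```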

  2. Design and implementation of a random neural network routing engine.

    Science.gov (United States)

    Kocak, T; Seeber, J; Terzioglu, H

    2003-01-01

Random neural network (RNN) is an analytically tractable spiked neural network model that has been implemented in software for a wide range of applications for over a decade. This paper presents a hardware implementation of the RNN model. Recently, the cognitive packet network (CPN) has been proposed as an alternative packet network architecture in which there is no routing table; instead, RNN-based reinforcement learning is used to route packets. In particular, we describe implementation details for the RNN-based routing engine of a CPN network processor chip: the smart packet processor (SPP). The SPP is a dual-port device that stores, modifies, and interprets the defining characteristics of multiple RNN models. In addition to hardware design improvements over the software implementation, such as the dual-access memory, output calculation step, and reduced output calculation module, this paper introduces a major modification to the reinforcement learning algorithm used in the original CPN specification such that the number of weight terms is reduced from 2n² to 2n. This not only yields significant memory savings, but also simplifies the calculations for the steady-state probabilities (neuron outputs in RNN). Simulations have been conducted to confirm proper functionality for the isolated SPP design as well as for multiple SPPs in a networked environment.
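The steady-state neuron outputs mentioned in the abstract come from the fixed-point equations of the Gelenbe random neural network. The sketch below, in Python rather than hardware, iterates those equations for a toy three-neuron network; all rates and weights are illustrative, and neither the SPP architecture nor the reduced-weight reinforcement-learning update is reproduced.

```python
import numpy as np

def rnn_steady_state(w_plus, w_minus, Lambda, lam, r, tol=1e-10, max_iter=10000):
    """Fixed-point iteration for the steady-state firing probabilities q_i of a
    random neural network (Gelenbe model):
        q_i = lambda_plus_i / (r_i + lambda_minus_i)
        lambda_plus_i  = Lambda_i + sum_j q_j * w_plus[j, i]
        lambda_minus_i = lam_i    + sum_j q_j * w_minus[j, i]
    """
    n = len(r)
    q = np.zeros(n)
    for _ in range(max_iter):
        lp = Lambda + q @ w_plus
        lm = lam + q @ w_minus
        q_new = np.clip(lp / (r + lm), 0.0, 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

# Small 3-neuron example with arbitrary (illustrative) rates and weights.
rng = np.random.default_rng(3)
n = 3
w_plus = rng.uniform(0, 1, (n, n)); np.fill_diagonal(w_plus, 0)
w_minus = rng.uniform(0, 1, (n, n)); np.fill_diagonal(w_minus, 0)
r = w_plus.sum(axis=1) + w_minus.sum(axis=1)      # firing rate = total outgoing weight
Lambda = np.array([0.5, 0.2, 0.1])                # external excitatory arrival rates
lam = np.array([0.1, 0.1, 0.1])                   # external inhibitory arrival rates
print("steady-state neuron outputs q:", rnn_steady_state(w_plus, w_minus, Lambda, lam, r))
```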

  3. Design and Analysis Considerations for Cluster Randomized Controlled Trials That Have a Small Number of Clusters.

    Science.gov (United States)

    Deke, John

    2016-10-25

    Cluster randomized controlled trials (CRCTs) often require a large number of clusters in order to detect small effects with high probability. However, there are contexts where it may be possible to design a CRCT with a much smaller number of clusters (10 or fewer) and still detect meaningful effects. The objective is to offer recommendations for best practices in design and analysis for small CRCTs. I use simulations to examine alternative design and analysis approaches. Specifically, I examine (1) which analytic approaches control Type I errors at the desired rate, (2) which design and analytic approaches yield the most power, (3) what is the design effect of spurious correlations, and (4) examples of specific scenarios under which impacts of different sizes can be detected with high probability. I find that (1) mixed effects modeling and using Ordinary Least Squares (OLS) on data aggregated to the cluster level both control the Type I error rate, (2) randomization within blocks is always recommended, but how best to account for blocking through covariate adjustment depends on whether the precision gains offset the degrees of freedom loss, (3) power calculations can be accurate when design effects from small sample, spurious correlations are taken into account, and (4) it is very difficult to detect small effects with just four clusters, but with six or more clusters, there are realistic circumstances under which small effects can be detected with high probability. © The Author(s) 2016.
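One of the analytic approaches the study finds controls the Type I error rate is OLS on data aggregated to the cluster level, which for a simple two-arm comparison reduces to a t-test on cluster means with degrees of freedom equal to the number of clusters minus two. The sketch below illustrates that analysis on a made-up six-cluster trial; the effect size, ICC, and cluster sizes are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy CRCT with 6 clusters (3 treatment, 3 control); a cluster effect induces ICC.
def simulate_cluster(effect, rng, n_per_cluster=40, sd_cluster=0.3, sd_indiv=1.0):
    return effect + rng.normal(0, sd_cluster) + rng.normal(0, sd_indiv, n_per_cluster)

treat_clusters = [simulate_cluster(0.4, rng) for _ in range(3)]
ctrl_clusters = [simulate_cluster(0.0, rng) for _ in range(3)]

# Analysis: aggregate to cluster means, then a two-sample t-test with (J - 2) df.
treat_means = np.array([c.mean() for c in treat_clusters])
ctrl_means = np.array([c.mean() for c in ctrl_clusters])
t, p = stats.ttest_ind(treat_means, ctrl_means)
df = len(treat_means) + len(ctrl_means) - 2
print(f"cluster-level t = {t:.2f}, two-sided p = {p:.3f} (df = {df})")
```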

  4. Randomized Controlled Trials in Music Therapy: Guidelines for Design and Implementation.

    Science.gov (United States)

    Bradt, Joke

    2012-01-01

    Evidence from randomized controlled trials (RCTs) plays a powerful role in today's healthcare industry. At the same time, it is important that multiple types of evidence contribute to music therapy's knowledge base and that the dialogue of clinical effectiveness in music therapy is not dominated by the biomedical hierarchical model of evidence-based practice. Whether or not one agrees with the hierarchical model of evidence in the current healthcare climate, RCTs can contribute important knowledge to our field. Therefore, it is important that music therapists are prepared to design trials that meet current methodological standards and, equally important, are able to respond appropriately to those design aspects that may not be feasible in music therapy research. To provide practical guidelines to music therapy researchers for the design and implementation of RCTs as well as to enable music therapists to be well-informed consumers of RCT evidence. This article reviews key design aspects of RCTs and discusses how to best implement these standards in music therapy trials. A systematic presentation of basic randomization methods, allocation concealment strategies, issues related to blinding in music therapy trials and strategies for implementation, the use of treatment manuals, types of control groups, outcome selection, and sample size computation is provided. Despite the challenges of meeting all key design demands typical of an RCT, it is possible to design rigorous music therapy RCTs that accurately estimate music therapy treatment benefits.
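Among the basic randomization methods the article reviews, permuted-block randomization is one of the most common. The sketch below generates a 1:1 permuted-block allocation schedule; the arm labels, block size, and seed are illustrative, and in practice the schedule would be held by an independent party to preserve allocation concealment.

```python
import random

def permuted_block_schedule(n_participants, block_size=4, arms=("music therapy", "control"), seed=2024):
    """Generate a permuted-block randomization schedule with a 1:1 allocation ratio.
    Each block contains an equal number of assignments to each arm, in random order."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_participants]

# The seed and full schedule should be kept by a central randomization service
# so that recruiters cannot predict upcoming assignments.
print(permuted_block_schedule(10))
```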

  5. Flexible sampling large-scale social networks by self-adjustable random walk

    Science.gov (United States)

    Xu, Xiao-Ke; Zhu, Jonathan J. H.

    2016-12-01

Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and frequent unavailability of OSN population data. Sampling is perhaps the only feasible solution to these problems. How to draw samples that represent the underlying OSNs has remained a formidable task for a number of conceptual and methodological reasons. In particular, most empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study supports the practice of OSN research by providing a much-needed sampling tool, contributes to the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and aids the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
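SARW itself is not specified in the record, so the sketch below only illustrates two of the baseline methods it is compared against: a plain random walk, which oversamples high-degree nodes, and a Metropolis-Hastings random walk (MHRW), which corrects that bias toward a uniform node sample. The toy graph generator is an assumption made for the example.

```python
import random

def make_toy_graph(rng, n=200):
    """Small synthetic graph with heterogeneous degrees (random recursive tree)."""
    adj = {i: set() for i in range(n)}
    for i in range(1, n):
        j = rng.randrange(i)                     # attach each new node to an earlier node
        adj[i].add(j); adj[j].add(i)
    return {k: list(v) for k, v in adj.items()}

def random_walk(adj, steps, rng):
    v = rng.choice(list(adj))
    sample = []
    for _ in range(steps):
        v = rng.choice(adj[v])
        sample.append(v)
    return sample

def mh_random_walk(adj, steps, rng):
    """Metropolis-Hastings RW: accept a move from v to u with prob min(1, deg(v)/deg(u)),
    which makes the stationary distribution uniform over nodes."""
    v = rng.choice(list(adj))
    sample = []
    for _ in range(steps):
        u = rng.choice(adj[v])
        if rng.random() < min(1.0, len(adj[v]) / len(adj[u])):
            v = u
        sample.append(v)
    return sample

rng = random.Random(5)
adj = make_toy_graph(rng)
true_mean_deg = sum(len(v) for v in adj.values()) / len(adj)
rw = random_walk(adj, 20000, rng)
mh = mh_random_walk(adj, 20000, rng)
print("true mean degree:", round(true_mean_deg, 2))
print("RW sample mean degree (biased toward hubs):", round(sum(len(adj[v]) for v in rw) / len(rw), 2))
print("MHRW sample mean degree (approx. unbiased):", round(sum(len(adj[v]) for v in mh) / len(mh), 2))
```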

  6. Effects-Driven Participatory Design: Learning from Sampling Interruptions.

    Science.gov (United States)

    Brandrup, Morten; Østergaard, Kija Lin; Hertzum, Morten; Karasti, Helena; Simonsen, Jesper

    2017-01-01

    Participatory design (PD) can play an important role in obtaining benefits from healthcare information technologies, but we contend that to fulfil this role PD must incorporate feedback from real use of the technologies. In this paper we describe an effects-driven PD approach that revolves around a sustained focus on pursued effects and uses the experience sampling method (ESM) to collect real-use feedback. To illustrate the use of the method we analyze a case that involves the organizational implementation of electronic whiteboards at a Danish hospital to support the clinicians' intra- and interdepartmental coordination. The hospital aimed to reduce the number of phone calls involved in coordinating work because many phone calls were seen as unnecessary interruptions. To learn about the interruptions we introduced an app for capturing quantitative data and qualitative feedback about the phone calls. The investigation showed that the electronic whiteboards had little potential for reducing the number of phone calls at the operating ward. The combination of quantitative data and qualitative feedback worked both as a basis for aligning assumptions to data and showed ESM as an instrument for triggering in-situ reflection. The participant-driven design and redesign of the way data were captured by means of ESM is a central contribution to the understanding of how to conduct effects-driven PD.

  7. Occupational position and its relation to mental distress in a random sample of Danish residents

    DEFF Research Database (Denmark)

    Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D

    2010-01-01

PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random sample of the Danish population. Participants filled out a survey that included the 92-item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self-employed, and the unemployed. RESULTS: ... somatization symptoms (OR = 6.28, 95% CI = 1.39-28.46). CONCLUSIONS: Unskilled manual workers, the unemployed, and, to a lesser extent, the low-grade self-employed showed an increased level of mental distress. Activities to promote mental health in the Danish population should be directed toward these groups.

  8. Assessment of proteinuria by using protein: creatinine index in random urine sample.

    Science.gov (United States)

    Khan, Dilshad Ahmed; Ahmad, Tariq Mahmood; Qureshil, Ayaz Hussain; Halim, Abdul; Ahmad, Mumtaz; Afzal, Saeed

    2005-10-01

To assess the quantitative measurement of proteinuria using the random urine protein:creatinine index/ratio, in comparison with 24-hour urinary protein excretion, in patients with renal disease and a normal glomerular filtration rate. One hundred and thirty patients (94 males and 36 females), aged 5 to 60 years and having proteinuria of more than 150 mg/day, were included in this study. Qualitative urinary protein estimation was done on a random urine specimen by dipstick. Quantitative measurement of protein in the random and 24-hour urine specimens was carried out by a method based on the formation of a red complex of protein with pyrogallol red in acid medium on a Microlab 200 (Merck). Estimation of creatinine was done on a Selectra-2 (Merck) by Jaffe's reaction. The urine protein:creatinine index was calculated by dividing the urine protein concentration (mg/L) by the urine creatinine concentration (mmol/L) and multiplying by 10; the protein:creatinine ratio was expressed in mg/mg. A protein:creatinine index of more than 140 or a ratio of more than 0.18 in a random urine sample indicated pathological proteinuria. An excellent correlation (r = 0.96) was found between the random urine protein:creatinine index/ratio and standard 24-hour urinary protein excretion in these patients. The protein:creatinine index in random urine is a convenient, quick and reliable method of estimating proteinuria, as compared with 24-hour urinary protein excretion, for the diagnosis and monitoring of renal diseases in our medical setup.
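The index and ratio defined in the abstract are simple arithmetic; the sketch below transcribes them directly, using the study's stated cut-offs, with illustrative input values.

```python
def protein_creatinine_index(urine_protein_mg_per_L, urine_creatinine_mmol_per_L):
    """Protein:creatinine index as described in the study: urine protein (mg/L) divided
    by urine creatinine (mmol/L), multiplied by 10. Values above ~140 suggested
    pathological proteinuria in this cohort."""
    return urine_protein_mg_per_L / urine_creatinine_mmol_per_L * 10

def protein_creatinine_ratio_mg_per_mg(urine_protein_mg_per_L, urine_creatinine_mg_per_L):
    """Ratio expressed in mg/mg; values above ~0.18 suggested pathological proteinuria."""
    return urine_protein_mg_per_L / urine_creatinine_mg_per_L

# Example spot-urine result (illustrative numbers only)
print(round(protein_creatinine_index(320.0, 8.5), 1))
```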

  9. Generalized essential energy space random walks to more effectively accelerate solute sampling in aqueous environment.

    Science.gov (United States)

    Lv, Chao; Zheng, Lianqing; Yang, Wei

    2012-01-28

    Molecular dynamics sampling can be enhanced via the promoting of potential energy fluctuations, for instance, based on a Hamiltonian modified with the addition of a potential-energy-dependent biasing term. To overcome the diffusion sampling issue, which reveals the fact that enlargement of event-irrelevant energy fluctuations may abolish sampling efficiency, the essential energy space random walk (EESRW) approach was proposed earlier. To more effectively accelerate the sampling of solute conformations in aqueous environment, in the current work, we generalized the EESRW method to a two-dimension-EESRW (2D-EESRW) strategy. Specifically, the essential internal energy component of a focused region and the essential interaction energy component between the focused region and the environmental region are employed to define the two-dimensional essential energy space. This proposal is motivated by the general observation that in different conformational events, the two essential energy components have distinctive interplays. Model studies on the alanine dipeptide and the aspartate-arginine peptide demonstrate sampling improvement over the original one-dimension-EESRW strategy; with the same biasing level, the present generalization allows more effective acceleration of the sampling of conformational transitions in aqueous solution. The 2D-EESRW generalization is readily extended to higher dimension schemes and employed in more advanced enhanced-sampling schemes, such as the recent orthogonal space random walk method. © 2012 American Institute of Physics

  10. Multilayer pixel super-resolution lensless in-line holographic microscope with random sample movement.

    Science.gov (United States)

    Wang, Mingjun; Feng, Shaodong; Wu, Jigang

    2017-10-06

    We report a multilayer lensless in-line holographic microscope (LIHM) with improved imaging resolution by using the pixel super-resolution technique and random sample movement. In our imaging system, a laser beam illuminated the sample and a CMOS imaging sensor located behind the sample recorded the in-line hologram for image reconstruction. During the imaging process, the sample was moved by hand randomly and the in-line holograms were acquired sequentially. Then the sample image was reconstructed from an enhanced-resolution hologram obtained from multiple low-resolution in-line holograms by applying the pixel super-resolution (PSR) technique. We studied the resolution enhancement effects by using the U.S. Air Force (USAF) target as the sample in numerical simulation and experiment. We also showed that multilayer pixel super-resolution images can be obtained by imaging a triple-layer sample made with the filamentous algae on the middle layer and microspheres with diameter of 2 μm on the top and bottom layers. Our pixel super-resolution LIHM provides a compact and low-cost solution for microscopic imaging and is promising for many biomedical applications.

  11. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    Directory of Open Access Journals (Sweden)

    Shanyou Zhu

    2014-01-01

    Full Text Available Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
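The key design comparison here is stratified sampling built on an auxiliary (MODIS-derived) deforestation signal versus simple random sampling. The toy simulation below, with invented data in place of the PRODES/Landsat/MODIS products, shows why stratifying on a correlated auxiliary variable tends to yield a more precise total than SRS of the same size.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy population of 5,000 blocks; "modis" is a cheap auxiliary deforestation signal,
# "tm" is the target (Landsat TM-like) deforested area per block, correlated with modis.
N = 5000
modis = rng.gamma(shape=0.5, scale=2.0, size=N)
tm = np.clip(0.8 * modis + rng.normal(0, 0.5, N), 0, None)
total_true = tm.sum()
n = 300

def srs_estimate(rng):
    idx = rng.choice(N, size=n, replace=False)
    return N * tm[idx].mean()

def stratified_estimate(rng):
    # Strata from the auxiliary "hotspot" signal: low / medium / high deforestation.
    cuts = np.quantile(modis, [0.6, 0.9])
    strata = np.digitize(modis, cuts)
    est = 0.0
    for h in range(3):
        members = np.flatnonzero(strata == h)
        n_h = max(2, int(round(n * len(members) / N)))   # proportional allocation
        idx = rng.choice(members, size=min(n_h, len(members)), replace=False)
        est += len(members) * tm[idx].mean()
    return est

srs = [srs_estimate(rng) for _ in range(500)]
strat = [stratified_estimate(rng) for _ in range(500)]
print(f"true total: {total_true:.0f}")
print(f"SRS        mean {np.mean(srs):.0f}, SD {np.std(srs):.0f}")
print(f"stratified mean {np.mean(strat):.0f}, SD {np.std(strat):.0f}")
```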

  12. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    Science.gov (United States)

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  13. Hemodialysis catheter design and catheter performance: a randomized controlled trial.

    Science.gov (United States)

    Van Der Meersch, Hans; De Bacquer, Dirk; Vandecasteele, Stefaan J; Van den Bergh, Barbara; Vermeiren, Pieter; De Letter, Jan; De Vriese, An S

    2014-12-01

A complication of long-term use of tunneled cuffed catheters for hemodialysis is the high rate of infection and thrombus-related dysfunction. Specific mechanical features of tunneled cuffed catheters may improve hemodynamic performance and decrease thrombosis and infection rates. However, there currently is no proven advantage of one design over another. Single-center randomized clinical trial. 302 hemodialysis patients who required a tunneled cuffed catheter as temporary or definite vascular access. Palindrome Symmetric Tip Dialysis Catheter or HemoStar Long-Term Hemodialysis Catheter. The primary end point was primary assisted patency. Secondary end points were incidence of catheter-related bloodstream infections (CRBSIs), thrombosis, and 2 indicators of rheologic function: mean effective blood flow rate and urokinase use. Mean primary assisted patency was 135.9 days for Palindrome and 136.5 days for HemoStar (P=0.8). Definite CRBSI occurred in 0.24 and 0.10/1,000 catheter-days for Palindrome and HemoStar, respectively (P=0.3). Removal rates for thrombosis that could not be resolved with thrombolysis were 0.53 and 0.43/1,000 catheter-days for Palindrome and HemoStar, respectively (P=0.7). Urokinase use was lower for Palindrome than for HemoStar, with fewer urokinase infusions/1,000 catheter-days (17 vs 35) and a higher percentage of catheters that never required thrombolysis (58% vs 45%; P=0.03). Mean effective blood flow rate was higher for Palindrome than for HemoStar (333 vs 304 mL/min). Primary assisted patency, CRBSI, and thrombosis rates did not differ between catheter types. The Palindrome catheter required less thrombolysis and achieved higher blood flow rates than the HemoStar catheter. These findings suggest that mechanical catheter design may improve catheter rheology, but does not affect risks for thrombosis and infection and hence catheter survival. Copyright © 2014 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  14. Empirical Likelihood Confidence Intervals under the Rao-Hartley-Cochran Sampling Design

    OpenAIRE

    Berger, Yves G.

    2014-01-01

The Rao-Hartley-Cochran (RHC) sampling design (Rao et al., 1962) is a popular unequal probability sampling design. We show how empirical likelihood confidence intervals can be derived under this sampling design. Berger and De La Riva Torres (2012) proposed an empirical likelihood approach which can be used for point estimation and to construct confidence intervals under complex sampling designs. We show how this approach can be adjusted for the RHC sampling design. The proposed approach intri...

  15. Characterization of Electron Microscopes with Binary Pseudo-random Multilayer Test Samples

    Energy Technology Data Exchange (ETDEWEB)

    V Yashchuk; R Conley; E Anderson; S Barber; N Bouet; W McKinney; P Takacs; D Voronov

    2011-12-31

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1] and [2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
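The BPRML samples described here are fabricated multilayers, which the sketch below does not attempt to model. As an illustration of the underlying idea, it generates a one-dimensional binary pseudo-random (maximal-length) sequence with a linear-feedback shift register and checks the near-flat power spectrum that makes such patterns useful for MTF calibration; the tap positions and length are illustrative, and the code verifies the full period rather than assuming it.

```python
import numpy as np

def lfsr_m_sequence(taps=(10, 7), seed=1):
    """Generate one period of a binary maximal-length (m-)sequence with a Fibonacci LFSR.
    The default taps are assumed to correspond to a primitive feedback polynomial;
    the assertion below checks that the full period 2^n - 1 is actually reached."""
    n = max(taps)
    state = [(seed >> i) & 1 for i in range(n)]
    period = 2 ** n - 1
    out, seen_states = [], set()
    for _ in range(period):
        seen_states.add(tuple(state))
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    assert len(seen_states) == period, "taps are not primitive: period shorter than 2^n - 1"
    return np.array(out)

seq = lfsr_m_sequence()
spectrum = np.abs(np.fft.rfft(2.0 * seq - 1.0))   # map {0,1} -> {-1,+1}
print("sequence length:", len(seq))
print("spectral flatness (std/mean of non-DC magnitudes): "
      f"{spectrum[1:].std() / spectrum[1:].mean():.3f}")
```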

  16. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    Energy Technology Data Exchange (ETDEWEB)

    Yashchuk, Valeriy V., E-mail: VVYashchuk@lbl.gov [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Conley, Raymond [NSLS-II, Brookhaven National Laboratory, Upton, NY 11973 (United States); Anderson, Erik H. [Center for X-ray Optics, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Barber, Samuel K. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Bouet, Nathalie [NSLS-II, Brookhaven National Laboratory, Upton, NY 11973 (United States); McKinney, Wayne R. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Takacs, Peter Z. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Voronov, Dmitriy L. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2011-09-01

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi{sub 2}/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  17. 23 CFR Appendix A to Part 1340 - Sample Design

    Science.gov (United States)

    2010-04-01

D. Sample Size: The following tables are provided as rough guidelines for determining sample size, based on previous probability-based seat belt surveys: "Determining First Stage Sample Size", indexed by the number of counties in the state, and "Determining Second Stage Sample Size", indexed by the average number of road segments in each county.

  18. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show that using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
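The REML estimation and Tracy-Widom scaling in the paper are not reproduced here. The sketch below only illustrates the underlying phenomenon with a simulation: when all true eigenvalues are equal, the largest sample eigenvalue is biased upward, and a simulated null 95th percentile plays the role that the scaled TW critical value plays in the paper. Sample sizes and variances are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

n, p, sigma2 = 100, 10, 1.0     # n observations on p traits, true covariance = sigma2 * I

def top_eigenvalue(n, p, sigma2, rng):
    x = rng.normal(0.0, np.sqrt(sigma2), size=(n, p))
    return np.linalg.eigvalsh(np.cov(x, rowvar=False)).max()

# Sampling (null) distribution of the leading eigenvalue when no eigenvalue exceeds sigma2.
null = np.array([top_eigenvalue(n, p, sigma2, rng) for _ in range(2000)])
print(f"true eigenvalue: {sigma2:.2f}")
print(f"mean of the largest sample eigenvalue (overdispersed upward): {null.mean():.2f}")
print(f"95th percentile of the null leading eigenvalue: {np.quantile(null, 0.95):.2f}")

# An observed leading eigenvalue would be judged against this null 95th percentile;
# the paper shows that, after centering and scaling, the same comparison can be made
# against the Tracy-Widom distribution instead of simulation.
observed = top_eigenvalue(n, p, sigma2 * 1.5, rng)   # illustrative "observed" value
print(f"illustrative observed leading eigenvalue: {observed:.2f}")
```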

  19. A two-way enriched clinical trial design: combining advantages of placebo lead-in and randomized withdrawal.

    Science.gov (United States)

    Ivanova, Anastasia; Tamura, Roy N

    2015-12-01

    A new clinical trial design, designated the two-way enriched design (TED), is introduced, which augments the standard randomized placebo-controlled trial with second-stage enrichment designs in placebo non-responders and drug responders. The trial is run in two stages. In the first stage, patients are randomized between drug and placebo. In the second stage, placebo non-responders are re-randomized between drug and placebo and drug responders are re-randomized between drug and placebo. All first-stage data, and second-stage data from first-stage placebo non-responders and first-stage drug responders, are utilized in the efficacy analysis. The authors developed one, two and three degrees of freedom score tests for treatment effect in the TED and give formulae for asymptotic power and for sample size computations. The authors compute the optimal allocation ratio between drug and placebo in the first stage for the TED and compare the operating characteristics of the design to the standard parallel clinical trial, placebo lead-in and randomized withdrawal designs. Two motivating examples from different disease areas are presented to illustrate the possible design considerations. © The Author(s) 2011.

  20. Randomized controlled trials and neuro-oncology: should alternative designs be considered?

    Science.gov (United States)

    Mansouri, Alireza; Shin, Samuel; Cooper, Benjamin; Srivastava, Archita; Bhandari, Mohit; Kondziolka, Douglas

    2015-09-01

Deficiencies in design and reporting of randomized controlled trials (RCTs) hinder interpretability and critical appraisal. The reporting quality of recent RCTs in neuro-oncology was analyzed to assess adequacy of design and reporting. The MEDLINE and EMBASE databases were searched to identify non-surgical RCTs (years 2005-2014, inclusive). The CONSORT and Jadad scales were used to assess the quality of design/reporting. Studies published in 2005-2010 were compared as a cohort against studies published in 2011-2014, in terms of general characteristics and reporting quality. A PRECIS-based scale was used to designate studies on the pragmatic-explanatory continuum. Spearman's test was used to assess correlations. Regression analysis was used to assess associations. Overall, 68 RCTs were identified. Studies were often chemotherapy-based (n = 41 studies), focusing upon high-grade gliomas (46%) and metastases (41%) as the top pathologies. Multi-center trials (71%) were frequent. The overall median CONSORT and Jadad scores were 34.5 (maximum 44) and 2 (maximum 5), respectively; these scores were similar in radiation and chemotherapy-based trials. Major areas of deficiency pertained to allocation concealment, implementation of methods, and blinding, whereby less than 20% of articles fulfilled all criteria. Description of the intervention, random sequence generation, and the details regarding recruitment were also deficient; less than 50% of studies fulfilled all criteria. Description of sample size calculations and blinding improved in later published cohorts. Journal impact factor was significantly associated with higher quality (p = 0.04). Large academic consortia, multi-center designs, ITT analysis, collaboration with biostatisticians, larger sample sizes, and studies with pragmatic objectives were more likely to achieve positive primary outcomes on univariate analysis; none of these variables were significant on multivariate analysis. Deficiencies in the

  1. Designing a monitoring program to estimate estuarine survival of anadromous salmon smolts: simulating the effect of sample design on inference

    Science.gov (United States)

    Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.

    2015-01-01

    A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data

  2. An inversion method based on random sampling for real-time MEG neuroimaging

    CERN Document Server

    Pascarella, Annalisa

    2016-01-01

MagnetoEncephaloGraphy (MEG) has gained great interest in neurorehabilitation training due to its high temporal resolution. The challenge is to localize the active regions of the brain in a fast and accurate way. In this paper we use an inversion method based on random spatial sampling to solve the real-time MEG inverse problem. Several numerical tests on synthetic but realistic data show that the method takes just a few hundredths of a second on a laptop to produce an accurate map of the electric activity inside the brain. Moreover, it requires very little memory storage. For these reasons the random sampling method is particularly attractive in real-time MEG applications.

  3. Modified Exponential Type Estimator for Population Mean Using Auxiliary Variables in Stratified Random Sampling

    OpenAIRE

    Özel, Gamze

    2015-01-01

    In this paper, a new exponential type estimator is developed in the stratified random sampling for the population mean using auxiliary variable information. In order to evaluate efficiency of the introduced estimator, we first review some estimators and study the optimum property of the suggested strategy. To judge the merits of the suggested class of estimators over others under the optimal condition, simulation study and real data applications are conducted. The results show that the introduc...

  4. Effectiveness of hand hygiene education among a random sample of women from the community

    OpenAIRE

    Ubheeram, J.; Biranjia-Hurdoyal, S.D.

    2017-01-01

    Summary Objective. The effectiveness of hand hygiene education was investigated by studying the hand hygiene awareness and bacterial hand contamination among a random sample of 170 women in the community. Methods. Questionnaire was used to assess the hand hygiene awareness score, followed by swabbing of the dominant hand. Bacterial identification was done by conventional biochemical tests. Results. Better hand hygiene awareness score was significantly associated with age, scarce bacterial gro...

  5. Control Capacity and A Random Sampling Method in Exploring Controllability of Complex Networks

    OpenAIRE

Jia, Tao; Barabási, Albert-László

    2013-01-01

    Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDS's suggests that nodes do not participate in control equally, prompting us to quantify their participations. Here we introduce control capacity quantifying the likelihood that a node is a driver node. To efficiently measure this quantity, we develop a random sampling algorithm. Thi...

  6. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection

    OpenAIRE

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-01-01

    Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential fea...

  7. Statistical Approaches in Analysis of Variance: from Random Arrangements to Latin Square Experimental Design

    OpenAIRE

    Radu E. SESTRAŞ; Lorentz JÄNTSCHI; Sorana D. BOLBOACĂ

    2009-01-01

Background: The choice of experimental design as well as of statistical analysis is of huge importance in field experiments. These need to be made correctly in order to obtain the best possible precision of the results. The random arrangements, randomized blocks, and Latin square designs were reviewed and analyzed from the statistical perspective of error analysis. Material and Method: Random arrangements, randomized block, and Latin square experimental designs were used as field experiments. ...
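The article's own analyses are not reproduced here; as a small illustration of the last design it reviews, the sketch below builds a randomized Latin square layout by taking a cyclic base square and randomly permuting its rows, columns, and treatment labels, which is a standard way to randomize such a field layout.

```python
import random

def random_latin_square(treatments, seed=0):
    """Build a Latin square by cyclically shifting the treatment indices, then randomly
    permuting rows, columns, and treatment labels (all of which preserve the Latin property)."""
    rng = random.Random(seed)
    t = len(treatments)
    square = [[(i + j) % t for j in range(t)] for i in range(t)]   # cyclic base square
    rows = rng.sample(range(t), t)
    cols = rng.sample(range(t), t)
    labels = rng.sample(treatments, t)                              # random relabeling
    return [[labels[square[r][c]] for c in cols] for r in rows]

for row in random_latin_square(["A", "B", "C", "D"]):
    print(" ".join(row))
```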

  8. Sample size calculations for pilot randomized trials: a confidence interval approach.

    Science.gov (United States)

    Cocks, Kim; Torgerson, David J

    2013-02-01

    To describe a method using confidence intervals (CIs) to estimate the sample size for a pilot randomized trial. Using one-sided CIs and the estimated effect size that would be sought in a large trial, we calculated the sample size needed for pilot trials. Using an 80% one-sided CI, we estimated that a pilot trial should have at least 9% of the sample size of the main planned trial. Using the estimated effect size difference for the main trial and using a one-sided CI, this allows us to calculate a sample size for a pilot trial, which will make its results more useful than at present. Copyright © 2013 Elsevier Inc. All rights reserved.
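The abstract's "at least 9%" figure can be reconstructed from the stated ingredients: an 80% one-sided confidence interval for the pilot and a main trial powered at 80% with two-sided alpha of 5%. The sketch below computes that fraction from normal quantiles; it is a reconstruction of the implied arithmetic, not necessarily the authors' exact derivation.

```python
from scipy.stats import norm

def pilot_fraction(ci_level=0.80, alpha=0.05, power=0.80):
    """Fraction of the main trial's sample size suggested for a pilot: the ratio of the
    one-sided CI multiplier to the (alpha, power) multiplier of the main trial, squared."""
    z_ci = norm.ppf(ci_level)                       # one-sided CI multiplier
    z_main = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return (z_ci / z_main) ** 2

def pilot_sample_size(n_main_total, ci_level=0.80, alpha=0.05, power=0.80):
    return int(round(pilot_fraction(ci_level, alpha, power) * n_main_total))

print(f"pilot fraction: {pilot_fraction():.3f}")            # ~0.09, i.e. ~9% of the main trial
print(f"pilot size for a planned main trial of 800: {pilot_sample_size(800)}")
```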

  9. [Estimate methods used with complex sampling designs: their application in the Cuban 2001 health survey].

    Science.gov (United States)

    Cañizares Pérez, Mayilée; Barroso Utra, Isabel; Alfonso León, Alina; García Roche, René; Alfonso Sagué, Karen; Chang de la Rosa, Martha; Bonet Gorbea, Mariano; León, Esther M

    2004-03-01

    To look at the individual features of three different methods used to estimate simple parameters--means, totals, and percentages, as well as their standard errors--and of logistic regression models, and to describe how such methods can be used for analyzing data obtained from complex samples. Data from Cuba's Second National Survey of Risk Factors and Non-Communicable Chronic Ailments [Segunda Encuesta Nacional de Factores de Riesgo y Afecciones Crónicas No Transmisibles], which was conducted in 2001, were studied. A complex, stratified multi-stage cluster sampling design was used. Cuba's 14 provinces and the municipality of Isla de la Juventud served as the strata, while the clusters consisted of sampled geographic areas (SGA), blocks, and sectors. Samples were weighted in inverse proportion to their probability of being selected, and estimates were performed by sex and age group (15-34, 35-54, 55-74, and 75 or more years). Taylor approximations were used to estimate variances. Three statistical methods were compared: conventional analysis, which assumes all data were obtained through simple random sampling; weighted analysis, which only takes into account the weight of the samples when performing estimates; and adjusted analysis, which looks at all aspects of the sampling design (namely, the disparity in the probability of being included in the sample and the effect of clustering on the data). The point estimates obtained with the three different types of analytic methods were similar. Standard error (SE) estimates for the prevalence of overweight and of arterial hypertension that were obtained by conventional analysis were underestimated by 19.3% and by more than 11.5%, respectively, when such estimates were compared to those obtained with the other two analytic methods. On the other hand, weighted analysis generated SE values that were much smaller than those obtained with the other two types of analyses. The same pattern was noted when odds ratios were

  10. Empirical likelihood inference for the Rao-Hartley-Cochran sampling design

    OpenAIRE

    Berger, Yves G.

    2016-01-01

The Rao-Hartley-Cochran sampling design is an unequal probability sampling design which can be used to select samples from finite populations. We propose to adjust the empirical likelihood approach for the Rao-Hartley-Cochran sampling design. The approach proposed intrinsically incorporates sampling weights and auxiliary information, and allows for large sampling fractions. It can be used to construct confidence intervals. In a simulation study, we show that the coverage may be better for the emp...

  11. Estimating the Size of a Large Network and its Communities from a Random Sample.

    Science.gov (United States)

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.

  12. Designing waveforms for temporal encoding using a frequency sampling method

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. By using the proposed design method, the amplitude spectrum of the transmitted waveform can be optimized, such that most of the energy is transmitted where the transducer has large amplification. To test the design method, a waveform was designed for a BK8804 linear array transducer. The resulting nonlinear frequency modulated waveform ... The waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-to-noise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate

  13. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examine the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (inter-quartile range) percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in a trial registry; about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registry. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
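Recalculating a reported sample size and expressing the discrepancy as a percentage, as this review does, only needs the standard design formula for the trial in question. The sketch below shows the common two-mean case and the percentage-difference audit metric; the review covered many designs, and the inputs here are illustrative.

```python
import math
from scipy.stats import norm

def n_per_group_two_means(delta, sd, alpha=0.05, power=0.80):
    """Standard per-group sample size for comparing two means:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z ** 2) * (sd ** 2) / (delta ** 2))

def percentage_difference(reported_n, calculated_n):
    """Percentage difference between a paper's reported sample size and the recalculated one."""
    return 100.0 * (reported_n - calculated_n) / calculated_n

calc = n_per_group_two_means(delta=5.0, sd=10.0)       # illustrative inputs
print(f"recalculated n per group: {calc}")
print(f"difference if the paper reported 60 per group: {percentage_difference(60, calc):.1f}%")
```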

  14. A novel sampling design to explore gene-longevity associations

    DEFF Research Database (Denmark)

    De Rango, Francesco; Dato, Serena; Bellizzi, Dina

    2008-01-01

    To investigate the genetic contribution to familial similarity in longevity, we set up a novel experimental design where cousin-pairs born from siblings who were concordant or discordant for the longevity trait were analyzed. To check this design, two chromosomal regions already known to encompas...

  15. Location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling

    Directory of Open Access Journals (Sweden)

    Alireza Goli

    2015-09-01

Full Text Available Distribution and optimal allocation of emergency resources are among the most important tasks to be accomplished during a crisis. When a natural disaster such as an earthquake or flood takes place, it is necessary to deliver rescue efforts as quickly as possible. Therefore, it is important to find the optimal location and distribution of emergency relief resources. When a natural disaster occurs, it may not be possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this approach, there is no need to visit all the places, and some demand points receive their needs from the nearest possible location. The proposed method is tested on randomly generated instances of different sizes. The preliminary results indicate that the proposed method is capable of reaching desirable solutions in a reasonable amount of time.

  16. ESTIMATION OF FINITE POPULATION MEAN USING RANDOM NON–RESPONSE IN SURVEY SAMPLING

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2010-12-01

    Full Text Available This paper considers the problem of estimating the population mean under three different situations of random non-response envisaged by Singh et al. (2000). Some ratio- and product-type estimators are proposed, and their properties are studied under the assumption that the number of sampling units on which information cannot be obtained owing to random non-response follows some distribution. The suggested estimators are compared with the usual ratio and product estimators. An empirical study is carried out to show the performance of the suggested estimators over the usual unbiased, ratio, and product estimators. A generalized version of the proposed ratio and product estimators is also given.
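
    For orientation, the classical ratio and product estimators that the proposed estimators are compared against can be written in a few lines. The sketch below is a generic illustration with simulated data; it does not implement the random non-response adjustments of the paper, and all numbers are invented.

        import numpy as np

        def ratio_estimator(y, x, X_bar):
            """Classical ratio estimator of the population mean of y, using the known
            population mean X_bar of a positively correlated auxiliary variable x."""
            return np.mean(y) * X_bar / np.mean(x)

        def product_estimator(y, x, X_bar):
            """Classical product estimator, preferable when y and x are negatively correlated."""
            return np.mean(y) * np.mean(x) / X_bar

        rng = np.random.default_rng(1)
        x = rng.gamma(5.0, 2.0, size=2000)                   # auxiliary variable with known mean
        y = 3.0 * x + rng.normal(0, 2, size=2000)            # study variable
        sample = rng.choice(2000, size=50, replace=False)    # simple random sample
        print(ratio_estimator(y[sample], x[sample], X_bar=x.mean()))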

  17. Securing image information using double random phase encoding and parallel compressive sensing with updated sampling processes

    Science.gov (United States)

    Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing

    2017-11-01

    Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. Particularly, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with a conventional CS-based cryptosystem that regenerates all the random entries of the measurement matrix, our scheme is more efficient while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.

  18. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    Science.gov (United States)

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy for handling the sample size problem is to adjust the sample size used in the analysis of fit; an alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when adjustments are applied to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value of an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated when the adjusted sample size function is used. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
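
    The comparison can be reproduced in miniature for a simple goodness-of-fit test: scale the full-sample chi-square down to a target sample size (assuming, as a simplification, that chi-square grows roughly in proportion to n) and compare it with the chi-square computed on an actual random subsample. The adjustment function studied by the author may well differ from this simple proportional form; the distributions below are invented.

        import numpy as np
        from scipy.stats import chisquare

        rng = np.random.default_rng(0)
        probs_h0 = np.array([0.25, 0.25, 0.25, 0.25])      # hypothesized distribution
        probs_true = np.array([0.26, 0.25, 0.25, 0.24])    # slight misfit in the population
        full = rng.choice(4, size=21_000, p=probs_true)    # "original" sample of 21,000

        def gof_chi2(sample):
            observed = np.bincount(sample, minlength=4)
            return chisquare(observed, f_exp=len(sample) * probs_h0).statistic

        chi2_full = gof_chi2(full)
        for n_target in (5_000, 500):
            adjusted = chi2_full * n_target / len(full)                           # adjusted chi-square
            resampled = gof_chi2(rng.choice(full, size=n_target, replace=False))  # random sample approach
            print(n_target, round(adjusted, 2), round(resampled, 2))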

  19. Randomized controlled trial on timing and number of sampling for bile aspiration cytology.

    Science.gov (United States)

    Tsuchiya, Tomonori; Yokoyama, Yukihiro; Ebata, Tomoki; Igami, Tsuyoshi; Sugawara, Gen; Kato, Katsuyuki; Shimoyama, Yoshie; Nagino, Masato

    2014-06-01

    The issue of the timing and number of bile samplings for exfoliative bile cytology remains unsettled. A total of 100 patients with cholangiocarcinoma undergoing resection after external biliary drainage were randomized into two groups: a 2-day group, in which bile was sampled five times per day for 2 days, and a 10-day group, in which bile was sampled once per day for 10 days (registered University Hospital Medical Information Network/ID 000005983). The outcome of 87 patients who underwent laparotomy was analyzed, 44 in the 2-day group and 43 in the 10-day group. There were no significant differences in patient characteristics between the two groups. Positivity after one sampling session was significantly lower in the 2-day group than in the 10-day group (17.0 ± 3.7% vs. 20.7 ± 3.5%, P = 0.034). However, the cumulative positivity curves were similar and overlapped each other in both groups. The final cumulative positivity by the 10th sampling session was 52.3% in the 2-day group and 51.2% in the 10-day group. We observed a small increase in cumulative positivity after the 5th or 6th session in both groups. Bile cytology positivity is therefore unlikely to be affected by sampling time. © 2013 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  20. Estimating the Size of a Large Network and its Communities from a Random Sample

    CERN Document Server

    Chen, Lin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...
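
    The flavour of the estimation problem can be conveyed with a much cruder estimator than PULSE: if vertices are sampled uniformly at random and edges attach roughly uniformly (ignoring the block structure), the expected fraction of a sampled vertex's edges that stay inside the sample is about (|W| - 1)/(N - 1), which can be inverted to estimate N. The sketch below applies this simplification to a synthetic graph; it is not the authors' algorithm.

        import numpy as np
        import networkx as nx

        def estimate_population_size(G, W):
            """Crude vertex-count estimate from the induced subgraph on W plus the
            total degree of each sampled vertex (uniform-attachment assumption)."""
            W = list(W)
            total_degree = sum(G.degree(v) for v in W)
            inside_degree = 2 * G.subgraph(W).number_of_edges()
            frac_inside = inside_degree / total_degree        # ~ (|W| - 1) / (N - 1)
            return 1 + (len(W) - 1) / frac_inside

        rng = np.random.default_rng(0)
        G = nx.gnp_random_graph(2000, 0.01, seed=0)
        W = rng.choice(G.number_of_nodes(), size=200, replace=False)
        print(estimate_population_size(G, W))                 # should land near 2000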

  1. Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data

    Science.gov (United States)

    Sree, David

    1992-01-01

    Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra, from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that reliable spectral estimates can be obtained at frequencies up to about five times the mean sampling rate.
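
    A minimal sketch of the correlation-based slotting technique mentioned above: products of all sample pairs are accumulated into lag slots of width dt, and the slot averages estimate the autocorrelation of the randomly sampled signal (a power spectral density estimate then follows from a Fourier transform of the slot averages). This is a generic illustration in Python, not the FORTRAN codes or the prefiltering developed in the study.

        import numpy as np

        def slotted_autocorrelation(t, u, dt, max_lag):
            """Slot-averaged autocorrelation for irregularly (randomly) sampled data.
            t: sample times, u: velocity fluctuations (mean removed), dt: slot width."""
            n_slots = int(max_lag / dt)
            num = np.zeros(n_slots)
            cnt = np.zeros(n_slots)
            for i in range(len(t)):
                lags = t[i:] - t[i]                      # non-negative lags from sample i
                k = (lags / dt).astype(int)              # slot index of each sample pair
                keep = k < n_slots
                np.add.at(num, k[keep], u[i] * u[i:][keep])
                np.add.at(cnt, k[keep], 1)
            return num / np.maximum(cnt, 1)              # average product per lag slot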

  2. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of a stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.

  3. Designing a Repetitive Group Sampling Plan for Weibull Distributed Processes

    Directory of Open Access Journals (Sweden)

    Aijun Yan

    2016-01-01

    Full Text Available Acceptance sampling plans are useful tools for determining whether submitted lots should be accepted or rejected. An efficient and economical sampling plan is very desirable for the high quality levels required by production processes. The process capability index CL is an important quality parameter for measuring product quality. Utilizing the relationship between the CL index and the nonconforming rate, a repetitive group sampling (RGS) plan based on the CL index is developed in this paper for the case where the quality characteristic follows the Weibull distribution. The optimal parameters of the proposed RGS plan are determined by minimizing the average sample number (ASN) while satisfying the commonly used producer's risk and consumer's risk at the same time, and are then tabulated for different combinations of acceptance quality level (AQL) and limiting quality level (LQL). The results show that the proposed plan has better performance than the single sampling plan in terms of ASN. Finally, the proposed RGS plan is illustrated with an industrial example.

  4. Robustness and Optimal Design Issues for Cluster Randomized Trials

    NARCIS (Netherlands)

    Korendijk, E.|info:eu-repo/dai/nl/072035463

    2012-01-01

    Cluster randomized trials (CRT), in which whole clusters instead of individuals are assigned to conditions, are not uncommon in the social, behavioral, educational, medical and organizational sciences. Though the assignment of individuals to treatment conditions is more efficient, this may not

  5. Protein/creatinine ratio on random urine samples for prediction of proteinuria in preeclampsia.

    Science.gov (United States)

    Roudsari, F Vahid; Ayati, S; Ayatollahi, H; Shakeri, M T

    2012-01-01

    To evaluate the protein/creatinine ratio in random urine samples for the prediction of proteinuria in preeclampsia. This study was performed on 150 pregnant women who were hospitalized with preeclampsia in Ghaem Hospital during 2006. First, a random urine sample was collected from each patient to determine the protein/creatinine ratio; then, a 24-hour urine collection was analyzed for the evaluation of proteinuria. Statistical analysis was performed with SPSS software. A total of 150 patients entered the study. There was a significant relation between the 24-hour urine protein and the protein/creatinine ratio (r = 0.659, P < 0.001). Since the measurement of the protein/creatinine ratio is more accurate, reliable, and cost-effective, it can replace the measurement of the 24-hour urine protein.

  6. Integrating legacy soil information in a Digital Soil Mapping approach based on a modified conditioned Latin Hypercube Sampling design

    Science.gov (United States)

    Stumpf, Felix; Schmidt, Karsten; Behrens, Thorsten; Schoenbrodt-Stitt, Sarah; Scholten, Thomas

    2014-05-01

    One crucial component of a Digital Soil Mapping (DSM) framework is geo-referenced soil observations. Nevertheless, highly informative legacy soil information acquired by traditional soil surveys is often neglected because it does not accord with specific statistical DSM sampling designs. The focus of this study is to integrate legacy data into a state-of-the-art DSM approach based on a modified conditioned Latin Hypercube Sampling (cLHS) design and Random Forest. Furthermore, the cLHS modification widens the set of admissible cLHS sampling locations in order to compensate for limited accessibility in the field, while the maximally stratified nature of the cLHS design is not diluted. The target variables of the modelling are, by way of example, the sand and clay fractions. The study site is a small mountainous hydrological catchment of 4.2 km² in the reservoir area of the Three Gorges Dam in Central China. The modification is accomplished by demarcating the histogram borders of each cLHS stratum, which are based on the multivariate cLHS feature space; thereby, all potential sample locations per stratum are identified. This makes it possible to integrate legacy data samples that match one of the newly created sample locations, and provides flexibility with respect to field accessibility. Consequently, six legacy data samples, out of a total sample size of n = 30, were integrated into the sampling design, and several potential sample locations were identified for every stratum. The comparability of the modified and standard cLHS data sets is confirmed by (i) examining their feature-space coverage with respect to the cLHS stratifying variables and (ii) assessing the Random Forest accuracy estimates.
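
    A toy sketch of the underlying idea as described above: quantile ("histogram border") bins of the covariates define strata, every grid cell inside a stratum is an interchangeable candidate location, and a legacy profile is accepted when it falls in the required stratum. All arrays, thresholds, and covariate values are invented for illustration; the actual study additionally uses the multivariate cLHS feature space, geological information, and Random Forest modelling.

        import numpy as np

        rng = np.random.default_rng(0)
        # Toy covariate grid: topographic wetness index and solar radiation for 10,000 cells
        cells = np.column_stack([rng.normal(8, 2, 10_000), rng.normal(1200, 150, 10_000)])
        n_strata = 5

        # Quantile (histogram) borders per covariate, computed from the grid itself
        edges = [np.quantile(cells[:, j], np.linspace(0, 1, n_strata + 1))[1:-1] for j in range(2)]

        def stratum_id(points):
            """Combined stratum index from the per-covariate quantile bins."""
            bins = [np.digitize(points[:, j], edges[j]) for j in range(2)]
            return bins[0] * n_strata + bins[1]

        ids = stratum_id(cells)
        legacy = np.array([[7.5, 1180.0]])        # hypothetical legacy soil profile covariates
        candidates = np.where(ids == stratum_id(legacy)[0])[0]
        print(len(candidates), "grid cells are interchangeable with this legacy profile")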

  7. NEON terrestrial field observations: designing continental scale, standardized sampling

    Science.gov (United States)

    R. H. Kao; C.M. Gibson; R. E. Gallery; C. L. Meier; D. T. Barnett; K. M. Docherty; K. K. Blevins; P. D. Travers; E. Azuaje; Y. P. Springer; K. M. Thibault; V. J. McKenzie; M. Keller; L. F. Alves; E. L. S. Hinckley; J. Parnell; D. Schimel

    2012-01-01

    Rapid changes in climate and land use and the resulting shifts in species distributions and ecosystem functions have motivated the development of the National Ecological Observatory Network (NEON). Integrating across spatial scales from ground sampling to remote sensing, NEON will provide data for users to address ecological responses to changes in climate, land use,...

  8. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available Alunite in the alteration zones was chosen as the target mineral. Sampling points are distributed more intensely in regions with a high probability of alunite, as classified by both SAM and SFF, thus representing the purest pixels. This method leads to an efficient distribution...

  9. LOD score exclusion analyses for candidate QTLs using random population samples.

    Science.gov (United States)

    Deng, Hong-Wen

    2003-11-01

    While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes as putative QTLs using random population samples. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach for exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed and if an LOD score is < or = -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of Vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.

  10. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Full Text Available Due to the complexity of the system and lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty, which gives it good compatibility, and it avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. Original expert judgments are retained essentially unchanged throughout the procedure. Constructing the cumulative probability function and the random sampling process require no human intervention or judgment and can easily be implemented in computer programs, giving the method an apparent advantage in evaluation practice for fairly large index systems.

  11. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design was applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times, and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using

  12. Reporting quality of stepped wedge design randomized trials: a systematic review protocol

    Directory of Open Access Journals (Sweden)

    Thabane A

    2016-07-01

    Full Text Available Background: Stepped wedge design (SWD) is a cluster randomized controlled trial (RCT) design that sequentially rolls out intervention to all clusters at varying time points. Being a relatively new design method, reporting quality has yet to be explored, and this review will seek to fill this gap in knowledge. Objectives: The objectives of this review are: 1) to assess the quality of SWD trial reports based on the CONSORT guidelines or CONSORT extension to cluster RCTs; 2) to assess the completeness of reporting of SWD trial abstracts using the CONSORT extension for abstracts; 3) to assess the reporting of sample size details in SWD trial reports or protocols; 4) to assess the completeness of reporting of SWD trial protocols according to SPIRIT guidelines; 5) to assess the consistency between the trial registration information and final SWD trial reports; and 6) to assess the consistency of what is reported in the abstracts and main text of the SWD trial reports. We will also explore factors that are associated with the completeness of reporting. Methods: We will search MEDLINE, EMBASE, Web of Science, CINAHL, and PsycINFO for all randomized controlled trials

  13. Measuring Radionuclides in the environment: radiological quantities and sampling designs

    Energy Technology Data Exchange (ETDEWEB)

    Voigt, G. [ed.] [GSF - Forschungszentrum fuer Umwelt und Gesundheit Neuherberg GmbH, Oberschleissheim (Germany). Inst. fuer Strahlenschutz

    1998-10-01

    One aim of the workshop was to support and provide an ICRU report committee (International Union of Radiation Units) with actual information on techniques, data and knowledge of modern radioecology when radionuclides are to be measured in the environment. It has been increasingly recognised that some studies in radioecology, especially those involving both field sampling and laboratory measurements, have not paid adequate attention to the problem of obtaining representative, unbiased samples. This can greatly affect the quality of scientific interpretation, and the ability to manage the environment. Further, as the discipline of radioecology has developed, it has seen a growth in the numbers of quantities and units used, some of which are ill-defined and which are non-standardised. (orig.)

  14. Design of the CERN MEDICIS Collection and Sample Extraction System

    CERN Document Server

    Brown, Alexander

    MEDICIS is a new facility at CERN ISOLDE that aims to produce radio-isotopes for medical research. Possible designs for the radio-isotope collection and transport system were investigated. A system using readily available equipment was devised with the aim of keeping costs to a minimum whilst maintaining the highest safety standards. FLUKA, a Monte Carlo radiation transport code, was used to simulate the radiation from the isotopes to be collected. By simulating the collection of all isotopes of interest to CERN's MEDICIS facility for medical research, 44Sc was found to give the largest dose. The simulations helped guide the amount of shielding used in the final design. Swiss regulations stipulating the allowed activity levels of individual isotopes were also considered within the body of the work.

  15. Sample frame and related sample design issues for surveys of physicians and physician practices.

    Science.gov (United States)

    Digaetano, Ralph

    2013-09-01

    A sample frame is the listing of the units from which a sample is to be selected. When deciding upon a file to serve as a source for a sample frame for a survey, perhaps the most important consideration is the extent to which the target population will be covered by the frame. However, other issues also come into play such as the accuracy of contact and other information appearing on the file as well as its cost. The American Medical Association Masterfile has long been considered the preferred choice for surveys of physicians, although it does have drawbacks. Here we consider two alternative files, discussing their relative advantages and disadvantages. For surveys of physician practices (or other organizations that employ physicians), there have been no files that are obvious choices to serve as the basis for a sample frame. Here we discuss data collection on physician practices from an analytic perspective and consider how sampling physicians to obtain practice level data may be a desirable approach.

  16. UHF Antenna Design for AFIT Random Noise Radar

    Science.gov (United States)

    2012-03-01

    which are still widely used to this day, including spherical dipoles, square plate dipoles, biconical dipoles, bow-tie dipoles, and their respective... The final antenna (Figure 35 of the report) resembles a planar design of a historical biconical antenna designed by P.S. Carter in 1939 [57].

  17. On The Choice of a Completely Randomized Design with Emphasis ...

    African Journals Online (AJOL)

    Journal of the Nigerian Association of Mathematical Physics ... The designs include the unconstrained CRD, the zero-sum constraint CRD and the baseline constraint CRD. ... The study shows that the baseline constraint CRD is the most suitable choice of designs for an experiment with five treatments and six replicates.

  18. Demonstrating treatment efficacy using the single subject randomization design: A tutorial and demonstration.

    Science.gov (United States)

    Rvachew, Susan; Matthews, Tanya

    2017-05-01

    Single case research refers to the broader category of research designs in which each case serves as his or her own control. A single subject randomization design is a specific form in which sessions are randomly allocated to treatment conditions within subjects. Two tutorials on the application of these designs are provided. In the single subject randomized phase design, baseline probes are administered repeatedly during a no-treatment or usual care phase; subsequently probes are administered repeatedly during the treatment phase of the experiment; the starting point for the treatment phase is determined by random selection. In the single subject randomized alternation design, any session can be randomly allocated to any treatment condition. In either case, the test statistic can be the mean of probe performance during the treatment sessions after subtracting the baseline mean. The significance of the obtained test statistic is determined by resampling test. Specifically, the obtained test statistic is interpreted relative to a distribution of test statistics generated by all possible random allocations. This distribution yields a P value which represents the probability of obtaining a test statistic as large as that obtained by the selected allocation. In addition to the tutorials, two experiments using these designs with a single 8-year-old participant with Childhood Apraxia of Speech are presented to demonstrate the utility of these designs and the application of the associated statistical analysis procedures. Copyright © 2017 Elsevier Inc. All rights reserved.
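
    A minimal sketch of the randomized phase design analysis described above: the test statistic is the treatment-phase mean minus the baseline mean, and the reference distribution is obtained by recomputing that statistic for every start point the protocol allowed. The probe scores and admissible start points below are invented for illustration.

        import numpy as np

        def randomized_phase_test(probes, actual_start, possible_starts):
            """Single subject randomized phase design: the intervention start point was
            chosen at random from possible_starts. The p-value is the proportion of
            admissible start points whose statistic is at least as large as the one
            actually obtained."""
            probes = np.asarray(probes, dtype=float)

            def statistic(start):
                return probes[start:].mean() - probes[:start].mean()

            observed = statistic(actual_start)
            reference = np.array([statistic(s) for s in possible_starts])
            return observed, np.mean(reference >= observed)

        # Hypothetical probe scores from 20 sessions; treatment actually began at session 9
        # (0-based), and the protocol allowed any start between sessions 5 and 15.
        scores = [10, 11, 9, 10, 12, 11, 10, 12, 11, 15, 16, 17, 16, 18, 17, 19, 18, 18, 19, 20]
        stat, p = randomized_phase_test(scores, actual_start=9, possible_starts=range(5, 16))
        print(f"test statistic = {stat:.2f}, randomization p = {p:.3f}")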

  19. Dual to Ratio-Cum-Product Estimator in Simple and Stratified Random Sampling

    OpenAIRE

    Yunusa Olufadi

    2013-01-01

    New estimators for estimating the finite population mean using two auxiliary variables under simple and stratified sampling designs are proposed. Their properties (e.g., mean square error) are studied to the first order of approximation. Moreover, some existing estimators are shown to be particular members of the proposed estimator. Furthermore, comparison of the proposed estimator with the usual unbiased estimator and other estimators considered in this paper reveals interesting results. These results are fur...

  20. The design of the run Clever randomized trial

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik

    2016-01-01

    BACKGROUND: Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need... follow-up. Healthy recreational runners between 18 and 65 years and with an average of 1-3 running sessions per week over the past 6 months are included. Participants are randomized into two intervention groups: Running schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity...

  1. Improving the Design of Science Intervention Studies: An Empirical Investigation of Design Parameters for Planning Group Randomized Trials

    Science.gov (United States)

    Westine, Carl; Spybrook, Jessaca

    2013-01-01

    The capacity of the field to conduct power analyses for group randomized trials (GRTs) of educational interventions has improved over the past decade (Authors, 2009). However, a power analysis depends on estimates of design parameters. Hence it is critical to build the empirical base of design parameters for GRTs across a variety of outcomes and…

  2. An empirical examination of respondent driven sampling design effects among HIV risk groups from studies conducted around the world.

    Science.gov (United States)

    Johnston, Lisa G; Chen, Yea-Hung; Silva-Santisteban, Alfonso; Raymond, H Fisher

    2013-07-01

    For studies using respondent driven sampling (RDS), the current practice of collecting a sample twice as large as that used in simple random sampling (SRS) (i.e. design effect of 2.00) may not be sufficient. This paper provides empirical evidence of sample-to-sample variability in design effects using data from nine studies in six countries among injecting drug users, female sex workers, men who have sex with men and male-to-female transgender (MTF) persons. We computed the design effect as the variance under RDS divided by the variance under SRS for a broad range of demographic and behavioral variables in each study. We also estimated several measures for each variable in each study that we hypothesized might be related to design effect: the number of waves needed for equilibrium, homophily, and mean network size. Design effects for all studies ranged from 1.20 to 5.90. Mean design effects among all studies ranged from 1.50 to 3.70. A particularly high design effect was found for employment status (design effect of 5.90) of MTF in Peru. This may be explained by a "bottleneck"--defined as the occurrence of a relatively small number of recruitment ties between two groups in the population. A design effect of two for RDS studies may not be sufficient. Since the mean design effect across all studies was 2.33, an effect slightly above 2.00 may be adequate; however, an effect closer to 3.00 or 4.00 might be more appropriate.
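
    Conceptually, the design effect reported here is Var_RDS divided by the binomial variance p(1 - p)/n that simple random sampling would give. The sketch below approximates Var_RDS with a crude bootstrap over whole recruitment chains; the authors' variance estimator is almost certainly more sophisticated, and the data are hypothetical, so treat this purely as an illustration of the ratio.

        import numpy as np

        def rds_design_effect(chains, n_boot=2000, seed=0):
            """Design effect = Var_RDS / Var_SRS for a binary variable; `chains` is a
            list of 0/1 arrays, one per seed's recruitment chain. Var_RDS is approximated
            by resampling whole chains with replacement (a crude chain bootstrap)."""
            rng = np.random.default_rng(seed)
            chains = [np.asarray(c, dtype=float) for c in chains]
            pooled = np.concatenate(chains)
            p, n = pooled.mean(), len(pooled)
            boot = [np.concatenate([chains[i] for i in rng.integers(0, len(chains), len(chains))]).mean()
                    for _ in range(n_boot)]
            return np.var(boot, ddof=1) / (p * (1 - p) / n)

        # Hypothetical data: three chains with strong within-chain clustering of the trait
        chains = [np.r_[np.ones(40), np.zeros(10)],
                  np.r_[np.ones(5), np.zeros(45)],
                  np.r_[np.ones(20), np.zeros(30)]]
        print(round(rds_design_effect(chains), 1))    # clustering pushes the design effect above 1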

  3. Tobacco smoking surveillance: is quota sampling an efficient tool for monitoring national trends? A comparison with a random cross-sectional survey.

    Directory of Open Access Journals (Sweden)

    Romain Guignard

    Full Text Available OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study was carried out to assess the reliability of a monitoring system for smoking prevalence based on regular quota sampling surveys. DESIGN / OUTCOME MEASURES: In 2010, current and daily tobacco smoking prevalences obtained in a quota survey of 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% in 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was only slightly related to smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.

  4. Conic sampling: an efficient method for solving linear and quadratic programming by randomly linking constraints within the interior.

    Science.gov (United States)

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics.

  5. Improvement of sampling strategies for randomly distributed hotspots in soil applying a computerized simulation considering the concept of uncertainty.

    Science.gov (United States)

    Hildebrandt, Thomas; Pick, Denis; Einax, Jürgen W

    2012-02-01

    The pollution of soil and the environment as a result of human activity is a major problem. Nowadays, the determination of local contaminations is of interest for environmental remediation. These hotspots can have various toxic effects on plants, animals, humans, and the whole ecological system. However, economic and juridical consequences are also possible, e.g., high costs for remediation measures. In this study, three sampling strategies (simple random sampling, stratified sampling, and systematic sampling) were applied to randomly distributed hotspot contaminations to assess their efficiency in terms of finding hotspots. The results were used for the validation of a computerized simulation. This application can simulate the contamination of a field, the sampling pattern, and a virtual sampling. A constant hit rate showed that none of the sampling patterns achieved better results than the others. Furthermore, the uncertainty associated with the results is described by confidence intervals. It should be noted that the uncertainty during sampling is enormous and decreases only slightly, even if the number of samples is increased to an unreasonable amount. It is hardly possible to identify the exact number of randomly distributed hotspot contaminations by statistical sampling, but a range of possible results can be calculated. Depending on various parameters such as the shape and size of the area, the number of hotspots, and the sample quantity, optimal sampling strategies can be derived. Furthermore, an estimation of the bias arising from the sampling methodology is possible. The developed computerized simulation is an innovative tool for optimizing sampling strategies in terrestrial compartments for hotspot distributions.
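
    The kind of computerized simulation described can be sketched in a few lines: scatter hotspots at random over a field, overlay a sampling pattern, and record how often at least one sample hits a hotspot. The field size, hotspot radius, and sample counts below are arbitrary illustrations, not the study's settings.

        import numpy as np

        def hit_rate(pattern, n_hotspots=5, hotspot_radius=2.0, field=100.0,
                     n_samples=50, n_runs=2000, seed=0):
            """Fraction of simulated fields in which at least one sample lands in a hotspot."""
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_runs):
                hotspots = rng.uniform(0, field, size=(n_hotspots, 2))
                if pattern == "random":
                    pts = rng.uniform(0, field, size=(n_samples, 2))
                else:  # systematic square grid (about n_samples points) with a random offset
                    k = int(np.sqrt(n_samples))
                    g = (np.arange(k) + 0.5) * field / k
                    pts = np.array([(x, y) for x in g for y in g]) + rng.uniform(0, field / k, 2)
                d = np.linalg.norm(pts[:, None, :] - hotspots[None, :, :], axis=2)
                hits += d.min() <= hotspot_radius
            return hits / n_runs

        print("simple random:", hit_rate("random"))
        print("systematic:   ", hit_rate("systematic"))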

  6. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    Science.gov (United States)

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

    An important aspect of chemoinformatics and material-informatics is the use of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC can be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development, and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set), RANSAC provides a statistical estimate of the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material-informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
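
    The generic RANSAC workflow the abstract refers to can be demonstrated with scikit-learn's RANSACRegressor on synthetic data. This stand-in fits a simple linear model and flags outliers; it is not the authors' solar-cell pipeline or their descriptor-selection extension, and the data are invented.

        import numpy as np
        from sklearn.linear_model import RANSACRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 10, size=(200, 1))             # e.g., a single descriptor
        y = 2.5 * X.ravel() + rng.normal(0, 0.5, 200)     # underlying property-descriptor trend
        y[:20] += rng.normal(15, 5, 20)                   # gross outliers ("noisy" samples)

        ransac = RANSACRegressor(residual_threshold=2.0, random_state=0)  # linear base model
        ransac.fit(X, y)
        print("slope:", ransac.estimator_.coef_[0])       # close to 2.5 despite the outliers
        print("samples flagged as outliers:", int((~ransac.inlier_mask_).sum()))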

  7. Notes on interval estimation of the generalized odds ratio under stratified random sampling.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2013-05-01

    It is not rare to encounter the patient response on the ordinal scale in a randomized clinical trial (RCT). Under the assumption that the generalized odds ratio (GOR) is homogeneous across strata, we consider four asymptotic interval estimators for the GOR under stratified random sampling. These include the interval estimator using the weighted-least-squares (WLS) approach with the logarithmic transformation (WLSL), the interval estimator using the Mantel-Haenszel (MH) type of estimator with the logarithmic transformation (MHL), the interval estimator using Fieller's theorem with the MH weights (FTMH) and the interval estimator using Fieller's theorem with the WLS weights (FTWLS). We employ Monte Carlo simulation to evaluate the performance of these interval estimators by calculating the coverage probability and the average length. To study the bias of these interval estimators, we also calculate and compare the noncoverage probabilities in the two tails of the resulting confidence intervals. We find that WLSL and MHL can generally perform well, while FTMH and FTWLS can lose either precision or accuracy. We further find that MHL is likely the least biased. Finally, we use the data taken from a study of smoking status and breathing test among workers in certain industrial plants in Houston, Texas, during 1974 to 1975 to illustrate the use of these interval estimators.

  8. Control capacity and a random sampling method in exploring controllability of complex networks.

    Science.gov (United States)

    Jia, Tao; Barabási, Albert-László

    2013-01-01

    Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDS's suggests that nodes do not participate in control equally, prompting us to quantify their participations. Here we introduce control capacity quantifying the likelihood that a node is a driver node. To efficiently measure this quantity, we develop a random sampling algorithm. This algorithm not only provides a statistical estimate of the control capacity, but also bridges the gap between multiple microscopic control configurations and macroscopic properties of the network under control. We demonstrate that the possibility of being a driver node decreases with a node's in-degree and is independent of its out-degree. Given the inherent multiplicity of MDS's, our findings offer tools to explore control in various complex systems.

  9. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    Science.gov (United States)

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, and sleep problems. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, a simple random sampling (SRS) technique is used to extract features from the time domain of the EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals; the LS_SVM thus operates on the features extracted and selected by the SRS and SFS steps. The experimental results show that the method achieves 99.90, 99.80 and 100 % for classification accuracy, sensitivity and specificity, respectively.
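
    A much-simplified sketch of the pipeline's shape, using synthetic signals: simple random sampling of time-domain points per channel, a few summary features, scikit-learn's SequentialFeatureSelector for the SFS step, and an ordinary SVM standing in for the LS-SVM classifier. The dimensions, feature choices, and classifier are assumptions for illustration only, not the authors' implementation.

        import numpy as np
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)

        def srs_features(epoch, n_points=256):
            """Simple random sampling of time-domain points, summarised per channel."""
            idx = rng.choice(epoch.shape[1], size=n_points, replace=False)
            segment = epoch[:, idx]
            return np.concatenate([segment.mean(axis=1), segment.std(axis=1),
                                   np.abs(segment).max(axis=1)])

        # Synthetic stand-in for two-class EEG data: 100 epochs x 8 channels x 4096 samples
        epochs = rng.normal(0, 1, size=(100, 8, 4096))
        labels = np.repeat([0, 1], 50)
        epochs[labels == 1] *= 1.8                         # class 1 has higher amplitude

        X = np.array([srs_features(e) for e in epochs])
        clf = make_pipeline(
            SequentialFeatureSelector(SVC(kernel="linear"), n_features_to_select=6),
            SVC(kernel="rbf"))
        print(cross_val_score(clf, X, labels, cv=5).mean())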

  10. On effects of trawling, benthos and sampling design.

    Science.gov (United States)

    Gray, John S; Dayton, Paul; Thrush, Simon; Kaiser, Michel J

    2006-08-01

    The evidence for the wider effects of fishing on the marine ecosystem demands that we incorporate these considerations into our management of human activities. The consequences of the direct physical disturbance of the seabed caused by towed bottom-fishing gear have been studied extensively with over 100 manipulations reported in the peer-reviewed literature. The outcome of these studies varies according to the gear used and the habitat in which it was deployed. This variability in the response of different benthic systems concurs with established theoretical models of the response of community metrics to disturbance. Despite this powerful evidence, a recent FAO report wrongly concludes that the variability in the reported responses to fishing disturbance mean that no firm conclusion as to the effects of fishing disturbance can be made. This thesis is further supported (incorrectly) by the supposition that current benthic sampling methodologies are inadequate to demonstrate the effects of fishing disturbance on benthic systems. The present article addresses these two erroneous conclusions which may confuse non-experts and in particular policy-makers.

  11. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    Science.gov (United States)

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Design and rationale of the HITTS randomized controlled trial

    DEFF Research Database (Denmark)

    Nytrøen, Kari; Yardley, Marianne; Rolid, Katrine

    2016-01-01

    There is no consensus on how, when, and at what intensity exercise should be performed and organized after heart transplantation (HTx). Most rehabilitation programs are conducted in HTx centers, which might be impractical and costly. We have recently shown that high-intensity interval training (HIT) is safe, well tolerated, and efficacious in maintenance HTx recipients, but there are no studies among de novo patients, and whether HIT is feasible and superior to moderate training in HTx recipients is unclear. A total of 120 clinically stable HTx recipients older than 18 years will be recruited from 3 Scandinavian HTx centers. Participants are randomized to HIT or moderate training, shortly after surgery. All exercises are supervised in the patients' local communities. Testing at baseline and follow-up includes the following: VO2peak (primary end point), muscle strength, body composition, quality of life...

  13. Empirical evidence of study design biases in randomized trials

    DEFF Research Database (Denmark)

    Page, Matthew J.; Higgins, Julian P. T.; Clayton, Gemma

    2016-01-01

    search September 2012), and searched Ovid MEDLINE and Ovid EMBASE for studies indexed from Jan 2012-May 2015. Data were extracted by one author and verified by another. We combined estimates of average bias (e.g. ratio of odds ratios (ROR) or difference in standardised mean differences (dSMD)) in meta-analyses using the random-effects model. Analyses were stratified by type of outcome ("mortality" versus "other objective" versus "subjective"). Direction of effect was standardised so that ROR ...) characteristic. Results: We included 24 studies. The available evidence suggests that intervention effect estimates may be exaggerated in trials with inadequate/unclear (versus adequate) sequence generation (ROR 0.93, 95% CI 0.86 to 0.99; 7 studies) and allocation concealment (ROR 0.90, 95% CI 0.84 to 0.97; 7...

  14. Reporting quality of stepped wedge design randomized trials: a systematic review protocol.

    Science.gov (United States)

    Thabane, Alex; Dennis, Brittany B; Gajic-Veljanoski, Olga; Paul, James; Thabane, Lehana

    2016-01-01

    Stepped wedge design (SWD) is a cluster randomized controlled trial (RCT) design that sequentially rolls out intervention to all clusters at varying time points. Being a relatively new design method, reporting quality has yet to be explored, and this review will seek to fill this gap in knowledge. The objectives of this review are: 1) to assess the quality of SWD trial reports based on the CONSORT guidelines or CONSORT extension to cluster RCTs; 2) to assess the completeness of reporting of SWD trial abstracts using the CONSORT extension for abstracts; 3) to assess the reporting of sample size details in SWD trial reports or protocols; 4) to assess the completeness of reporting of SWD trial protocols according to SPIRIT guidelines; 5) to assess the consistency between the trial registration information and final SWD trial reports; and 6) to assess the consistency of what is reported in the abstracts and main text of the SWD trial reports. We will also explore factors that are associated with the completeness of reporting. We will search MEDLINE, EMBASE, Web of Science, CINAHL, and PsycINFO for all randomized controlled trials utilizing SWD. Details from eligible papers will be extracted in duplicate. Demographic statistics obtained from the data extraction will be analyzed to answer the primary objectives pertaining to the reporting quality of several aspects of a published paper, as well as to explore possible temporal trends and consistency between abstracts, trial registration information, and final published articles. Findings from this review will establish the reporting quality of SWD trials and inform academics and clinicians on their completeness and consistency. Results of this review will influence future trials and improve the overall quality and reporting of SWD trials.

  15. An in silico approach helped to identify the best experimental design, population, and outcome for future randomized clinical trials.

    Science.gov (United States)

    Bajard, Agathe; Chabaud, Sylvie; Cornu, Catherine; Castellan, Anne-Charlotte; Malik, Salma; Kurbatova, Polina; Volpert, Vitaly; Eymard, Nathalie; Kassai, Behrouz; Nony, Patrice

    2016-01-01

    The main objective of our work was to compare different randomized clinical trial (RCT) experimental designs in terms of power, accuracy of the estimation of treatment effect, and number of patients receiving active treatment using in silico simulations. A virtual population of patients was simulated and randomized in potential clinical trials. Treatment effect was modeled using a dose-effect relation for quantitative or qualitative outcomes. Different experimental designs were considered, and performances between designs were compared. One thousand clinical trials were simulated for each design based on an example of modeled disease. According to simulation results, the number of patients needed to reach 80% power was 50 for crossover, 60 for parallel or randomized withdrawal, 65 for drop the loser (DL), and 70 for early escape or play the winner (PW). For a given sample size, each design had its own advantage: low duration (parallel, early escape), high statistical power and precision (crossover), and higher number of patients receiving the active treatment (PW and DL). Our approach can help to identify the best experimental design, population, and outcome for future RCTs. This may be particularly useful for drug development in rare diseases, theragnostic approaches, or personalized medicine. Copyright © 2016 Elsevier Inc. All rights reserved.
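
    The spirit of the in silico comparison can be reproduced for two of the designs with a small simulation: generate virtual patients with between-patient variability, then estimate the power of a parallel design and of a two-period crossover (assuming no carry-over) from repeated simulated trials. The effect size, variances, and sample size below are illustrative and unrelated to the modeled disease in the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        effect, sd_between, sd_within, n, n_trials = 1.0, 2.0, 1.0, 30, 2000

        def power(design):
            rejections = 0
            for _ in range(n_trials):
                base = rng.normal(0, sd_between, n)          # patient-specific response levels
                if design == "parallel":
                    control = base[: n // 2] + rng.normal(0, sd_within, n // 2)
                    treated = base[n // 2:] + effect + rng.normal(0, sd_within, n - n // 2)
                    p = stats.ttest_ind(treated, control).pvalue
                else:  # two-period crossover, no carry-over: each patient is their own control
                    control = base + rng.normal(0, sd_within, n)
                    treated = base + effect + rng.normal(0, sd_within, n)
                    p = stats.ttest_rel(treated, control).pvalue
                rejections += p < 0.05
            return rejections / n_trials

        print("parallel power: ", power("parallel"))
        print("crossover power:", power("crossover"))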

  16. Random polyfluorene co -polymers designed for a better optical ...

    African Journals Online (AJOL)

    Two alternating polyfluorenes (APFO15-F8BT and APFO3-F8BT) with absorption covering the full visible region of the electromagnetic spectrum were designed and synthesized for bulk-heterojunction solar cell devices. The optical and electrochemical properties of the two polymers were studied. The two polymers exhibited strong ...

  17. Random Photon Absorption Model Elucidates How Early Gain Control in Fly Photoreceptors Arises from Quantal Sampling

    Science.gov (United States)

    Song, Zhuoyi; Zhou, Yu; Juusola, Mikko

    2016-01-01

    Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
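
    The photon-sampling origin of the gain nonlinearity can be illustrated with a toy allocation experiment (not the published four-module model): photons within one integration window land on microvilli at random, simultaneous multi-photon hits on the same microvillus collapse to a single event, and the bump/photon gain therefore falls with intensity, only marginally so when the number of microvilli is large. Window lengths and photon counts below are arbitrary.

        import numpy as np

        def effective_hits(n_photons, n_microvilli, rng):
            """Allocate photons uniformly at random to microvilli within one integration
            window; simultaneous multi-photon hits on a microvillus count as one event."""
            hit_microvilli = rng.integers(0, n_microvilli, size=n_photons)
            return len(np.unique(hit_microvilli))

        rng = np.random.default_rng(0)
        for n_mv in (300, 30_000):                        # few vs many sampling units
            for n_ph in (100, 1_000, 10_000):             # photons per integration window
                gain = effective_hits(n_ph, n_mv, rng) / n_ph
                print(f"microvilli={n_mv:>6}  photons={n_ph:>6}  bump/photon gain={gain:.3f}")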

  18. Rational design of soft mechanical metamaterials: Independent tailoring of elastic properties with randomness

    Science.gov (United States)

    Mirzaali, M. J.; Hedayati, R.; Vena, P.; Vergani, L.; Strano, M.; Zadpoor, A. A.

    2017-07-01

    The elastic properties of mechanical metamaterials are direct functions of their topological designs. Rational design approaches based on computational models could, therefore, be used to devise topological designs that result in the desired properties. It is of particular importance to independently tailor the elastic modulus and Poisson's ratio of metamaterials. Here, we present patterned randomness as a strategy for independent tailoring of both properties. Soft mechanical metamaterials incorporating various types of patterned randomness were fabricated using an indirect additive manufacturing technique and mechanically tested. Computational models were also developed to predict the topology-property relationship in a wide range of proposed topologies. The results of this study show that patterned randomness allows for independent tailoring of the elastic properties and covering a broad area of the elastic modulus-Poisson's ratio plane. The uniform and homogenous topologies constitute the boundaries of the covered area, while topological designs with patterned randomness fill the enclosed area.

  19. Design of Randomly Deployed Heterogeneous Wireless Sensor Networks by Algorithms Based on Swarm Intelligence

    OpenAIRE

    Joon-Woo Lee; Won Kim

    2015-01-01

    This paper reports the design of a randomly deployed heterogeneous wireless sensor network (HWSN) with two types of nodes: a powerful node and an ordinary node. Powerful nodes, such as Cluster Heads (CHs), communicate directly with the data sink of the network, and ordinary nodes sense the desired information and transmit the processed data to powerful nodes. The heterogeneity of HWSNs improves the network's lifetime and coverage. This paper focuses on the design of a random network among HWSNs....

  20. Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.

    Science.gov (United States)

    Michaud, J-P; Schoenly, Kenneth G; Moreau, G

    2012-01-01

    Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.

  1. Optimization of Sampling Design to Determine the Spatial Distributions of Emerging Contaminants in Estuaries

    Science.gov (United States)

    Narragansett Bay (NB) has been extensively sampled over the last 50 years by various government agencies, academic institutions, and private groups. To date, most spatial research conducted within the estuary has employed deterministic sampling designs. Several studies have used ...

  2. Notes on interval estimation of the gamma correlation under stratified random sampling.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2012-07-01

    We have developed four asymptotic interval estimators in closed forms for the gamma correlation under stratified random sampling, including the confidence interval based on the most commonly used weighted-least-squares (WLS) approach (CIWLS), the confidence interval calculated from the Mantel-Haenszel (MH) type estimator with the Fisher-type transformation (CIMHT), the confidence interval using the fundamental idea of Fieller's Theorem (CIFT) and the confidence interval derived from a monotonic function of the WLS estimator of Agresti's α with the logarithmic transformation (MWLSLR). To evaluate the finite-sample performance of these four interval estimators and note the possible loss of accuracy in application of both Wald's confidence interval and MWLSLR using pooled data without accounting for stratification, we employ Monte Carlo simulation. To illustrate the practical use of these interval estimators, we use data published elsewhere from a general social survey studying the association between income level and job satisfaction among black Americans, with strata formed by gender. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Predictive value of testing random urine sample to detect microalbuminuria in diabetic subjects during outpatient visit.

    Science.gov (United States)

    Bouhanick, B; Berrut, G; Chameau, A M; Hallar, M; Bled, F; Chevet, B; Vergely, J; Rohmer, V; Fressinaud, P; Marre, M

    1992-01-01

    The predictive value of a random urine sample obtained during an outpatient visit to predict persistent microalbuminuria was studied in 76 Type 1, insulin-dependent diabetic subjects, 61 Type 2, non-insulin-dependent diabetic subjects, and 72 Type 2, insulin-treated diabetic subjects. Seventy-six patients attended the outpatient clinic in the morning, and 133 in the afternoon. Microalbuminuria was suspected if Urinary Albumin Excretion (UAE) exceeded 20 mg/l. All patients were hospitalized within 6 months following the outpatient visit, and persistent microalbuminuria was assessed then if UAE was between 30 and 300 mg/24 h on 2-3 occasions in 3 urine samples. Of these 209 subjects, 83 were also screened with Microbumintest (Ames-Bayer), a semi-quantitative method. Among the 209 subjects, 71 were positive both for microalbuminuria during the outpatient visit and a persistent microalbuminuria during hospitalization: sensitivity 91.0%, specificity 83.2%, concordance 86.1%, and positive predictive value 76.3% (chi-squared test: 191; p < 10^-4). Data were not different for subjects examined in the morning or in the afternoon. Among the 83 subjects also screened with Microbumintest, 22 displayed both a positive reaction and a persistent microalbuminuria: sensitivity 76%, specificity 81%, concordance 80%, and positive predictive value 69% (chi-squared test: 126; p < 10^-4). Both types of screening appeared equally effective during an outpatient visit. Hence, persistent microalbuminuria can be predicted during an outpatient visit in a diabetic clinic.
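
    For reference, the reported sensitivity, specificity, concordance and positive predictive value can be reproduced from a 2 x 2 screening-versus-reference table; the counts below are back-calculated from the published percentages (an assumption, since the full table is not shown here).

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, concordance and positive predictive value
    from a 2x2 screening-vs-reference table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    concordance = (tp + tn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, concordance, ppv

# Counts reconstructed from the reported results (71 true positives among 209 subjects).
sens, spec, conc, ppv = screening_stats(tp=71, fp=22, fn=7, tn=109)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} concordance={conc:.1%} PPV={ppv:.1%}")
```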

  4. Effectiveness of hand hygiene education among a random sample of women from the community.

    Science.gov (United States)

    Ubheeram, J; Biranjia-Hurdoyal, S D

    2017-03-01

    The effectiveness of hand hygiene education was investigated by studying the hand hygiene awareness and bacterial hand contamination among a random sample of 170 women in the community. A questionnaire was used to assess the hand hygiene awareness score, followed by swabbing of the dominant hand. Bacterial identification was done by conventional biochemical tests. A better hand hygiene awareness score was significantly associated with age, scarce bacterial growth and absence of potential pathogens. Of the hand samples, bacterial growth was noted in 155 (91.2%), which included 91 (53.5%) heavy growth, 53 (31.2%) moderate growth and 11 (6.47%) scanty growth. The presence of enteric bacteria was associated with long nails (49.4% vs 29.2%; p = 0.007; OR = 2.3; 95% CI: 1.25-4.44) while finger rings were associated with higher bacterial load (p = 0.003). Coliforms were significantly more frequent among women who had a lower hand hygiene awareness score, washed their hands at a lower frequency (59.0% vs 32.8%; p = 0.003; OR = 2.9; 95% CI: 1.41-6.13) and used common soap as compared to antiseptic soaps (69.7% vs 30.3%, p = 0.000; OR = 4.11; 95% CI: 1.67-10.12). The level of hand hygiene awareness among the participants was satisfactory, but not the compliance with hand-washing practice, especially among the elderly.
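
    A small, hedged illustration of how an odds ratio and its 95% confidence interval of the kind quoted above are obtained from a 2 x 2 table with the Wald (log-odds-ratio) method; the counts are illustrative assumptions, not the study's raw data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases, c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Illustrative counts only: enteric bacteria by nail length (long vs short nails).
print(odds_ratio_ci(a=43, b=44, c=24, d=58))
```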

  5. Association between stalking victimisation and psychiatric morbidity in a random community sample.

    Science.gov (United States)

    Purcell, Rosemary; Pathé, Michele; Mullen, Paul E

    2005-11-01

    No studies have assessed psychopathology among victims of stalking who have not sought specialist help. To examine the associations between stalking victimisation and psychiatric morbidity in a representative community sample. A random community sample (n=1844) completed surveys examining the experience of harassment and current mental health. The 28-item General Health Questionnaire (GHQ-28) and the Impact of Event Scale were used to assess symptomatology in those reporting brief harassment (n=196) or protracted stalking (n=236) and a matched control group reporting no harassment (n=432). Rates of caseness on the GHQ-28 were higher among stalking victims (36.4%) than among controls (19.3%) and victims of brief harassment (21.9%). Psychiatric morbidity did not differ according to the recency of victimisation, with 34.1% of victims meeting caseness criteria 1 year after stalking had ended. In a significant minority of victims, stalking victimisation is associated with psychiatric morbidity that may persist long after it has ceased. Recognition of the immediate and long-term impacts of stalking is necessary to assist victims and help alleviate distress and long-term disability.

  6. Random sample community-based health surveys: does the effort to reach participants matter?

    Science.gov (United States)

    Messiah, Antoine; Castro, Grettel; Rodríguez de la Vega, Pura; Acuna, Juan M

    2014-12-15

    Conducting health surveys with community-based random samples is essential to capture an otherwise unreachable population, but these surveys can be biased if the effort to reach participants is insufficient. This study determines the desirable amount of effort to minimise such bias. A household-based health survey with random sampling and face-to-face interviews. Up to 11 visits, organised by canvassing rounds, were made to obtain an interview. Single-family homes in an underserved and understudied population in North Miami-Dade County, Florida, USA. Of a probabilistic sample of 2200 household addresses, 30 corresponded to empty lots, 74 were abandoned houses, 625 households declined to participate and 265 could not be reached and interviewed within 11 attempts. Analyses were performed on the 1206 remaining households. Each household was asked if any of their members had been told by a doctor that they had high blood pressure, heart disease including heart attack, cancer, diabetes, anxiety/depression, obesity or asthma. Responses to these questions were analysed by the number of visit attempts needed to obtain the interview. Return per visit fell below 10% after four attempts, below 5% after six attempts and below 2% after eight attempts. As the effort increased, household size decreased, while household income and the percentage of interviewees active and employed increased; the proportion of the seven health conditions decreased, four of which did so significantly: heart disease from 20.4% to 9.2%, high blood pressure from 63.5% to 58.1%, anxiety/depression from 24.4% to 9.2% and obesity from 21.8% to 12.6%. Beyond the fifth attempt, however, cumulative percentages varied by less than 1% and precision varied by less than 0.1%. In spite of the early and steep drop, sustaining at least five attempts to reach participants is necessary to reduce selection bias. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  7. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    Science.gov (United States)

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack sufficient consideration of the true virtues of the delayed-start design and of the implications in terms of required sample size, overall information, or interpretation of the estimate in the context of small populations. To evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase as a consequence of the reduced time on placebo, which results in a decreased treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts on benefit-risk assessment.
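
    The core argument, that a shorter time on placebo shrinks the observable treatment effect and therefore inflates the required sample size, can be illustrated with the standard two-sample normal approximation; the effect size, standard deviation and shrinkage factors below are assumptions for illustration only.

```python
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Two-sample normal-approximation sample size per group for a mean difference delta."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

sigma = 10.0                        # assumed common SD of the endpoint (illustrative)
full_effect = 4.0                   # effect observable with a full-length placebo period
for shrinkage in (1.0, 0.75, 0.5):  # shorter placebo exposure -> smaller observable effect
    delta = full_effect * shrinkage
    print(f"observable effect {delta:4.1f} -> about {n_per_group(delta, sigma):.0f} per group")
```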

  8. Approximating the variance of estimated means for systematic random sampling, illustrated with data of the French Soil Monitoring Network

    NARCIS (Netherlands)

    Brus, D.J.; Saby, N.P.A.

    2016-01-01

    In France, as in many other countries, the soil is monitored at the locations of a regular square grid, thus forming a systematic sample (SY). This sampling design leads to good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based

  9. Treatment noncompliance in randomized experiments: statistical approaches and design issues.

    Science.gov (United States)

    Sagarin, Brad J; West, Stephen G; Ratnikov, Alexander; Homan, William K; Ritchie, Timothy D; Hansen, Edward J

    2014-09-01

    Treatment noncompliance in randomized experiments threatens the validity of causal inference and the interpretability of treatment effects. This article provides a nontechnical review of 7 approaches: 3 traditional and 4 newer statistical analysis strategies. Traditional approaches include (a) intention-to-treat analysis (which estimates the effects of treatment assignment irrespective of treatment received), (b) as-treated analysis (which reassigns participants to groups reflecting the treatment they actually received), and (c) per-protocol analysis (which drops participants who did not comply with their assigned treatment). Newer approaches include (d) the complier average causal effect (which estimates the effect of treatment on the subpopulation of those who would comply with their assigned treatment), (e) dose-response estimation (which uses degree of compliance to stratify participants, producing an estimate of a dose-response relationship), (f) propensity score analysis (which uses covariates to estimate the probability that individual participants will comply, enabling estimates of treatment effects at different propensities), and (g) treatment effect bounding (which calculates a range of possible treatment effects applicable to both compliers and noncompliers). The discussion considers the areas of application, the quantity estimated, the underlying assumptions, and the strengths and weaknesses of each approach. PsycINFO Database Record (c) 2014 APA, all rights reserved.
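
    A hedged sketch of two of the reviewed quantities, the intention-to-treat effect and the complier average causal effect estimated with the standard instrumental-variable (Wald) ratio, under the usual exclusion and monotonicity assumptions; the data are simulated toy values, not from any real trial.

```python
import numpy as np

def itt_and_cace(assigned, received, outcome):
    """Intention-to-treat effect and complier average causal effect (Wald estimator)."""
    assigned = np.asarray(assigned); received = np.asarray(received); outcome = np.asarray(outcome)
    z1, z0 = assigned == 1, assigned == 0
    itt = outcome[z1].mean() - outcome[z0].mean()               # effect of assignment
    uptake_diff = received[z1].mean() - received[z0].mean()     # estimated share of compliers
    return itt, itt / uptake_diff                               # CACE = ITT / compliance difference

rng = np.random.default_rng(1)
n = 1000
assigned = rng.integers(0, 2, n)
complier = rng.random(n) < 0.7                                  # 70% would comply with assignment
received = np.where(complier, assigned, 0)                      # noncompliers never take treatment
outcome = 1.0 * received + rng.normal(0, 1, n)                  # true effect 1.0 among the treated
print(itt_and_cace(assigned, received, outcome))
```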

  10. Bayesian dose selection design for a binary outcome using restricted response adaptive randomization.

    Science.gov (United States)

    Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I

    2017-09-08

    In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy
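
    A simplified, hypothetical sketch of the general idea (not the trial's actual algorithm): Beta(1,1) posteriors summarize the risk of poor outcome on each active dose, the posterior probability of being the lowest-risk dose drives allocation among active doses, and the active-to-placebo ratio is held fixed.

```python
import numpy as np

rng = np.random.default_rng(2)

def active_dose_allocation(events, totals, n_draws=5000):
    """Posterior probability that each active dose has the lowest risk of poor outcome,
    used as randomization weights among the active doses (Beta(1,1) priors)."""
    draws = np.column_stack([
        rng.beta(1 + e, 1 + n - e, size=n_draws) for e, n in zip(events, totals)
    ])
    best = np.argmin(draws, axis=1)                    # lowest sampled risk in each posterior draw
    return np.bincount(best, minlength=len(events)) / n_draws

# Interim data for three active doses: poor outcomes / subjects so far (toy numbers).
weights = active_dose_allocation(events=[8, 5, 9], totals=[20, 20, 20])
placebo_share = 0.25                                   # fixed active:placebo ratio (3:1 here)
allocation = np.append(weights * (1 - placebo_share), placebo_share)
print("dose1, dose2, dose3, placebo allocation:", np.round(allocation, 3))
```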

  11. Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows

    Science.gov (United States)

    Thomas B. Lynch; David Hamlin; Mark J. Ducey

    2016-01-01

    Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...

  12. A better alternative to stratified permuted block design for subject randomization in clinical trials.

    Science.gov (United States)

    Zhao, Wenle

    2014-12-30

    Stratified permuted block randomization has been the dominant covariate-adaptive randomization procedure in clinical trials for several decades. Its high probability of deterministic assignment and low capacity of covariate balancing have been well recognized. The popularity of this sub-optimal method is largely due to its simplicity in implementation and the lack of better alternatives. Proposed in this paper is a two-stage covariate-adaptive randomization procedure that uses the block urn design or the big stick design in stage one to restrict the treatment imbalance within each covariate stratum, and uses the biased-coin minimization method in stage two to control imbalances in the distribution of additional covariates that are not included in the stratification algorithm. Analytical and simulation results show that the new randomization procedure significantly reduces the probability of deterministic assignments, and improves the covariate balancing capacity when compared to the traditional stratified permuted block randomization. Copyright © 2014 John Wiley & Sons, Ltd.
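
    A minimal sketch of the big stick design mentioned for stage one: assignments are made by a fair coin until the within-stratum imbalance reaches a preset limit, after which the lagging arm is assigned deterministically (the imbalance limit used here is an arbitrary choice for illustration).

```python
import random

def big_stick_assign(n_a, n_b, imbalance_limit=3, rng=random.Random(3)):
    """Assign the next subject in a stratum under the big stick design."""
    diff = n_a - n_b
    if diff >= imbalance_limit:       # A is ahead by the limit -> force B
        return "B"
    if diff <= -imbalance_limit:      # B is ahead by the limit -> force A
        return "A"
    return rng.choice(["A", "B"])     # otherwise a fair coin

counts = {"A": 0, "B": 0}
for _ in range(20):
    arm = big_stick_assign(counts["A"], counts["B"])
    counts[arm] += 1
print(counts)
```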

  13. On the design of Henon and logistic map-based random number generator

    Science.gov (United States)

    Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah

    2017-10-01

    The key sequence is one of the main elements in a cryptosystem. The True Random Number Generator (TRNG) method is one of the approaches to generating the key sequence. The randomness sources of TRNGs are divided into three main groups: electrical noise based, jitter based and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous time or discrete time) as an entropy source. In this study, a new design of TRNG based on a discrete time chaotic system is proposed, which is then simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as the harvesting method to obtain the series of random bits. Without any post-processing, the proposed design generated a random bit sequence with a high entropy value and passed all NIST 800.22 statistical tests.
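
    A hedged sketch of the kind of construction described, combining a 2D Hénon map with a 1D logistic map and harvesting bits with a comparator; the map parameters, rescaling and comparator rule are generic assumptions for illustration, not the paper's actual design, and no post-processing is applied.

```python
def henon_logistic_bits(n_bits, x=0.1, y=0.3, z=0.4, a=1.4, b=0.3, r=3.99):
    """Generate bits by comparing a Henon-map coordinate with a logistic-map state."""
    bits = []
    for _ in range(n_bits):
        x, y = 1 - a * x * x + y, b * x                # 2D Henon map
        z = r * z * (1 - z)                            # 1D logistic map
        bits.append(1 if (x + 1.5) / 3.0 > z else 0)   # crude comparator: rescale x to ~[0, 1]
    return bits

stream = henon_logistic_bits(1000)
print("fraction of ones:", sum(stream) / len(stream))
```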

  14. A strategy for sampling on a sphere applied to 3D selective RF pulse design.

    Science.gov (United States)

    Wong, S T; Roos, M S

    1994-12-01

    Conventional constant angular velocity sampling of the surface of a sphere results in a higher sampling density near the two poles relative to the equatorial region. More samples, and hence longer sampling time, are required to achieve a given sampling density in the equatorial region when compared with uniform sampling. This paper presents a simple expression for a continuous sample path through a nearly uniform distribution of points on the surface of a sphere. Sampling of concentric spherical shells in k-space with the new strategy is used to design 3D selective inversion and spin-echo pulses. These new 3D selective pulses have been implemented and verified experimentally.
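
    The paper's exact expression is not reproduced here; a common construction with the same intent, a continuous spiral with equal-area steps in z and a golden-angle azimuth increment, is sketched below as an assumption.

```python
import numpy as np

def spiral_sphere_points(n):
    """Spiral path giving a nearly uniform distribution of n points on the unit sphere."""
    k = np.arange(n)
    z = 1 - (2 * k + 1) / n                  # uniform steps in z (equal-area latitude bands)
    phi = k * np.pi * (3 - np.sqrt(5))       # golden-angle increment in azimuth
    r = np.sqrt(1 - z * z)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

pts = spiral_sphere_points(500)
print(pts.shape, np.allclose(np.linalg.norm(pts, axis=1), 1.0))
```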

  15. Stratified sampling design and loss to follow-up in survival models: evaluation of efficiency and bias.

    Science.gov (United States)

    César, Cibele C; Carvalho, Marilia S

    2011-06-26

    Longitudinal studies often employ complex sample designs to optimize sample size, over-representing population groups of interest. The effect of sample design on parameter estimates is quite often ignored, particularly when fitting survival models. Another major problem in long-term cohort studies is the potential bias due to loss to follow-up. In this paper we simulated a dataset with approximately 50,000 individuals as the target population and 15,000 participants to be followed up for 40 years, both based on real cohort studies of cardiovascular diseases. Two sample strategies--simple random (our gold standard) and stratified by professional group, with non-proportional allocation--and two loss to follow-up scenarios--non-informative censoring and losses related to the professional group--were analyzed. Two modeling approaches were evaluated: weighted and non-weighted fit. Our results indicate that under the correctly specified model, ignoring the sample weights does not affect the results. However, the model ignoring the interaction of sample strata with the variable of interest and the crude estimates were highly biased. In epidemiological studies misspecification should always be considered, as different sources of variability, related to the individuals and not captured by the covariates, are always present. Therefore, allowance must be made for the possibility of unknown confounders and interactions with the main variable of interest in our data. It is strongly recommended always to correct by sample weights.

  16. Sample-to-sample fluctuations of power spectrum of a random motion in a periodic Sinai model

    Science.gov (United States)

    Dean, David S.; Iorio, Antonio; Marinari, Enzo; Oshanin, Gleb

    2016-09-01

    The Sinai model of a tracer diffusing in a quenched Brownian potential is a much-studied problem exhibiting a logarithmically slow anomalous diffusion due to the growth of energy barriers with the system size. However, if the potential is random but periodic, the regime of anomalous diffusion crosses over to one of normal diffusion once a tracer has diffused over a few periods of the system. Here we consider a system in which the potential is given by a Brownian bridge on a finite interval (0, L) and then periodically repeated over the whole real line and study the power spectrum S(f) of the diffusive process x(t) in such a potential. We show that for most realizations of x(t) in a given realization of the potential, the low-frequency behavior is S(f) ~ A/f^2, i.e., the same as for standard Brownian motion, and the amplitude A is a disorder-dependent random variable with a finite support. Focusing on the statistical properties of this random variable, we determine the moments of A of arbitrary, negative, or positive order k and demonstrate that they exhibit a multifractal dependence on k and a rather unusual dependence on the temperature and on the periodicity L, which are supported by atypical realizations of the periodic disorder. We finally show that the distribution of A has a log-normal left tail and exhibits an essential singularity close to the right edge of the support, which is related to the Lifshitz singularity. Our findings are based both on analytic results and on extensive numerical simulations of the process x(t).

  17. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Science.gov (United States)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-07-01

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign in which sodium aerosol dispersion experiments have been conducted as part of environmental impact studies related to a sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling network deployed over a wide area and uneven terrain, where manual operation is difficult because simultaneous operation and status logging are required.

  18. Methods and analysis of realizing randomized grouping.

    Science.gov (United States)

    Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi

    2011-07-01

    Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely, stratified and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization and the realization of random sampling and grouping by SAS software.
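
    A hedged Python sketch of the two grouping steps described (the article itself uses SAS): complete randomization shuffles all subjects and deals them into groups, and stratified randomization applies the same procedure separately within each stratum.

```python
import random

rng = random.Random(42)

def complete_randomization(subjects, n_groups=2):
    """Shuffle all subjects, then deal them into the groups in turn."""
    order = subjects[:]
    rng.shuffle(order)
    return {g: order[g::n_groups] for g in range(n_groups)}

def stratified_randomization(subjects, stratum_of, n_groups=2):
    """Apply complete randomization separately within each stratum."""
    strata = {}
    for s in subjects:
        strata.setdefault(stratum_of(s), []).append(s)
    groups = {g: [] for g in range(n_groups)}
    for members in strata.values():
        for g, assigned in complete_randomization(members, n_groups).items():
            groups[g].extend(assigned)
    return groups

subjects = [f"S{i:02d}" for i in range(12)]
print(complete_randomization(subjects))
print(stratified_randomization(subjects, stratum_of=lambda s: int(s[1:]) % 3))
```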

  19. System design description for the consolidated sludge sampling system for K Basins floor and fuel canisters

    Energy Technology Data Exchange (ETDEWEB)

    HECHT, S.L.

    1999-02-18

    This System Design Description describes the Consolidated Sludge Sampling System used in the gathering of sludge samples from K Basin floor and fuel canisters. This document provides additional information on the need for the system, the functions and requirements of the system, the operations of the system, and the general work plan used in its design and development.

  20. A data structure for describing sampling designs to aid in compilation of stand attributes

    Science.gov (United States)

    John C. Byrne; Albert R. Stage

    1988-01-01

    Maintaining permanent plot data with different sampling designs over long periods within an organization, and sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is composed of just those variables and their relationships needed to compile...

  1. Mass weighted urn design - a new randomization algorithm for unequal allocations

    Science.gov (United States)

    Zhao, Wenle

    2015-01-01

    Unequal allocations have been used in clinical trials motivated by ethical, efficiency, or feasibility concerns. Commonly used permuted block randomization faces a tradeoff between effective imbalance control with a small block size and accurate allocation target with a large block size. Few other unequal allocation randomization designs have been proposed in the literature, and applications in real trials have hardly ever been reported, partly due to their complexity in implementation compared to the permuted block randomization. Proposed in this paper is the mass weighted urn design, in which the number of balls in the urn equals the number of treatments, and remains unchanged during the study. The chance of a ball being randomly selected is proportional to the mass of the ball. After each treatment assignment, a part of the mass of the selected ball is re-distributed to all balls based on the target allocation ratio. This design allows any desired optimal unequal allocation to be accurately targeted without approximation, and provides a consistent imbalance control throughout the allocation sequence. The statistical properties of this new design are evaluated with the Euclidean distance between the observed treatment distribution and the desired treatment distribution as the treatment imbalance measure, and the Euclidean distance between the conditional allocation probability and the target allocation probability as the allocation predictability measure. Computer simulation results are presented comparing the mass weighted urn design with other randomization designs currently available for unequal allocations. PMID:26091947
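
    A hedged sketch of one plausible reading of the algorithm described above (the published design has further details, e.g. the choice of the total urn mass, which is an assumed parameter here): selection probability is proportional to ball mass, and after each assignment one unit of the selected ball's mass is redistributed to all balls according to the target allocation ratio.

```python
import random

rng = random.Random(7)

def mass_weighted_urn(target, n_subjects, total_mass=4.0):
    """Sketch of a mass weighted urn: one ball per treatment, selection probability
    proportional to (non-negative) mass; after each assignment one unit of the selected
    ball's mass is redistributed to all balls according to the target allocation ratio."""
    arms = list(target)
    probs = {a: target[a] / sum(target.values()) for a in arms}
    mass = {a: total_mass * probs[a] for a in arms}       # initial masses match the target ratio
    assignments = []
    for _ in range(n_subjects):
        weights = [max(mass[a], 0.0) for a in arms]       # clamp so weights stay valid
        pick = rng.choices(arms, weights=weights)[0]
        assignments.append(pick)
        mass[pick] -= 1.0                                 # take one unit from the selected ball
        for a in arms:
            mass[a] += probs[a]                           # and give it back by the target ratio
    return assignments

seq = mass_weighted_urn({"A": 2, "B": 1}, n_subjects=30)  # 2:1 target allocation
print("".join(seq), seq.count("A"), seq.count("B"))
```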

  3. Design of a Handheld Pseudo Random Coded UWB Radar for Human Sensing

    Directory of Open Access Journals (Sweden)

    Xia Zheng-huan

    2015-10-01

    This paper presents the design of a handheld pseudo random coded Ultra-WideBand (UWB) radar for human sensing. The main tasks of the radar are to track a moving human object and extract the human respiratory frequency. In order to achieve perfect penetrability and good range resolution, an m-sequence with a carrier of 800 MHz is chosen as the transmitting signal. The modulated m-sequence can be generated directly by the high-speed DAC and FPGA to reduce the size of the radar system, and the mean power of the transmitting signal is 5 dBm. The receiver has two receiving channels based on hybrid sampling: the first receiving channel samples the reference signal and the second receiving channel acquires the radar echo. The real-time pulse compression is computed in parallel with a group of on-chip DSP48E slices in the FPGA to improve the scanning rate of the radar system. Additionally, the algorithms for moving target tracking and life detection are implemented on an Intel micro-processor, and the detection results are sent to the micro-display fixed on the helmet. The experimental results show that a moving target located less than 16 m away from the wall can be tracked, and the respiratory frequency of a static human less than 14 m away from the wall can be extracted.
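
    A hedged sketch of generating a maximal-length (m-) sequence with a linear feedback shift register; the register length and tap polynomial below are generic choices for illustration, not the radar's actual 800 MHz-modulated code.

```python
def m_sequence(taps=(7, 6), length=2**7 - 1, state=0b1111111):
    """Generate one period of an m-sequence from a Fibonacci LFSR with the given feedback
    taps (x^7 + x^6 + 1 is a primitive polynomial, giving a 127-chip maximal-length code)."""
    n = max(taps)
    bits = []
    for _ in range(length):
        bits.append(state & 1)                  # output the least significant bit
        fb = 0
        for t in taps:
            fb ^= (state >> (n - t)) & 1        # XOR the tapped stages
        state = (state >> 1) | (fb << (n - 1))  # shift and feed back
    return bits

code = m_sequence()
print(len(code), sum(code))   # 127 chips, 64 ones for a maximal-length sequence
```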

  4. Customized Sequential Designs for Random Simulation Experiments : Kriging Metamodelling and Bootstrapping

    NARCIS (Netherlands)

    van Beers, W.C.M.; Kleijnen, J.P.C.

    2005-01-01

    This paper proposes a novel method to select an experimental design for interpolation in random simulation, especially discrete event simulation. (Though the paper focuses on Kriging, this design approach may also apply to other types of metamodels such as linear regression models.) Assuming that

  6. Probiotics for fibromyalgia: study design for a pilot double-blind, randomized controlled trial.

    Science.gov (United States)

    Roman, Pablo; Estévez, Ángeles F; Sánchez-Labraca, Nuria; Cañadas, Fernando; Miras, Alonso; Cardona, Diana

    2017-10-24

    Fibromyalgia syndrome (FMS) is a chronic, generalized and diffuse pain disorder accompanied by other symptoms such as emotional and cognitive deficits. FMS patients show a high prevalence of gastrointestinal symptoms. Recently, it has been found that microbes in the gut may regulate brain processes through the gut-microbiota-brain axis, thus modulating affect, motivation and higher cognitive functions. Therefore, the use of probiotics might be a new treatment that could improve the physical, psychological and cognitive state in FMS; however, no evidence about this issue is available. This paper describes the design and protocol of a double-blind, placebo-controlled and randomized pilot study. We use validated questionnaires, cognitive tasks administered through E-Prime, and biological measures such as urinary cortisol and stool samples. The trial aims to explore the effects of eight weeks of probiotic therapy on physical (pain, impact of the FMS and quality of life), emotional (depression and anxiety) and cognitive (attention, memory, and impulsivity) symptoms in FMS patients as compared with placebo. This pilot study is, to our knowledge, the first to evaluate the effects of probiotics in FMS. The primary hypothesis is that FMS patients will show better performance on cognitive tasks and an improvement in emotional and physical symptoms. These results will contribute to a better understanding of the gut-brain axis. Here we present the design and protocol of the study.

  7. Random Model Sampling: Making Craig Interpolation Work When It Should Not

    Directory of Open Access Journals (Sweden)

    Marat Akhin

    2014-01-01

    One of the most serious problems when doing program analyses is dealing with function calls. While function inlining is the traditional approach to this problem, it nonetheless suffers from the increase in analysis complexity due to the state space explosion. Craig interpolation has been successfully used in recent years in the context of bounded model checking to do function summarization, which allows one to replace the complete function body with its succinct summary and, therefore, reduce the complexity. Unfortunately, this technique can be applied only to a pair of unsatisfiable formulae. In this work-in-progress paper we present an approach to function summarization based on Craig interpolation that overcomes its limitation by using random model sampling. It captures interesting input/output relations, strengthening satisfiable formulae into unsatisfiable ones and thus allowing the use of Craig interpolation. Preliminary experiments show the applicability of this approach; in our future work we plan to do a full evaluation on real-world examples.

  8. Discriminative motif discovery via simulated evolution and random under-sampling.

    Directory of Open Access Journals (Sweden)

    Tao Song

    Full Text Available Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of the discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Models (HMMs training, a random under-sampling method is introduced for the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than the methods without considering data imbalance problem and recover the most known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.

  9. Neurofeedback Against Binge Eating: A Randomized Controlled Trial in a Female Subclinical Threshold Sample.

    Science.gov (United States)

    Schmidt, Jennifer; Martin, Alexandra

    2016-09-01

    Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.

  10. A coupled well-balanced and random sampling scheme for computing bubble oscillations*

    Directory of Open Access Journals (Sweden)

    Jung Jonathan

    2012-04-01

    We propose a finite volume scheme to study the oscillations of a spherical bubble of gas in a liquid phase. Spherical symmetry implies a geometric source term in the Euler equations. Our scheme satisfies the well-balanced property. It is based on the VFRoe approach. In order to avoid spurious pressure oscillations, the well-balanced approach is coupled with an ALE (Arbitrary Lagrangian Eulerian) technique at the interface and a random sampling remap.

  11. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory]; Hyman, James M [Los Alamos National Laboratory]; Robinson, Bruce A [Los Alamos National Laboratory]; Higdon, Dave [Los Alamos National Laboratory]; Ter Braak, Cajo J F [NETHERLANDS]; Diks, Cees G H [UNIV OF AMSTERDAM]

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov Chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
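
    A minimal sketch of the differential-evolution proposal at the heart of DREAM-style samplers, without DREAM's randomized subspace sampling, adaptive crossover or outlier handling: each chain proposes a jump along the difference of two other randomly chosen chains and accepts it with a Metropolis step. The bimodal target is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_mc(log_post, n_chains=8, n_iter=2000, dim=2):
    """Differential evolution Markov chain sampler (simplified DREAM-style proposal)."""
    gamma = 2.38 / np.sqrt(2 * dim)                     # standard DE-MC jump scale
    chains = rng.normal(size=(n_chains, dim))
    logp = np.array([log_post(c) for c in chains])
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
            prop = chains[i] + gamma * (chains[a] - chains[b]) + rng.normal(0, 1e-4, dim)
            lp = log_post(prop)
            if np.log(rng.random()) < lp - logp[i]:     # Metropolis acceptance
                chains[i], logp[i] = prop, lp
        samples.append(chains.copy())
    return np.concatenate(samples[n_iter // 2:])        # discard burn-in

def log_post(x):
    """Bimodal target to illustrate exploration of a multimodal posterior."""
    return np.logaddexp(-0.5 * np.sum((x - 3) ** 2), -0.5 * np.sum((x + 3) ** 2))

draws = de_mc(log_post)
print(draws.shape, draws.mean(axis=0))
```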

  12. Active Learning Not Associated with Student Learning in a Random Sample of College Biology Courses

    Science.gov (United States)

    Andrews, T. M.; Leonard, M. J.; Colgrove, C. A.; Kalinowski, S. T.

    2011-01-01

    Previous research has suggested that adding active learning to traditional college science lectures substantially improves student learning. However, this research predominantly studied courses taught by science education researchers, who are likely to have exceptional teaching expertise. The present study investigated introductory biology courses randomly selected from a list of prominent colleges and universities to include instructors representing a broader population. We examined the relationship between active learning and student learning in the subject area of natural selection. We found no association between student learning gains and the use of active-learning instruction. Although active learning has the potential to substantially improve student learning, this research suggests that active learning, as used by typical college biology instructors, is not associated with greater learning gains. We contend that most instructors lack the rich and nuanced understanding of teaching and learning that science education researchers have developed. Therefore, active learning as designed and implemented by typical college biology instructors may superficially resemble active learning used by education researchers, but lacks the constructivist elements necessary for improving learning. PMID:22135373

  13. Optimal designs of the median run length based double sampling X chart for minimizing the average sample size.

    Directory of Open Access Journals (Sweden)

    Wei Lin Teoh

    Full Text Available Designs of the double sampling (DS X chart are traditionally based on the average run length (ARL criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the rightly skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i the in-control average sample size (ASS and (ii both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed.

  14. A new proposal for randomized start design to investigate disease-modifying therapies for Alzheimer disease.

    Science.gov (United States)

    Zhang, Richard Y; Leon, Andrew C; Chuang-Stein, Christy; Romano, Steven J

    2011-02-01

    The increasing prevalence of Alzheimer disease (AD) and lack of effective agents to attenuate progression have accelerated research and development of disease modifying (DM) therapies. The traditional parallel group design and single time point analysis used in the support of past AD drug approvals address symptomatic benefit over relatively short treatment durations. More recent trials investigating disease modification are by necessity longer in duration and require larger sample sizes. Nevertheless, trial design and analysis remain mostly unchanged and may not be adequate to meet the objective of demonstrating disease modification. Randomized start design (RSD) has been proposed as an option to study DM effects, but its application in AD trials may have been hampered by certain methodological challenges. To address the methodological issues that have impeded more extensive use of RSD in AD trials and to encourage other researchers to develop novel design and analysis methodologies to better ascertain DM effects for the next generation of AD therapies, we propose a stepwise testing procedure to evaluate potential DM effects of novel AD therapies. The Alzheimer Disease Assessment Scale-Cognitive Subscale (ADAS-cog) is used for illustration. We propose to test three hypotheses in a stepwise sequence. The three tests pertain to treatment differences at two separate time points and a difference in the rate of change. Estimation is facilitated by the Mixed-effects Model for Repeated Measures approach. The required sample size is estimated using Monte Carlo simulations and by modeling ADAS-cog data from prior longitudinal AD studies. The greatest advantage of the RSD proposed in this article is its ability to critically address the question of a DM effect. The AD trial using the new approach would be longer (12-month placebo period plus 12-month delay-start period; total 24-month duration) and require more subjects (about 1000 subjects per arm for the non-inferiority margin

  15. A sample design for globally consistent biomass estimation using lidar data from the Geoscience Laser Altimeter System (GLAS

    Directory of Open Access Journals (Sweden)

    Healey Sean P

    2012-10-01

    Background: Lidar height data collected by the Geosciences Laser Altimeter System (GLAS) from 2002 to 2008 has the potential to form the basis of a globally consistent sample-based inventory of forest biomass. GLAS lidar return data were collected globally in spatially discrete full waveform “shots,” which have been shown to be strongly correlated with aboveground forest biomass. Relationships observed at spatially coincident field plots may be used to model biomass at all GLAS shots, and well-established methods of model-based inference may then be used to estimate biomass and variance for specific spatial domains. However, the spatial pattern of GLAS acquisition is neither random across the surface of the earth nor is it identifiable with any particular systematic design. Undefined sample properties therefore hinder the use of GLAS in global forest sampling. Results: We propose a method of identifying a subset of the GLAS data which can justifiably be treated as a simple random sample in model-based biomass estimation. The relatively uniform spatial distribution and locally arbitrary positioning of the resulting sample is similar to the design used by the US national forest inventory (NFI). We demonstrated model-based estimation using a sample of GLAS data in the US state of California, where our estimate of biomass (211 Mg/hectare) was within the 1.4% standard error of the design-based estimate supplied by the US NFI. The standard error of the GLAS-based estimate was significantly higher than the NFI estimate, although the cost of the GLAS estimate (excluding costs for the satellite itself) was almost nothing, compared to at least US$ 10.5 million for the NFI estimate. Conclusions: Global application of model-based estimation using GLAS, while demanding significant consolidation of training data, would improve inter-comparability of international biomass estimates by imposing consistent methods and a globally coherent sample frame.

  16. Hyperbaric oxygen brain injury treatment (HOBIT) trial: a multifactor design with response adaptive randomization and longitudinal modeling.

    Science.gov (United States)

    Gajewski, Byron J; Berry, Scott M; Barsan, William G; Silbergleit, Robert; Meurer, William J; Martin, Renee; Rockswold, Gaylan L

    2016-09-01

    The goals of phase II clinical trials are to gain important information about the performance of novel treatments and decide whether to conduct a larger phase III trial. This can be complicated in cases when the phase II trial objective is to identify a novel treatment having several factors. Such multifactor treatment scenarios can be explored using fixed sample size trials. However, the alternative design could be response adaptive randomization with interim analyses and additionally, longitudinal modeling whereby more data could be used in the estimation process. This combined approach allows a quicker and more responsive adaptation to early estimates of later endpoints. Such alternative clinical trial designs are potentially more powerful, faster, and smaller than fixed randomized designs. Such designs are particularly challenging, however, because phase II trials tend to be smaller than subsequent confirmatory phase III trials. The phase II trial may need to explore a large number of treatment variations to ensure that the efficacy of optimal clinical conditions is not overlooked. Adaptive trial designs need to be carefully evaluated to understand how they will perform and to take full advantage of their potential benefits. This manuscript discusses a Bayesian response adaptive randomization design with a longitudinal model that uses a multifactor approach for predicting phase III study success via the phase II data. The approach is based on an actual clinical trial design for the hyperbaric oxygen brain injury treatment trial. Specific details of the thought process and the models informing the trial design are provided. Copyright © 2016 John Wiley & Sons, Ltd.

  17. The importance of distributed sampling in blocked functional magnetic resonance imaging designs

    NARCIS (Netherlands)

    Veltman, Dick J.; Mechelli, Andrea; Friston, Karl J.; Price, Cathy J.

    2002-01-01

    In this study we demonstrate the importance of distributed sampling of peristimulus time in blocked design fMRI studies. Distributed sampling ensures all the components of an event-related hemodynamic response are sampled and avoids the bias incurred when stimulus presentation is time-locked to data

  18. An improved adaptive sampling and experiment design method for aerodynamic optimization

    Directory of Open Access Journals (Sweden)

    Huang Jiangtao

    2015-10-01

    Experimental design is key to constructing a highly reliable surrogate model for numerical optimization in large-scale projects. Within the method, the experimental design criterion directly affects the accuracy of the surrogate model and the optimization efficiency. To address the shortcomings of traditional experimental design, an improved adaptive sampling method is proposed in this paper. The surrogate model is first constructed from basic sparse samples. Then the supplementary sampling position is detected according to the specified criteria, which introduce energy-function and curvature sampling criteria based on a radial basis function (RBF) network. The sampling detection criteria consider both the uniformity of the sample distribution and the description of the hypersurface curvature, so as to significantly improve the prediction accuracy of the surrogate model with far fewer samples. For a surrogate model constructed with sparse samples, sample uniformity is an important factor in the interpolation accuracy in the initial stage of adaptive sampling and surrogate model training. Along with the improvement of uniformity, the curvature description of the objective function surface gradually becomes more important. In consideration of these issues, a crowdness enhance function and a root mean square error (RMSE) feedback function are introduced in the C criterion expression. Thus, a new sampling method called RMSE and crowdness enhance (RCE) adaptive sampling is established. The validity of the RCE adaptive sampling method is studied first on a typical test function and then on the airfoil/wing aerodynamic optimization design problem, which has a high-dimensional design space. The results show that the RCE adaptive sampling method not only reduces the required number of samples, but also effectively improves the prediction accuracy of the surrogate model, and it has broad prospects for application.

  19. Estimation of the Randomized Complete Block Design Parameters with Fuzzy Goal Programming

    OpenAIRE

    Kula, Kamile; Apaydin, Ayşen

    2011-01-01

    Since goal programming was introduced by Charnes, Cooper and Ferguson (1955), it has been widely studied and applied in various areas. Parameter estimation is quite important in many areas. Recently, much research has addressed fuzzy estimation. In this study, the fuzzy goal programming approach proposed by Hannan (1981) is adapted to the estimation of randomized complete block design parameters. The suggested fuzzy goal programming is used for estimation of randomized complete block desig...

  20. Cost-effective Sampling Design Applied to Large-scale Monitoring of Boreal Birds

    Directory of Open Access Journals (Sweden)

    Matthew Carlson

    2002-12-01

    Despite their important roles in biodiversity conservation, large-scale ecological monitoring programs are scarce, in large part due to the difficulty of achieving an effective design under fiscal constraints. Using long-term avian monitoring in the boreal forest of Alberta, Canada as an example, we present a methodology that uses power analysis, statistical modeling, and partial derivatives to identify cost-effective sampling strategies for ecological monitoring programs. Empirical parameter estimates were used in simulations that estimated the power of sampling designs to detect trend in a variety of species' populations and community metrics. The ability to detect trend with increased sample effort depended on the monitoring target's variability and how effort was allocated to sampling parameters. Power estimates were used to develop nonlinear models of the relationship between sample effort and power. A cost model was also developed, and partial derivatives of the power and cost models were evaluated to identify two cost-effective avian sampling strategies. For decreasing sample error, sampling multiple plots at a site is preferable to multiple within-year visits to the site, and many sites should be sampled relatively infrequently rather than sampling few sites frequently, although the importance of frequent sampling increases for variable targets. We end by stressing the need for long-term, spatially extensive data for additional taxa, and by introducing optimal design as an alternative to power analysis for the evaluation of ecological monitoring program designs.
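
    A hedged sketch of the kind of power analysis described: counts with a known trend, site-to-site variation and observation noise are simulated for different numbers of sites, and the power to detect the decline is estimated with a simple log-linear regression as a stand-in for the study's statistical models. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def trend_power(n_sites, n_years=10, annual_decline=0.03, sd_site=0.4, sd_obs=0.3, n_sim=500):
    """Proportion of simulations in which a pooled log-linear regression detects the decline
    (p < 0.05). Site effects make observations within a site correlated; the study's models
    handle this properly, so the pooled regression here is only a rough stand-in."""
    years = np.arange(n_years)
    hits = 0
    for _ in range(n_sim):
        site = rng.normal(0.0, sd_site, size=(n_sites, 1))
        y = np.log(20.0) + site - annual_decline * years + rng.normal(0.0, sd_obs, (n_sites, n_years))
        fit = stats.linregress(np.tile(years, n_sites), y.ravel())
        hits += (fit.slope < 0) and (fit.pvalue < 0.05)
    return hits / n_sim

for n_sites in (10, 25, 50):
    print(f"{n_sites:3d} sites: power ~ {trend_power(n_sites):.2f}")
```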

  1. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    Science.gov (United States)

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than abundance estimates derived from sample scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, and with consequences, such consideration is often an afterthought that occurs during the data analysis process.

  2. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to Sampling: Abstract; Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Abstract; Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; A.2 Appendix to Chapter 2; A. More on Equal Probability Sampling; A. Horvitz-Thompson Estimator; A. Sufficiency; A. Likelihood; A. Non-Existence Theorem. More Intricacies: Abstract; Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Abstract; Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...

  3. [Evidence-based medicine: can we trust the results of well-designed randomized trials?].

    Science.gov (United States)

    Maturana, A; Benaglio, C

    2014-10-01

    Evidence-based medicine assists in clinical decision-making by integrating critically appraised information with the patient's values and preferences within an existing clinical context. A fundamental concept in this paradigm is the hierarchy of information. The randomized clinical trial is recognized as one of the designs least prone to bias and therefore of higher methodological quality. Clinical guidelines are one of the principal tools that evidence-based medicine uses to transfer scientific information to clinical practice, and many of their recommendations are based on these types of studies. In this review we present some of the limitations that results can have, even in well-designed and well-executed randomized clinical trials. We also discuss why valid results in these types of studies cannot necessarily be extrapolated to the general population. Although the randomized clinical trial continues to be one of the best methodological designs, we suggest that readers be careful when interpreting its results.

  4. Taping patients with clinical signs of subacromial impingement syndrome: the design of a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2011-08-01

    Full Text Available Abstract Background Shoulder problems are a common complaint of the musculoskeletal system. Physical therapists treat these patients with different modalities such as exercise, massage, and shoulder taping. Although different techniques have been described, the effectiveness of taping has not yet been established. The aim of this study is to assess the effectiveness and cost-effectiveness of usual physical therapy care in combination with a particular tape technique for subacromial impingement syndrome of the shoulder compared to usual physical therapy care without this tape technique in a primary healthcare setting. Methods and design An economic evaluation alongside a randomized controlled trial will be conducted. A sample of 140 patients between 18 and 65 years of age with a diagnosis of subacromial impingement syndrome (SAIS) as assessed by physical therapists will be recruited. Eligible patients will be randomized to either the intervention group (usual care in combination with the particular tape technique) or the control group (usual care without this tape technique). In both groups, usual care will consist of individualized physical therapy care. The primary outcomes will be shoulder-specific function (the Simple Shoulder Test) and pain severity (11-point numerical rating scale). The economic evaluation will be performed using a societal perspective. All relevant costs will be registered using cost diaries. Utilities (Quality Adjusted Life Years) will be measured using the EuroQol. The data will be collected at baseline and at 4, 12, and 26 weeks of follow-up. Discussion This pragmatic study will provide information about the effectiveness and cost-effectiveness of taping in patients presenting with clinical signs of SAIS. Trial registration Trial registration number: NTR2575

  5. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    Science.gov (United States)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computation of probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. Response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, analysis starts with importance sampling concepts, using a presented two-step updating rule for the design point. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher experimental design, with the last design point and a presented effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the presented rules are shown.
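    The importance-sampling half of such an algorithm is easy to illustrate on a toy limit-state function. The sketch below is not the proposed algorithm (it omits the two-step design-point updating rule and the response surface stage); it only shows, under an assumed linear limit state in standard normal space, how shifting the sampling density towards an approximate design point and re-weighting the indicator of failure yields a low-variance estimate of the probability of failure.

```python
import numpy as np

rng = np.random.default_rng(42)

def g(x):
    """Toy limit-state function: failure when g(x) <= 0."""
    return 5.0 - x[..., 0] - x[..., 1]

dim, n_samples = 2, 20_000

# crude Monte Carlo in standard normal space
u = rng.standard_normal((n_samples, dim))
pf_mc = np.mean(g(u) <= 0.0)

# importance sampling: sample around an approximate design point
design_point = np.array([2.5, 2.5])          # lies on the limit state g = 0
v = rng.standard_normal((n_samples, dim)) + design_point
# weight = standard normal density / shifted normal density, evaluated at v
log_w = -0.5 * np.sum(v ** 2, axis=1) + 0.5 * np.sum((v - design_point) ** 2, axis=1)
pf_is = np.mean((g(v) <= 0.0) * np.exp(log_w))

print(f"crude Monte Carlo estimate   : {pf_mc:.2e}")
print(f"importance sampling estimate : {pf_is:.2e}")
# exact value for this toy case: 1 - Phi(5 / sqrt(2)) ~ 2.0e-4
```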

  6. Developing effective sampling designs for monitoring natural resources in Alaskan national parks: an example using simulations and vegetation data

    Science.gov (United States)

    Thompson, William L.; Miller, Amy E.; Mortenson, Dorothy C.; Woodward, Andrea

    2011-01-01

    Monitoring natural resources in Alaskan national parks is challenging because of their remoteness, limited accessibility, and high sampling costs. We describe an iterative, three-phased process for developing sampling designs based on our efforts to establish a vegetation monitoring program in southwest Alaska. In the first phase, we defined a sampling frame based on land ownership and specific vegetated habitats within the park boundaries and used Path Distance analysis tools to create a GIS layer that delineated portions of each park that could be feasibly accessed for ground sampling. In the second phase, we used simulations based on landcover maps to identify size and configuration of the ground sampling units (single plots or grids of plots) and to refine areas to be potentially sampled. In the third phase, we used a second set of simulations to estimate sample size and sampling frequency required to have a reasonable chance of detecting a minimum trend in vegetation cover for a specified time period and level of statistical confidence. Results of the first set of simulations indicated that a spatially balanced random sample of single plots from the most common landcover types yielded the most efficient sampling scheme. Results of the second set of simulations were compared with field data and indicated that we should be able to detect at least a 25% change in vegetation attributes over 31 years by sampling 8 or more plots per year every five years in focal landcover types. This approach would be especially useful in situations where ground sampling is restricted by access.

  7. Mental health impact of the 2010 Haiti earthquake on the Miami Haitian population: A random-sample survey.

    Science.gov (United States)

    Messiah, Antoine; Acuna, Juan M; Castro, Grettel; de la Vega, Pura Rodríguez; Vaiva, Guillaume; Shultz, James; Neria, Yuval; De La Rosa, Mario

    2014-07-01

    This study examined the mental health consequences of the January 2010 Haiti earthquake on Haitians living in Miami-Dade County, Florida, 2-3 years following the event. A random-sample household survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants (N = 421) were assessed for their earthquake exposure and its impact on family, friends, and household finances; and for symptoms of posttraumatic stress disorder (PTSD), anxiety, and major depression; using standardized screening measures and thresholds. Exposure was considered as "direct" if the interviewee was in Haiti during the earthquake. Exposure was classified as "indirect" if the interviewee was not in Haiti during the earthquake but (1) family members or close friends were victims of the earthquake, and/or (2) family members were hosted in the respondent's household, and/or (3) assets or jobs were lost because of the earthquake. Interviewees who did not qualify for either direct or indirect exposure were designated as "lower" exposure. Eight percent of respondents qualified for direct exposure, and 63% qualified for indirect exposure. Among those with direct exposure, 19% exceeded threshold for PTSD, 36% for anxiety, and 45% for depression. Corresponding percentages were 9%, 22% and 24% for respondents with indirect exposure, and 6%, 14%, and 10% for those with lower exposure. A majority of Miami Haitians were directly or indirectly exposed to the earthquake. Mental health distress among them remains considerable two to three years post-earthquake.

  8. Design of sample carrier for neutron irradiation facility at TRIGA MARK II nuclear reactor

    Science.gov (United States)

    Abdullah, Y.; Hamid, N. A.; Mansor, M. A.; Ahmad, M. H. A. R. M.; Yusof, M. R.; Yazid, H.; Mohamed, A. A.

    2013-06-01

    The objective of this work is to design a sample carrier for neutron irradiation experiments at the beam ports of a research nuclear reactor, the Reaktor TRIGA PUSPATI (RTP). The sample carrier was designed so that irradiation experiments can be performed safely by researchers. This development resolves the sample-transfer issues faced by researchers at the facility when performing neutron irradiation studies. The function of the sample carrier is to ensure that samples can be transferred into and out of the reactor beam port safely and effectively. The design model used was the House of Quality (HOQ) method, which is commonly used to develop product specifications, set numerical targets to work towards, and determine how well the identified needs can be met. The chosen sample carrier (product) consists of a cylindrical casing with a hydraulic-cylinder transport mechanism. Sample placement is done manually, locomotion is by wheel, and the shielding is made of boron-based material. The sample carrier design can shield thermal neutrons during irradiation so that only a low fluence of fast neutrons irradiates the sample.

  9. A typical proficiency testing programmes sample design for electrical and electronic product

    Science.gov (United States)

    Wang, T. T.; Zhang, H.; Xie, L. L.; Wang, Y. Y.

    2017-04-01

    Creepage distance and clearance testing are basic test items in the safety standards for almost all electrical and electronic products. A typical sample group is designed in this paper for the purpose of proficiency testing programmes. The sample group is composed of two kinds of circuit board. The creepage distances of the two circuit boards in pollution degrees 2 and 3 are of the same length but follow different paths. The sample group includes three testing points. The sample group is designed to be beneficial for numerical statistics and to avoid data complicity in the laboratory. It can be used for effective laboratory monitoring.

  10. 1979 Reserve Force Studies Surveys: Survey Design, Sample Design and Administrative Procedures,

    Science.gov (United States)

    1981-08-01

    ... Dental Service 17, Reception Cnt. 11, Other 46; Total 2,914. The heterogeneity of the units poses a problem for a unit sampling plan. Namely, if the...

  11. LONG-TERM VARIABILITY OF BRONCHIAL RESPONSIVENESS TO HISTAMINE IN A RANDOM-POPULATION SAMPLE OF ADULTS

    NARCIS (Netherlands)

    RIJCKEN, B; SCHOUTEN, JP; WEISS, ST; ROSNER, B; DEVRIES, K; VANDERLENDE, R

    1993-01-01

    Long-term variability of bronchial responsiveness has been studied in a random population sample of adults. During a follow-up period of 18 yr, 2,216 subjects contributed 5,012 observations to the analyses. Each subject could have as many as seven observations. Bronchial responsiveness was assessed

  12. Mindful Yoga for women with metastatic breast cancer: design of a randomized controlled trial.

    Science.gov (United States)

    Carson, James W; Carson, Kimberly M; Olsen, Maren K; Sanders, Linda; Porter, Laura S

    2017-03-13

    Women with metastatic breast cancer (MBC) have average life expectancies of about 2 years, and report high levels of disease-related symptoms including pain, fatigue, sleep disturbance, psychological distress, and functional impairment. There is growing recognition of the limitations of medical approaches to managing such symptoms. Yoga is a mind-body discipline that has demonstrated a positive impact on psychological and functional health in early stage breast cancer patients and survivors, but has not been rigorously studied in advanced cancer samples. This randomized controlled trial examines the feasibility and initial efficacy of a Mindful Yoga program, compared with a social support condition that controls for attention, on measures of disease-related symptoms such as pain and fatigue. The study will be completed by December 2017. Sixty-five women with MBC age ≥ 18 are being identified and randomized with a 2:1 allocation to Mindful Yoga or a support group control intervention. The 120-min intervention sessions take place weekly for 8 weeks. The study is conducted at an urban tertiary care academic medical center located in Durham, North Carolina. The primary feasibility outcome is attendance at intervention sessions. Efficacy outcomes include pain, fatigue, sleep quality, psychological distress, mindfulness and functional capacity at post-intervention, 3-month follow-up, and 6-month follow-up. In this article, we present the challenges of designing a randomized controlled trial with long-term follow-up among women with MBC. These challenges include ensuring adequate recruitment including of minorities, limiting and controlling for selection bias, tailoring of the yoga intervention to address special needs, and maximizing adherence and retention. This project will provide important information regarding yoga as an intervention for women with advanced cancer, including preliminary data on the psychological and functional effects of yoga for MBC

  13. Albumin to creatinine ratio in a random urine sample: Correlation with severity of preeclampsia

    Directory of Open Access Journals (Sweden)

    Fady S. Moiety

    2014-06-01

    Conclusions: Random urine ACR may be a reliable method for prediction and assessment of severity of preeclampsia. Using the estimated cut-off may add to the predictive value of such a simple quick test.

  14. Active-Varying Sampling-Based Fault Detection Filter Design for Networked Control Systems

    Directory of Open Access Journals (Sweden)

    Yu-Long Wang

    2014-01-01

    Full Text Available This paper is concerned with fault detection filter design for continuous-time networked control systems considering packet dropouts and network-induced delays. The active-varying sampling period method is introduced to establish a new discretized model for the considered networked control systems. The mutually exclusive distribution characteristic of packet dropouts and network-induced delays is fully exploited to derive less conservative fault detection filter design criteria. Compared with a fault detection filter design adopting a constant sampling period, the proposed active-varying sampling-based fault detection filter design can improve the sensitivity of the residual signal to faults and shorten the time needed for fault detection. The simulation results illustrate the merits and effectiveness of the proposed fault detection filter design.

  15. Robust design with imprecise random variables and its application in hydrokinetic turbine optimization

    Science.gov (United States)

    Hu, Zhen; Du, Xiaoping; Kolekar, Nitin S.; Banerjee, Arindam

    2014-03-01

    In robust design, uncertainty is commonly modelled with precise probability distributions. In reality, the distribution types and distribution parameters may not always be available owing to limited data. This research develops a robust design methodology to accommodate the mixture of both precise and imprecise random variables. By incorporating the Taguchi quality loss function and the minimax regret criterion, the methodology mitigates the effects of not only uncertain parameters but also uncertainties in the models of the uncertain parameters. Hydrokinetic turbine systems are a relatively new alternative energy technology, and both precise and imprecise random variables exist in the design of such systems. The developed methodology is applied to the robust design optimization of a hydrokinetic turbine system. The results demonstrate the effectiveness of the proposed methodology.
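    The minimax-regret idea used in such a methodology can be illustrated with a toy problem. In the sketch below, the performance model, the quadratic (Taguchi-style) loss, the candidate design values, and the interval of imprecise means are all invented for demonstration; it is not the hydrokinetic turbine application, only the pattern of evaluating expected loss under every candidate distribution and picking the design whose worst-case regret is smallest.

```python
import numpy as np

rng = np.random.default_rng(3)

designs = np.linspace(0.5, 2.0, 16)           # candidate design values
candidate_means = np.linspace(-0.5, 0.5, 11)  # imprecise mean of the random input
target = 1.0                                  # performance target in the loss

def expected_loss(d, mu, n=20_000):
    """Taguchi-style quadratic quality loss E[(y - target)^2], by simulation."""
    x = rng.normal(mu, 0.2, size=n)           # random input: precise sd, imprecise mean
    y = d * (1.0 + x)                         # invented performance model
    return np.mean((y - target) ** 2)

loss = np.array([[expected_loss(d, mu) for mu in candidate_means] for d in designs])
regret = loss - loss.min(axis=0)              # regret vs the best design per scenario
robust_design = designs[np.argmin(regret.max(axis=1))]
print(f"minimax-regret design value: {robust_design:.3f}")
```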

  16. [Sampling plan, weighting process and design effects of the Brazilian Oral Health Survey].

    Science.gov (United States)

    Silva, Nilza Nunes da; Roncalli, Angelo Giuseppe

    2013-12-01

    To present aspects of the sampling plan of the Brazilian Oral Health Survey (SBBrasil Project), with theoretical and operational issues that should be taken into account in the primary data analyses. The studied population was composed of five demographic groups from urban areas of Brazil in 2010. Two- and three-stage cluster sampling was used, adopting different primary units. Sample weighting and design effects (deff) were used to evaluate sample consistency. In total, 37,519 individuals were reached. Although the majority of deff estimates were acceptable, some domains showed distortions. The majority (90%) of the samples showed results in concordance with the precision proposed in the sampling plan. The measures to prevent losses and the effects of the cluster sampling process on the minimum sample sizes proved to be effective for the deff, which did not exceed 2, even for results derived from weighting. The samples achieved in the SBBrasil 2010 survey were close to the main proposals for accuracy of the design. Some selection probabilities proved to be unequal among the primary units of the same domain. Users of this database should bear this in mind, introducing sample weighting in calculations of point estimates, standard errors, confidence intervals and design effects.

  17. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math. Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treat...

  18. Comparative analysis between intergradient and interblock in randomized complete block design with replication

    OpenAIRE

    Mubarak, Fadhlul

    2015-01-01

    Interblock and intergradient analyses provide an alternative to the mean square error in complex designs such as the factorial randomized complete block design with replication. The better of the interblock and intergradient analyses is chosen using relative efficiency. The relative efficiencies of the intergradient and interblock analyses within the untractor's line are 0.79 and 0.25 relative to ANOVA for this case. Relative efficiency of the intergradient and interblock ana...

  19. Dealing with Design Failures in Randomized Field Experiments: Analytic Issues Regarding the Evaluation of Treatment Effects.

    Science.gov (United States)

    Gartin, Patrick R.

    1995-01-01

    Asserts that several analytical issues in randomized field experiments conducted by criminal justice scholars must be addressed more systematically. Notes that issues related to statistical power and desired sample size remain unresolved. Reviews related literature from the field of medicine to provide insights regarding the dilemmas created by…

  20. The cohort multiple randomized controlled trial design: a valid and efficient alternative to pragmatic trials?

    Science.gov (United States)

    van der Velden, Joanne M; Verkooijen, Helena M; Young-Afat, Danny A; Burbach, Johannes Pm; van Vulpen, Marco; Relton, Clare; van Gils, Carla H; May, Anne M; Groenwold, Rolf Hh

    2017-02-01

    Randomized controlled trials (RCTs)-the gold standard for evaluating the effects of medical interventions-are notoriously challenging in terms of logistics, planning and costs. The cohort multiple randomized controlled trial approach is designed to facilitate randomized trials for pragmatic evaluation of (new) interventions and is a promising variation from conventional pragmatic RCTs. In this paper, we evaluate methodological challenges of conducting an RCT within a cohort. We argue that equally valid results can be obtained from trials conducted within cohorts as from pragmatic RCTs. However, whether this design is more efficient compared with conducting a pragmatic RCT depends on the amount and nature of non-compliance in the intervention arm. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  1. Development and Validation of an Admission Test Designed to Assess Samples of Performance on Academic Tasks

    Science.gov (United States)

    Tanilon, Jenny; Segers, Mien; Vedder, Paul; Tillema, Harm

    2009-01-01

    This study illustrates the development and validation of an admission test, labeled as Performance Samples on Academic Tasks in Educational Sciences (PSAT-Ed), designed to assess samples of performance on academic tasks characteristic of those that would eventually be encountered by examinees in an Educational Sciences program. The test was based…

  2. Randomized Controlled Studies and Alternative Designs in Outcome Studies: Challenges and Opportunities

    Science.gov (United States)

    Shadish, William R.

    2011-01-01

    This article reviews several decades of the author's meta-analytic and experimental research on the conditions under which nonrandomized experiments can approximate the results from randomized experiments (REs). Several studies make clear that we can expect accurate effect estimates from the regression discontinuity design, though its statistical…

  3. A worksite prevention program for construction workers: Design of a randomized controlled trial

    NARCIS (Netherlands)

    Oude Hengel, K.M.; Joling, C.I.; Proper, K.I.; Blatter, B.M.; Bongers, P.M.

    2010-01-01

    Background. A worksite prevention program was developed to promote the work ability of construction workers and thereby prolong a healthy working life. The objective of this paper is to present the design of a randomized controlled trial evaluating the effectiveness of that intervention program

  4. Customized Sequential Designs for Random Simulation Experiments : Kriging Metamodelling and Bootstrapping

    NARCIS (Netherlands)

    van Beers, W.C.M.; Kleijnen, J.P.C.

    2004-01-01

    This paper proposes a novel method to select an experimental design for interpolation in random simulation. (Though the paper focuses on Kriging, this method may also apply to other types of metamodels such as linear regression models.) Assuming that simulation requires much computer time, it is

  5. Single-tooth implants with different neck designs : a randomized clinical trial evaluating the aesthetic outcome

    NARCIS (Netherlands)

    Hartog, Laurens den; Raghoebar, Gerry M.; Huddleston Slater, James J.R.; Stellingsma, Kees; Vissink, Arjan; Meijer, Hendrikus

    Aim: To evaluate the aesthetic outcome of single-tooth implants in the aesthetic zone with different neck designs from a professional's and patient's perception. Materials and Methods: Ninety-three patients with a missing anterior tooth in the maxilla were randomly assigned to be treated with an

  6. Establishing the experimenting society : The historical origin of social experimentation according to the randomized controlled design

    NARCIS (Netherlands)

    Dehue, T

    2001-01-01

    This article traces the historical origin of social experimentation. It highlights the central role of psychology in establishing the randomized controlled design and its quasi-experimental derivatives. The author investigates the differences in the 19th- and 20th-century meaning of the expression

  7. Power To Detect Additive Treatment Effects with Randomized Block and Analysis of Covariance Designs.

    Science.gov (United States)

    Klockars, Alan J.; Potter, Nina Salcedo; Beretvas, S. Natasha

    1999-01-01

    Compared the power of analysis of covariance (ANCOVA) and two types of randomized block designs as a function of the correlation between the concomitant variable and the outcome measure, the number of groups, the number of participants, and nominal power. Discusses advantages of ANCOVA. (Author/SLD)

  8. A Comparison of Three Methods of Analyzing Dichotomous Data in a Randomized Block Design.

    Science.gov (United States)

    Mandeville, Garrett K.

    Results of a comparative study of F and Q tests, in a randomized block design with one replication per cell, are presented. In addition to these two procedures, a multivariate test was also considered. The model and test statistics, data generation and parameter selection, results, summary and conclusions are presented. Ten tables contain the…

  9. Non-Parametric Inference for Ordered Alternatives in a Randomized Block Design

    Science.gov (United States)

    Hettmansperger, Thomas P.

    1975-01-01

    Treats the problem of testing an ordered hypothesis based on the ranks of the data. Statistical procedures for the randomized block design with more than one observation per cell are derived. Multiple comparisions and estimation procedures are included. (Author/RC)

  10. Beyond Random Walk and Metropolis-Hastings Samplers: Why You Should Not Backtrack for Unbiased Graph Sampling

    CERN Document Server

    Lee, Chul-Ho; Eun, Do Young

    2012-01-01

    Graph sampling via crawling has been actively considered as a generic and important tool for collecting uniform node samples so as to consistently estimate and uncover various characteristics of complex networks. The so-called simple random walk with re-weighting (SRW-rw) and Metropolis-Hastings (MH) algorithm have been popular in the literature for such unbiased graph sampling. However, an unavoidable downside of their core random walks -- slow diffusion over the space -- can cause poor estimation accuracy. In this paper, we propose non-backtracking random walk with re-weighting (NBRW-rw) and MH algorithm with delayed acceptance (MHDA) which are theoretically guaranteed to achieve, at almost no additional cost, not only unbiased graph sampling but also higher efficiency (smaller asymptotic variance of the resulting unbiased estimators) than the SRW-rw and the MH algorithm, respectively. In particular, a remarkable feature of the MHDA is its applicability for any non-uniform node sampling like the MH algorithm,...
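    The re-weighting idea shared by SRW-rw and NBRW-rw can be sketched in a few lines. The example below is an illustration on a small invented graph, not the paper's MHDA algorithm: a walk (optionally forbidden from immediately backtracking) visits nodes roughly in proportion to their degree, so each visit is weighted by the reciprocal of the degree to estimate a node-level average without bias.

```python
import random
from collections import defaultdict

random.seed(0)

# small invented undirected graph as an adjacency list
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (5, 2), (1, 5)]
adj = defaultdict(list)
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

def walk_estimate(f, steps=100_000, non_backtracking=True):
    """Estimate the node-average of f from a random-walk sample.

    A plain random walk visits nodes proportionally to degree, so every
    visit is re-weighted by 1/degree (the 'rw' in SRW-rw / NBRW-rw).
    """
    prev, cur = None, 0
    num = den = 0.0
    for _ in range(steps):
        w = 1.0 / len(adj[cur])
        num += w * f(cur)
        den += w
        nbrs = adj[cur]
        if non_backtracking and prev is not None and len(nbrs) > 1:
            nbrs = [n for n in nbrs if n != prev]   # avoid immediate backtracking
        prev, cur = cur, random.choice(nbrs)
    return num / den

is_high_degree = lambda v: 1.0 if len(adj[v]) >= 3 else 0.0
truth = sum(is_high_degree(v) for v in adj) / len(adj)
print("true fraction of high-degree nodes :", round(truth, 3))
print("random-walk estimate               :", round(walk_estimate(is_high_degree), 3))
```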

  11. Statistical methods for genetic association studies with response-selective sampling designs

    NARCIS (Netherlands)

    Balliu, Brunilda

    2015-01-01

    This dissertation describes new statistical methods designed to improve the power of genetic association studies. Of particular interest are studies with a response-selective sampling design, i.e. case-control studies of unrelated individuals and case-control studies of family members. The

  12. Vegetation resources inventory of southwest Alaska: development and application of an innovative, extensive sampling design.

    Science.gov (United States)

    Willem W.S. van Hees

    1999-01-01

    An assessment of the vegetation resources of southwest Alaska was made by using an inventory design developed by the Pacific Northwest Research Station. Satellite imagery (LANDSAT MSS), high-altitude aerial photography, and ground sampling were the major components of the design. Estimates of area for all land cover classes in the southwest region were produced....

  13. Adaptive sampling in two-phase designs: a biomarker study for progression in arthritis

    Science.gov (United States)

    McIsaac, Michael A; Cook, Richard J

    2015-01-01

    Response-dependent two-phase designs are used increasingly often in epidemiological studies to ensure sampling strategies offer good statistical efficiency while working within resource constraints. Optimal response-dependent two-phase designs are difficult to implement, however, as they require specification of unknown parameters. We propose adaptive two-phase designs that exploit information from an internal pilot study to approximate the optimal sampling scheme for an analysis based on mean score estimating equations. The frequency properties of estimators arising from this design are assessed through simulation, and they are shown to be similar to those from optimal designs. The design procedure is then illustrated through application to a motivating biomarker study in an ongoing rheumatology research program. Copyright © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25951124

  14. Soil map disaggregation improved by soil-landscape relationships, area-proportional sampling and random forest implementation

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan

    ... of European Communities (CEC, 1985), respectively, both using the FAO 1974 classification. Furthermore, the effects of implementing soil-landscape relationships, using area-proportional sampling instead of per-polygon sampling, and replacing the default C5.0 classification tree algorithm with a random forest algorithm were evaluated. The resulting maps were validated on 777 soil profiles situated in a grid covering Denmark. The experiments showed that the results obtained with Jacobsen’s map were more accurate than the results obtained with the CEC map, despite a nominally coarser scale of 1:2,000,000 vs. 1:1,000,000. This finding is probably related to the fact that Jacobsen’s map was more detailed, with a larger number of polygons, soil map units and soil types, despite its coarser scale. The results showed that the implementation of soil-landscape relationships, area-proportional sampling and the random forest...

  15. A comparison of sampling designs for monitoring recreational trail impacts in Rocky Mountain National Park.

    Science.gov (United States)

    Pettebone, David; Newman, Peter; Theobald, David

    2009-03-01

    The dual goals of the Organic Act of 1916 and Wilderness Act of 1964 are to protect natural resources and provide quality visitor experiences. Park managers need metrics of trail conditions to protect park resources and quality of visitor experiences. A few methods of sampling design for trails have been developed. Here, we describe a relatively new method, spatially balanced sampling, and compare it to systematic sampling. We evaluated the efficiency of sampling designs to measure recreation-related impacts in Rocky Mountain National Park. This study addressed two objectives: first, it compared estimates of trail conditions from data collected from systematic versus spatially balanced sampling data; second, it examined the relationship between sampling precision and sampling efficiency. No statistically significant differences in trail condition were found between the 100-m interval and the spatially balanced datasets. The spatially balanced probability-based dataset was found to be a good estimate of trail conditions when analyses were conducted with fewer sample points. Moreover, spatially balanced probability-based sampling is flexible and allows additional sample points to be added to a sample.

  16. Practical iterative learning control with frequency domain design and sampled data implementation

    CERN Document Server

    Wang, Danwei; Zhang, Bin

    2014-01-01

    This book is on iterative learning control (ILC), with a focus on design and implementation. We approach the ILC design based on frequency domain analysis and address the ILC implementation based on sampled data methods. This is the first book on ILC from the frequency domain and sampled data methodologies. The frequency domain design methods offer ILC users insight into the convergence performance, which is of practical benefit. This book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth in the ILC system is set with a balance between learning performance and learning stability. The sampled data implementation ensures effective execution of ILC in practical dynamic systems. The presented sampled data ILC methods also ensure the balance of performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested with an ILC controlled robotic system. The experimental results show that the machines can work in much h...

  17. Bionic Design for Mars Sampling Scoop Inspired by Himalayan Marmot Claw

    Directory of Open Access Journals (Sweden)

    Long Xue

    2016-01-01

    Full Text Available Cave animals are often adapted to digging and life underground, with claw toes similar in structure and function to a sampling scoop. In this paper, the clawed toes of the Himalayan marmot were selected as a biological prototype for bionic research. Based on geometric parameter optimization of the clawed toes, a bionic sampling scoop for use on Mars was designed. Using a 3D laser scanner, the point cloud data of the second front claw toe was acquired. Parametric equations and contour curves for the claw were then built with cubic polynomial fitting. We obtained 18 characteristic curve equations for the internal and external contours of the claw. A bionic sampling scoop was designed according to the structural parameters of Curiosity’s sampling shovel and the contours of the Himalayan marmot’s claw. Verifying test results showed that when the penetration angle was 45° and the sampling speed was 0.33 r/min, the bionic sampling scoops’ resistance torque was 49.6% less than that of the prototype sampling scoop. When the penetration angle was 60° and the sampling speed was 0.22 r/min, the resistance torque of the bionic sampling scoop was 28.8% lower than that of the prototype sampling scoop.

  18. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Science.gov (United States)

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical

  19. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Directory of Open Access Journals (Sweden)

    David C Pavlacky

    Full Text Available Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous

  20. Randomization tests

    CERN Document Server

    Edgington, Eugene

    2007-01-01

    Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani...

  1. Planning considerations for a Mars Sample Receiving Facility: summary and interpretation of three design studies.

    Science.gov (United States)

    Beaty, David W; Allen, Carlton C; Bass, Deborah S; Buxbaum, Karen L; Campbell, James K; Lindstrom, David J; Miller, Sylvia L; Papanastassiou, Dimitri A

    2009-10-01

    It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.

  2. Adaptive designs for the one-sample log-rank test.

    Science.gov (United States)

    Schmidt, Rene; Faldum, Andreas; Kwiecien, Robert

    2017-09-22

    Traditional designs in phase IIa cancer trials are single-arm designs with a binary outcome, for example, tumor response. In some settings, however, a time-to-event endpoint might appear more appropriate, particularly in the presence of loss to follow-up. Then the one-sample log-rank test might be the method of choice. It allows to compare the survival curve of the patients under treatment to a prespecified reference survival curve. The reference curve usually represents the expected survival under standard of the care. In this work, convergence of the one-sample log-rank statistic to Brownian motion is proven using Rebolledo's martingale central limit theorem while accounting for staggered entry times of the patients. On this basis, a confirmatory adaptive one-sample log-rank test is proposed where provision is made for data dependent sample size reassessment. The focus is to apply the inverse normal method. This is done in two different directions. The first strategy exploits the independent increments property of the one-sample log-rank statistic. The second strategy is based on the patient-wise separation principle. It is shown by simulation that the proposed adaptive test might help to rescue an underpowered trial and at the same time lowers the average sample number (ASN) under the null hypothesis as compared to a single-stage fixed sample design. © 2017, The International Biometric Society.
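    The fixed-sample one-sample log-rank statistic that the adaptive procedure builds on can be computed very simply when the reference survival is taken to be exponential. The sketch below is a plain single-stage version with hypothetical data and a hypothetical reference hazard; it does not implement the adaptive inverse-normal combination or the staggered-entry machinery developed in the paper.

```python
import numpy as np
from math import sqrt, erf

def one_sample_logrank(times, events, ref_hazard_rate):
    """One-sample log-rank test against an exponential reference survival.

    times  : follow-up time for each patient (event or censoring time)
    events : 1 if the event was observed, 0 if censored
    ref_hazard_rate : hazard of the reference (standard-of-care) survival
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    observed = events.sum()
    # expected events = sum of the reference cumulative hazard at follow-up
    expected = np.sum(ref_hazard_rate * times)
    z = (observed - expected) / sqrt(expected)
    p_one_sided = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # P(Z <= z); small if fewer events
    return observed, expected, z, p_one_sided

# hypothetical phase IIa data: follow-up in months and event indicators
times = [6.0, 12.0, 3.5, 24.0, 18.0, 9.0, 30.0, 15.0]
events = [1, 0, 1, 0, 1, 0, 0, 1]
obs, exp_events, z, p = one_sample_logrank(times, events, ref_hazard_rate=0.08)
print(f"observed = {obs}, expected = {exp_events:.2f}, Z = {z:.2f}, one-sided p = {p:.3f}")
```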

  3. Block urn design - a new randomization algorithm for sequential trials with two or more treatments and balanced or unbalanced allocation

    Science.gov (United States)

    Zhao, Wenle; Weng, Yanqiu

    2011-01-01

    Permuted block design is the most popular randomization method used in clinical trials, especially for trials with more than two treatments and unbalanced allocation, because of its consistent imbalance control and simplicity in implementation. However, the risk of selection biases caused by high proportion of deterministic assignments is a cause of concern. Efron’s biased coin design and Wei’s urn design provide better allocation randomness without deterministic assignments, but they do not consistently control treatment imbalances. Alternative randomization designs with improved performances have been proposed over the past few decades, including Soares and Wu’s big stick design, which has high allocation randomness, but is limited to two-treatment balanced allocation scenarios only, and Berger’s maximal procedure design which has a high allocation randomness and a potential for more general trial scenarios, but lacks the explicit function for the conditional allocation probability and is more complex to implement than most other designs. The block urn design proposed in this paper combines the advantages of existing randomization designs while overcoming their limitations. Statistical properties of the new algorithm are assessed and compared to currently available designs via analytical and computer simulation approaches. The results suggest that the block urn design simultaneously provides consistent imbalance control and high allocation randomness. It can be easily implemented for sequential clinical trials with two or more treatments and balanced or unbalanced allocation. PMID:21893215
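    One way to implement urn mechanics consistent with this description is sketched below. This is a reading of the design rather than a verified reproduction of the paper's algorithm: the urn starts with a fixed number of balls per arm, each assignment removes a ball of the drawn arm, and a ball of every arm is returned whenever a balanced set is completed, which caps the imbalance while keeping most assignments non-deterministic.

```python
import random

random.seed(2024)

def block_urn_sequence(n_subjects, balls_per_arm=2, arms=("A", "B")):
    """Two-arm 1:1 allocation sequence using a block-urn-style rule.

    At each step the urn holds balls_per_arm balls per arm, plus one ball
    per completed balanced set, minus the assignments already made to that
    arm; the imbalance can therefore never exceed balls_per_arm.
    """
    counts = {a: 0 for a in arms}
    sequence = []
    for _ in range(n_subjects):
        completed_sets = min(counts.values())
        urn = [balls_per_arm + completed_sets - counts[a] for a in arms]
        arm = random.choices(arms, weights=urn)[0]
        counts[arm] += 1
        sequence.append(arm)
    return sequence

seq = block_urn_sequence(24)
print("".join(seq), "| final imbalance:", seq.count("A") - seq.count("B"))
```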

  4. Alcohol and marijuana use in adolescents' daily lives: a random sample of experiences.

    Science.gov (United States)

    Larson, R; Csikszentmihalyi, M; Freeman, M

    1984-07-01

    High school students filled out reports on their experiences at random times during their daily lives, including 48 occasions when they were using alcohol or marijuana. Alcohol use was reported primarily in the context of Friday and Saturday night social gatherings and was associated with a happy and gregarious subjective state. Marijuana use was reported across a wider range of situations and was associated with an average state that differed much less from ordinary experience.

  5. Optimal Design for Two-Level Random Assignment and Regression Discontinuity Studies

    Science.gov (United States)

    Rhoads, Christopher H.; Dye, Charles

    2016-01-01

    An important concern when planning research studies is to obtain maximum precision of an estimate of a treatment effect given a budget constraint. When research designs have a "multilevel" or "hierarchical" structure changes in sample size at different levels of the design will impact precision differently. Furthermore, there…

  6. Random or systematic sampling to detect a localised microbial contamination within a batch of food

    NARCIS (Netherlands)

    Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.

    2011-01-01

    Pathogenic microorganisms are known to be distributed heterogeneously in food products that are solid, semi-solid or powdered, like for instance peanut butter, cereals, or powdered milk. This complicates effective detection of the pathogens by sampling. Two-class sampling plans, which are deployed
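    A simulation comparing the two strategies for a single contiguous contaminated cluster is easy to set up. The sketch below uses invented batch, cluster, and sample sizes (it is not the authors' model, which works with two-class sampling plans and heterogeneous concentrations); it only estimates, for each strategy, the probability that at least one of the tested increments falls inside the contaminated region.

```python
import numpy as np

rng = np.random.default_rng(5)

BATCH = 10_000        # number of increments in the batch (invented)
CLUSTER = 200         # size of one contiguous contaminated cluster
N_SAMPLES = 30        # number of increments drawn and tested

def detection_probability(systematic, n_sims=20_000):
    """Probability that at least one tested increment is contaminated."""
    spacing = BATCH // N_SAMPLES
    hits = 0
    for _ in range(n_sims):
        start = rng.integers(0, BATCH - CLUSTER)
        contaminated = np.zeros(BATCH, dtype=bool)
        contaminated[start:start + CLUSTER] = True
        if systematic:
            offset = rng.integers(0, spacing)
            idx = offset + spacing * np.arange(N_SAMPLES)   # evenly spaced, random start
        else:
            idx = rng.choice(BATCH, size=N_SAMPLES, replace=False)
        hits += contaminated[idx].any()
    return hits / n_sims

print("random sampling     :", round(detection_probability(False), 3))
print("systematic sampling :", round(detection_probability(True), 3))
```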

  7. Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation

    Science.gov (United States)

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine

    2002-01-01

    New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxiliary methods must be employed. We describe a two-stage procedure where the...

  8. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    Science.gov (United States)

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The process of obtaining and analyzing water samples from the environment includes a number of steps that can affect the reported result. The equipment used to collect and filter samples, the bottles used for specific subsamples, any added preservatives, sample storage in the field, and shipment to the laboratory have the potential to affect how accurately samples represent the environment from which they were collected. During the early 1990s, the U.S. Geological Survey implemented policies to include the routine collection of quality-control samples in order to evaluate these effects and to ensure that water-quality data were adequately representing environmental conditions. Since that time, the U.S. Geological Survey Office of Water Quality has provided training in how to design effective field quality-control sampling programs and how to evaluate the resultant quality-control data. This report documents that training material and provides a reference for methods used to analyze quality-control data.

  9. Important considerations when analyzing health survey data collected using a complex sample design.

    Science.gov (United States)

    Sakshaug, Joseph W; West, Brady T

    2014-01-01

    Researchers often use survey data to answer important public health policy questions. Examples of common data sources used in public health research include the National Health and Nutrition Examination Survey, the National Health Interview Survey, the Medical Expenditure Panel Survey, and the National Ambulatory Medical Care Survey. All these surveys employ a complex sample design to recruit participants into the survey. When performing secondary analyses of complex sample survey data, it is necessary to remind ourselves of the key features of these designs that must be taken into account to produce valid statistical estimates.

  10. Incorporating Complex Sample Design Effects When Only Final Survey Weights are Available.

    Science.gov (United States)

    West, Brady T; McCabe, Sean Esteban

    2012-10-01

    This article considers the situation that arises when a survey data producer has collected data from a sample with a complex design (possibly featuring stratification of the population, cluster sampling, and / or unequal probabilities of selection), and for various reasons only provides secondary analysts of those survey data with a final survey weight for each respondent and "average" design effects for survey estimates computed from the data. In general, these "average" design effects, presumably computed by the data producer in a way that fully accounts for all of the complex sampling features, already incorporate possible increases in sampling variance due to the use of the survey weights in estimation. The secondary analyst of the survey data who then 1) uses the provided information to compute weighted estimates, 2) computes design-based standard errors reflecting variance in the weights (using Taylor Series Linearization, for example), and 3) inflates the estimated variances using the "average" design effects provided is applying a "double" adjustment to the standard errors for the effect of weighting on the variance estimates, leading to overly conservative inferences. We propose a simple method for preventing this problem, and provide a Stata program for applying appropriate adjustments to variance estimates in this situation. We illustrate two applications of the method to survey data from the Monitoring the Future (MTF) study, and conclude with suggested directions for future research in this area.
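    One plausible way to implement the kind of correction described, assuming the producer's average design effect already includes the unequal-weighting component, is to divide that design effect by Kish's approximation for the weighting effect before inflating a weighted, linearization-based standard error. The sketch below uses simulated data and an invented average design effect; it is an illustration of the general idea, not the authors' Stata program.

```python
import numpy as np

rng = np.random.default_rng(11)

# hypothetical respondent-level data: binary outcome y and final survey weights w
n = 1_500
w = rng.lognormal(mean=0.0, sigma=0.5, size=n)
y = rng.binomial(1, 0.3, size=n).astype(float)

avg_deff_from_producer = 2.4        # "average" design effect supplied with the data

# weighted mean and its Taylor-linearization standard error (weights only)
ybar = np.sum(w * y) / np.sum(w)
z = w * (y - ybar) / np.mean(w)     # linearized values for the ratio mean
se_weighted = np.std(z, ddof=1) / np.sqrt(n)

# Kish's approximation to the design effect due to unequal weights alone
deff_weights = 1.0 + np.var(w, ddof=1) / np.mean(w) ** 2

# strip the weighting component out of the producer's average deff so the
# effect of weighting is not counted twice
deff_residual = avg_deff_from_producer / deff_weights
se_adjusted = se_weighted * np.sqrt(deff_residual)

print(f"weighted mean             : {ybar:.3f}")
print(f"SE reflecting weights only: {se_weighted:.4f}")
print(f"Kish deff from weights    : {deff_weights:.2f}")
print(f"adjusted SE               : {se_adjusted:.4f}")
```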

  11. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample which is often very complex in a finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

  12. Distance software: design and analysis of distance sampling surveys for estimating population size.

    Science.gov (United States)

    Thomas, Len; Buckland, Stephen T; Rexstad, Eric A; Laake, Jeff L; Strindberg, Samantha; Hedley, Sharon L; Bishop, Jon Rb; Marques, Tiago A; Burnham, Kenneth P

    2010-02-01

    1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark-recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modelling analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the methods
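    The first analysis step mentioned above, modelling the detection function, can be illustrated for the simplest case. The sketch below is not the Distance software: it assumes an untruncated half-normal detection function on a line transect with invented survey values, estimates the scale parameter by maximum likelihood, converts it to an effective strip half-width, and turns the number of detections into a density estimate.

```python
import numpy as np

rng = np.random.default_rng(9)

# hypothetical line-transect survey
transect_length_km = 40.0
true_sigma = 0.05     # km; scale of the half-normal detection function

# perpendicular distances of detected animals (uniform animals + half-normal
# detection imply the observed distances follow a half-normal distribution)
distances = np.abs(rng.normal(0.0, true_sigma, size=120))

# maximum-likelihood estimate of the half-normal scale (untruncated case)
sigma_hat = np.sqrt(np.mean(distances ** 2))

# effective strip half-width: integral of g(x) = exp(-x^2 / (2 sigma^2)) over x >= 0
esw = sigma_hat * np.sqrt(np.pi / 2.0)

n = distances.size
density_per_km2 = n / (2.0 * esw * transect_length_km)
print(f"sigma_hat = {sigma_hat:.4f} km, effective strip half-width = {esw:.4f} km")
print(f"estimated density = {density_per_km2:.1f} animals per km^2")
```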

  13. Architectural Design Space Exploration of an FPGA-based Compressed Sampling Engine

    DEFF Research Database (Denmark)

    El-Sayed, Mohammad; Koch, Peter; Le Moullec, Yannick

    2015-01-01

    We present the architectural design space exploration of a compressed sampling engine for use in a wireless heart-rate monitoring system. We show how parallelism affects execution time at the register transfer level. Furthermore, two example solutions (modified semi-parallel and full-parallel) selected from the design space are prototyped on an Altera Cyclone III FPGA platform; in both cases the FPGA resource usage is less than 1% and the maximum frequency is 250 MHz.

  14. Design of the Polish Colonoscopy Screening Program: a randomized health services study.

    Science.gov (United States)

    Kaminski, Michal F; Kraszewska, Ewa; Rupinski, Maciej; Laskowska, Milena; Wieszczy, Paulina; Regula, Jaroslaw

    2015-12-01

    Colonoscopy screening for colorectal cancer has been implemented without evidence from randomized controlled trials quantifying its benefit and invariably as an opportunistic program, both of which are contrary to the European Union guideline recommendations. The aim of this paper is to describe the rationale and design of the first population-based colonoscopy screening program (PCSP), which was launched in Poland in 2012 as a randomized health services (RHS) study. The PCSP is a natural extension of opportunistic colonoscopy screening implemented in 2000. It uses colonoscopy capacity, a quality assurance program, and a network of 92 centers built up during the opportunistic screening phase to develop a countrywide PCSP. Within the PCSP, single screening colonoscopy is offered to a target population aged 55-64 years. The PCSP uses an RHS design, which means that eligible individuals drawn from population registries are randomly assigned to immediate or postponed invitation to screening. Individuals from birth cohorts that will reach the upper age limit for screening before full implementation of the PCSP are randomly assigned, in a 1:1:1 ratio, to "immediate" screening, "postponed" screening, or a "never invited" control group. The RHS design is a natural platform that will evaluate the effectiveness of screening, and compare different age ranges for screening, invitation procedures, and quality improvement interventions. Up to 2015, 24 centers have been developed, with 34.2% geographic coverage and 851,535 individuals enrolled. The PCSP sets an example for implementation of population-based colonoscopy screening with experimental design to ensure proper evaluation of its effectiveness. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Sampling design for long-term regional trends in marine rocky intertidal communities

    Science.gov (United States)

    Irvine, Gail V.; Shelley, Alice

    2013-01-01

    Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.

  17. Exploring effective sampling design for monitoring soil organic carbon in degraded Tibetan grasslands.

    Science.gov (United States)

    Chang, Xiaofeng; Bao, Xiaoying; Wang, Shiping; Zhu, Xiaoxue; Luo, Caiyun; Zhang, Zhenhua; Wilkes, Andreas

    2016-05-15

    The effects of climate change and human activities on grassland degradation and soil carbon stocks have become a focus of both research and policy. However, lack of research on appropriate sampling design prevents accurate assessment of soil carbon stocks and stock changes at community and regional scales. Here, we conducted an intensive survey with 1196 sampling sites over an area of 190 km² of degraded alpine meadow. Compared to lightly degraded meadow, soil organic carbon (SOC) stocks in moderately, heavily and extremely degraded meadow were reduced by 11.0%, 13.5% and 17.9%, respectively. Our field survey sampling design was overly intensive to estimate SOC status with a tolerable uncertainty of 10%. Power analysis showed that the optimal sampling density to achieve the desired accuracy would be 2, 3, 5 and 7 sites per 10 km² for lightly, moderately, heavily and extremely degraded meadows, respectively. If a subsequent paired sampling design with the optimum sample size were performed, assuming stock change rates predicted by experimental and modeling results, we estimate that about 5-10 years would be necessary to detect expected trends in SOC in the top 20 cm soil layer. Our results highlight the utility of conducting preliminary surveys to estimate the appropriate sampling density and avoid wasting resources due to over-sampling, and to estimate the sampling interval required to detect an expected sequestration rate. Future studies will be needed to evaluate spatial and temporal patterns of SOC variability. Copyright © 2016. Published by Elsevier Ltd.
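
    The optimal-density reasoning above rests on a standard preliminary-survey calculation: the number of sites needed to estimate a mean within a tolerable relative error. The Python sketch below implements that textbook formula, n = (t * CV / d)^2, solved iteratively; the coefficients of variation in the example are assumptions, not values taken from the study.

      import math
      from scipy import stats

      def sites_for_relative_error(cv, rel_error=0.10, alpha=0.05, n_start=30, max_iter=100):
          """Smallest n with n >= (t_{1-alpha/2, n-1} * cv / rel_error)^2, i.e. the
          number of sampling sites needed to estimate a mean (such as an SOC stock)
          within the stated relative error at confidence 1 - alpha."""
          n = n_start
          for _ in range(max_iter):
              t = stats.t.ppf(1 - alpha / 2, df=max(n - 1, 1))
              n_new = math.ceil((t * cv / rel_error) ** 2)
              if n_new == n:
                  break
              n = n_new
          return n

      # Hypothetical coefficients of variation by degradation class
      for label, cv in [("lightly", 0.15), ("moderately", 0.20), ("heavily", 0.30)]:
          print(label, sites_for_relative_error(cv))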

  18. Viking GC/MS mechanisms design and performance. [for analyzing samples of Martian surface

    Science.gov (United States)

    Chase, C. P.; Weilbach, A. O.

    1976-01-01

    The Viking Lander gas chromatograph/mass spectrometer will analyze pyrolyzed samples of the Martian surface for organic content. The surface-sample loader and pyrolyzer assembly (SSLPA) is described, along with the major problems encountered during design and testing. Three mechanisms were developed to implement the required SSLPA functions: (1) a soil loader that forces soil from a filled rotating funnel into each of three ovens located on a carriage, (2) a Geneva drive for rotating and precisely indexing the ovens to receive sample, and (3) a toggle-clamp mechanism for sealing the ovens by forcing circular double knife edges into gold sealing surfaces.

  19. Data-driven soft sensor design with multiple-rate sampled data

    DEFF Research Database (Denmark)

    Lin, Bao; Recke, Bodil; Knudsen, Jørgen K.H.

    2007-01-01

    Multi-rate systems are common in industrial processes where quality measurements have slower sampling rate than other process variables. Since inter-sample information is desirable for effective quality control, different approaches have been reported to estimate the quality between samples...... are implemented to design quality soft sensors for cement kiln processes using data collected from a plant log system. Preliminary results reveal that the WPLS approach is able to provide accurate one-step-ahead prediction. The regularized data lifting technique predicts the product quality of cement kiln systems...... reasonably well, demonstrating the potential to be used for effective quality control....

  20. DSM-5 field trials in the United States and Canada, Part I: study design, sampling strategy, implementation, and analytic approaches.

    Science.gov (United States)

    Clarke, Diana E; Narrow, William E; Regier, Darrel A; Kuramoto, S Janet; Kupfer, David J; Kuhl, Emily A; Greiner, Lisa; Kraemer, Helena C

    2013-01-01

    This article discusses the design, sampling strategy, implementation, and data analytic processes of the DSM-5 Field Trials. The DSM-5 Field Trials were conducted by using a test-retest reliability design with a stratified sampling approach across six adult and four pediatric sites in the United States and one adult site in Canada. A stratified random sampling approach was used to enhance precision in the estimation of the reliability coefficients. A web-based research electronic data capture system was used for simultaneous data collection from patients and clinicians across sites and for centralized data management. Weighted descriptive analyses, intraclass kappa and intraclass correlation coefficients for stratified samples, and receiver operating curves were computed. The DSM-5 Field Trials capitalized on advances since DSM-III and DSM-IV in statistical measures of reliability (i.e., intraclass kappa for stratified samples) and other recently developed measures to determine confidence intervals around kappa estimates. Diagnostic interviews using DSM-5 criteria were conducted by 279 clinicians of varied disciplines who received training comparable to what would be available to any clinician after publication of DSM-5. Overall, 2,246 patients with various diagnoses and levels of comorbidity were enrolled, of which over 86% were seen for two diagnostic interviews. A range of reliability coefficients were observed for the categorical diagnoses and dimensional measures. Multisite field trials and training comparable to what would be available to any clinician after publication of DSM-5 provided “real-world” testing of DSM-5 proposed diagnoses.

  1. Random sampling technique for ultra-fast computations of molecular opacities for exoplanet atmospheres

    Science.gov (United States)

    Min, M.

    2017-10-01

    Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line-lists for these molecules. The line lists available today contain for many species up to several billions of lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (∼3.5 × 10^5 lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
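
    A minimal Python sketch of the two ingredients described above (written independently of the authors' code): a Voigt profile evaluated through the Faddeeva function, and a sampler that draws frequencies exactly from the profile (a Voigt variate is the sum of a Gaussian and a Cauchy variate) while scaling the number of samples with line strength so the integrated opacity of every line is preserved. The scaling rule and all numbers are assumptions for illustration.

      import numpy as np
      from scipy.special import wofz

      def voigt(x, sigma, gamma):
          """Voigt profile (Gaussian std sigma convolved with Lorentzian HWHM gamma),
          evaluated via the Faddeeva function wofz."""
          z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
          return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

      def sample_line(strength, center, sigma, gamma, rng, n_min=10, n_max=10_000):
          """Draw frequencies from the Voigt profile (Gaussian + Cauchy sum) with a
          sample count that grows with line strength; each sample carries weight
          strength / n, so the integrated line opacity is conserved."""
          n = int(np.clip(n_min * np.sqrt(strength), n_min, n_max))
          nu = center + rng.normal(0.0, sigma, n) + gamma * rng.standard_cauchy(n)
          return nu, np.full(n, strength / n)

      rng = np.random.default_rng(1)
      nu, w = sample_line(strength=1.0e3, center=0.0, sigma=0.01, gamma=0.005, rng=rng)
      print(len(nu), w.sum(), voigt(0.0, 0.01, 0.005))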

  2. Tick-, mosquito-, and rodent-borne parasite sampling designs for the National Ecological Observatory Network

    Science.gov (United States)

    Springer, Yuri P; Hoekman, David; Johnson, Pieter TJ; Duffy, Paul A; Hufft, Rebecca A.; Barnett, David T.; Allan, Brian F.; Amman, Brian R; Barker, Christopher M; Barrera, Roberto; Beard, Charles B; Beati, Lorenza; Begon, Mike; Blackmore, Mark S; Bradshaw, William E; Brisson, Dustin; Calisher, Charles H.; Childs, James E; Diuk-Wasser, Maria A.; Douglass, Richard J; Eisen, Rebecca J; Foley, Desmond H; Foley, Janet E.; Gaff, Holly D; Gardner, Scott L; Ginsberg, Howard; Glass, Gregory E; Hamer, Sarah A; Hayden, Mary H; Hjelle, Brian; Holzapfel, Christina M; Juliano, Steven A.; Kramer, Laura D.; Kuenzi, Amy J.; LaDeau, Shannon L.; Livdahl, Todd P.; Mills, James N.; Moore, Chester G.; Morand, Serge; Nasci, Roger S.; Ogden, Nicholas H.; Ostfeld, Richard S.; Parmenter, Robert R.; Piesman, Joseph; Reisen, William K.; Savage, Harry M.; Sonenshine, Daniel E.; Swei, Andrea; Yabsley, Michael J.

    2016-01-01

    Parasites and pathogens are increasingly recognized as significant drivers of ecological and evolutionary change in natural ecosystems. Concurrently, transmission of infectious agents among human, livestock, and wildlife populations represents a growing threat to veterinary and human health. In light of these trends and the scarcity of long-term time series data on infection rates among vectors and reservoirs, the National Ecological Observatory Network (NEON) will collect measurements and samples of a suite of tick-, mosquito-, and rodent-borne parasites through a continental-scale surveillance program. Here, we describe the sampling designs for these efforts, highlighting sampling priorities, field and analytical methods, and the data as well as archived samples to be made available to the research community. Insights generated by this sampling will advance current understanding of and ability to predict changes in infection and disease dynamics in novel, interdisciplinary, and collaborative ways.

  3. Economic Design of Acceptance Sampling Plans in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Lie-Fern Hsu

    2012-01-01

    Full Text Available Supply Chain Management, which is concerned with material and information flows between facilities and the final customers, has been considered the most popular operations strategy for improving organizational competitiveness nowadays. With the advanced development of computer technology, it is getting easier to derive an acceptance sampling plan satisfying both the producer's and consumer's quality and risk requirements. However, all the available QC tables and computer software determine the sampling plan on a noneconomic basis. In this paper, we design an economic model to determine the optimal sampling plan in a two-stage supply chain that minimizes the producer's and the consumer's total quality cost while satisfying both the producer's and consumer's quality and risk requirements. Numerical examples show that the optimal sampling plan is quite sensitive to the producer's product quality. The product's inspection, internal failure, and postsale failure costs also have an effect on the optimal sampling plan.
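
    The risk requirements mentioned above are usually expressed through the operating-characteristic curve of a single sampling plan (n, c). The Python sketch below computes that curve and searches for the smallest plan meeting a producer's risk at an assumed AQL and a consumer's risk at an assumed LTPD; the economic cost terms that drive the paper's model are deliberately left out, and the quality levels shown are hypothetical.

      from math import comb

      def prob_accept(n, c, p):
          """Probability that a lot with fraction defective p is accepted when n
          items are sampled and at most c defectives are allowed."""
          return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

      def smallest_plan(aql, ltpd, producer_risk=0.05, consumer_risk=0.10, n_max=500):
          """Smallest single sampling plan (n, c) with acceptance probability at least
          1 - producer_risk at AQL quality and at most consumer_risk at LTPD quality;
          brute-force search, for illustration only."""
          for n in range(1, n_max + 1):
              c = 0
              while c <= n and prob_accept(n, c, aql) < 1 - producer_risk:
                  c += 1
              if c <= n and prob_accept(n, c, ltpd) <= consumer_risk:
                  return n, c
          return None

      print(smallest_plan(aql=0.01, ltpd=0.06))   # hypothetical quality levels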

  4. Hybrid random walk-linear discriminant analysis method for unwrapping quantitative phase microscopy images of biological samples

    Science.gov (United States)

    Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.

    2015-11-01

    Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples.

  5. Use of protein: creatinine ratio in a random spot urine sample for predicting significant proteinuria in diabetes mellitus.

    Science.gov (United States)

    Yadav, B K; Adhikari, S; Gyawali, P; Shrestha, R; Poudel, B; Khanal, M

    2010-06-01

    The present study was undertaken during a period of 6 months (September 2008-February 2009) to assess the correlation of 24-hour urine protein estimation with the random spot protein:creatinine (P:C) ratio among diabetic patients. The study comprised 144 patients aged 30-70 years, recruited from Kantipur hospital, Kathmandu. The 24-hr urine sample was collected, followed by a random spot urine sample. Both samples were analyzed for protein and creatinine excretion. Informed consent was taken from all participants. Sixteen inadequately collected urine samples, as defined by (predicted creatinine - measured creatinine)/predicted creatinine > 0.2, were excluded from analysis. The Spearman's rank correlation between the spot urine P:C ratio and 24-hr total protein was computed with the Statistical Package for Social Sciences. At the P:C ratio cutoff of 0.15 and reference method (24-hr urine protein) cutoff of 150 mg/day, the correlation coefficient was found to be 0.892. The random spot P:C ratio can therefore substitute for the 24-hr urine collection, but the cutoff should be carefully selected for different patient groups under different laboratory procedures and settings.

  6. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  7. OSIRIS-REx Touch-and-Go (TAG) Mission Design for Asteroid Sample Collection

    Science.gov (United States)

    May, Alexander; Sutter, Brian; Linn, Timothy; Bierhaus, Beau; Berry, Kevin; Mink, Ron

    2014-01-01

    The Origins Spectral Interpretation Resource Identification Security Regolith Explorer (OSIRIS-REx) mission is a NASA New Frontiers mission launching in September 2016 to rendezvous with the near-Earth asteroid Bennu in October 2018. After several months of proximity operations to characterize the asteroid, OSIRIS-REx flies a Touch-And-Go (TAG) trajectory to the asteroid's surface to collect at least 60 g of pristine regolith sample for Earth return. This paper provides mission and flight system overviews, with more details on the TAG mission design and key events that occur to safely and successfully collect the sample. An overview of the navigation performed relative to a chosen sample site, along with the maneuvers to reach the desired site is described. Safety monitoring during descent is performed with onboard sensors providing an option to abort, troubleshoot, and try again if necessary. Sample collection occurs using a collection device at the end of an articulating robotic arm during a brief five second contact period, while a constant force spring mechanism in the arm assists to rebound the spacecraft away from the surface. Finally, the sample is measured quantitatively utilizing the law of conservation of angular momentum, along with qualitative data from imagery of the sampling device. Upon sample mass verification, the arm places the sample into the Stardust-heritage Sample Return Capsule (SRC) for return to Earth in September 2023.

  8. Sampling Design of Soil Physical Properties in a Conilon Coffee Field

    Directory of Open Access Journals (Sweden)

    Eduardo Oliveira de Jesus Santos

    Full Text Available ABSTRACT Establishing the number of samples required to determine values of soil physical properties ultimately results in optimization of labor and allows better representation of such attributes. The objective of this study was to analyze the spatial variability of soil physical properties in a Conilon coffee field and propose a soil sampling method better attuned to conditions of the management system. The experiment was performed in a Conilon coffee field in Espírito Santo state, Brazil, under a 3.0 × 2.0 × 1.0 m (4,000 plants ha^-1) double spacing design. An irregular grid, with dimensions of 107 × 95.7 m and 65 sampling points, was set up. Soil samples were collected from the 0.00-0.20 m depth from each sampling point. Data were analyzed under descriptive statistical and geostatistical methods. Using statistical parameters, the adequate number of samples for analyzing the attributes under study was established, which ranged from 1 to 11 sampling points. With the exception of particle density, all soil physical properties showed a spatial dependence structure best fitted to the spherical model. Establishment of the number of samples and spatial variability for the physical properties of soils may be useful in developing sampling strategies that minimize costs for farmers within a tolerable and predictable level of error.

  9. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    Science.gov (United States)

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials at which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
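
    For readers unfamiliar with the mechanics, the Python sketch below shows blinded sample size re-estimation in its simplest two-arm form: the variance is re-estimated from the pooled internal-pilot data without unblinding (the one-sample 'lumped' estimator discussed above, which is biased upwards by the treatment effect) and plugged into a normal-approximation sample size formula. It is a generic illustration, not the three-arm 'gold standard' or Xing-Ganju formulas of the paper, and all numbers are made up.

      import math
      import numpy as np
      from scipy import stats

      def n_per_group(delta, sd, alpha=0.025, power=0.80):
          """Normal-approximation sample size per group for a one-sided two-sample
          test of a mean difference delta with common standard deviation sd."""
          z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
          return math.ceil(2 * (z * sd / delta) ** 2)

      def blinded_reestimate(pooled_pilot, delta, alpha=0.025, power=0.80):
          """Re-estimate the sample size from pooled (blinded) internal-pilot data
          using the one-sample variance estimator."""
          s_lumped = np.std(pooled_pilot, ddof=1)
          return n_per_group(delta, s_lumped, alpha, power)

      rng = np.random.default_rng(0)
      pilot = np.concatenate([rng.normal(0.0, 1.0, 20), rng.normal(0.4, 1.0, 20)])
      print(n_per_group(0.4, 1.0), blinded_reestimate(pilot, 0.4))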

  10. [Modeling and simulation activities to design sampling scheme for population pharmacokinetic study on amlodipine].

    Science.gov (United States)

    Zuo, Xiao-Cong; Yuan, Hong; Zhang, Bi-Kui; Ng, Chee M; Barrett, Jeff S; Yang, Guo-Ping; Huang, Zhi-Jun; Pei, Qi; Guo, Ren; Zhou, Ya-Nan; Jing, Ning-Ning; Di, Wu

    2012-07-01

    A reasonable sampling scheme is the important basis for establishing a reliable population pharmacokinetic model. Performing population pharmacokinetic analysis with nonlinear mixed-effects models is an effective method for estimating population pharmacokinetic parameters from sparse data. We designed the sampling scheme for amlodipine based on a D-optimal sampling strategy and the Bayesian estimation method. First, optimized sample scenarios were designed using WinPOPT software according to the aim, dosage regimen and visit schedule of the clinical study protocol, and the amlodipine population model reported by Rohatagi et al. Second, we created a NONMEM-formatted dataset (n = 400) for each sample scenario via Monte Carlo simulation. Third, the estimation of amlodipine pharmacokinetic parameters (clearance (CL/F), volume (V/F) and Ka) was based on the simulation results. All modeling and simulation exercises were conducted with NONMEM version 7.2. Finally, the accuracy and precision of the estimated parameters were evaluated using the mean prediction error (MPE) and the mean absolute prediction error (MAPE), respectively. Among the 6 schemes, schemes 6 and 3 have good accuracy and precision. MPE is 0.1% for scheme 6 and -0.6% for scheme 3, respectively. MAPE is 0.7% for both schemes. There is no significant difference in MPE and MAPE of volume among them. Therefore, we select scheme 3 as the final sample scenario because it has good accuracy and precision and fewer sample points. This research aims to provide a scientific and effective sampling scheme for the population pharmacokinetic (PK) study of amlodipine in patients with renal impairment and hypertension, and a scientific method for an optimum design in clinical population PK/PD (pharmacodynamics) research.
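
    The accuracy and precision metrics named above are simple to compute once each candidate scheme has been run through simulation and estimation. The Python sketch below shows the calculation on hypothetical clearance estimates; in the actual workflow the estimates would come from NONMEM fits of the 400 Monte Carlo datasets per scheme.

      import numpy as np

      def mpe_mape(estimates, true_value):
          """Mean prediction error (relative bias, %) and mean absolute prediction
          error (precision, %) of parameter estimates against the true value."""
          rel_err = (np.asarray(estimates) - true_value) / true_value
          return 100.0 * rel_err.mean(), 100.0 * np.abs(rel_err).mean()

      rng = np.random.default_rng(42)
      true_cl = 25.0                               # hypothetical true clearance (L/h)
      schemes = {
          "scheme 3": rng.normal(24.85, 0.20, 400),
          "scheme 6": rng.normal(25.03, 0.18, 400),
      }
      for name, est in schemes.items():
          mpe, mape = mpe_mape(est, true_cl)
          print(f"{name}: MPE = {mpe:.1f}%, MAPE = {mape:.1f}%")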

  11. Time as a dimension of the sample design in national-scale forest inventories

    Science.gov (United States)

    Francis Roesch; Paul Van Deusen

    2013-01-01

    Historically, the goal of forest inventories has been to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design was with selection probabilities based on land area observed at a discrete point in time. Time was not...

  12. An Alternative View of Some FIA Sample Design and Analysis Issues

    Science.gov (United States)

    Paul C. Van Deusen

    2005-01-01

    Sample design and analysis decisions are the result of compromises and inputs from many sources. The end result would likely change if different individuals or groups were involved in the planning process. Discussed here are some alternatives to the procedures that are currently being used for the annual inventory. The purpose is to indicate that alternatives exist and...

  13. Designing sampling schemes for effect monitoring of nutrient leaching from agricultural soils.

    NARCIS (Netherlands)

    Brus, D.J.; Noij, I.G.A.M.

    2008-01-01

    A general methodology for designing sampling schemes for monitoring is illustrated with a case study aimed at estimating the temporal change of the spatial mean P concentration in the topsoil of an agricultural field after implementation of the remediation measure. A before-after control-impact

  14. Sampling design for compliance monitoring of surface water quality: A case study in a Polder area

    NARCIS (Netherlands)

    Brus, D.J.; Knotters, M.

    2008-01-01

    International agreements such as the EU Water Framework Directive (WFD) ask for efficient sampling methods for monitoring natural resources. In this paper a general methodology for designing efficient, statistically sound monitoring schemes is described. An important decision is the choice between a

  15. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    Science.gov (United States)

    Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes

    2017-01-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...

  16. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Science.gov (United States)

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  17. Comparisons among Small Sample Equating Methods in a Common-Item Design

    Science.gov (United States)

    Kim, Sooyeon; Livingston, Samuel A.

    2010-01-01

    Score equating based on small samples of examinees is often inaccurate for the examinee populations. We conducted a series of resampling studies to investigate the accuracy of five methods of equating in a common-item design. The methods were chained equipercentile equating of smoothed distributions, chained linear equating, chained mean equating,…

  18. Sample Design and Estimation Procedures for a National Health Examination Survey of Children.

    Science.gov (United States)

    Bryant, E. Earl; And Others

    This report presents considerations for sample design and estimation procedures for the Health Examination Survey, one of the major survey programs employed by the National Center for Health Statistics. The survey collects data which provide national estimates and distributions of various health characteristics related to the growth and…

  19. RARtool: A MATLAB Software Package for Designing Response-Adaptive Randomized Clinical Trials with Time-to-Event Outcomes.

    Science.gov (United States)

    Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee

    2015-08-01

    Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool , a user interface software developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
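
    As a flavour of what a response-adaptive randomization procedure does (though not of RARtool's optimal-allocation targets or its time-to-event machinery), the Python sketch below implements the classic randomized play-the-winner urn for two treatments with binary responses; the response probabilities are invented for the example.

      import random

      def randomized_play_the_winner(success_prob, n_patients, alpha=1, beta=1, seed=7):
          """Randomized play-the-winner urn: start with alpha balls per arm; after a
          success add beta balls for that arm, after a failure add beta balls for the
          other arm, so allocation drifts toward the better-performing arm."""
          rng = random.Random(seed)
          urn = {"A": alpha, "B": alpha}
          allocations = []
          for _ in range(n_patients):
              arm = "A" if rng.random() < urn["A"] / (urn["A"] + urn["B"]) else "B"
              success = rng.random() < success_prob[arm]
              other = "B" if arm == "A" else "A"
              urn[arm if success else other] += beta
              allocations.append(arm)
          return allocations

      alloc = randomized_play_the_winner({"A": 0.7, "B": 0.4}, n_patients=200)
      print("fraction allocated to A:", alloc.count("A") / len(alloc))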

  20. THE RHETORICAL USE OF RANDOM SAMPLING: CRAFTING AND COMMUNICATING THE PUBLIC IMAGE OF POLLS AS A SCIENCE (1935-1948).

    Science.gov (United States)

    Lusinchi, Dominic

    2017-03-01

    The scientific pollsters (Archibald Crossley, George H. Gallup, and Elmo Roper) emerged onto the American news media scene in 1935. Much of what they did in the following years (1935-1948) was to promote both the political and scientific legitimacy of their enterprise. They sought to be recognized as the sole legitimate producers of public opinion. In this essay I examine the, mostly overlooked, rhetorical work deployed by the pollsters to publicize the scientific credentials of their polling activities, and the central role the concept of sampling has had in that pursuit. First, they distanced themselves from the failed straw poll by claiming that their sampling methodology based on quotas was informed by science. Second, although in practice they did not use random sampling, they relied on it rhetorically to derive the symbolic benefits of being associated with the "laws of probability." © 2017 Wiley Periodicals, Inc.

  1. Multivariate Multi-Objective Allocation in Stratified Random Sampling: A Game Theoretic Approach.

    Science.gov (United States)

    Muhammad, Yousaf Shad; Hussain, Ijaz; Shoukry, Alaa Mohamd

    2016-01-01

    We consider the problem of multivariate multi-objective allocation where no or only limited information on the within-stratum variances is available. Results show that a game theoretic approach (based on weighted goal programming) can be applied to sample size allocation problems. We use a simulation technique to determine the payoff matrix and to solve a minimax game.

  2. The effect of dead time on randomly sampled power spectral estimates

    DEFF Research Database (Denmark)

    Buchhave, Preben; Velte, Clara Marika; George, William K.

    2014-01-01

    consider both the effect on the measured spectrum of a finite sampling time, i.e., a finite time during which the signal is acquired, and a finite dead time, that is a time in which the signal processor is busy evaluating a data point and therefore unable to measure a subsequent data point arriving within...... the dead time delay....

  3. Phase microscopy of technical and biological samples through random phase modulation with a diffuser

    DEFF Research Database (Denmark)

    Almoro, Percival; Pedrini, Giancarlo; Gundu, Phanindra Narayan

    2010-01-01

    A technique for phase microscopy using a phase diffuser and a reconstruction algorithm is proposed. A magnified specimen wavefront is projected on the diffuser plane that modulates the wavefront into a speckle field. The speckle patterns at axially displaced planes are sampled and used in an iter...

  4. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
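
    The five components listed above all funnel into one arithmetic step. The Python sketch below shows that step for the simplest case, a two-sided comparison of two independent group means, where measurement scale, design, effect magnitude and variance are summarized by the standardized effect size d (normal approximation; the exact t-based answer is slightly larger).

      import math
      from scipy import stats

      def n_per_group_two_sample(d, alpha=0.05, power=0.80):
          """Approximate per-group sample size for a two-sided two-sample comparison
          detecting a standardized effect size d (Cohen's d)."""
          z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
          return math.ceil(2 * (z / d) ** 2)

      for d in (0.2, 0.5, 0.8):      # conventional small, medium, large effects
          print(d, n_per_group_two_sample(d))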

  5. Earth Entry Vehicle Design for Sample Return Missions Using M-SAPE

    Science.gov (United States)

    Samareh, Jamshid

    2015-01-01

    Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle (EEV). The primary focus of this paper is the examination of EEV design space for relevant sample return missions. Mission requirements for EEV concepts can be divided into three major groups: entry conditions (e.g., velocity and flight path angle), payload (e.g., mass, volume, and g-load limit), and vehicle characteristics (e.g., thermal protection system, structural topology, and landing concepts). The impacts of these requirements on the EEV design have been studied with an integrated system analysis tool, and the results will be discussed in details. In addition, through sensitivities analyses, critical design drivers that have been identified will be reviewed.

  6. Development of NASA's Sample Cartridge Assembly: Summary of GEDS Design, Development Testing, and Thermal Analyses

    Science.gov (United States)

    O'Connoer, Brian; Hernandez, Deborah; Hornsby, Linda; Brown, Maria; Horton-Mullins, Kathryn

    2017-01-01

    NASA's Sample Cartridge Assembly (SCA) project is responsible for designing and validating a payload that contains materials research samples in a sealed environment. The SCA will be heated in the European Space Agency's (ESA) Low Gradient Furnace (LGF) that is housed inside the Material Science Research Rack (MSRR) located on the International Space Station (ISS). The first Principle Investigator (PI) to utilize the SCA will focus on Gravitational Effects on Distortion in Sintering (GEDS) research. This paper will give a summary of the design and development test effort for the GEDS SCA and will discuss the role of thermal analysis in developing test profiles to meet the science and engineering requirements. Lessons learned will be reviewed and salient design features that may differ for each PI will be discussed.

  7. Randomness, Sample Size, Imagination and Metacognition: Making Judgments about Differences in Data Sets

    Science.gov (United States)

    Stack, Sue; Watson, Jane

    2013-01-01

    There is considerable research on the difficulties students have in conceptualising individual concepts of probability and statistics (see for example, Bryant & Nunes, 2012; Jones, 2005). The unit of work developed for the action research project described in this article is specifically designed to address some of these in order to help…

  8. Optimized design and analysis of sparse-sampling fMRI experiments

    Directory of Open Access Journals (Sweden)

    Tyler K Perrachione

    2013-04-01

    Full Text Available Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional timeseries. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of the repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically-informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to improve

  9. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min^-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)
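
    The central composite design used in the optimization step above has a simple coded structure: a two-level factorial core, axial (star) points, and replicated centre points. The Python sketch below generates such a matrix for two factors; mapping the coded levels to actual extraction settings (for example, methanol fraction and sonication time) is an assumption left to the analyst.

      import itertools
      import numpy as np

      def central_composite_design(k, alpha=None, center_runs=3):
          """Coded design matrix of a rotatable central composite design for k
          factors: 2^k factorial points, 2k axial points at +/- alpha, and
          center_runs centre points; alpha defaults to the rotatable (2^k)^(1/4)."""
          if alpha is None:
              alpha = (2.0 ** k) ** 0.25
          factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
          axial = np.zeros((2 * k, k))
          for j in range(k):
              axial[2 * j, j] = -alpha
              axial[2 * j + 1, j] = alpha
          return np.vstack([factorial, axial, np.zeros((center_runs, k))])

      print(central_composite_design(k=2))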

  10. Start-up designs for response-adaptive randomization procedures with sequential estimation.

    Science.gov (United States)

    Haines, Linda M; Sadiq, Hassan

    2015-09-20

    Response-adaptive randomization procedures are appropriate for clinical trials in which two or more treatments are to be compared, patients arrive sequentially and the response of each patient is recorded before the next patient arrives. However, for those procedures that involve sequential estimation of model parameters, start-up designs are commonly required in order to provide initial estimates of the parameters. In this paper, a suite of such start-up designs for two treatments and binary patient responses are considered and compared in terms of the numbers of patients required in order to give meaningful parameter estimates, the number of patients allocated to the better treatment, and the bias in the parameter estimates. It is shown that permuted block designs with blocks of size 4 are to be preferred over a wide range of parameter values. For the case of two treatments, normal responses and selected start-up procedures, a design incorporating complete randomization followed appropriately by repeats of one of the treatments yields the minimum expected number of patients and is to be preferred. Copyright © 2015 John Wiley & Sons, Ltd.
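
    The permuted block start-up design recommended above is straightforward to generate; the Python sketch below produces a blocked allocation sequence for two treatments with blocks of size 4 (any block size that is a multiple of the number of treatments works the same way).

      import random

      def permuted_blocks(n_patients, block_size=4, treatments=("A", "B"), seed=11):
          """Permuted block randomization: each block contains every treatment
          equally often, in a freshly shuffled order."""
          assert block_size % len(treatments) == 0
          rng = random.Random(seed)
          sequence = []
          while len(sequence) < n_patients:
              block = list(treatments) * (block_size // len(treatments))
              rng.shuffle(block)
              sequence.extend(block)
          return sequence[:n_patients]

      print(permuted_blocks(12))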

  11. Design of a Phase III cluster randomized trial to assess the efficacy and safety of a malaria transmission blocking vaccine.

    Science.gov (United States)

    Delrieu, Isabelle; Leboulleux, Didier; Ivinson, Karen; Gessner, Bradford D

    2015-03-24

    Vaccines interrupting Plasmodium falciparum malaria transmission targeting sexual, sporogonic, or mosquito-stage antigens (SSM-VIMT) are currently under development to reduce malaria transmission. An international group of malaria experts was established to evaluate the feasibility and optimal design of a Phase III cluster randomized trial (CRT) that could support regulatory review and approval of an SSM-VIMT. The consensus design is a CRT with a sentinel population randomly selected from defined inner and buffer zones in each cluster, a cluster size sufficient to assess true vaccine efficacy in the inner zone, and inclusion of ongoing assessment of vaccine impact stratified by distance of residence from the cluster edge. Trials should be conducted first in areas of moderate transmission, where SSM-VIMT impact should be greatest. Sample size estimates suggest that such a trial is feasible, and within the range of previously supported trials of malaria interventions, although substantial issues to implementation exist. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Placement Of Cardiac PacemaKEr Trial (POCKET) - rationale and design: a randomized controlled trial.

    Science.gov (United States)

    Magnusson, Peter; Wennström, Leo; Kastberg, Robert; Liv, Per

    2017-01-01

    A pacemaker system consists of one or two leads connected to a device that is implanted into a pocket formed just below the collarbone. This pocket is typically subcutaneous, that is, located just above the pectoral fascia. Even though the size of pacemakers has decreased markedly, complications due to superficial implants do occur. An alternative technique would be intramuscular placement of the pacemaker device, but there are no randomized controlled trials (RCTs) to support this approach, which is the rationale for the Placement Of Cardiac PacemaKEr Trial (POCKET). The aim is to study if intramuscular is superior to subcutaneous placement of a pacemaker pocket. In October 2016, we started to enroll 200 consecutive patients with an indication for bradycardia pacemaker implantation. Patients are randomized, using randomly varying block sizes and stratification by age group (cut-off: 65 years) and sex, to either subcutaneous or intramuscular implant. A concealed allocation procedure is employed, using sequentially numbered, sealed envelopes. Pocket site is blinded to the patient and in all subsequent care. The primary endpoint is patient overall satisfaction with the pocket location at 24 months as measured using a visual analog scale (VAS) 0-10. Secondary endpoints are: complications, patient-reported satisfaction at 1, 12, and 24 months (overall satisfaction, pain, discomfort, degree of unsightly appearance, movement problems, and sleep problems due to device). POCKET is a prospective interventional RCT designed to evaluate if intramuscular is superior to subcutaneous placement of a bradycardia pacemaker during a two-year follow-up.

  13. Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae

    Science.gov (United States)

    Huillet, Thierry E.

    2017-07-01

    We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
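
    For orientation, the classical (neutral, infinitely-many-alleles) Ewens sampling formula that the generalized versions above extend can be evaluated directly: the probability of observing a_j allele types represented j times each in a sample of n genes is n!/theta^(n) * prod_j theta^{a_j} / (j^{a_j} a_j!), with theta^(n) the rising factorial. A Python sketch of that classical formula (not of the paper's generalized ESFs):

      from math import factorial

      def ewens_probability(config, theta):
          """Classical Ewens sampling formula. config[j-1] = a_j is the number of
          allele types seen exactly j times; the sample size is n = sum j * a_j."""
          n = sum(j * a for j, a in enumerate(config, start=1))
          rising = 1.0
          for i in range(n):                  # theta^(n) = theta (theta+1) ... (theta+n-1)
              rising *= theta + i
          prob = factorial(n) / rising
          for j, a in enumerate(config, start=1):
              prob *= theta ** a / (j ** a * factorial(a))
          return prob

      # n = 5 genes: one allele seen twice and one seen three times (a_2 = a_3 = 1)
      print(ewens_probability([0, 1, 1, 0, 0], theta=1.0))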

  14. The psychometric properties of the AUDIT: a survey from a random sample of elderly Swedish adults.

    Science.gov (United States)

    Källmén, Håkan; Wennberg, Peter; Ramstedt, Mats; Hallgren, Mats

    2014-07-01

    Increasing alcohol consumption and related harms have been reported among the elderly population of Europe. Consequently, it is important to monitor patterns of alcohol use, and to use a valid and reliable tool when screening for risky consumption in this age group. The aim was to evaluate the internal consistency reliability and construct validity of the Alcohol Use Disorders Identification Test (AUDIT) in elderly Swedish adults, and to compare the results with the general Swedish population. Another aim was to calculate the level of alcohol consumption (AUDIT-C) to be used for comparison in future studies. The questionnaire was sent to 1459 Swedish adults aged 79-80 years with a response rate of 73.3%. Internal consistency reliability, were assessed using Cronbach alpha, and confirmatory factor analysis assessed construct validity of the Alcohol Use Disorders Identification Test (AUDIT) in elderly population as compared to a Swedish general population sample. The results showed that AUDIT was more reliable and valid among the Swedish general population sample than among the elderly and that Item 1 and 4 in AUDIT was less reliable and valid among the elderly. While the AUDIT showed acceptable psychometric properties in the general population sample, it's performance was of less quality among the elderly respondents. Further psychometric assessments of the AUDIT in elderly populations are required before it is implemented more widely.

  15. Assessment of long-term gas sampling design at two commercial manure-belt layer barns.

    Science.gov (United States)

    Chai, Li-Long; Ni, Ji-Qin; Chen, Yan; Diehl, Claude A; Heber, Albert J; Lim, Teng T

    2010-06-01

    Understanding temporal and spatial variations of aerial pollutant concentrations is important for designing air quality monitoring systems. In long-term and continuous air quality monitoring in large livestock and poultry barns, these systems usually use location-shared analyzers and sensors and can only sample air at limited number of locations. To assess the validity of the gas sampling design at a commercial layer farm, a new methodology was developed to map pollutant gas concentrations using portable sensors under steady-state or quasi-steady-state barn conditions. Three assessment tests were conducted from December 2008 to February 2009 in two manure-belt layer barns. Each barn was 140.2 m long and 19.5 m wide and had 250,000 birds. Each test included four measurements of ammonia and carbon dioxide concentrations at 20 locations that covered all operating fans, including six of the fans used in the long-term sampling that represented three zones along the lengths of the barns, to generate data for complete-barn monitoring. To simulate the long-term monitoring, gas concentrations from the six long-term sampling locations were extracted from the 20 assessment locations. Statistical analyses were performed to test the variances (F-test) and sample means (t test) between the 6- and 20-sample data. The study clearly demonstrated ammonia and carbon dioxide concentration gradients that were characterized by increasing concentrations from the west to east ends of the barns following the under-cage manure-belt travel direction. Mean concentrations increased from 7.1 to 47.7 parts per million (ppm) for ammonia and from 2303 to 3454 ppm for carbon dioxide from the west to east of the barns. Variations of mean gas concentrations were much less apparent between the south and north sides of the barns, because they were 21.2 and 20.9 ppm for ammonia and 2979 and 2951 ppm for carbon dioxide, respectively. The null hypotheses that the variances and means between the 6- and 20

  16. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav

    2018-01-17

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.
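
    To make the objective concrete, the Python sketch below computes the D-criterion log det M(w) for a weighted candidate set and improves it by randomized pairwise weight exchanges found with a coarse grid search. This is only a plain hill-climbing illustration of the idea, not the REX algorithm or its optimal exchange formulas; the quadratic-regression candidate set is an assumption chosen because its D-optimal design is known to concentrate weight at -1, 0 and 1.

      import numpy as np

      def log_det_info(X, w):
          """log det of the information matrix M(w) = X' diag(w) X."""
          sign, logdet = np.linalg.slogdet(X.T @ (w[:, None] * X))
          return logdet if sign > 0 else -np.inf

      def randomized_weight_exchange(X, n_iter=2000, seed=3):
          """Repeatedly pick two candidate points and shift the grid-searched amount
          of weight between them that most increases log det M(w)."""
          rng = np.random.default_rng(seed)
          n = X.shape[0]
          w = np.full(n, 1.0 / n)
          best = log_det_info(X, w)
          for _ in range(n_iter):
              i, j = rng.choice(n, size=2, replace=False)
              best_delta = 0.0
              for delta in np.linspace(-w[j], w[i], 11):   # keeps both weights >= 0
                  trial = w.copy()
                  trial[i] -= delta
                  trial[j] += delta
                  val = log_det_info(X, trial)
                  if val > best:
                      best, best_delta = val, delta
              w[i] -= best_delta
              w[j] += best_delta
          return w, best

      x = np.linspace(-1.0, 1.0, 21)
      X = np.column_stack([np.ones_like(x), x, x**2])      # quadratic regression on [-1, 1]
      w, crit = randomized_weight_exchange(X)
      print(np.round(w[w > 1e-3], 3), round(crit, 3))      # weight drifts toward -1, 0, 1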

  17. Classifier-guided sampling for discrete variable, discontinuous design space exploration: Convergence and computational performance

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shahan, David W. [HRL Labs., LLC, Malibu, CA (United States); Seepersad, Carolyn Conner [Univ. of Texas, Austin, TX (United States)

    2014-04-22

    A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodeling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates nondifferentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill-suited for conventional metamodeling techniques and too computationally expensive to be solved by population-based algorithms alone. In addition, the rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, when compared to genetic algorithms.
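
    A stripped-down illustration of the classifier-guided idea (using a naive Bayes classifier from scikit-learn rather than the Bayesian network classifier of the paper): label the designs already evaluated as preferred or not, train the classifier, and spend the next batch of expensive evaluations on the candidates it rates most promising. The integer-variable toy objective and all settings are invented for the sketch.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      def objective(x):
          """Toy discontinuous objective over two integer design variables (minimize)."""
          return np.abs(x[:, 0] - 7) + np.abs(x[:, 1] - 3) + 5 * ((x[:, 0] + x[:, 1]) % 4 == 0)

      rng = np.random.default_rng(0)
      X = rng.integers(0, 10, size=(30, 2))                 # initial evaluated designs
      y = objective(X)

      for _ in range(10):
          labels = (y <= np.quantile(y, 0.3)).astype(int)    # best 30% labelled "preferred"
          clf = GaussianNB().fit(X, labels)
          candidates = rng.integers(0, 10, size=(500, 2))
          p_good = clf.predict_proba(candidates)[:, 1]       # classifier guides the sampling
          new = candidates[np.argsort(-p_good)[:10]]         # evaluate the most promising ones
          X = np.vstack([X, new])
          y = np.concatenate([y, objective(new)])

      print("best design:", X[np.argmin(y)], "objective:", int(y.min()))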

  18. Power and sample size for the S:T repeated measures design combined with a linear mixed-effects model allowing for missing data.

    Science.gov (United States)

    Tango, Toshiro

    2017-02-13

    Tango (Biostatistics 2016) proposed a new repeated measures design called the S:T repeated measures design, combined with generalized linear mixed-effects models and sample size calculations for a test of the average treatment effect that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. In this article, we present formulas for calculating power and sample sizes for a test of the average treatment effect allowing for missing data within the framework of the S:T repeated measures design with a continuous response variable combined with a linear mixed-effects model. Examples are provided to illustrate the use of these formulas.
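
    As a point of reference for the formulas referred to above, the Python sketch below gives the simplest related calculation: the per-group sample size for comparing two groups on the mean of T equally correlated post-randomization measures (compound symmetry, complete data, no pre-randomization measures), which is the setting the S:T design formulas generalize. It is a normal-approximation sketch, not Tango's formula, and the example values are assumptions.

      import math
      from scipy import stats

      def n_per_group_repeated(delta, sigma, t_post, rho, alpha=0.05, power=0.80):
          """Per-group sample size for comparing two groups on the mean of t_post
          post-randomization measures with residual SD sigma, common correlation rho
          and treatment difference delta (normal approximation)."""
          z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
          var_subject_mean = sigma ** 2 * (1 + (t_post - 1) * rho) / t_post
          return math.ceil(2 * z ** 2 * var_subject_mean / delta ** 2)

      # Extra repeated measures help most when the within-subject correlation is low
      for t in (1, 3, 5):
          print(t, n_per_group_repeated(delta=0.5, sigma=1.0, t_post=t, rho=0.5))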

  19. Within-otolith variability in chemical fingerprints: implications for sampling designs and possible environmental interpretation.

    Science.gov (United States)

    Di Franco, Antonio; Bulleri, Fabio; Pennetta, Antonio; De Benedetto, Giuseppe; Clarke, K Robert; Guidetti, Paolo

    2014-01-01

    Largely used as a natural biological tag in studies of dispersal/connectivity of fish, otolith elemental fingerprinting is usually analyzed by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). LA-ICP-MS produces an elemental fingerprint at a discrete time-point in the life of a fish and can generate data on within-otolith variability of that fingerprint. The presence of within-otolith variability has been previously acknowledged but not incorporated into experimental designs on the presumed, but untested, grounds of both its negligibility compared to among-otolith variability and of spatial autocorrelation among multiple ablations within an otolith. Here, using a hierarchical sampling design of spatial variation at multiple scales in otolith chemical fingerprints for two Mediterranean coastal fishes, we explore: 1) whether multiple ablations within an otolith can be used as independent replicates for significance tests among otoliths, and 2) the implications of incorporating within-otolith variability when assessing spatial variability in otolith chemistry at a hierarchy of spatial scales (different fish, from different sites, at different locations on the Apulian Adriatic coast). We find that multiple ablations along the same daily rings do not necessarily exhibit spatial dependency within the otolith and can be used to estimate residual variability in a hierarchical sampling design. Inclusion of within-otolith measurements reveals that individuals at the same site can show significant variability in elemental uptake. Within-otolith variability examined across the spatial hierarchy identifies differences between the two fish species investigated, and this finding leads to discussion of the potential for within-otolith variability to be used as a marker for fish exposure to stressful conditions. We also demonstrate that a 'cost'-optimal allocation of sampling effort should typically include some level of within-otolith replication in the sampling design.

  20. Cluster randomized trials utilizing primary care electronic health records: methodological issues in design, conduct, and analysis (eCRT Study).

    Science.gov (United States)

    Gulliford, Martin C; van Staa, Tjeerd P; McDermott, Lisa; McCann, Gerard; Charlton, Judith; Dregan, Alex

    2014-06-11

    There is growing interest in conducting clinical and cluster randomized trials through electronic health records. This paper reports on the methodological issues identified during the implementation of two cluster randomized trials using the electronic health records of the Clinical Practice Research Datalink (CPRD). Two trials were completed in primary care: one aimed to reduce inappropriate antibiotic prescribing for acute respiratory infection; the other aimed to increase physician adherence with secondary prevention interventions after first stroke. The paper draws on documentary records and trial datasets to report on the methodological experience with respect to research ethics and research governance approval, general practice recruitment and allocation, sample size calculation and power, intervention implementation, and trial analysis. We obtained research governance approvals from more than 150 primary care organizations in England, Wales, and Scotland. There were 104 CPRD general practices recruited to the antibiotic trial and 106 to the stroke trial, with the target number of practices being recruited within six months. Interventions were installed into practice information systems remotely over the internet. The mean number of participants per practice was 5,588 in the antibiotic trial and 110 in the stroke trial, with the coefficient of variation of practice sizes being 0.53 and 0.56 respectively. Outcome measures showed substantial correlations between the 12 months before and after the intervention, with coefficients ranging from 0.42 for diastolic blood pressure to 0.91 for the proportion of consultations with antibiotics prescribed. Defining practice and participant eligibility for analysis requires careful consideration. Cluster randomized trials may be performed efficiently in large samples from UK general practices using the electronic health records of a primary care database. The geographical dispersal of trial sites presents a difficulty for research governance approval and intervention implementation.

  1. Output-feedback sampled-data control design for linear parameter-varying systems with delay

    Science.gov (United States)

    Ramezanifar, Amin; Mohammadpour, Javad; Grigoriadis, Karolos M.

    2014-12-01

    In this paper, we address the sampled-data output-feedback control design problem for continuous-time linear parameter-varying systems with time-varying delay in the system states. Due to the combination of the plant's continuous-time dynamics and the controller's discrete-time dynamics connected through A/D and D/A converter devices, the closed-loop system is a hybrid system. In order to analyse this hybrid system from stability and performance perspectives, we use the input-delay approach to map the closed-loop system into the continuous-time domain with delay in the states. This results in a closed-loop system containing two types of delays, the system internal delay and the one imposed by the mapping. Next, we use delay-dependent conditions for analysis of stability and induced L2-norm performance, which result in a sampled-data control synthesis procedure. The proposed output-feedback sampled-data controller is obtained based on the solution to a linear matrix inequality optimisation problem using a set of appropriately defined slack variables. A numerical example of a milling machine is presented to demonstrate the viability of the proposed sampled-data control design method to satisfy the stability and performance objectives even with a varying sampling rate.

  2. Design and methods for a randomized clinical trial treating comorbid obesity and major depressive disorder

    Directory of Open Access Journals (Sweden)

    Crawford Sybil

    2008-09-01

    Full Text Available Abstract Background: Obesity is often comorbid with depression and individuals with this comorbidity fare worse in behavioral weight loss treatment. Treating depression directly prior to behavioral weight loss treatment might bolster weight loss outcomes in this population, but this has not yet been tested in a randomized clinical trial. Methods and design: This randomized clinical trial will examine whether behavior therapy for depression administered prior to standard weight loss treatment produces greater weight loss than standard weight loss treatment alone. Obese women with major depressive disorder (N = 174) will be recruited from primary care clinics and the community and randomly assigned to one of the two treatment conditions. Treatment will last 2 years, and will include a 6-month intensive treatment phase followed by an 18-month maintenance phase. Follow-up assessment will occur at 6 months and 1 and 2 years following randomization. The primary outcome is weight loss. The study was designed to provide 90% power for detecting a weight change difference between conditions of 3.1 kg (standard deviation of 5.5 kg) at 1 year, assuming a 25% rate of loss to follow-up. Secondary outcomes include depression, physical activity, dietary intake, psychosocial variables and cardiovascular risk factors. Potential mediators (e.g., adherence, depression, physical activity and caloric intake) of the intervention effect on weight change will also be examined. Discussion: Treating depression before administering intensive health behavior interventions could potentially boost the impact on both mental and physical health outcomes. Trial registration: NCT00572520
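    As a back-of-envelope check of the stated power calculation (90% power, a 3.1 kg difference with SD 5.5 kg, and 25% loss to follow-up; a two-sided alpha of 0.05 is assumed here), a standard two-sample normal approximation reproduces roughly the planned enrollment. This is an illustrative calculation, not the investigators' exact method.

```python
from scipy.stats import norm

alpha, power = 0.05, 0.90
delta, sd, attrition = 3.1, 5.5, 0.25          # difference (kg), SD (kg), loss to follow-up

z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_completers = 2 * (z * sd / delta) ** 2       # completers needed per arm
n_randomized = n_completers / (1 - attrition)  # inflate for 25% dropout

# About 66 completers and about 88 randomized per arm, i.e. roughly 176 in total,
# in line with the planned N = 174 up to rounding and modelling choices.
print(round(n_completers, 1), round(n_randomized, 1))
```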

  3. Lorcaserin plus lifestyle modification for weight loss maintenance: Rationale and design for a randomized controlled trial.

    Science.gov (United States)

    Tronieri, Jena Shaw; Alfaris, Nasreen; Chao, Ariana M; Pearl, Rebecca L; Alamuddin, Naji; Bakizada, Zayna M; Berkowitz, Robert I; Wadden, Thomas A

    2017-08-01

    Few studies have examined the efficacy of recently approved medications for chronic weight management in facilitating the maintenance of lost weight. This paper provides an overview of the design and rationale for a trial investigating whether lorcaserin, when combined with behavioral weight loss maintenance sessions (WLM), will facilitate the maintenance of losses of ≥5% of initial weight. In this two-phase trial, participants with obesity will enroll in a 14-week run-in diet program consisting of weekly group lifestyle modification sessions and a 1000-1200 kcal/d meal replacement diet. Participants who complete this weight induction phase and lose at least 5% of initial weight will then be randomized to 52 weeks of WLM plus lorcaserin or WLM plus placebo. We hypothesize that at 52 weeks post-randomization, participants assigned to WLM plus lorcaserin will achieve significantly better maintenance of the prior 5% weight loss. We will recruit 182 adults with obesity to participate in the diet run-in, 136 of whom (75%) are expected to become eligible for the randomized controlled trial. Co-primary outcomes include the percentage of participants who maintain a loss of at least 5% of initial weight at week 52 and change in weight (kg) from randomization to week 52. This two-phase design will allow us to determine the potential efficacy of chronic weight management using lorcaserin for maintaining initial losses of at least 5% body weight, induced by the use of a structured meal-replacement diet. This combined approach holds promise of achieving larger long-term weight losses. NCT02388568 on ClinicalTrials.gov. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Effect of the Mediterranean diet on heart failure biomarkers: a randomized sample from the PREDIMED trial.

    Science.gov (United States)

    Fitó, Montserrat; Estruch, Ramón; Salas-Salvadó, Jordi; Martínez-Gonzalez, Miguel Angel; Arós, Fernando; Vila, Joan; Corella, Dolores; Díaz, Oscar; Sáez, Guillermo; de la Torre, Rafael; Mitjavila, María-Teresa; Muñoz, Miguel Angel; Lamuela-Raventós, Rosa-María; Ruiz-Gutierrez, Valentina; Fiol, Miquel; Gómez-Gracia, Enrique; Lapetra, José; Ros, Emilio; Serra-Majem, Lluis; Covas, María-Isabel

    2014-05-01

    Scarce data are available on the effect of the traditional Mediterranean diet (TMD) on heart failure biomarkers. We assessed the effect of TMD on biomarkers related to heart failure in a high cardiovascular disease risk population. A total of 930 subjects at high cardiovascular risk (420 men and 510 women) were recruited in the framework of a multicentre, randomized, controlled, parallel-group clinical trial directed at testing the efficacy of the TMD on the primary prevention of cardiovascular disease (The PREDIMED Study). Participants were assigned to a low-fat diet (control, n = 310) or one of two TMDs [TMD + virgin olive oil (VOO) or TMD + nuts]. Depending on group assignment, participants received free provision of extra-virgin olive oil, mixed nuts, or small non-food gifts. After 1 year of intervention, both TMDs decreased plasma N-terminal pro-brain natriuretic peptide, with changes reaching significance vs. the control group (P < 0.05). Participants at high cardiovascular disease (CVD) risk who improved their diet toward a TMD pattern reduced their N-terminal pro-brain natriuretic peptide compared with those assigned to a low-fat diet. The same was found for in vivo oxidized low-density lipoprotein and lipoprotein(a) plasma concentrations after the TMD + VOO diet. From our results TMD could be a useful tool to mitigate against risk factors for heart failure. From our results TMD could modify markers of heart failure towards a more protective mode. © 2014 The Authors. European Journal of Heart Failure © 2014 European Society of Cardiology.

  5. Sequential sampling model for multiattribute choice alternatives with random attention time and processing order.

    Science.gov (United States)

    Diederich, Adele; Oswald, Peter

    2014-01-01

    A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process, attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as how long each attribute is considered (the attention time), influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time with different variances are investigated. Depending on the time and order schedule, the model predicts a rich choice probability/choice response time pattern, including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability p0 > 0 of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions are also true for the widely applied Wiener process.

  6. Sequential sampling model for multiattribute choice alternatives with random attention time and processing order

    Directory of Open Access Journals (Sweden)

    Adele eDiederich

    2014-09-01

    Full Text Available A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process, attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as how long each attribute is considered (the attention time), influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time, including deterministic, Poisson, binomial, geometric, and uniform with different variances, are investigated. Depending on the time and order schedule, the model predicts a rich choice probability/choice response time pattern, including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability p0 > 0 of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions are also true for the widely applied Wiener process.
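    The two records above describe the underlying accumulation process only verbally. The sketch below simulates a single-attribute Ornstein-Uhlenbeck accumulator with two absorbing bounds and a finite deliberation horizon, using a simple Euler discretization rather than the discrete birth-death approximation used in the paper; all parameter values are illustrative.

```python
import numpy as np

def ou_choice_sim(drift=0.2, decay=0.1, sigma=1.0, bound=1.5,
                  horizon=3.0, dt=0.005, n_trials=20000, seed=0):
    """Single-attribute OU accumulator with two absorbing bounds and a finite
    deliberation horizon; returns P(upper), P(lower), P(no decision)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)
    outcome = np.zeros(n_trials, dtype=int)          # 0 undecided, +1 upper, -1 lower bound
    for _ in range(int(horizon / dt)):
        live = outcome == 0
        if not live.any():
            break
        noise = rng.standard_normal(live.sum())
        x[live] += (drift - decay * x[live]) * dt + sigma * np.sqrt(dt) * noise
        outcome[live & (x >= bound)] = 1
        outcome[live & (x <= -bound)] = -1
    p_up, p_down = (outcome == 1).mean(), (outcome == -1).mean()
    return p_up, p_down, 1.0 - p_up - p_down         # last term is the model's p0 > 0

print(ou_choice_sim())
```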

  7. False Dichotomies and Health Policy Research Designs: Randomized Trials Are Not Always the Answer.

    Science.gov (United States)

    Soumerai, Stephen B; Ceccarelli, Rachel; Koppel, Ross

    2017-02-01

    Some medical scientists argue that only data from randomized controlled trials (RCTs) are trustworthy. They claim data from natural experiments and administrative data sets are always spurious and cannot be used to evaluate health policies and other population-wide phenomena in the real world. While many acknowledge biases caused by poor study designs, in this article we argue that several valid designs using administrative data can produce strong findings, particularly the interrupted time series (ITS) design. Many policy studies neither permit nor require an RCT for cause-and-effect inference. Framing our arguments using Campbell and Stanley's classic research design monograph, we show that several "quasi-experimental" designs, especially interrupted time series (ITS), can estimate valid effects (or non-effects) of health interventions and policies as diverse as public insurance coverage, speed limits, hospital safety programs, drug abuse regulation and withdrawal of drugs from the market. We further note the recent rapid uptake of ITS and argue for expanded training in quasi-experimental designs in medical and graduate schools and in post-doctoral curricula.
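    The interrupted time series design advocated here is commonly analyzed with a segmented regression of the form y_t = b0 + b1*time + b2*policy + b3*time_since_policy + e_t, where b2 captures the immediate level change and b3 the change in trend. The following self-contained illustration on simulated monthly data shows that standard formulation; it is a generic sketch, not an analysis taken from the article.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = np.arange(48, dtype=float)
policy = (months >= 24).astype(float)                # indicator: policy in force
time_after = np.where(months >= 24, months - 24, 0.0)

# Simulated outcome: baseline upward trend, then an immediate drop and a
# slightly flatter trend once the policy starts (all values are made up).
y = 50 + 0.3 * months - 6.0 * policy - 0.1 * time_after + rng.normal(0, 2, months.size)

X = sm.add_constant(np.column_stack([months, policy, time_after]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # params: [intercept, pre-trend, level change, trend change]
```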

  8. Design of Energy Aware Adder Circuits Considering Random Intra-Die Process Variations

    Directory of Open Access Journals (Sweden)

    Marco Lanuzza

    2011-04-01

    Full Text Available Energy consumption is one of the main barriers to current high-performance designs. Moreover, the increased variability experienced in advanced process technologies implies further timing yield concerns and therefore intensifies this obstacle. Thus, proper techniques to achieve robust designs are a critical requirement for integrated circuit success. In this paper, the influence of intra-die random process variations is analyzed considering the particular case of the design of energy aware adder circuits. Five well known adder circuits were designed exploiting an industrial 45 nm static complementary metal-oxide semiconductor (CMOS) standard cell library. The designed adders were comparatively evaluated under different energy constraints. As a main result, the performed analysis demonstrates that, for a given energy budget, simpler circuits (which are conventionally identified as low-energy slow architectures) operating at higher power supply voltages can achieve a timing yield significantly better than more complex faster adders when used in low-power design with supply voltages lower than nominal.

  9. Generalized SAMPLE SIZE Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    Science.gov (United States)

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, in order to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.
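    The sample size formulas themselves are not reproduced in this record. For orientation, a generic three-level random intercept model with one predictor at each level, of the kind the abstract refers to, can be written as below; the notation is mine, not the author's.

```latex
% individual i in group j in cluster k
y_{ijk} = \beta_{0jk} + \beta_{1}\, x_{ijk} + e_{ijk}, \qquad e_{ijk} \sim N(0, \sigma^{2}_{e}) \\
\beta_{0jk} = \gamma_{00k} + \gamma_{01}\, w_{jk} + u_{jk}, \qquad u_{jk} \sim N(0, \sigma^{2}_{u}) \\
\gamma_{00k} = \pi_{000} + \pi_{001}\, z_{k} + v_{k}, \qquad v_{k} \sim N(0, \sigma^{2}_{v})
```

    When the individual-level predictor is group-mean centered and its group and cluster means are entered as additional predictors, the contextual effects are the differences between the between-level and within-level coefficients; the required sample size then depends on the variance components and on the reliability of the aggregated means, which is the dependence the abstract's formulas quantify.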

  10. Assessing the comparative effectiveness of Tai Chi versus physical therapy for knee osteoarthritis: design and rationale for a randomized trial.

    Science.gov (United States)

    Wang, Chenchen; Iversen, Maura D; McAlindon, Timothy; Harvey, William F; Wong, John B; Fielding, Roger A; Driban, Jeffrey B; Price, Lori Lyn; Rones, Ramel; Gamache, Tressa; Schmid, Christopher H

    2014-09-08

    Knee osteoarthritis (OA) causes pain and long-term disability with annual healthcare costs exceeding $185 billion in the United States. Few medical remedies effectively influence the course of the disease. Finding effective treatments to maintain function and quality of life in patients with knee OA is one of the national priorities identified by the Institute of Medicine. We are currently conducting the first comparative effectiveness and cost-effectiveness randomized trial of Tai Chi versus a physical-therapy regimen in a sample of patients with symptomatic and radiographically confirmed knee OA. This article describes the design and conduct of this trial. A single-center, 52-week, comparative effectiveness randomized controlled trial of Tai Chi versus a standardized physical-therapy regimen is being conducted at an urban tertiary medical center in Boston, Massachusetts. The study population consists of adults ≥ 40 years of age with symptomatic and radiographic knee OA (American College of Rheumatology criteria). Participants are randomly allocated to either 12 weeks of Tai Chi (2x/week) or Physical Therapy (2x/week for 6 weeks, followed by 6 weeks of rigorously monitored home exercise). The primary outcome measure is the pain (Western Ontario and McMaster Universities, WOMAC) subscale at 12 weeks. Secondary outcomes include WOMAC stiffness and function domain scores, lower extremity strength and power, functional balance, physical performance tests, psychological and psychosocial functioning, durability effects, health related quality of life, and healthcare utilization at 12, 24 and 52 weeks. This study will be the first randomized comparative-effectiveness and cost-effectiveness trial of Tai Chi versus Physical Therapy in a large symptomatic knee OA population with long-term follow up. We present here a robust and well-designed randomized comparative-effectiveness trial that also explores multiple outcomes to elucidate the potential mechanisms of mind-body therapy.

  11. Comparison of Detrusor Muscle Sampling Rate in Monopolar and Bipolar Transurethral Resection of Bladder Tumor: A Randomized Trial.

    Science.gov (United States)

    Teoh, Jeremy Yuen-Chun; Chan, Eddie Shu-Yin; Yip, Siu-Ying; Tam, Ho-Man; Chiu, Peter Ka-Fung; Yee, Chi-Hang; Wong, Hon-Ming; Chan, Chi-Kwok; Hou, Simon See-Ming; Ng, Chi-Fai

    2017-05-01

    Our aim was to investigate the detrusor muscle sampling rate after monopolar versus bipolar transurethral resection of bladder tumor (TURBT). This was a single-center, prospective, randomized, phase III trial on monopolar versus bipolar TURBT. Baseline patient characteristics, disease characteristics and perioperative outcomes were compared, with the primary outcome being the detrusor muscle sampling rate in the TURBT specimen. Multivariate logistic regression analyses on detrusor muscle sampling were performed. From May 2012 to December 2015, a total of 160 patients with similar baseline characteristics were randomized to receive monopolar or bipolar TURBT. Fewer patients in the bipolar TURBT group required postoperative irrigation than patients in the monopolar TURBT group (18.7 vs. 43%; p = 0.001). In the whole cohort, no significant difference in the detrusor muscle sampling rates was observed between the bipolar and monopolar TURBT groups (77.3 vs. 63.3%; p = 0.057). In patients with urothelial carcinoma, bipolar TURBT achieved a higher detrusor muscle sampling rate than monopolar TURBT (84.6 vs. 67.7%; p = 0.025). On multivariate analyses, bipolar TURBT (odds ratio [OR] 2.23, 95% confidence interval [CI] 1.03-4.81; p = 0.042) and larger tumor size (OR 1.04, 95% CI 1.01-1.08; p = 0.022) were significantly associated with detrusor muscle sampling in the whole cohort. In addition, bipolar TURBT (OR 2.88, 95% CI 1.10-7.53; p = 0.031), larger tumor size (OR 1.05, 95% CI 1.01-1.10; p = 0.035), and female sex (OR 3.25, 95% CI 1.10-9.59; p = 0.033) were significantly associated with detrusor muscle sampling in patients with urothelial carcinoma. There was a trend towards a superior detrusor muscle sampling rate after bipolar TURBT. Further studies are needed to determine its implications on disease recurrence and progression.

  12. Randomized block experimental designs can increase the power and reproducibility of laboratory animal experiments.

    Science.gov (United States)

    Festing, Michael F W

    2014-01-01

    Randomized block experimental designs have been widely used in agricultural and industrial research for many decades. Usually they are more powerful, have higher external validity, are less subject to bias, and produce more reproducible results than the completely randomized designs typically used in research involving laboratory animals. Reproducibility can be further increased by using time as a blocking factor. These benefits can be achieved at no extra cost. A small experiment investigating the effect of an antioxidant on the activity of a liver enzyme in four inbred mouse strains, which had two replications (blocks) separated by a period of two months, illustrates this approach. The widespread failure to use these designs more widely in research involving laboratory animals has probably led to a substantial waste of animals, money, and scientific resources and slowed down the development of new treatments for human and animal diseases. © The Author 2014. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  13. The Power of the Test for Treatment Effects in Three-Level Block Randomized Designs

    Science.gov (United States)

    Konstantopoulos, Spyros

    2008-01-01

    Experiments that involve nested structures may assign treatment conditions either to subgroups (such as classrooms) or individuals within subgroups (such as students). The design of such experiments requires knowledge of the intraclass correlation structure to compute the sample sizes necessary to achieve adequate power to detect the treatment…

  14. Analysis of Clinical Cohort Data Using Nested Case-control and Case-cohort Sampling Designs. A Powerful and Economical Tool.

    Science.gov (United States)

    Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M

    2015-01-01

    Sampling from a large cohort in order to derive a subsample that would be sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies, however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice, with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort with the power of a nested case-control and a case-cohort design. We compare formulas for power for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample, and a full cohort analysis. For each design we calculate the standard error for estimated regression coefficients and the mean number of distinct persons for whom covariate information is required. The formula for the power of a nested case-control design and the power of a case-cohort design is directly connected to the power of a cohort study using the well-known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients when using a sampling design instead of analyzing the full cohort.
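    For reference, the Schoenfeld result the record relies on states that, for a two-group comparison in a Cox proportional hazards model, the required number of events is approximately the expression below, where p is the proportion allocated to one group and theta the hazard ratio to be detected; the (m+1)/m inflation commonly quoted for a nested case-control design with m controls per case is also shown. Both are standard results rather than formulas quoted from this record.

```latex
d = \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}}{p\,(1-p)\,\bigl(\log \theta\bigr)^{2}},
\qquad
d_{\mathrm{NCC}} \approx \frac{m+1}{m}\, d
```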

  15. Designing a valid randomized pragmatic primary care implementation trial: the my own health report (MOHR) project.

    Science.gov (United States)

    Krist, Alex H; Glenn, Beth A; Glasgow, Russell E; Balasubramanian, Bijal A; Chambers, David A; Fernandez, Maria E; Heurtin-Roberts, Suzanne; Kessler, Rodger; Ory, Marcia G; Phillips, Siobhan M; Ritzwoller, Debra P; Roby, Dylan H; Rodriguez, Hector P; Sabo, Roy T; Sheinfeld Gorin, Sherri N; Stange, Kurt C

    2013-06-25

    There is a pressing need for greater attention to patient-centered health behavior and psychosocial issues in primary care, and for practical tools, study designs and results of clinical and policy relevance. Our goal is to design a scientifically rigorous and valid pragmatic trial to test whether primary care practices can systematically implement the collection of patient-reported information and provide patients needed advice, goal setting, and counseling in response. This manuscript reports on the iterative design of the My Own Health Report (MOHR) study, a cluster randomized delayed intervention trial. Nine pairs of diverse primary care practices will be randomized to early intervention or delayed intervention four months later. The intervention consists of fielding the MOHR assessment, which addresses 10 domains of health behaviors and psychosocial issues, and subsequent provision of needed counseling and support for patients presenting for wellness or chronic care. As a pragmatic participatory trial, stakeholder groups including practice partners and patients have been engaged throughout the study design to account for local resources and characteristics. Participatory tasks include identifying MOHR assessment content, refining the study design, providing input on outcome measures, and designing the implementation workflow. Study outcomes include the intervention reach (percent of patients offered and completing the MOHR assessment), effectiveness (patients reporting being asked about topics, setting change goals, and receiving assistance in early versus delayed intervention practices), contextual factors influencing outcomes, and intervention costs. The MOHR study shows how a participatory design can be used to promote the consistent collection and use of patient-reported health behavior and psychosocial assessments in a broad range of primary care settings. While pragmatic in nature, the study design will allow valid comparisons to answer the posed research question.

  16. Random balance designs for the estimation of first order global sensitivity indices

    Energy Technology Data Exchange (ETDEWEB)

    Tarantola, S. [Joint Research Centre, European Commission, Institute of the Protection and Security of the Citizen, TP 361, Via E. Fermi 1, 21020 Ispra (VA) (Italy)]. E-mail: stefano.tarantola@jrc.it; Gatelli, D. [Joint Research Centre, European Commission, Institute of the Protection and Security of the Citizen, TP 361, Via E. Fermi 1, 21020 Ispra (VA) (Italy); Mara, T.A. [Laboratory of Industrial Engineering, University of Reunion Island, BP 7151, 15 avenue Rene Cassin, 97 715 Saint-Denis (France)

    2006-06-15

    We present two methods for the estimation of main effects in global sensitivity analysis. The methods adopt Satterthwaite's application of random balance designs in regression problems, and extend it to sensitivity analysis of model output for non-linear, non-additive models. Finite as well as infinite ranges for model input factors are allowed. The methods are easier to implement than any other method available for global sensitivity analysis, and reduce significantly the computational cost of the analysis. We test their performance on different test cases, including an international benchmark on safety assessment for nuclear waste disposal originally carried out by OECD/NEA.
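    The record describes the random-balance-design (RBD) idea only in outline. The sketch below implements a generic RBD-style estimator of first-order indices: every input follows the same periodic space-filling curve, each in an independently permuted order, and the first-order index of each input is read off the Fourier power at the base frequency and its first harmonics after re-sorting the model output. It is an illustrative reconstruction of the general approach, not the authors' code; the test model, sample size, and number of harmonics are arbitrary.

```python
import numpy as np

def rbd_first_order(model, d, n=2000, n_harmonics=6, seed=0):
    """Sketch of a random-balance-design estimate of first-order indices S_i
    for a model whose d inputs are uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    perms = np.array([rng.permutation(n) for _ in range(d)])
    # Every factor follows the same periodic curve, each in its own random order.
    X = 0.5 + np.arcsin(np.sin(s[perms].T)) / np.pi        # (n, d) design, values in [0, 1]
    y = model(X)
    V = y.var()
    S = np.empty(d)
    for i in range(d):
        order = np.argsort(perms[i])                       # factor i varies periodically here
        spec = np.abs(np.fft.rfft(y[order] - y.mean())) ** 2
        Vi = 2.0 * spec[1:1 + n_harmonics].sum() / n ** 2  # power at base frequency + harmonics
        S[i] = Vi / V
    return S

# Additive test model y = x1 + 2*x2: the analytic first-order indices are 0.2 and 0.8.
print(np.round(rbd_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2), 2))
```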

  17. Importance of anthropogenic climate impact, sampling error and urban development in sewer system design.

    Science.gov (United States)

    Egger, C; Maurer, M

    2015-04-15

    Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Uncertainty Of Stream Nutrient Transport Estimates Using Random Sampling Of Storm Events From High Resolution Water Quality And Discharge Data

    Science.gov (United States)

    Scholefield, P. A.; Arnscheidt, J.; Jordan, P.; Beven, K.; Heathwaite, L.

    2007-12-01

    The uncertainties associated with stream nutrient transport estimates are frequently overlooked and the sampling strategy is rarely, if ever, investigated. Indeed, the impact of sampling strategy and estimation method on the bias and precision of stream phosphorus (P) transport calculations is little understood, despite the use of such values in the calibration and testing of models of phosphorus transport. The objectives of this research were to investigate the variability and uncertainty in the estimates of total phosphorus transfers at an intensively monitored agricultural catchment. The Oona Water, which is located in the Irish border region, is part of a long term monitoring program focusing on water quality. The Oona Water is a rural river catchment with grassland agriculture and scattered dwelling houses and has been monitored for total phosphorus (TP) at 10 min resolution for several years (Jordan et al, 2007). Concurrent sensitive measurements of discharge are also collected. The water quality and discharge data were provided at 1 hour resolution (averaged) and this meant that a robust estimate of the annual flow weighted concentration could be obtained by simple interpolation between points. A two-strata approach (Kronvang and Bruhn, 1996) was used to estimate flow weighted concentrations using randomly sampled storm events from the 400 identified within the time series and also base flow concentrations. Using a random stratified sampling approach for the selection of events, a series ranging from 10 through to the full 400 were used, each time generating a flow weighted mean using a load-discharge relationship identified through log-log regression and Monte Carlo simulation. These values were then compared to the observed total phosphorus concentration for the catchment. Analysis of these results shows the impact of sampling strategy, the inherent bias in any estimate of phosphorus concentrations and the uncertainty associated with such estimates.
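    The estimator at the centre of this comparison is not written out in the record. The flow-weighted mean concentration computed from paired concentration and discharge series, and a log-log rating relationship of the kind mentioned above, take the generic forms below (standard definitions, not formulas quoted from the abstract):

```latex
\bar{C}_{\mathrm{fw}} = \frac{\sum_{t} C_{t}\, Q_{t}\, \Delta t}{\sum_{t} Q_{t}\, \Delta t},
\qquad
\log L_{t} = a + b \log Q_{t} + \varepsilon_{t}
```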

  19. Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas

    Science.gov (United States)

    Basilevsky, Alexander T.; Head, James W., III

    1995-01-01

    The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.), and are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of such units and structures in 36 randomly distributed large regions (each approximately 10^6 sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera formation…

  20. Use of pornography in a random sample of Norwegian heterosexual couples.

    Science.gov (United States)

    Daneback, Kristian; Traeen, Bente; Månsson, Sven-Axel

    2009-10-01

    This study examined the use of pornography in couple relationships to enhance the sex-life. The study contained a representative sample of 398 heterosexual couples aged 22-67 years. Data collection was carried out by self-administered postal questionnaires. The majority (77%) of the couples did not report any kind of pornography use to enhance the sex-life. In 15% of the couples, both had used pornography; in 3% of the couples, only the female partner had used pornography; and, in 5% of the couples, only the male partner had used pornography for this purpose. Based on the results of a discriminant function analysis, it is suggested that couples where one or both used pornography had a more permissive erotic climate compared to the couples who did not use pornography. In couples where only one partner used pornography, we found more problems related to arousal (male) and negative (female) self-perception. These findings could be of importance for clinicians who work with couples.

  1. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang

    2013-02-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied for simulation and complicated seismic data sets. © 2013 IEEE.
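    As a rough illustration of the kernel-density-estimation step described above (turning a handful of user-probed sample values into a smooth high-dimensional transfer function), the snippet below builds a 2D density from a few hypothetical picked value pairs and normalizes it into an opacity map; the sample values and the mapping to opacity are invented for illustration and are not taken from the paper or its system.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical user-picked samples: pairs of (value in attribute 1, value in attribute 2)
# probed on a slice of a two-attribute dataset.
picked = np.array([[0.62, 0.31], [0.66, 0.29], [0.64, 0.35],
                   [0.60, 0.33], [0.63, 0.30], [0.65, 0.34]]).T   # shape (2, n_samples)
kde = gaussian_kde(picked)                                         # density of the selected feature

# Evaluate the density over the joint value domain and map it to opacity,
# giving a simple high-dimensional transfer function for the probed feature.
v1, v2 = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
density = kde(np.vstack([v1.ravel(), v2.ravel()])).reshape(v1.shape)
opacity_tf = density / density.max()                               # values in [0, 1]
print(opacity_tf.shape, opacity_tf.max())
```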

  2. The role of the upper sample size limit in two-stage bioequivalence designs.

    Science.gov (United States)

    Karalis, Vangelis

    2013-11-01

    Two-stage designs (TSDs) are currently recommended by the regulatory authorities for bioequivalence (BE) assessment. The TSDs presented until now rely on an assumed geometric mean ratio (GMR) value of the BE metric in stage I in order to avoid inflation of type I error. In contrast, this work proposes a more realistic TSD design where sample re-estimation relies not only on the variability of stage I, but also on the observed GMR. In these cases, an upper sample size limit (UL) is introduced in order to prevent inflation of type I error. The aim of this study is to unveil the impact of UL on two TSD bioequivalence approaches which are based entirely on the interim results. Monte Carlo simulations were used to investigate several different scenarios of UL levels, within-subject variability, different starting number of subjects, and GMR. The use of UL leads to no inflation of type I error. As UL values increase, the % probability of declaring BE becomes higher. The starting sample size and the variability of the study affect type I error. Increased UL levels result in higher total sample sizes of the TSD which are more pronounced for highly variable drugs. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. [Design of standard voice sample text for subjective auditory perceptual evaluation of voice disorders].

    Science.gov (United States)

    Li, Jin-rang; Sun, Yan-yan; Xu, Wen

    2010-09-01

    To design a speech voice sample text containing all phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The principles for the design of a speech voice sample text are that the short text should include the 21 initials and 39 finals, thereby covering all the phonemes in Mandarin, and that the short text should also be meaningful. A short text was produced. It had 155 Chinese words and included 21 initials and 38 finals (the final ê was not included because it is rarely used in Mandarin). The text also covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals presented in this short text were statistically similar to those in Mandarin according to the method of similarity of the sample and population (r = 0.742, P < 0.05), although a further comparison with Mandarin was not statistically similar (r = 0.731, P > 0.05). A speech voice sample text with all phonemes in Mandarin was produced. The constituent ratios of the initials and finals presented in this short text are similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.

  4. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    KAUST Repository

    Beck, Joakim

    2017-10-10

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
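    For context, the classical double-loop Monte Carlo estimator that the paper improves upon can be written down in a few lines for a toy conjugate model; the inner average over prior draws is exactly the part whose cost and underflow behavior motivate the Laplace-based importance sampling. This is a generic illustration with made-up parameters, not the authors' examples or method.

```python
import numpy as np

def eig_double_loop(n_outer=2000, n_inner=500, sigma=0.5, seed=0):
    """Classical double-loop Monte Carlo estimate of the expected information gain
    for a toy experiment  y = theta + noise,  theta ~ N(0, 1),  noise ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_outer)                 # outer draws from the prior
    y = theta + sigma * rng.standard_normal(n_outer)     # simulated observations

    def loglik(y_, th):                                  # log p(y | theta)
        return -0.5 * ((y_ - th) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

    inner = rng.standard_normal((n_outer, n_inner))      # inner draws from the prior
    log_evidence = np.log(np.mean(np.exp(loglik(y[:, None], inner)), axis=1))
    return np.mean(loglik(y, theta) - log_evidence)      # EIG = E[log lik - log evidence]

# For this conjugate toy problem the exact value is 0.5 * log(1 + 1/sigma^2).
print(eig_double_loop(), 0.5 * np.log(1 + 1 / 0.5 ** 2))
```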

  5. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Directory of Open Access Journals (Sweden)

    João Tiago Marques

    Full Text Available Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.

  6. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Science.gov (United States)

    Marques, João Tiago; Ramos Pereira, Maria J; Marques, Tiago A; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M

    2013-01-01

    Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.

  7. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    Science.gov (United States)

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
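    The best-performing two-stage approach described above (cluster means as outcomes, each weighted by the inverse of the estimated theoretical variance of the cluster mean) can be sketched as follows on simulated unbalanced data. Variance components are treated as known here for brevity, whereas in practice they would be estimated and the weights constrained to be positive, so this is only a schematic stand-in for the recommended analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated unbalanced two-arm cluster-randomized trial with no true treatment effect.
n_clusters, icc, sigma2 = 16, 0.05, 1.0
sizes = rng.integers(5, 60, n_clusters)                  # unequal cluster sizes
arm = np.repeat([0, 1], n_clusters // 2)
sigma2_b = icc * sigma2                                  # between-cluster variance
sigma2_w = sigma2 - sigma2_b                             # within-cluster variance
cluster_means = np.array([rng.normal(0, np.sqrt(sigma2_b)) +
                          rng.normal(0, np.sqrt(sigma2_w / m)) for m in sizes])

# Stage 2: weight each cluster mean by 1 / Var(cluster mean).
weights = 1.0 / (sigma2_b + sigma2_w / sizes)

def wmean(x, w):
    return np.sum(w * x) / np.sum(w)

diff = wmean(cluster_means[arm == 1], weights[arm == 1]) - \
       wmean(cluster_means[arm == 0], weights[arm == 0])
se = np.sqrt(1.0 / weights[arm == 1].sum() + 1.0 / weights[arm == 0].sum())
t = diff / se
p = 2 * stats.t.sf(abs(t), df=n_clusters - 2)
print(f"t = {t:.2f}, p = {p:.3f}")
```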

  8. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces the raster grid artefacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated at low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography perform uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6 fold. The results reported in this paper significantly shortened acquisition time and improved quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  9. Reduction of claustrophobia during magnetic resonance imaging: methods and design of the "CLAUSTRO" randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Teichgräber Ulf

    2011-02-01

    Full Text Available Abstract Background: Magnetic resonance (MR) imaging has been described as the most important medical innovation in the last 25 years. Over 80 million MR procedures are now performed each year and on average 2.3% (95% confidence interval: 2.0 to 2.5%) of all patients scheduled for MR imaging suffer from claustrophobia. Thus, prevention of MR imaging by claustrophobia is a common problem and approximately 2,000,000 MR procedures worldwide cannot be completed due to this situation. Patients with claustrophobic anxiety are more likely to be frightened and experience a feeling of confinement or being closed in during MR imaging. In these patients, conscious sedation and additional sequences (after sedation) may be necessary to complete the examinations. Further improvements in MR design appear to be essential to alleviate this situation and broaden the applicability of MR imaging. A more open scanner configuration might help reduce claustrophobic reactions while maintaining image quality and diagnostic accuracy. Methods/Design: We propose to analyze the rate of claustrophobic reactions, clinical utility, image quality, patient acceptance, and cost-effectiveness of an open MR scanner in a randomized comparison with a recently designed short-bore but closed scanner with 97% noise reduction. The primary aim of this study is thus to determine whether an open MR scanner can reduce claustrophobic reactions, thereby enabling more examinations of claustrophobic patients without incurring the safety issues associated with conscious sedation. In this manuscript we detail the methods and design of the prospective "CLAUSTRO" trial. Discussion: This randomized controlled trial will be the first direct comparison of open vertical and closed short-bore MR systems in regards to claustrophobia and image quality as well as diagnostic utility. Trial Registration: ClinicalTrials.gov: NCT00715806

  10. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis.

    Science.gov (United States)

    Bala, Malgorzata M; Akl, Elie A; Sun, Xin; Bassler, Dirk; Mertz, Dominik; Mejza, Filip; Vandvik, Per Olav; Malaga, German; Johnston, Bradley C; Dahm, Philipp; Alonso-Coello, Pablo; Diaz-Granados, Natalia; Srinathan, Sadeesh K; Hassouneh, Basil; Briel, Matthias; Busse, Jason W; You, John J; Walter, Stephen D; Altman, Douglas G; Guyatt, Gordon H

    2013-03-01

    To compare methodological characteristics of randomized controlled trials (RCTs) published in higher vs. lower impact Core Clinical Journals. We searched MEDLINE for RCTs published in 2007 in Core Clinical Journals. We randomly sampled 1,140 study reports in a 1:1 ratio in higher (five general medicine journals with the highest total citations in 2007) and lower impact journals. Four hundred sixty-nine RCTs proved eligible: 219 in higher and 250 in lower impact journals. RCTs in higher vs. lower impact journals had larger sample sizes (median, 285 vs. 39), were more likely to receive industry funding (53% vs. 28%), declare concealment of allocation (66% vs. 36%), declare blinding of health care providers (53% vs. 41%) and outcome adjudicators (72% vs. 54%), report a patient-important primary outcome (69% vs. 50%), report subgroup analyses (64% vs. 26%), prespecify subgroup hypotheses (42% vs. 20%), and report a test for interaction (54% vs. 27%); P < 0.05 for all comparisons. RCTs published in higher impact journals were more likely to report methodological safeguards against bias and patient-important outcomes than those published in lower impact journals. However, sufficient limitations remain such that publication in a higher impact journal does not ensure low risk of bias. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Neural-Fuzzy Digital Strategy of Continuous-Time Nonlinear Systems Using Adaptive Prediction and Random-Local-Optimization Design

    Directory of Open Access Journals (Sweden)

    Zhi-Ren Tsai

    2013-01-01

    Full Text Available A tracking problem, time-delay, uncertainty and stability analysis of a predictive control system are considered. The predictive control design is based on the input and output of a neural plant model (NPM), and a recursive fuzzy predictive tracker has scaling factors which limit the value zone of measured data and cause the tuned parameters to converge to obtain a robust control performance. To further improve the control performance, the proposed random-local-optimization design (RLO) for a model/controller uses offline initialization to obtain a near global optimal model/controller. Other issues are the considerations of modeling error, input-delay, sampling distortion, cost, greater flexibility, and highly reliable digital products of the model-based controller for the continuous-time (CT) nonlinear system. They are solved by a recommended two-stage control design with the first-stage (offline) RLO and second-stage (online) adaptive steps. A theorizing method is then put forward to replace the sensitivity calculation, which reduces the calculation of Jacobian matrices of the back-propagation (BP) method. Finally, the feedforward input of reference signals helps the digital fuzzy controller improve the control performance, and the technique works to control the CT systems precisely.

  12. Placement Of Cardiac PacemaKEr Trial (POCKET) – rationale and design: a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Peter Magnusson

    2017-04-01

    Full Text Available Background: A pacemaker system consists of one or two leads connected to a device that is implanted into a pocket formed just below the collarbone. This pocket is typically subcutaneous, that is, located just above the pectoral fascia. Even though the size of pacemakers has decreased markedly, complications due to superficial implants do occur. An alternative technique would be intramuscular placement of the pacemaker device, but there are no randomized controlled trials (RCTs) to support this approach, which is the rationale for the Placement Of Cardiac PacemaKEr Trial (POCKET). The aim is to study if intramuscular is superior to subcutaneous placement of a pacemaker pocket. Methods: In October 2016, we started to enroll 200 consecutive patients with an indication for bradycardia pacemaker implantation. Patients are randomized, in random block sizes and stratified by age group (cut-off: 65 years) and sex, to either subcutaneous or intramuscular implant. A concealed allocation procedure is employed, using sequentially numbered, sealed envelopes. Pocket site is blinded to the patient and in all subsequent care. The primary endpoint is patient overall satisfaction with the pocket location at 24 months as measured using a visual analog scale (VAS 0-10). Secondary endpoints are complications and patient-reported satisfaction at 1, 12, and 24 months (overall satisfaction, pain, discomfort, degree of unsightly appearance, movement problems, and sleep problems due to device). Conclusions: POCKET is a prospective interventional RCT designed to evaluate if intramuscular is superior to subcutaneous placement of a bradycardia pacemaker during a two-year follow-up.
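
    For readers unfamiliar with the allocation scheme described above, the following minimal Python sketch illustrates stratified randomization with randomly varying permuted block sizes, one allocation list per age-group-by-sex stratum. It is a generic illustration under assumed block sizes and list lengths, not the POCKET trial's actual randomization software.

        import random

        def make_allocation_list(n, block_sizes=(2, 4, 6),
                                 arms=("subcutaneous", "intramuscular"), seed=0):
            """Randomized 1:1 allocation sequence built from permuted blocks
            whose sizes are themselves chosen at random."""
            rng = random.Random(seed)
            allocation = []
            while len(allocation) < n:
                size = rng.choice(block_sizes)        # random (even) block size
                block = [arms[i % 2] for i in range(size)]
                rng.shuffle(block)                    # permute within the block
                allocation.extend(block)
            return allocation[:n]

        # One independent list per stratum (age group x sex), as in a stratified design.
        strata = [(age, sex) for age in ("<65", ">=65") for sex in ("F", "M")]
        lists = {s: make_allocation_list(50, seed=k) for k, s in enumerate(strata)}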

  13. Placement Of Cardiac PacemaKEr Trial (POCKET) – rationale and design: a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Peter Magnusson

    2017-04-01

    Full Text Available Background: A pacemaker system consists of one or two leads connected to a device that is implanted into a pocket formed just below the collarbone. This pocket is typically subcutaneous, that is, located just above the pectoral fascia. Even though the size of pacemakers has decreased markedly, complications due to superficial implants do occur. An alternative technique would be intramuscular placement of the pacemaker device, but there are no randomized controlled trials (RCTs) to support this approach, which is the rationale for the Placement Of Cardiac PacemaKEr Trial (POCKET). The aim is to study if intramuscular is superior to subcutaneous placement of a pacemaker pocket. Methods: In October 2016, we started to enroll 200 consecutive patients with an indication for bradycardia pacemaker implantation. Patients are randomized, in random block sizes and stratified by age group (cut-off: 65 years) and sex, to either subcutaneous or intramuscular implant. A concealed allocation procedure is employed, using sequentially numbered, sealed envelopes. Pocket site is blinded to the patient and in all subsequent care. The primary endpoint is patient overall satisfaction with the pocket location at 24 months as measured using a visual analog scale (VAS 0-10). Secondary endpoints are complications and patient-reported satisfaction at 1, 12, and 24 months (overall satisfaction, pain, discomfort, degree of unsightly appearance, movement problems, and sleep problems due to device). Conclusions: POCKET is a prospective interventional RCT designed to evaluate if intramuscular is superior to subcutaneous placement of a bradycardia pacemaker during a two-year follow-up.

  14. Estimating screening-mammography receiver operating characteristic (ROC) curves from stratified random samples of screening mammograms: a simulation study.

    Science.gov (United States)

    Zur, Richard M; Pesce, Lorenzo L; Jiang, Yulei

    2015-05-01

    To evaluate stratified random sampling (SRS) of screening mammograms by (1) Breast Imaging Reporting and Data System (BI-RADS) assessment categories, and (2) the presence of breast cancer in mammograms, for estimation of screening-mammography receiver operating characteristic (ROC) curves in retrospective observer studies. We compared observer study case sets constructed by (1) random sampling (RS); (2) SRS with proportional allocation (SRS-P) with BI-RADS 1 and 2 noncancer cases accounting for 90.6% of all noncancer cases; (3) SRS with disproportional allocation (SRS-D) with BI-RADS 1 and 2 noncancer cases accounting for 10%-80%; and (4) SRS-D and multiple imputation (SRS-D + MI) with missing BI-RADS 1 and 2 noncancer cases imputed to recover the 90.6% proportion. Monte Carlo simulated case sets were drawn from a large case population modeled after published Digital Mammography Imaging Screening Trial data. We compared the bias, root-mean-square error, and coverage of 95% confidence intervals of area under the ROC curve (AUC) estimates from the sampling methods (200-2000 cases, of which 25% were cancer cases) versus from the large case population. AUC estimates were unbiased from RS, SRS-P, and SRS-D + MI, but biased from SRS-D. AUC estimates from SRS-P and SRS-D + MI had 10% smaller root-mean-square error than RS. Both SRS-P and SRS-D + MI can be used to obtain unbiased and 10% more efficient estimates of screening-mammography ROC curves. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
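
    A minimal sketch of stratified random sampling with proportional allocation (the SRS-P scheme described above) is given below; the case DataFrame and the 'birads' column name are hypothetical, and rounding means the stratum counts may not sum exactly to n.

        import pandas as pd

        def stratified_sample(cases: pd.DataFrame, n: int, stratum_col: str, seed: int = 0) -> pd.DataFrame:
            """Stratified random sampling with proportional allocation: each stratum
            contributes cases in proportion to its share of the population."""
            shares = cases[stratum_col].value_counts(normalize=True)
            parts = []
            for stratum, share in shares.items():
                k = int(round(n * share))                         # proportional allocation
                pool = cases[cases[stratum_col] == stratum]
                parts.append(pool.sample(n=min(k, len(pool)), random_state=seed))
            return pd.concat(parts).reset_index(drop=True)

        # Hypothetical usage: 'birads' holds the BI-RADS assessment category of each noncancer case.
        # reader_set = stratified_sample(noncancer_cases, n=300, stratum_col="birads")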

  15. Effectiveness of adjuvant occupational therapy in employees with depression: design of a randomized controlled trial.

    Science.gov (United States)

    Hees, Hiske L; Koeter, Maarten W J; de Vries, Gabe; Ooteman, Wendy; Schene, Aart H

    2010-09-17

    Major depressive disorder is among the medical conditions with the highest negative impact on work outcome. However, little is known regarding evidence-based interventions targeting the improvement of work outcomes in depressed employees. In this paper, the design of a randomized controlled trial is presented in order to evaluate the effectiveness of adjuvant occupational therapy in employees with depression. This occupational intervention is based on an earlier intervention, which was designed and proven effective by our research group, and is the only intervention to date that specifically targets work outcome in depressed employees. In a two-arm randomized controlled trial, a total of 117 participants are randomized to either 'care as usual' or 'care as usual' with the addition of occupational therapy. Patients included in the study are employees who are absent from work due to depression for at least 25% of their contract hours, and who have a possibility of returning to their own or a new job. The occupational intervention consists of six individual sessions, eight group sessions and a work-place visit over a 16-week period. By increasing exposure to the working environment, and by stimulating communication between employer and employee, the occupational intervention aims to enhance self-efficacy and the acquisition of more adaptive coping strategies. Assessments take place at baseline, and at 6, 12, and 18-month follow-ups. Primary outcome measure is work participation (hours of absenteeism and time until work resumption). Secondary outcome measures are work functioning, symptomatology, health-related quality of life, and neurocognitive functioning. In addition, cost-effectiveness is evaluated from a societal perspective. Finally, mechanisms of change (intermediate outcomes) and potential patient-treatment matching variables are investigated. This study hopes to provide valuable knowledge regarding an intervention to treat depression, one of the most common

  16. Rationale and Design of the Randomized Evaluation of Default Access to Palliative Services (REDAPS) Trial.

    Science.gov (United States)

    Courtright, Katherine R; Madden, Vanessa; Gabler, Nicole B; Cooney, Elizabeth; Small, Dylan S; Troxel, Andrea; Casarett, David; Ersek, Mary; Cassel, J Brian; Nicholas, Lauren Hersch; Escobar, Gabriel; Hill, Sarah H; O'Brien, Dan; Vogel, Mark; Halpern, Scott D

    2016-09-01

    The substantial nationwide investment in inpatient palliative care services stems from their great promise to improve patient-centered outcomes and reduce costs. However, robust experimental evidence of these benefits is lacking. The Randomized Evaluation of Default Access to Palliative Services (REDAPS) study is a pragmatic, stepped-wedge, cluster randomized trial designed to test the efficacy and costs of specialized palliative care consultative services for hospitalized patients with advanced chronic obstructive pulmonary disease, dementia, or end-stage renal disease, as well as the overall effectiveness of ordering such services by default. Additional aims are to identify the types of services that are most beneficial and the types of patients most likely to benefit, including comparisons between ward and intensive care unit patients. We hypothesize that patient-centered outcomes can be improved without increasing costs by simply changing the default option for palliative care consultation from opt-in to opt-out for patients with life-limiting illnesses. Patients aged 65 years or older are enrolled at 11 hospitals using an integrated electronic health record. As a pragmatic trial designed to enroll between 12,000 and 15,000 patients, eligibility is determined using a validated, electronic health record-based algorithm, and all outcomes are captured via the electronic health record and billing systems data. The time at which each hospital transitions from control (opt-in palliative care consultation) to intervention (opt-out consultation) is randomly assigned. The primary outcome is a composite measure of in-hospital mortality and length of stay. Secondary outcomes include palliative care process measures and clinical and economic outcomes. Clinical trial registered with www.clinicaltrials.gov (NCT02505035).
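
    As a rough illustration of the stepped-wedge element described above, the sketch below randomly assigns each hospital to one of several crossover steps (from opt-in to opt-out consultation), keeping the steps roughly balanced. It is a generic sketch under assumed hospital names and step counts, not the REDAPS randomization procedure.

        import random

        def stepped_wedge_schedule(hospitals, n_steps, seed=7):
            """Randomly order the hospitals, then deal them across the n_steps
            crossover times as evenly as possible."""
            rng = random.Random(seed)
            order = list(hospitals)
            rng.shuffle(order)
            return {h: 1 + i % n_steps for i, h in enumerate(order)}

        print(stepped_wedge_schedule([f"hospital_{i}" for i in range(11)], n_steps=5))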

  17. Quantitative Digital Autoradiography for Environmental Swipe Sample Prioritization: System design, Characterization, and Initial Measurements

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, Benjamin S.; Zalavadia, Mital A.; Miller, Brian W.; Bliss, Mary; Olsen, Khris B.; Kasparek, Dustin M.; Clarke, Ardelia M.

    2017-07-17

    Environmental sampling and sample analyses by the International Atomic Energy Agency’s (IAEA) Network of Analytical Laboratories (NWAL) is a critical technical tool used to detect facility misuse under a Comprehensive Safeguards Agreement and to verify the absence of undeclared nuclear material activities under an Additional Protocol. Currently all environmental swipe samples (ESS) are screened using gamma spectrometry and x-ray fluorescence to estimate the amount of U and/or Pu in the ESS, to guide further analysis, and to assist in the shipment of ESS to the NWAL. Quantitative Digital Autoradiography for Environmental Samples (QDARES) is being developed to complement existing techniques through the use of a portable, real-time, high-spatial-resolution camera called the Ionizing-radiation Quantum Imaging Detector (iQID). The iQID constructs a spatial map of radionuclides within a sample or surface in real-time as charged particles (betas) and photons (gamma/x-rays) are detected and localized on an event-by-event basis. Knowledge of the location and nature of radioactive hot spots on the ESS could provide information for subsequent laboratory analysis. As a nondestructive technique, QDARES does not compromise the ESS chain of custody or subsequent laboratory analysis. In this paper we will present the system design and construction, characterization measurements with calibration sources, and initial measurements of ESS.

  18. DESIGN OF A SIMPLE SLOW COOLING DEVICE FOR CRYOPRESERVATION OF SMALL BIOLOGICAL SAMPLES.

    Science.gov (United States)

    de Paz, Leonardo Juan; Robert, Maria Celeste; Graf, Daniel Adolfo; Guibert, Edgardo Elvio; Rodriguez, Joaquin Valentin

    2015-01-01

    Slow cooling is a cryopreservation methodology where samples are cooled to their storage temperature at controlled cooling rates. Design, construction and evaluation of a simple and low-cost device for slow cooling of small biological samples. The device was constructed based on Pye's freezer idea. A Dewar flask filled with liquid nitrogen was used as heat sink and a methanol bath containing the sample was cooled at constant rates using copper bars as heat conductor. Sample temperature may be lowered at controlled cooling rate (ranging from 0.4°C/min to 6.0°C/min) down to ~-60°C, where it could be conserved at lower temperatures. An example involving the cryopreservation of the Neuro-2A cell line showed a marked influence of cooling rate over post-preservation cell viability with optimal values between 2.6 and 4.6°C/min. The cooling device proved to be a valuable alternative to more expensive systems allowing the assessment of different cooling rates to evaluate the optimal condition for cryopreservation of such samples.

  19. Academic and Behavioral Design Parameters for Cluster Randomized Trials in Kindergarten: An Analysis of the Early Childhood Longitudinal Study 2011 Kindergarten Cohort (ECLS-K 2011).

    Science.gov (United States)

    Hedberg, E C

    2016-06-28

    There is an increased focus on randomized trials for proximal behavioral outcomes in early childhood research. However, planning sample sizes for such designs requires extant information on the size of effect, variance decomposition, and effectiveness of covariates. The purpose of this article is to employ a recent large representative sample of Early Childhood Longitudinal Study kindergartners to estimate design parameters for use in planning cluster randomized trials. A secondary objective is to compare the results of math and reading with the previous kindergartner cohort of 1999. For each measure, fall-spring gains in effect size units are calculated. In addition, multilevel models are fit to estimate variance components that are used to calculate intraclass correlations (ICCs) and R^2 statistics. The implications of the reported parameters are summarized in tables of required school sample sizes to detect small effects. The outcomes include information about student scores regarding learning behaviors, general behaviors, and academic abilities. Aside from math and reading, there were small gains in these measures from fall to spring, leading to effect sizes between about .1 and .2. In addition, the nonacademic ICCs are smaller than the academic ICCs but are still nontrivial. Use of a pretest covariate is generally effective in reducing the required sample size in power analyses. The ICCs for math and reading are smaller for the current sample compared with the 1999 sample. © The Author(s) 2016.
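
    To make the link between the reported design parameters and the required school sample sizes concrete, the sketch below computes an approximate minimum detectable effect size for a two-level cluster randomized trial from the intraclass correlation and covariate R^2, using the standard normal-approximation formula. The numeric values in the example are illustrative assumptions, not the ECLS-K estimates.

        import math
        from statistics import NormalDist

        def mdes_cluster_rct(J, n, rho, r2_between=0.0, r2_within=0.0,
                             p_treat=0.5, alpha=0.05, power=0.80):
            """Approximate minimum detectable effect size (in SD units) for a
            two-level cluster randomized trial with J schools of n students,
            intraclass correlation rho, and covariate R^2 at each level."""
            m = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
            denom = p_treat * (1 - p_treat) * J
            var_term = (rho * (1 - r2_between) / denom
                        + (1 - rho) * (1 - r2_within) / (denom * n))
            return m * math.sqrt(var_term)

        def schools_needed(target_mdes, n, rho, **kw):
            """Smallest (even) J for which the design detects target_mdes."""
            J = 4
            while mdes_cluster_rct(J, n, rho, **kw) > target_mdes:
                J += 2
            return J

        # Illustrative values only: rho = 0.15 with a pretest explaining half the variance.
        print(schools_needed(0.20, n=20, rho=0.15, r2_between=0.5, r2_within=0.5))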

  20. Sampling design considerations for demographic studies: a case of colonial seabirds.

    Science.gov (United States)

    Kendall, William L; Converse, Sarah J; Doherty, Paul F; Naughton, Maura B; Anders, Angela; Hines, James E; Flint, Elizabeth

    2009-01-01

    For the purposes of making many informed conservation decisions, the main goal for data collection is to assess population status and allow prediction of the consequences of candidate management actions. Reducing the bias and variance of estimates of population parameters reduces uncertainty in population status and projections, thereby reducing the overall uncertainty under which a population manager must make a decision. In capture-recapture studies, imperfect detection of individuals, unobservable life-history states, local movement outside study areas, and tag loss can cause bias or precision problems with estimates of population parameters. Furthermore, excessive disturbance to individuals during capture-recapture sampling may be of concern because disturbance may have demographic consequences. We address these problems using as an example a monitoring program for Black-footed Albatross (Phoebastria nigripes) and Laysan Albatross (Phoebastria immutabilis) nesting populations in the northwestern Hawaiian Islands. To mitigate these estimation problems, we describe a synergistic combination of sampling design and modeling approaches. Solutions include multiple capture periods per season and multistate, robust design statistical models, dead recoveries and incidental observations, telemetry and data loggers, buffer areas around study plots to neutralize the effect of local movements outside study plots, and double banding and statistical models that account for band loss. We also present a variation on the robust capture-recapture design and a corresponding statistical model that minimizes disturbance to individuals. For the albatross case study, this less invasive robust design was more time efficient and, when used in combination with a traditional robust design, reduced the standard error of detection probability by 14% with only two hours of additional effort in the field. These field techniques and associated modeling approaches are applicable to studies of

  1. Sampling design considerations for demographic studies: a case of colonial seabirds

    Science.gov (United States)

    Kendall, William L.; Converse, Sarah J.; Doherty, Paul F.; Naughton, Maura B.; Anders, Angela; Hines, James E.; Flint, Elizabeth

    2009-01-01

    For the purposes of making many informed conservation decisions, the main goal for data collection is to assess population status and allow prediction of the consequences of candidate management actions. Reducing the bias and variance of estimates of population parameters reduces uncertainty in population status and projections, thereby reducing the overall uncertainty under which a population manager must make a decision. In capture-recapture studies, imperfect detection of individuals, unobservable life-history states, local movement outside study areas, and tag loss can cause bias or precision problems with estimates of population parameters. Furthermore, excessive disturbance to individuals during capture-recapture sampling may be of concern because disturbance may have demographic consequences. We address these problems using as an example a monitoring program for Black-footed Albatross (Phoebastria nigripes) and Laysan Albatross (Phoebastria immutabilis) nesting populations in the northwestern Hawaiian Islands. To mitigate these estimation problems, we describe a synergistic combination of sampling design and modeling approaches. Solutions include multiple capture periods per season and multistate, robust design statistical models, dead recoveries and incidental observations, telemetry and data loggers, buffer areas around study plots to neutralize the effect of local movements outside study plots, and double banding and statistical models that account for band loss. We also present a variation on the robust capture-recapture design and a corresponding statistical model that minimizes disturbance to individuals. For the albatross case study, this less invasive robust design was more time efficient and, when used in combination with a traditional robust design, reduced the standard error of detection probability by 14% with only two hours of additional effort in the field. These field techniques and associated modeling approaches are applicable to studies of

  2. Within-otolith variability in chemical fingerprints: implications for sampling designs and possible environmental interpretation.

    Directory of Open Access Journals (Sweden)

    Antonio Di Franco

    Full Text Available Largely used as a natural biological tag in studies of dispersal/connectivity of fish, otolith elemental fingerprinting is usually analyzed by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). LA-ICP-MS produces an elemental fingerprint at a discrete time-point in the life of a fish and can generate data on within-otolith variability of that fingerprint. The presence of within-otolith variability has been previously acknowledged but not incorporated into experimental designs on the presumed, but untested, grounds of both its negligibility compared to among-otolith variability and of spatial autocorrelation among multiple ablations within an otolith. Here, using a hierarchical sampling design of spatial variation at multiple scales in otolith chemical fingerprints for two Mediterranean coastal fishes, we explore: (1) whether multiple ablations within an otolith can be used as independent replicates for significance tests among otoliths, and (2) the implications of incorporating within-otolith variability when assessing spatial variability in otolith chemistry at a hierarchy of spatial scales (different fish, from different sites, at different locations on the Apulian Adriatic coast). We find that multiple ablations along the same daily rings do not necessarily exhibit spatial dependency within the otolith and can be used to estimate residual variability in a hierarchical sampling design. Inclusion of within-otolith measurements reveals that individuals at the same site can show significant variability in elemental uptake. Within-otolith variability examined across the spatial hierarchy identifies differences between the two fish species investigated, and this finding leads to discussion of the potential for within-otolith variability to be used as a marker for fish exposure to stressful conditions. We also demonstrate that a 'cost'-optimal allocation of sampling effort should typically include some level of within
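
    As a simplified two-level stand-in for the hierarchical REML decomposition described above, the sketch below partitions variance into among-otolith and within-otolith components using the classical one-way ANOVA estimator; the data layout and element values are hypothetical.

        import numpy as np

        def variance_components(groups):
            """Among- and within-otolith variance from {otolith_id: ablation values},
            via the one-way ANOVA (method-of-moments) estimator."""
            values = [np.asarray(v, dtype=float) for v in groups.values()]
            k = len(values)                                   # number of otoliths
            n_i = np.array([len(v) for v in values])
            grand = np.concatenate(values).mean()
            ss_within = sum(((v - v.mean()) ** 2).sum() for v in values)
            ss_among = sum(n * (v.mean() - grand) ** 2 for n, v in zip(n_i, values))
            ms_within = ss_within / (n_i.sum() - k)
            ms_among = ss_among / (k - 1)
            n0 = (n_i.sum() - (n_i ** 2).sum() / n_i.sum()) / (k - 1)   # effective replicates
            s2_within = ms_within
            s2_among = max((ms_among - ms_within) / n0, 0.0)
            return s2_among, s2_within, s2_among / (s2_among + s2_within)

        # Hypothetical element:Ca ratios: three otoliths, four ablations each.
        rng = np.random.default_rng(0)
        data = {f"otolith_{i}": rng.normal(loc=i, scale=0.5, size=4) for i in range(3)}
        print(variance_components(data))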

  3. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    Science.gov (United States)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

    In this communication, we propose a novel design strategy of multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique, known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sum of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete coefficient FIR filter have been considered. The role of crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies of multiplier-less FIR filter have also been included in this article for the purpose of comparison. Critical analysis of the result unambiguously establishes the usefulness of our proposed approach for the hardware efficient design of digital filter.
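
    To illustrate the coefficient representation referred to above, the sketch below encodes real-valued filter coefficients as sums of signed powers-of-two using a crude greedy quantizer and counts the implied adders as a rough hardware-cost proxy. It is not the self-organising random immigrants GA itself; the example coefficients, term budget, and minimum exponent are assumptions.

        import numpy as np

        def spt_value(terms):
            """Value of a coefficient stored as signed powers-of-two,
            e.g. [(+1, -2), (-1, -5)] -> 2**-2 - 2**-5 = 0.21875."""
            return sum(sign * 2.0 ** exponent for sign, exponent in terms)

        def quantize_to_spt(coeff, n_terms=3, min_exp=-8):
            """Greedy quantization of a real coefficient into at most n_terms
            signed powers-of-two (a crude stand-in for the GA search)."""
            terms, residual = [], coeff
            for _ in range(n_terms):
                if residual == 0:
                    break
                exponent = int(np.clip(round(np.log2(abs(residual))), min_exp, 0))
                sign = 1 if residual > 0 else -1
                terms.append((sign, exponent))
                residual -= sign * 2.0 ** exponent
            return terms

        h = [0.0223, -0.0912, 0.3127]                       # hypothetical impulse response taps
        encoded = [quantize_to_spt(c) for c in h]
        adders = sum(max(len(t) - 1, 0) for t in encoded)   # rough hardware-cost proxy
        print(encoded, adders, [round(spt_value(t), 5) for t in encoded])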

  4. Sample Processor for Life on Icy Worlds (SPLIce): Design and Test Results

    Science.gov (United States)

    Chinn, Tori N.; Lee, Anthony K.; Boone, Travis D.; Tan, Ming X.; Chin, Matthew M.; McCutcheon, Griffin C.; Horne, Mera F.; Padgen, Michael R.; Blaich, Justin T.; Forgione, Joshua B.

    2017-01-01

    We report the design, development, and testing of the Sample Processor for Life on Icy Worlds (SPLIce) system, a microfluidic sample processor to enable autonomous detection of signatures of life and measurements of habitability parameters in Ocean Worlds. This monolithic fluid processing-and-handling system (Figure 1; mass 0.5 kg) retrieves a 50-µL-volume sample and prepares it to supply a suite of detection instruments, each with unique preparation needs. SPLIce has potential applications in orbiter missions that sample ocean plumes, such as those found at Saturn's icy moon Enceladus, or landed missions on the surface of icy satellites, such as Jupiter's moon Europa. Answering the question "Are we alone in the universe?" is captivating and exceptionally challenging. Even general criteria that define life very broadly include a significant role for water [1,2]. Searches for extinct or extant life therefore prioritize locations of abundant water whether in ancient (Mars), or present (Europa and Enceladus) times. Only two previous planetary missions had onboard fluid processing: the Viking Biology Experiments [3] and Phoenix's Wet Chemistry Laboratory (WCL) [4]. SPLIce differs crucially from those systems, including its capability to process and distribute µL-volume samples and its integrated, autonomous control of a wide range of fluidic functions, including: 1) retrieval of fluid samples from an evacuated sample chamber; 2) onboard multi-year storage of dehydrated reagents; 3) integrated pressure, pH, and conductivity measurement; 4) filtration and retention of insoluble particles for microscopy; 5) dilution or vacuum-driven concentration of samples to accommodate instrument working ranges; 6) removal of gas bubbles from sample aliquots; 7) unidirectional flow (check valves); 8) active flow-path selection (solenoid-actuated valves); 9) metered pumping in 100 nL volume increments. The SPLIce manifold, made of three thermally fused layers of precision-machined cyclo

  5. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge.

    Directory of Open Access Journals (Sweden)

    Rosa Catarino

    Full Text Available Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
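
    For reference, the agreement measure reported above (Cohen's kappa) can be computed from paired binary test results as in the minimal sketch below; the simulated positives and negatives are placeholders, not the trial data.

        import numpy as np

        def cohens_kappa(a, b):
            """Observed agreement and Cohen's kappa for two paired binary results."""
            a, b = np.asarray(a, dtype=int), np.asarray(b, dtype=int)
            p_obs = np.mean(a == b)
            p_a, p_b = a.mean(), b.mean()
            p_exp = p_a * p_b + (1 - p_a) * (1 - p_b)       # chance agreement from marginals
            return p_obs, (p_obs - p_exp) / (1 - p_exp)

        # Hypothetical paired results (1 = high-risk HPV positive) for s-DRY vs. s-FTA.
        rng = np.random.default_rng(3)
        s_dry = rng.integers(0, 2, 130)
        s_fta = np.where(rng.random(130) < 0.8, s_dry, 1 - s_dry)   # ~80% raw agreement
        print(cohens_kappa(s_dry, s_fta))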

  6. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling

    Directory of Open Access Journals (Sweden)

    Fuqun Zhou

    2016-10-01

    Full Text Available Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests’ features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.

  7. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests' features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
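
    A minimal sketch of the variable-importance idea described above: rank the time-series variables with a Random Forest, keep the top half, and compare accuracy against the full set. The feature matrix is synthetic, standing in for stacked 10-day MODIS composites, and scikit-learn is assumed to be available.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for a time-series stack: 500 pixels x 40 composite variables.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 40))
        y = (X[:, 3] + 0.5 * X[:, 17] + rng.normal(scale=0.5, size=500) > 0).astype(int)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        full_acc = rf.score(X_te, y_te)

        # Keep the top half of variables by importance and refit.
        top = np.argsort(rf.feature_importances_)[::-1][: X.shape[1] // 2]
        rf_sub = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr[:, top], y_tr)
        print(full_acc, rf_sub.score(X_te[:, top], y_te))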

  8. Tai Chi for treating knee osteoarthritis: Designing a long-term follow up randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Rones Ramel

    2008-07-01

    Full Text Available Abstract Background Knee Osteoarthritis (KOA) is a major cause of pain and functional impairment among elders. Currently, there are neither feasible preventive intervention strategies nor effective medical remedies for the management of KOA. Tai Chi, an ancient Chinese mind-body exercise that is reported to enhance muscle function, balance and flexibility, and to reduce pain, depression and anxiety, may safely and effectively be used to treat KOA. However, current evidence is inconclusive. Our study examines the effects of a 12-week Tai Chi program compared with an attention control (wellness education and stretching) on pain, functional capacity, psychosocial variables, joint proprioception and health status in elderly people with KOA. The study will be completed by July 2009. Methods/Design Forty eligible patients, age > 55 yr, BMI ≤ 40 kg/m2 with tibiofemoral osteoarthritis (American College of Rheumatology criteria) are identified and randomly allocated to either Tai Chi (10 modified forms from classical Yang style Tai Chi) or attention control (wellness education and stretching). The 60-minute intervention sessions take place twice weekly for 12 weeks. The study is conducted at an urban tertiary medical center in Boston, Massachusetts. The primary outcome measure is the Western Ontario and McMaster Universities (WOMAC) pain subscale at 12 weeks. Secondary outcomes include weekly WOMAC pain, function and stiffness scores, patient and physician global assessments, lower-extremity function, knee proprioception, depression, self-efficacy, social support, health-related quality of life, adherence and occurrence of adverse events after 12, 24 and 48 weeks. Discussion In this article, we present the challenges of designing a randomized controlled trial with long-term follow up. The challenges encountered in this design are: strategies for recruitment, avoidance of selection bias, the actual practice of Tai Chi, and the maximization of adherence

  9. Reduction of claustrophobia during magnetic resonance imaging: methods and design of the "CLAUSTRO" randomized controlled trial.

    Science.gov (United States)

    Enders, Judith; Zimmermann, Elke; Rief, Matthias; Martus, Peter; Klingebiel, Randolf; Asbach, Patrick; Klessen, Christian; Diederichs, Gerd; Bengner, Thomas; Teichgräber, Ulf; Hamm, Bernd; Dewey, Marc

    2011-02-10

    Magnetic resonance (MR) imaging has been described as the most important medical innovation in the last 25 years. Over 80 million MR procedures are now performed each year and on average 2.3% (95% confidence interval: 2.0 to 2.5%) of all patients scheduled for MR imaging suffer from claustrophobia. Thus, prevention of MR imaging by claustrophobia is a common problem and approximately 2,000,000 MR procedures worldwide cannot be completed due to this situation. Patients with claustrophobic anxiety are more likely to be frightened and experience a feeling of confinement or being closed in during MR imaging. In these patients, conscious sedation and additional sequences (after sedation) may be necessary to complete the examinations. Further improvements in MR design appear to be essential to alleviate this situation and broaden the applicability of MR imaging. A more open scanner configuration might help reduce claustrophobic reactions while maintaining image quality and diagnostic accuracy. We propose to analyze the rate of claustrophobic reactions, clinical utility, image quality, patient acceptance, and cost-effectiveness of an open MR scanner in a randomized comparison with a recently designed short-bore but closed scanner with 97% noise reduction. The primary aim of this study is thus to determine whether an open MR scanner can reduce claustrophobic reactions, thereby enabling more examinations of claustrophobic patients without incurring the safety issues associated with conscious sedation. In this manuscript we detail the methods and design of the prospective "CLAUSTRO" trial. This randomized controlled trial will be the first direct comparison of open vertical and closed short-bore MR systems in regards to claustrophobia and image quality as well as diagnostic utility. ClinicalTrials.gov: NCT00715806.

  10. [Methodological Aspects of the Sampling Design for the 2015 National Mental Health Survey].

    Science.gov (United States)

    Rodríguez, Nelcy; Rodríguez, Viviana Alejandra; Ramírez, Eugenia; Cediel, Sandra; Gil, Fabián; Rondón, Martín Alonso

    2016-12-01

    The WHO has encouraged the development, implementation and evaluation of policies related to mental health all over the world. In Colombia, within this framework and promoted by the Ministry of Health and Social Protection, as well as being supported by Colciencias, the fourth National Mental Health Survey (NMHST) was conducted using an observational cross-sectional study. According to the context and following the guidelines and sampling design, a summary of the methodology used for this sampling process is presented. The fourth NMHST used the Homes Master Sample for Studies in Health from the National System of Studies and Population Surveys for Health to calculate its sample. This Master Sample was developed and implemented in the year 2013 by the Ministry of Social Protection. This study included the non-institutionalised civilian population divided into four age groups: children 7-11 years, adolescents 12-17 years, adults 18-44 years, and adults 45 years or older. The sample size calculation was based on the prevalences reported in other studies for the outcomes of mental disorders, depression, suicide, associated morbidity, and alcohol use. A probabilistic, cluster, stratified and multistage selection process was used. Expansion factors to the total population were calculated. A total of 15,351 completed surveys were collected, distributed according to the age groups as follows: 2,727 for 7-11 years; 1,754 for 12-17 years; 5,889 for 18-44 years; and 4,981 for ≥45 years. All the surveys were distributed in five regions: Atlantic, Oriental, Bogotá, Central and Pacific. A sufficient number of surveys were collected in this study to obtain a more precise approximation of the mental problems and disorders at the regional and national level. Copyright © 2016 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
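
    As a small illustration of the expansion factors mentioned above, the sketch below builds a sampling weight for a multistage design as the reciprocal of the product of stage-wise selection probabilities; the three stages and probabilities shown are hypothetical, not the survey's actual design quantities.

        def expansion_factor(stage_probs):
            """Expansion factor for a respondent in a multistage design:
            the reciprocal of the product of selection probabilities at each stage."""
            weight = 1.0
            for p in stage_probs:
                weight /= p
            return weight

        # Hypothetical three-stage selection: municipality, block, household.
        print(expansion_factor([0.02, 0.10, 0.25]))   # -> 2000.0 people represented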

  11. Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey

    Directory of Open Access Journals (Sweden)

    Steven R. Corman

    2013-12-01

    Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media is still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.

  12. Enhancing positive parent-child interactions and family functioning in a poverty sample: a randomized control trial.

    Science.gov (United States)

    Negrão, Mariana; Pereira, Mariana; Soares, Isabel; Mesman, Judi

    2014-01-01

    This study tested the attachment-based intervention program Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) in a randomized controlled trial with poor families of toddlers screened for professional's concerns about the child's caregiving environment. The VIPP-SD is an evidence-based intervention, but has not yet been tested in the context of poverty. The sample included 43 families with 1- to 4-year-old children: mean age at the pretest was 29 months and 51% were boys. At the pretest and posttest, mother-child interactions were observed at home, and mothers reported on family functioning. The VIPP-SD proved to be effective in enhancing positive parent-child interactions and positive family relations in a severely deprived context. Results are discussed in terms of implications for support services provided to such poor families in order to reduce intergenerational risk transmission.

  13. Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology

    Science.gov (United States)

    Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan

    2012-01-01

    A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for Asteroid Sample Return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in Multibody Simulation Software Testbed for Small-Body Exploration and Sampling, (NPO-47196) NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p.54. This current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. In order to demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations that demonstrate the designed spacecraft meets key requirements. These requirements state that contact force and duration must be sufficient to ensure that enough material from the surface is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact and must be able to recover and ascend to a safe position, and maintain velocity and orientation after the contact.

  14. Sampling Design and Enumeration Statistics for Bacteria Extracted from Marine Sediments †

    OpenAIRE

    Montagna, Paul A.

    1982-01-01

    The spatial and temporal distributions of marine bacteria were studied at both a muddy and a sandy subtidal site in North Inlet, S.C. The sampling design was hierarchical, since subsampling (by a dilution series) of the sediments was necessary to count bacterial cells using acridine orange epifluorescence microscopy. The cell count data fit a log-normal distribution. The abundance of bacteria was 10^11 g^-1 (dry weight) of mud and 10^9 g^-1 (dry weight) of sand. Variance component analyses demonst...

  15. Investigating causal associations between use of nicotine, alcohol, caffeine, and cannabis: A two-sample bidirectional Mendelian randomization study.

    Science.gov (United States)

    Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M

    2018-01-15

    Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine, and cannabis use. Two-sample MR was employed to estimate bi-directional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week), and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these did not hold up with the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine, and cannabis use. This article is protected by copyright. All rights reserved.
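
    For orientation, one common two-sample MR estimator, the inverse-variance-weighted (IVW) combination of per-SNP Wald ratios, can be sketched as below; the summary statistics in the example are invented, and the sketch omits the MR-Egger and weighted-median sensitivity analyses mentioned above.

        import numpy as np

        def ivw_mr(beta_exposure, beta_outcome, se_outcome):
            """IVW estimate: per-SNP Wald ratios (beta_outcome / beta_exposure)
            combined with weights equal to 1 / var(ratio), first-order approximation."""
            bx = np.asarray(beta_exposure, dtype=float)
            by = np.asarray(beta_outcome, dtype=float)
            se = np.asarray(se_outcome, dtype=float)
            ratios = by / bx
            weights = (bx / se) ** 2
            estimate = np.sum(weights * ratios) / np.sum(weights)
            std_err = np.sqrt(1.0 / np.sum(weights))
            return estimate, std_err

        # Hypothetical GWAS summary statistics for three instruments.
        est, se = ivw_mr([0.08, 0.05, 0.11], [0.016, 0.012, 0.020], [0.010, 0.010, 0.012])
        print(est, est - 1.96 * se, est + 1.96 * se)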

  16. Designing a sampling scheme to reveal correlations between weeds and soil properties at multiple spatial scales.

    Science.gov (United States)

    Metcalfe, H; Milne, A E; Webster, R; Lark, R M; Murdoch, A J; Storkey, J

    2016-02-01

    Weeds tend to aggregate in patches within fields, and there is evidence that this is partly owing to variation in soil properties. Because the processes driving soil heterogeneity operate at various scales, the strength of the relations between soil properties and weed density would also be expected to be scale-dependent. Quantifying these effects of scale on weed patch dynamics is essential to guide the design of discrete sampling protocols for mapping weed distribution. We developed a general method that uses novel within-field nested sampling and residual maximum-likelihood (reml) estimation to explore scale-dependent relations between weeds and soil properties. We validated the method using a case study of Alopecurus myosuroides in winter wheat. Using reml, we partitioned the variance and covariance into scale-specific components and estimated the correlations between the weed counts and soil properties at each scale. We used variograms to quantify the spatial structure in the data and to map variables by kriging. Our methodology successfully captured the effect of scale on a number of edaphic drivers of weed patchiness. The overall Pearson correlations between A. myosuroides and soil organic matter and clay content were weak and masked the stronger correlations at >50 m. Knowing how the variance was partitioned across the spatial scales, we optimised the sampling design to focus sampling effort at those scales that contributed most to the total variance. The methods have the potential to guide patch spraying of weeds by identifying areas of the field that are vulnerable to weed establishment.
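
    To illustrate the spatial-structure step described above, the sketch below computes an empirical semivariogram from point observations, the quantity that would then be modelled and used for kriging; the coordinates and values are simulated placeholders, and the REML variance partitioning itself is not shown.

        import numpy as np

        def empirical_variogram(coords, values, bin_edges):
            """Empirical semivariance per lag-distance bin:
            gamma(h) = mean of 0.5 * (z_i - z_j)**2 over pairs separated by ~h."""
            coords = np.asarray(coords, dtype=float)
            values = np.asarray(values, dtype=float)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            sq = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices(len(values), k=1)        # each pair counted once
            dists, semivars = d[iu], sq[iu]
            gamma = []
            for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
                mask = (dists >= lo) & (dists < hi)
                gamma.append(semivars[mask].mean() if mask.any() else np.nan)
            return np.array(gamma)

        # Hypothetical weed-count residuals at random field locations (metres).
        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 200, size=(80, 2))
        z = np.sin(xy[:, 0] / 40.0) + rng.normal(scale=0.3, size=80)
        print(empirical_variogram(xy, z, np.arange(0, 110, 10)))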

  17. A new inclinable shear apparatus for large sample testing: design and perspectives

    Science.gov (United States)

    Rickli, Ch.; Burger, S.; Herranhof, H.; Michel, T.; Moser, A.; Tröger, A.

    2012-04-01

    Soil bioengineering methods are commonly applied to protect slopes from erosion and shallow landslides. However, the precise effectiveness of vegetation regarding slope stability is difficult to determine. Root reinforcement can be evaluated directly in terms of the additional shear strength provided by roots in root-reinforced soils. In this context we designed a shearing device for large scale planted soil samples with the aim to provide information about the contribution of plant roots to soil shear strength. The apparatus allows investigations on soil block samples with roots of different plant species commonly used for remediation and habitat restoration purposes under almost natural conditions. Shear stress results of rooted soils can be compared to those of un-vegetated soils with similar soil types. New and different to conventionally applied concepts, shear tests can be performed at variable inclinations up to 45°, considering plant growth at the corresponding angle of slope. Furthermore, experiments can be conducted at variable depth of the shearing zone, with low normal stresses and low shearing rates of ≥ 0.01 mm/min. The measurements involve shearing force, shearing displacement (up to 200 mm), normal stress, and normal displacement (dilatancy/consolidation), all recorded with high accuracy. Saturated and partially saturated soil samples containing roots can be tested with the soil humidity measured near the shearing zone. An automatic data logging system was designed for real-time visualisation of the different parameters and recording all required data in conjunction with the described direct shear apparatus. The device for soil samples of up to 500 x 500 x 400 mm offers a unique possibility to span the gap between investigations concerning vegetation effects on small planted soil specimens (e.g. triaxial tests) and the calculation of slope stability on entire slopes with vegetation. In addition, it combines the advantages of laboratory tests under

  18. Brief cognitive behavioral therapy in primary care: a hybrid type 2 patient-randomized effectiveness-implementation design.

    Science.gov (United States)

    Cully, Jeffrey A; Armento, Maria E A; Mott, Juliette; Nadorff, Michael R; Naik, Aanand D; Stanley, Melinda A; Sorocco, Kristen H; Kunik, Mark E; Petersen, Nancy J; Kauth, Michael R

    2012-07-11

    Despite the availability of evidence-based psychotherapies for depression and anxiety, they are underused in non-mental health specialty settings such as primary care. Hybrid effectiveness-implementation designs have the potential to evaluate clinical and implementation outcomes of evidence-based psychotherapies to improve their translation into routine clinical care practices. This protocol article discusses the study methodology and implementation strategies employed in an ongoing, hybrid, type 2 randomized controlled trial with two primary aims: (1) to determine whether a brief, manualized cognitive behavioral therapy administered by Veterans Affairs Primary Care Mental Health Integration program clinicians is effective in treating depression and anxiety in a sample of medically ill (chronic cardiopulmonary diseases) primary care patients and (2) to examine the acceptability, feasibility, and preliminary outcomes of a focused implementation strategy on improving adoption and fidelity of brief cognitive behavioral therapy at two Primary Care-Mental Health Integration clinics. The study uses a hybrid type 2 effectiveness/implementation design to simultaneously test clinical effectiveness and to collect pilot data on a multifaceted implementation strategy that includes an online training program, audit and feedback of session content, and internal and external facilitation. Additionally, the study engages the participation of an advisory council consisting of stakeholders from Primary Care-Mental Health Integration, as well as regional and national mental health leaders within the Veterans Administration. It targets recruitment of 320 participants randomized to brief cognitive behavioral therapy (n = 200) or usual care (n = 120). Both effectiveness and implementation outcomes are being assessed using mixed methods, including quantitative evaluation (e.g., intent-to-treat analyses across multiple time points) and qualitative methods (e.g., focus interviews

  19. Brief cognitive behavioral therapy in primary care: a hybrid type 2 patient-randomized effectiveness-implementation design

    Directory of Open Access Journals (Sweden)

    Cully Jeffrey A

    2012-07-01

    Full Text Available Abstract Background Despite the availability of evidence-based psychotherapies for depression and anxiety, they are underused in non-mental health specialty settings such as primary care. Hybrid effectiveness-implementation designs have the potential to evaluate clinical and implementation outcomes of evidence-based psychotherapies to improve their translation into routine clinical care practices. Methods This protocol article discusses the study methodology and implementation strategies employed in an ongoing, hybrid, type 2 randomized controlled trial with two primary aims: (1) to determine whether a brief, manualized cognitive behavioral therapy administered by Veterans Affairs Primary Care Mental Health Integration program clinicians is effective in treating depression and anxiety in a sample of medically ill (chronic cardiopulmonary diseases) primary care patients and (2) to examine the acceptability, feasibility, and preliminary outcomes of a focused implementation strategy on improving adoption and fidelity of brief cognitive behavioral therapy at two Primary Care-Mental Health Integration clinics. The study uses a hybrid type 2 effectiveness/implementation design to simultaneously test clinical effectiveness and to collect pilot data on a multifaceted implementation strategy that includes an online training program, audit and feedback of session content, and internal and external facilitation. Additionally, the study engages the participation of an advisory council consisting of stakeholders from Primary Care-Mental Health Integration, as well as regional and national mental health leaders within the Veterans Administration. It targets recruitment of 320 participants randomized to brief cognitive behavioral therapy (n = 200) or usual care (n = 120). Both effectiveness and implementation outcomes are being assessed using mixed methods, including quantitative evaluation (e.g., intent-to-treat analyses across multiple time points and

  20. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning.

    Science.gov (United States)

    Baykal, Cenk; Torres, Luis G; Alterovitz, Ron

    2015-09-28

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.

  1. Tai Chi for osteopenic women: design and rationale of a pragmatic randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Fischer Mary

    2010-03-01

    Full Text Available Abstract Background Post-menopausal osteopenic women are at increased risk for skeletal fractures. Current osteopenia treatment guidelines include exercise, however, optimal exercise regimens for attenuating bone mineral density (BMD) loss, or for addressing other fracture-related risk factors (e.g. poor balance, decreased muscle strength) are not well-defined. Tai Chi is an increasingly popular weight-bearing mind-body exercise that has been reported to positively impact BMD dynamics and improve postural control, however, current evidence is inconclusive. This study will determine the effectiveness of Tai Chi in reducing rates of bone turnover in post-menopausal osteopenic women, compared with standard care, and will preliminarily explore biomechanical processes that might inform how Tai Chi impacts BMD and associated fracture risks. Methods/Design A total of 86 post-menopausal women, aged 45-70y, with a T-score of the hip and/or spine between -1.0 and -2.5, have been recruited from primary care clinics of a large healthcare system based in Boston. They have been randomized to a group-based 9-month Tai Chi program plus standard care or to standard care only. A unique aspect of this trial is its pragmatic design, which allows participants randomized to Tai Chi to choose from a pre-screened list of community-based Tai Chi programs. Interviewers masked to participants' treatment group assess outcomes at baseline and 3 and 9 months after randomization. Primary outcomes are serum markers of bone resorption (C-terminal cross-linking telopeptide of type I collagen), bone formation (osteocalcin), and BMD of the lumbar spine and proximal femur (dual-energy X-ray absorptiometry). Secondary outcomes include health-related quality-of-life, exercise behavior, and psychological well-being. In addition, kinetic and kinematic characterization of gait, standing, and rising from a chair are assessed in a subset of participants (n = 16) to explore the feasibility of modeling skeletal

  2. Effectiveness of adjuvant occupational therapy in employees with depression: design of a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Hees Hiske L

    2010-09-01

    Full Text Available Abstract Background Major depressive disorder is among the medical conditions with the highest negative impact on work outcome. However, little is known regarding evidence-based interventions targeting the improvement of work outcomes in depressed employees. In this paper, the design of a randomized controlled trial is presented in order to evaluate the effectiveness of adjuvant occupational therapy in employees with depression. This occupational intervention is based on an earlier intervention, which was designed and proven effective by our research group, and is the only intervention to date that specifically targets work outcome in depressed employees. Methods/Design In a two-arm randomized controlled trial, a total of 117 participants are randomized to either 'care as usual' or 'care as usual' with the addition of occupational therapy. Patients included in the study are employees who are absent from work due to depression for at least 25% of their contract hours, and who have a possibility of returning to their own or a new job. The occupational intervention consists of six individual sessions, eight group sessions and a work-place visit over a 16-week period. By increasing exposure to the working environment, and by stimulating communication between employer and employee, the occupational intervention aims to enhance self-efficacy and the acquisition of more adaptive coping strategies. Assessments take place at baseline, and at 6, 12, and 18-month follow-ups. Primary outcome measure is work participation (hours of absenteeism and time until work resumption). Secondary outcome measures are work functioning, symptomatology, health-related quality of life, and neurocognitive functioning. In addition, cost-effectiveness is evaluated from a societal perspective. Finally, mechanisms of change (intermediate outcomes) and potential patient-treatment matching variables are investigated. Discussion This study hopes to provide valuable knowledge

  3. SMART trial: A randomized clinical trial of self-monitoring in behavioral weight management-design and baseline findings.

    Science.gov (United States)

    Burke, Lora E; Styn, Mindi A; Glanz, Karen; Ewing, Linda J; Elci, Okan U; Conroy, Margaret B; Sereika, Susan M; Acharya, Sushama D; Music, Edvin; Keating, Alison L; Sevick, Mary Ann

    2009-11-01

    The primary form of treatment for obesity today is behavioral therapy. Self-monitoring of diet and physical activity plays an important role in interventions targeting behavior and weight change. The SMART weight loss trial examined the impact of replacing the standard paper record used for self-monitoring with a personal digital assistant (PDA). This paper describes the design, methods, intervention, and baseline sample characteristics of the SMART trial. The SMART trial used a 3-group design to determine the effects of different modes of self-monitoring on short- and long-term weight loss and on adherence to self-monitoring in a 24-month intervention. Participants were randomized to one of three conditions: (1) use of a standard paper record (PR); (2) use of a PDA with dietary and physical activity software (PDA); or (3) use of a PDA with the same software plus a customized feedback program (PDA + FB). We screened 704 individuals and randomized 210. There were statistically but not clinically significant differences among the three cohorts in age, education, HDL cholesterol, blood glucose, and systolic blood pressure. At 24 months, the retention rate for the first of three cohorts was 90%. To the best of our knowledge, the SMART trial is the first large study to compare different methods of self-monitoring in a behavioral weight loss intervention and to compare the use of PDAs to conventional paper records. This study has the potential to reveal significant details about self-monitoring patterns and whether technology can improve adherence to this vital intervention component.

  4. A randomized trial of the clinical utility of genetic testing for obesity: design and implementation considerations.

    Science.gov (United States)

    Wang, Catharine; Gordon, Erynn S; Stack, Catharine B; Liu, Ching-Ti; Norkunas, Tricia; Wawak, Lisa; Christman, Michael F; Green, Robert C; Bowen, Deborah J

    2014-02-01

    Obesity rates in the United States have escalated in recent decades and present a major challenge in public health prevention efforts. Currently, testing to identify genetic risk for obesity is readily available through several direct-to-consumer companies. Despite the availability of this type of testing, there is a paucity of evidence as to whether providing people with personal genetic information on obesity risk will facilitate or impede desired behavioral responses. We describe the key issues in the design and implementation of a randomized controlled trial examining the clinical utility of providing genetic risk information for obesity. Participants are being recruited from the Coriell Personalized Medicine Collaborative, an ongoing, longitudinal research cohort study designed to determine the utility of personal genome information in health management and clinical decision making. The primary focus of the ancillary Obesity Risk Communication Study is to determine whether genetic risk information adds value to traditional communication efforts for obesity, which are based on lifestyle risk factors. The trial employs a 2 × 2 factorial design in order to examine the effects of providing genetic risk information for obesity, alone or in combination with lifestyle risk information, on participants' psychological responses, behavioral intentions, health behaviors, and weight. The factorial design generated four experimental arms based on communication of estimated risk to participants: (1) no risk feedback (control), (2) genetic risk only, (3) lifestyle risk only, and (4) both genetic and lifestyle risk (combined). Key issues in study design pertained to the selection of algorithms to estimate lifestyle risk and determination of information to be provided to participants assigned to each experimental arm to achieve a balance between clinical standards and methodological rigor. Following the launch of the trial in September 2011, implementation challenges

  5. Healthy School, Happy School: Design and Protocol for a Randomized Clinical Trial Designed to Prevent Weight Gain in Children

    Directory of Open Access Journals (Sweden)

    Daniela Schneid Schuh

    Full Text Available Abstract Background: Schools have become a key setting for the promotion of health and obesity interventions, bringing the development of critical awareness to the construction and promotion of a healthy diet, physical activity, and the monitoring of nutritional status in childhood and adolescence. Objectives: To describe a study protocol to evaluate the effectiveness of an intervention designed to improve knowledge of food choices in the school environment. Methods: This is a cluster-randomized, parallel, two-arm study conducted in public elementary and middle schools in Brazil. Participants will be children and adolescents between the ages of 5 and 15 years, of both genders. The intervention will focus on changes in lifestyle, physical activity, and nutritional education. Intervention activities will occur monthly in the school’s multimedia room or sports court. The control arm will receive the usual recommendations by the school. The primary outcome variables will be anthropometric measures, such as body mass index percentiles, and levels of physical activity assessed by the International Physical Activity Questionnaire. Results: We expect that after the study children will increase their intake of fresh food, reduce excessive consumption of sugary and processed foods, and reduce the hours of sedentary activities. Conclusion: The purpose of starting the dietary intervention at this stage of life is to develop knowledge that will enable healthy choices, providing opportunities for a better future for this population.

  6. Healthy School, Happy School: Design and Protocol for a Randomized Clinical Trial Designed to Prevent Weight Gain in Children

    Science.gov (United States)

    Schuh, Daniela Schneid; Goulart, Maíra Ribas; Barbiero, Sandra Mari; Sica, Caroline D’Azevedo; Borges, Raphael; Moraes, David William; Pellanda, Lucia Campos

    2017-01-01

    Background: Schools have become a key setting for the promotion of health and obesity interventions, bringing the development of critical awareness to the construction and promotion of a healthy diet, physical activity, and the monitoring of nutritional status in childhood and adolescence. Objectives: To describe a study protocol to evaluate the effectiveness of an intervention designed to improve knowledge of food choices in the school environment. Methods: This is a cluster-randomized, parallel, two-arm study conducted in public elementary and middle schools in Brazil. Participants will be children and adolescents between the ages of 5 and 15 years, of both genders. The intervention will focus on changes in lifestyle, physical activity, and nutritional education. Intervention activities will occur monthly in the school’s multimedia room or sports court. The control arm will receive the usual recommendations by the school. The primary outcome variables will be anthropometric measures, such as body mass index percentiles, and levels of physical activity assessed by the International Physical Activity Questionnaire. Results: We expect that after the study children will increase their intake of fresh food, reduce excessive consumption of sugary and processed foods, and reduce the hours of sedentary activities. Conclusion: The purpose of starting the dietary intervention at this stage of life is to develop knowledge that will enable healthy choices, providing opportunities for a better future for this population. PMID:28699973

  7. Sample of Dutch FADN 2012 : design principles and quality of the sample of agricultural and horticultural holdings

    NARCIS (Netherlands)

    Veen, van der H.B.; Ge, L.; Meer, van der R.W.; Vrolijk, H.C.J.

    2014-01-01

    The EU Farm Accountancy Data Network (FADN) requires the Netherlands to send bookkeeping data on 1,500 farms to Brussels each year. This task is carried out by LEI and CEI. This report explains the background of the farm sample for the year 2012. All phases from the determination of the selection plan,

  8. Increased statistical power with combined independent randomization tests used with multiple-baseline design.

    Science.gov (United States)

    Tyrrell, Pascal N; Corey, Paul N; Feldman, Brian M; Silverman, Earl D

    2013-06-01

    Physicians often assess the effectiveness of treatments on a small number of patients. Multiple-baseline designs (MBDs), based on the Wampold-Worsham (WW) method of randomization and applied to four subjects, have relatively low power. Our objective was to propose another approach with greater power that does not suffer from the time requirements of the WW method applied to a greater number of subjects. The power of a design that involves the combination of two four-subject MBDs was estimated using computer simulation and compared with the four- and eight-subject designs. The effect of a delayed linear response to treatment on the power of the test was also investigated. Power was found to be adequate (>80%) for a standardized mean difference (SMD) greater than 0.8. The effect size associated with 80% power from combined tests was smaller than that of the single four-subject MBD (SMD=1.3) and comparable with the eight-subject MBD (SMD=0.6). A delayed linear response to the treatment resulted in important reductions in power (20-35%). By combining two four-subject MBD tests, an investigator can detect smaller effect sizes (SMD=0.8) and complete a comparatively timelier and more feasible study. Copyright © 2013 Elsevier Inc. All rights reserved.
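
    The record above reports power from a purpose-built simulation of Wampold-Worsham randomization tests. As a hedged illustration of the underlying idea only, the sketch below estimates by Monte Carlo how much power is gained by combining two independent small-study tests with Fisher's method; two-sample t-tests with four observations per arm stand in for the four-subject MBD randomization tests, and the SMD of 0.8, alpha level, and replication count are assumptions rather than values taken from the study.

```python
# Monte Carlo sketch: power gained by combining two independent small-study
# tests via Fisher's method. Two-sample t-tests stand in for the four-subject
# MBD randomization tests described above; SMD, alpha, and reps are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
smd, n_per_arm, alpha, reps = 0.8, 4, 0.05, 5000

def one_study_pvalue():
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(smd, 1.0, n_per_arm)
    return stats.ttest_ind(treated, control).pvalue

single_hits, combined_hits = 0, 0
for _ in range(reps):
    p1, p2 = one_study_pvalue(), one_study_pvalue()
    single_hits += p1 < alpha
    # Fisher's method: -2*(ln p1 + ln p2) ~ chi-square with 4 df under H0
    _, p_comb = stats.combine_pvalues([p1, p2], method="fisher")
    combined_hits += p_comb < alpha

print(f"power, single study:   {single_hits / reps:.2f}")
print(f"power, combined tests: {combined_hits / reps:.2f}")
```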

  9. A novel design for randomized immuno-oncology clinical trials with potentially delayed treatment effects

    Directory of Open Access Journals (Sweden)

    Pei He

    2015-10-01

    Full Text Available The semi-parametric proportional hazards model is widely adopted in randomized clinical trials with time-to-event outcomes, and the log-rank test is frequently used to detect a potential treatment effect. Immuno-oncology therapies pose unique challenges to the design of a trial as the treatment effect may be delayed, which violates the proportional hazards assumption, and the log-rank test has been shown to markedly lose power under the non-proportional hazards setting. A novel design and analysis approach for immuno-oncology trials is proposed through a piecewise treatment effect function, which is capable of detecting a potentially delayed treatment effect. The number of events required for the trial will be determined to ensure sufficient power for both the overall log-rank test without a delayed effect and the test beyond the delayed period when such a delay exists. The existence of a treatment delay is determined by a likelihood ratio test with resampling. Numerical results show that the proposed design adequately controls the Type I error rate, has a minimal loss in power under the proportional hazards setting and is markedly more powerful than the log-rank test with a delayed treatment effect.
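
    As a hedged illustration of the delayed-effect scenario this design targets, the sketch below generates survival times from a piecewise-exponential hazard in which the treatment effect only begins after a fixed delay; the hazard rates, delay, and arm sizes are illustrative assumptions, not parameters from the paper.

```python
# Sketch: simulate event times for a treatment arm whose hazard only drops
# after a delay, the non-proportional-hazards scenario described above.
# Hazard rates, the delay, and sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def piecewise_exponential(n, h_before, h_after, delay):
    """Inverse-CDF sampling: hazard h_before on [0, delay), h_after afterwards."""
    u = rng.uniform(size=n)
    t_early = -np.log(u) / h_before          # valid only while t < delay
    cum_at_delay = h_before * delay          # cumulative hazard at the delay
    t_late = delay + (-np.log(u) - cum_at_delay) / h_after
    return np.where(t_early < delay, t_early, t_late)

control = piecewise_exponential(300, h_before=0.10, h_after=0.10, delay=6.0)
treated = piecewise_exponential(300, h_before=0.10, h_after=0.05, delay=6.0)  # effect starts at t = 6
print(f"median control time: {np.median(control):.1f}")
print(f"median treated time: {np.median(treated):.1f}")
```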

  10. Delta inflation: a bias in the design of randomized controlled trials in critical care medicine.

    Science.gov (United States)

    Aberegg, Scott K; Richards, D Roxanne; O'Brien, James M

    2010-01-01

    Mortality is the most widely accepted outcome measure in randomized controlled trials of therapies for critically ill adults, but most of these trials fail to show a statistically significant mortality benefit. The reasons for this are unknown. We searched five high impact journals (Annals of Internal Medicine, British Medical Journal, JAMA, The Lancet, New England Journal of Medicine) for randomized controlled trials comparing mortality of therapies for critically ill adults over a ten year period. We abstracted data on the statistical design and results of these trials to compare the predicted delta (delta; the effect size of the therapy compared to control expressed as an absolute mortality reduction) to the observed delta to determine if there is a systematic overestimation of predicted delta that might explain the high prevalence of negative results in these trials. We found 38 trials meeting our inclusion criteria. Only 5/38 (13.2%) of the trials provided justification for the predicted delta. The mean predicted delta among the 38 trials was 10.1% and the mean observed delta was 1.4%. This systematic overestimation of the predicted delta, termed "delta inflation", is a potential reason that these trials have a high rate of negative results. "Absence of evidence is not evidence of absence."
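
    A worked example makes the consequence of delta inflation concrete: under the standard two-proportion sample-size approximation, a trial planned around the mean predicted delta of 10.1% is far smaller than one sized for the mean observed delta of 1.4%. The 30% baseline mortality, alpha, and power used below are assumptions for illustration only.

```python
# Worked example of why an inflated predicted delta under-sizes a trial.
# Standard two-proportion sample-size approximation; the 30% baseline mortality,
# alpha, and power are illustrative assumptions, while the two deltas echo the
# mean predicted (10.1%) and observed (1.4%) values reported above.
from scipy.stats import norm

def n_per_arm(p_control, delta, alpha=0.05, power=0.80):
    p_treat = p_control - delta
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return z**2 * var / delta**2

print(f"n per arm, delta = 10.1%: {n_per_arm(0.30, 0.101):,.0f}")
print(f"n per arm, delta =  1.4%: {n_per_arm(0.30, 0.014):,.0f}")
```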

  11. Design and Implementation of a Randomized Controlled Trial of Genomic Counseling for Patients with Chronic Disease

    Directory of Open Access Journals (Sweden)

    Kevin Sweet

    2014-01-01

    Full Text Available We describe the development and implementation of a randomized controlled trial to investigate the impact of genomic counseling on a cohort of patients with heart failure (HF) or hypertension (HTN), managed at a large academic medical center, the Ohio State University Wexner Medical Center (OSUWMC). Our study is built upon the existing Coriell Personalized Medicine Collaborative (CPMC®). OSUWMC patient participants with chronic disease (CD) receive eight actionable complex disease and one pharmacogenomic test report through the CPMC® web portal. Participants are randomized to either the in-person post-test genomic counseling (active arm) or web-based-only return of results (control arm). Study-specific surveys measure: (1) change in risk perception; (2) knowledge retention; (3) perceived personal control; (4) health behavior change; and, for the active arm, (5) overall satisfaction with genomic counseling. This ongoing partnership has spurred creation of both infrastructure and procedures necessary for the implementation of genomics and genomic counseling in clinical care and clinical research. This included creation of a comprehensive informed consent document and processes for prospective return of actionable results for multiple complex diseases and pharmacogenomics (PGx) through a web portal, and integration of genomic data files and clinical decision support into an EPIC-based electronic medical record. We present this partnership, the infrastructure, genomic counseling approach, and the challenges that arose in the design and conduct of this ongoing trial to inform subsequent collaborative efforts and best genomic counseling practices.

  12. Design and protocol of a randomized multiple behavior change trial: Make Better Choices 2 (MBC2).

    Science.gov (United States)

    Pellegrini, Christine A; Steglitz, Jeremy; Johnston, Winter; Warnick, Jennifer; Adams, Tiara; McFadden, H G; Siddique, Juned; Hedeker, Donald; Spring, Bonnie

    2015-03-01

    Suboptimal diet and inactive lifestyle are among the most prevalent preventable causes of premature death. Interventions that target multiple behaviors are potentially efficient; however, the optimal way to initiate and maintain multiple health behavior changes is unknown. The Make Better Choices 2 (MBC2) trial aims to examine whether sustained healthful diet and activity change are best achieved by targeting diet and activity behaviors simultaneously or sequentially. Study design: Approximately 250 inactive adults with poor-quality diet will be randomized to 3 conditions examining the best way to prescribe healthy diet and activity change. The 3 intervention conditions prescribe: 1) an increase in fruit and vegetable consumption (F/V+), a decrease in sedentary leisure screen time (Sed-), and an increase in physical activity (PA+) simultaneously (Simultaneous); 2) F/V+ and Sed- first, and then sequentially add PA+ (Sequential); or 3) a Stress Management Control that addresses stress, relaxation, and sleep. All participants will receive a smartphone application to self-monitor behaviors and regular coaching calls to help facilitate behavior change during the 9 month intervention. Healthy lifestyle change in fruit/vegetable and saturated fat intakes, sedentary leisure screen time, and physical activity will be assessed at 3, 6, and 9 months. MBC2 is a randomized m-Health intervention examining methods to maximize initiation and maintenance of multiple healthful behavior changes. Results from this trial will provide insight about an optimal technology-supported approach to promote improvement in diet and physical activity. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. The "Healthy Habits, Healthy Girls" randomized controlled trial for girls: study design, protocol, and baseline results.

    Science.gov (United States)

    Leme, Ana Carolina Barco; Philippi, Sonia Tucunduva

    2015-07-01

    The purpose of this article is to describe the study design, protocol, and baseline results of the "Healthy Habits, Healthy Girls" program. The intervention is being evaluated through a randomized controlled trial in 10 public schools in the city of São Paulo, Brazil. Data on the following variables were collected and assessed at baseline and will be reevaluated at 7 and 12 months: body mass index, waist circumference, dietary intake, nutrition, physical activity, social cognitive mediators, physical activity level, sedentary behaviors, self-rated physical status, and overall self-esteem. According to the baseline results, 32.4% and 23.4% of girls were overweight in the intervention and control groups, respectively, and in both groups a higher percentage failed to meet daily recommendations for moderate and vigorous physical activity and maximum screen time (TV, computer, mobile devices). There were no significant differences between the groups for most of the variables, except age (p = 0.000) and waist circumference (p = 0.014). The study showed a gap in the Brazilian literature on protocols for randomized controlled trials to prevent obesity among youth. The current study may thus be an important initial contribution to the field.

  14. Studies with staggered starts: multiple baseline designs and group-randomized trials.

    Science.gov (United States)

    Rhoda, Dale A; Murray, David M; Andridge, Rebecca R; Pennell, Michael L; Hade, Erinn M

    2011-11-01

    Multiple baseline designs (MBDs) have been suggested as alternatives to group-randomized trials (GRTs). We reviewed structural features of MBDs and considered their potential effectiveness in public health research. We also reviewed the effect of staggered starts on statistical power. We reviewed the MBD literature to identify key structural features, recent suggestions that MBDs be adopted in public health research, and the literature on power in GRTs with staggered starts. We also computed power for MBDs and GRTs. The features that have contributed to the success of small MBDs in some fields are not likely to translate well to public health research. MBDs can be more powerful than GRTs under some conditions, but those conditions involve assumptions that require careful evaluation in practice. MBDs will often serve better as a complement to, rather than as an alternative to, GRTs. GRTs may employ staggered starts for logistical or ethical reasons, but this will always increase their duration and will often increase their cost.

  15. Optimum signal input distribution design in the presence of random pointing jitter for intersatellite optical communications

    Science.gov (United States)

    Li, Xin; Ma, Jing; Yu, Siyuan; Tan, Liying; Shen, Tao

    2013-02-01

    Channel capacity is widely investigated for free space optical links to approach high-speed data-rate communication. Instead of the traditional equiprobable binary symbol input distribution, an optimum input distribution is proposed with respect to channel capacity by maximizing mutual information for intersatellite optical communications in the presence of random pointing jitter. It is shown that the optimum input distribution varies with the pointing-jitter variance σ, the laser beam divergence angle w0, and the normalized intensity threshold IT. For the traditional normalized intensity threshold IT=0.5, the optimum input distribution ranges from about p(x=0)=0.52 for weak pointing jitter to about p(x=0)=0.24 for strong pointing jitter, given the same laser beam divergence angle. The results obtained in this paper will be useful for intersatellite optical communication system design.
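
    The paper's pointing-jitter channel model is not reproduced here; as a hedged sketch of the general principle only, the code below grid-searches the binary input distribution p(x=0) that maximizes mutual information for an assumed asymmetric binary channel whose crossover probabilities are placeholders.

```python
# Sketch of the underlying capacity idea: grid-search the binary input
# distribution p(x=0) that maximizes mutual information. The conditional
# probabilities of this asymmetric channel are placeholders, not the
# pointing-jitter channel model of the paper.
import numpy as np

# p(y=1 | x): an asymmetric binary channel (illustrative values)
p_y1_given_x = {0: 0.05, 1: 0.70}

def mutual_information(p0):
    px = {0: p0, 1: 1.0 - p0}
    py1 = sum(px[x] * p_y1_given_x[x] for x in (0, 1))
    py = {0: 1.0 - py1, 1: py1}
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            p_y_given_x = p_y1_given_x[x] if y == 1 else 1.0 - p_y1_given_x[x]
            joint = px[x] * p_y_given_x
            if joint > 0:
                mi += joint * np.log2(p_y_given_x / py[y])
    return mi

grid = np.linspace(0.01, 0.99, 981)
best = max(grid, key=mutual_information)
print(f"optimum p(x=0) = {best:.3f}, I(X;Y) = {mutual_information(best):.3f} bits")
```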

  16. H∞ Networked Cascade Control System Design for Turboshaft Engines with Random Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Xiaofeng Liu

    2017-01-01

    Full Text Available The distributed control architecture becomes more and more important in future gas turbine engine control systems, in which the sensors and actuators will be connected to the controllers via a network. Therefore, the control problem of network-enabled high-performance distributed engine control (DEC) has come to play an important role in modern gas turbine control systems; at the same time, owing to the properties of the network, packet dropouts must be considered. This study introduces a distributed control system architecture based on a networked cascade control system (NCCS). Typical turboshaft engine distributed controllers are designed based on the NCCS framework with H∞ state feedback under random packet dropouts. Sufficient robust stability conditions are derived via the Lyapunov stability theory and linear matrix inequality approach. Simulations illustrate the effectiveness of the presented method.

  17. Design and protocol of the weight loss lottery- a cluster randomized trial.

    Science.gov (United States)

    van der Swaluw, Koen; Lambooij, Mattijs S; Mathijssen, Jolanda J P; Schipper, Maarten; Zeelenberg, Marcel; Polder, Johan J; Prast, Henriëtte M

    2016-07-01

    People often intend to exercise but find it difficult to attend their gyms on a regular basis. At times, people seek and accept deadlines with consequences to realize their own goals (i.e. commitment devices). The aim of our cluster randomized controlled trial is to test whether a lottery-based commitment device can promote regular gym attendance. The winners of the lottery always get feedback on the outcome but can only claim their prize if they attended their gyms on a regular basis. In this paper we present the design and baseline characteristics of a three-arm trial which is performed with 163 overweight participants in six in-company fitness centers in the Netherlands. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Determination of cathinones and other stimulant, psychedelic, and dissociative designer drugs in real hair samples.

    Science.gov (United States)

    Salomone, Alberto; Gazzilli, Giulia; Di Corcia, Daniele; Gerace, Enrico; Vincenti, Marco

    2016-03-01

    The detection of new psychoactive substances (NPS) in hair has proved to provide insight into their current diffusion among the population and the social characteristics of these synthetic drugs' users. Therefore, a UHPLC-MS/MS method was developed in order to determine 31 stimulant and psychedelic substituted phenethylamines and dissociative drugs in hair samples. The method proved to be simple, fast, specific, and sensitive. The absence of matrix interferents, together with excellent repeatability of both retention times and relative abundances of diagnostic transitions, allowed the correct identification of all analytes tested. The method showed optimal linearity in the interval 10-1000 pg/mg, with correlation coefficient values varying between 0.9981 and 0.9997. Quantitation limits ranged from 1.8 pg/mg for 4-methoxyphencyclidine (4-MeO-PCP) up to 35 pg/mg for 6-(2-aminopropyl)benzofuran (6-APB). The method was applied to (i) 23 real samples taken from proven MDMA and ketamine abusers and (ii) 54 real hair samples which had previously tested negative during regular drug screening for driver's license recovery. Six samples tested positive for at least one target analyte. Methoxetamine (MXE) was found in three cases (range of concentration 7.7-27 pg/mg); mephedrone (4-MMC) was found in two cases (50-59 pg/mg), while one sample tested positive for methylone at 28 pg/mg. Other positive findings included 4-methylethcathinone (4-MEC), alpha-pyrrolidinovalerophenone (α-PVP), 4-fluoroamphetamine (4-FA), 3,4-methylenedioxypyrovalerone (MDPV), and diphenidine. The present study confirms the increasing diffusion of new designer drugs with enhanced stimulant activity among the target population of poly-abuse consumers.

  19. Estimating the prevalence of sensitive behaviour and cheating with a dual design for direct questioning and randomized response.

    NARCIS (Netherlands)

    Van den Hout, A.D.L.; Bockenholt, U.; Van der Heijden, P.G.M.

    2010-01-01

    Randomized response is a misclassification design to estimate the prevalence of sensitive behaviour. Respondents who do not follow the instructions of the design are considered to be cheating. A mixture model is proposed to estimate the prevalence of sensitive behaviour and cheating in the case of a
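
    The mixture model with a cheating class proposed in the record is not reproduced here, but the classic Warner randomized-response estimator it builds on is easy to sketch: the design deliberately misclassifies answers with known probability, and the prevalence is recovered by inverting that misclassification. All numbers below are illustrative.

```python
# Sketch of the classic Warner randomized-response estimator that this kind of
# misclassification design builds on (the paper's mixture model with a cheating
# class is not reproduced here). Numbers are illustrative.
# With probability p the respondent answers the sensitive question, otherwise
# its complement, so  lambda = p*pi + (1-p)*(1-pi).
def warner_estimate(n_yes, n_total, p=0.75):
    lam = n_yes / n_total
    pi_hat = (lam + p - 1) / (2 * p - 1)                     # invert the misclassification
    var_hat = lam * (1 - lam) / (n_total * (2 * p - 1) ** 2)  # variance of pi_hat
    return pi_hat, var_hat**0.5

pi_hat, se = warner_estimate(n_yes=350, n_total=1000, p=0.75)
print(f"estimated prevalence: {pi_hat:.3f} (SE {se:.3f})")
```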

  20. Design and characterization of an aerodynamic shoe sampling system for screening trace explosive materials

    Science.gov (United States)

    Staymates, Matthew; Gillen, Greg; Grandner, Jessica; Lukow, Stefan

    2011-11-01

    As part of an ongoing effort with the Transportation Security Laboratory, the National Institute of Standards and Technology has been developing a prototype shoe sampling system that relies on aerodynamic sampling as the primary mechanism for liberating, transporting, and collecting explosive contamination. This presentation will focus on the fluid dynamics associated with the current prototype design. This design includes several air jets and air blades that are used to dislodge particles from target areas of a shoe. A large blower then draws air and liberated particles into a collection device at several hundred liters per second. Experiments that utilize optical particle counting techniques have shown that the applied shear forces from these jets are capable of liberating particles efficiently from shoe surfaces. Results from real-world contamination testing also support the effectiveness of air jet impingement in this prototype. Many examples of flow visualization will be shown. The issues associated with air spillage, particle release efficiency, and particle transport will also be discussed.

  1. A worksite prevention program for construction workers: design of a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Proper Karin I

    2010-06-01

    Full Text Available Abstract Background A worksite prevention program was developed to promote the work ability of construction workers and thereby prolong a healthy working life. The objective of this paper is to present the design of a randomized controlled trial evaluating the effectiveness of that intervention program compared with usual care for construction workers. Methods The study is designed as a randomized controlled trial with a follow-up of one year. Employees eligible for this study are construction workers performing actual construction work. The worksite intervention will be compared with usual care. This intervention was developed by using the Intervention Mapping approach and consists of the following components: (1) two individual training sessions with a physical therapist to lower the physical workload, (2) a Rest-Break tool to improve the balance between work and recovery, and (3) two empowerment training sessions to increase the influence of the construction workers at the worksite. Outcome measures are assessed at baseline, 3, 6, and 12 months. The primary outcome measures of this study are work ability and health-related quality of life. Secondary outcome measures include need for recovery, musculoskeletal complaints, work engagement and self-efficacy. Cost-effectiveness will be evaluated from the company perspective. Moreover, a process evaluation will be conducted. Discussion The feasibility of the intervention and the study has been enhanced by creating an intervention program that explicitly appeals to construction workers and will not interfere too much with the ongoing construction. The feasibility and effectiveness of this worksite prevention program will be investigated by means of an effect and a process evaluation. If proven effective, this worksite prevention program can be implemented on a larger scale within the construction industry. Trial Registration NTR1278

  2. Non-pharmacological care for patients with generalized osteoarthritis: design of a randomized clinical trial

    Directory of Open Access Journals (Sweden)

    Cornelissen Jessica

    2010-07-01

    Full Text Available Abstract Background Non-pharmacological treatment (NPT) is a useful treatment option in the management of hip or knee osteoarthritis. To our knowledge, however, no studies have investigated the effect of NPT in patients with generalized osteoarthritis (GOA). The primary aim of this study is to compare the effectiveness of two currently existing health care programs with different intensity and mode of delivery on daily functioning in patients with GOA. The secondary objective is to compare the cost-effectiveness of both interventions. Methods/Design In this randomized, single blind, clinical trial with active controls, we aim to include 170 patients with GOA. The experimental intervention consists of six self-management group sessions provided by a multi-disciplinary team (occupational therapist, physiotherapist, dietician and specialized nurse). The active control group consists of two group sessions and four sessions by telephone, provided by a specialized nurse and physiotherapist. Both therapies last six weeks. The main study outcome is daily functioning during the first year after the treatment, assessed on the Health Assessment Questionnaire. Secondary outcomes are health-related quality of life, specific complaints, fatigue, and costs. Illness cognitions, global perceived effect, and self-efficacy will also be assessed for a responder analysis. Outcome assessments are performed directly after the intervention, after 26 weeks and after 52 weeks. Discussion This article describes the design of a randomized, single blind, clinical trial with a one-year follow-up to compare the costs and effectiveness of two non-pharmacological interventions with different modes of delivery for patients with GOA. Trial registration Dutch Trial Register NTR2137

  3. A UAV-Based Fog Collector Design for Fine-Scale Aerobiological Sampling

    Science.gov (United States)

    Gentry, D.; Guarro, M.; Demachkie, I. S.; Stumfall, I.; Dahlgren, R. P.

    2016-12-01

    Airborne microbes are found throughout the troposphere and into the stratosphere. Knowing how the activity of airborne microorganisms can alter water, carbon, and other geochemical cycles is vital to a full understanding of local and global ecosystems. Just as on the land or in the ocean, atmospheric regions vary in habitability; the underlying geochemical, climatic, and ecological dynamics must be characterized at different scales to be effectively modeled. Most aerobiological studies have focused on a high level: 'How high are airborne microbes found?' and 'How far can they travel?' Most fog and cloud water studies collect from stationary ground stations (point) or along flight transects (1D). To complement and provide context for this data, we have designed a UAV-based modified fog and cloud water collector to retrieve 4D-resolved samples for biological and chemical analysis. Our design uses a passive impacting collector hanging from a rigid rod suspended between two multi-rotor UAVs. The suspension design reduces the effect of turbulence and potential for contamination from the UAV downwash. The UAVs are currently modeled in a leader-follower configuration, taking advantage of recent advances in modular UAVs, UAV swarming, and flight planning. The collector itself is a hydrophobic mesh. Materials including Tyvek, PTFE, nylon, and polypropylene monofilament fabricated via laser cutting, CNC knife, or 3D printing were characterized for droplet collection efficiency using a benchtop atomizer and particle counter. Because the meshes can be easily and inexpensively fabricated, a set can be pre-sterilized and brought to the field for 'hot swapping' to decrease cross-contamination between flight sessions or use as negative controls. An onboard sensor and logging system records the time and location of each sample; when combined with flight tracking data, the samples can be resolved into a 4D volumetric map of the fog bank. Collected samples can be returned to the lab

  4. A UAV-Based Fog Collector Design for Fine-Scale Aerobiological Sampling

    Science.gov (United States)

    Gentry, Diana; Guarro, Marcello; Demachkie, Isabella Siham; Stumfall, Isabel; Dahlgren, Robert P.

    2017-01-01

    Airborne microbes are found throughout the troposphere and into the stratosphere. Knowing how the activity of airborne microorganisms can alter water, carbon, and other geochemical cycles is vital to a full understanding of local and global ecosystems. Just as on the land or in the ocean, atmospheric regions vary in habitability; the underlying geochemical, climatic, and ecological dynamics must be characterized at different scales to be effectively modeled. Most aerobiological studies have focused on a high level: 'How high are airborne microbes found?' and 'How far can they travel?' Most fog and cloud water studies collect from stationary ground stations (point) or along flight transects (1D). To complement and provide context for this data, we have designed a UAV-based modified fog and cloud water collector to retrieve 4D-resolved samples for biological and chemical analysis. Our design uses a passive impacting collector hanging from a rigid rod suspended between two multi-rotor UAVs. The suspension design reduces the effect of turbulence and potential for contamination from the UAV downwash. The UAVs are currently modeled in a leader-follower configuration, taking advantage of recent advances in modular UAVs, UAV swarming, and flight planning. The collector itself is a hydrophobic mesh. Materials including Tyvek, PTFE, nylon, and polypropylene monofilament fabricated via laser cutting, CNC knife, or 3D printing were characterized for droplet collection efficiency using a benchtop atomizer and particle counter. Because the meshes can be easily and inexpensively fabricated, a set can be pre-sterilized and brought to the field for 'hot swapping' to decrease cross-contamination between flight sessions or use as negative controls. An onboard sensor and logging system records the time and location of each sample; when combined with flight tracking data, the samples can be resolved into a 4D volumetric map of the fog bank. Collected samples can be returned to the lab for

  5. Design of a multi-purpose titanium bottle for uncontaminated sampling of carbon monoxide and potentially of other analytes

    Science.gov (United States)

    Doherty, K. W.; Taylor, C. D.; Zafiriou, O. C.

    2003-03-01

    The design, construction, and performance of a non-contaminating titanium sampler for carbon monoxide (CO) are described. In light of the favorable properties of titanium and the minimal contact of O-rings with samples, this multi-purpose design is expected to excel at a broad range of other uses: sampling gases, organic compounds, some trace metals, and living and dead particles.

  6. New Design of Crypto-Based Pseudo random number generator (CBPRNG) using BLOW FISH cipher

    OpenAIRE

    T.Chalama Reddy; Dr.R.Seshadri

    2013-01-01

    Random Number Generators (RNGs) are an important building block for algorithms and protocols in cryptography. Random number generation is used in a wide variety of cryptographic operations, such as key generation and challenge/response protocols. A random number generator outputs a sequence of 0s and 1s such that at any position, the next bit cannot be predicted from the previous bits. However, a true random number generator produces non-deterministic output, since if the same random generator is run twice,...
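
    The record describes a crypto-based PRNG built on Blowfish; the sketch below shows the generic counter-mode construction such generators use, with HMAC-SHA-256 from the Python standard library standing in for the Blowfish cipher. It illustrates the construction only and is not a vetted CSPRNG.

```python
# Generic counter-mode construction behind cipher-based PRNGs: apply a keyed
# primitive to an incrementing counter and emit the output blocks.
# HMAC-SHA-256 stands in for the Blowfish cipher of the paper; this is an
# illustration of the construction, not a vetted CSPRNG.
import hmac
import hashlib

class CounterPRNG:
    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hmac.new(self.key, self.counter.to_bytes(16, "big"),
                             hashlib.sha256).digest()
            out += block
            self.counter += 1
        return out[:n]

prng = CounterPRNG(key=b"demo-key-not-for-real-use")
print(prng.random_bytes(16).hex())
```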

  7. Analytical decision model for sample size and effectiveness projections for use in planning a population-based randomized controlled trial of colorectal cancer screening.

    Science.gov (United States)

    Chiu, Sherry Y-H; Malila, Nea; Yen, Amy M-F; Anttila, Ahti; Hakama, Matti; Chen, H-H

    2011-02-01

    Population-based randomized controlled trials (RCTs) often involve enormous costs and long-term follow-up to evaluate primary end points. An analytical decision-simulation model for sample size and effectiveness projections based on primary and surrogate end points is therefore necessary before planning a population-based RCT. Based on a study design similar to two previous RCTs, transition rates were estimated using a five-state natural history model [normal, preclinical detection phase (PCDP) Dukes' A/B, PCDP Dukes' C/D, clinical Dukes' A/B and clinical Dukes' C/D]. The Markov cycle tree was assigned transition parameters, variables related to screening, and survival rates that simulated results of 10-year follow-up in the absence of screening for a hypothetical cohort aged 45-74 years. The corresponding screened arm simulated the results after the introduction of population-based screening for colorectal cancer with the fecal occult blood test under a stop-screen design. The mean sojourn times of the five-state Markov model were estimated as 2.75 years for preclinical Dukes' A/B and 1.38 years for preclinical Dukes' C/D. The expected reductions in mortality and Dukes' C/D were 13% (95% confidence interval: 7-19%) and 26% (95% confidence interval: 20-32%), respectively, given a 70% acceptance rate and a 90% colonoscopy referral rate. Sample sizes required were 86,150 and 65,592 subjects for the primary end point and the surrogate end point, respectively, given an incidence rate up to 0.0020 per year. The sample sizes required for the primary and surrogate end points and the projected effectiveness of fecal occult blood testing for colorectal cancer screening were thus developed; both are very important for planning a population-based RCT. © 2010 Blackwell Publishing Ltd.
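
    As a hedged sketch of the natural-history machinery behind such projections, the code below propagates a hypothetical cohort through a five-state discrete-time Markov model for ten years; the transition probabilities are placeholders chosen for illustration, not the rates estimated in the study, and screening, death, and the cycle-tree details are omitted.

```python
# Minimal discrete-time sketch of propagating a cohort through a five-state
# natural-history model like the one described (normal, preclinical A/B,
# preclinical C/D, clinical A/B, clinical C/D). The annual transition
# probabilities below are placeholders, not the estimated rates.
import numpy as np

states = ["normal", "pcdp_AB", "pcdp_CD", "clin_AB", "clin_CD"]
P = np.array([
    [0.9980, 0.0020, 0.0,    0.0,    0.0   ],   # normal
    [0.0,    0.70,   0.20,   0.10,   0.0   ],   # preclinical Dukes' A/B
    [0.0,    0.0,    0.55,   0.0,    0.45  ],   # preclinical Dukes' C/D
    [0.0,    0.0,    0.0,    1.0,    0.0   ],   # clinical Dukes' A/B (absorbing here)
    [0.0,    0.0,    0.0,    0.0,    1.0   ],   # clinical Dukes' C/D (absorbing here)
])
assert np.allclose(P.sum(axis=1), 1.0)

cohort = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # everyone starts disease-free
for year in range(10):                          # 10-year follow-up, no screening
    cohort = cohort @ P

for name, share in zip(states, cohort):
    print(f"{name:8s} {share:.4f}")
```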

  8. Methods of learning in statistical education: Design and analysis of a randomized trial

    Science.gov (United States)

    Boyd, Felicity Turner

    Background. Recent psychological and technological advances suggest that active learning may enhance understanding and retention of statistical principles. A randomized trial was designed to evaluate the addition of innovative instructional methods within didactic biostatistics courses for public health professionals. Aims. The primary objectives were to evaluate and compare the addition of two active learning methods (cooperative and internet) on students' performance; assess their impact on performance after adjusting for differences in students' learning style; and examine the influence of learning style on trial participation. Methods. Consenting students enrolled in a graduate introductory biostatistics course were randomized to cooperative learning, internet learning, or control after completing a pretest survey. The cooperative learning group participated in eight small group active learning sessions on key statistical concepts, while the internet learning group accessed interactive mini-applications on the same concepts. Controls received no intervention. Students completed evaluations after each session and a post-test survey. Study outcome was performance quantified by examination scores. Intervention effects were analyzed by generalized linear models using intent-to-treat analysis and marginal structural models accounting for reported participation. Results. Of 376 enrolled students, 265 (70%) consented to randomization; 69, 100, and 96 students were randomized to the cooperative, internet, and control groups, respectively. Intent-to-treat analysis showed no differences between study groups; however, 51% of students in the intervention groups had dropped out after the second session. After accounting for reported participation, expected examination scores were 2.6 points higher (of 100 points) after completing one cooperative learning session (95% CI: 0.3, 4.9) and 2.4 points higher after one internet learning session (95% CI: 0.0, 4.7), versus

  9. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    Science.gov (United States)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
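
    The full space-time KED model is not reproduced here; the sketch below shows the Spatial Simulated Annealing loop itself, minimizing the spatially averaged simple-kriging variance of a small gauge network under an assumed isotropic exponential covariance. The covariance parameters, domain, network size, and cooling schedule are all illustrative assumptions.

```python
# Minimal Spatial Simulated Annealing sketch: perturb one gauge location at a
# time and accept/reject moves so as to minimize the spatially averaged kriging
# variance. Simple kriging with an assumed isotropic exponential covariance
# stands in for the paper's space-time KED variance criterion.
import numpy as np

rng = np.random.default_rng(3)
sill, rang = 1.0, 25.0                             # assumed covariance parameters

def cov(d):
    return sill * np.exp(-d / rang)

grid = np.array([(x, y) for x in np.linspace(0, 100, 15)
                        for y in np.linspace(0, 100, 15)])

def mean_kriging_variance(gauges):
    d_gg = np.linalg.norm(gauges[:, None, :] - gauges[None, :, :], axis=-1)
    C = cov(d_gg)                                  # gauge-to-gauge covariance
    d_pg = np.linalg.norm(grid[:, None, :] - gauges[None, :, :], axis=-1)
    c0 = cov(d_pg)                                 # prediction-point-to-gauge covariance
    w = np.linalg.solve(C, c0.T)                   # simple-kriging weights
    return np.mean(sill - np.sum(c0.T * w, axis=0))

gauges = rng.uniform(0, 100, size=(12, 2))          # initial network of 12 gauges
obj = mean_kriging_variance(gauges)
T = 0.01
for it in range(2000):
    cand = gauges.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0, 5, size=2), 0, 100)
    new_obj = mean_kriging_variance(cand)
    if new_obj < obj or rng.uniform() < np.exp(-(new_obj - obj) / T):
        gauges, obj = cand, new_obj                 # Metropolis acceptance
    T *= 0.999                                      # cooling schedule

print(f"optimised mean kriging variance: {obj:.4f}")
```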

  10. Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA).

    Science.gov (United States)

    Vasconcellos, Mauricio Teixeira Leite de; Silva, Pedro Luis do Nascimento; Szklo, Moyses; Kuschnir, Maria Cristina Caetano; Klein, Carlos Henrique; Abreu, Gabriela de Azevedo; Barufaldi, Laura Augusta; Bloch, Katia Vergetti

    2015-05-01

    The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets with other municipalities in each macro-region of the country) and a sample of 1,251 schools was selected with probability proportional to size. In each school, three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated as the product of the reciprocals of the inclusion probabilities at each sampling stage, and were later calibrated taking into account the projections of the numbers of adolescents enrolled in schools located in the geographical strata, by sex and age.
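
    A minimal sketch of how such a multistage design weight arises as the product of the reciprocals of the stage-wise inclusion probabilities (PPS selection of the school, one class per selected shift-grade combination, all eligible students in the class); the calibration step is omitted and every number below is hypothetical.

```python
# Sketch of a multistage design weight built as the product of the reciprocals
# of the stage-wise inclusion probabilities: PPS selection of the school,
# selection of one class within a shift-grade combination, and a take-all of
# students in that class. All numbers are hypothetical.
def design_weight(school_enrolment, stratum_enrolment, schools_selected_in_stratum,
                  classes_in_combination):
    # PPS (probability proportional to size) school inclusion probability
    p_school = min(1.0, schools_selected_in_stratum * school_enrolment / stratum_enrolment)
    p_class = 1.0 / classes_in_combination          # one class per combination
    p_student = 1.0                                 # all eligible students in the class
    return 1.0 / (p_school * p_class * p_student)

w = design_weight(school_enrolment=800, stratum_enrolment=250_000,
                  schools_selected_in_stratum=60, classes_in_combination=4)
print(f"base design weight: {w:.1f}")
```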

  11. Designing to Sample the Unknown: Lessons from OSIRIS-REx Project Systems Engineering

    Science.gov (United States)

    Everett, David; Mink, Ronald; Linn, Timothy; Wood, Joshua

    2017-01-01

    On September 8, 2016, the third NASA New Frontiers mission launched on an Atlas V 411. The Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) will rendezvous with asteroid Bennu in 2018, collect a sample in 2020, and return that sample to Earth in September 2023. The development team has overcome a number of challenges in order to design and build a system that will make contact with an unexplored, airless, low-gravity body. This paper will provide an overview of the mission, then focus on the system-level challenges and some of the key system-level processes. Some of the lessons here are unique to the type of mission, like the discussion of operating at a largely unknown, low-gravity object. Other lessons, particularly from the build phase, have broad implications. The OSIRIS-REx risk management process was particularly effective in achieving an on-time and under-budget development effort. The systematic requirements management and verification and the system validation also helped identify numerous potential problems. The final assessment of the OSIRIS-REx performance will need to wait until the sample is returned in 2023, but this post-launch assessment will capture some of the key systems-engineering lessons from the development team.

  12. Sample size calculation for microarray experiments with blocked one-way design

    Directory of Open Access Journals (Sweden)

    Jung Sin-Ho

    2009-05-01

    Full Text Available Abstract Background One of the main objectives of microarray analysis is to identify differentially expressed genes for different types of cells or treatments. Many statisti