Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, a larger sampling size and a higher height threshold were required to obtain accurate corn LAI estimates than to estimate height and biomass. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
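The height-threshold effect described above can be illustrated with a minimal gap-fraction sketch. This is not the authors' estimation pipeline; it only assumes the common Beer-Lambert relation LAI = -ln(gap fraction)/k, with a hypothetical extinction coefficient k and a toy point cloud:

```python
import numpy as np

def lai_from_returns(heights, height_threshold=0.3, k=0.5):
    """Estimate LAI from LiDAR return heights via the Beer-Lambert gap
    fraction: LAI = -ln(P_gap) / k, where P_gap is the fraction of
    returns below the height threshold (treated as ground/gap hits)."""
    heights = np.asarray(heights, dtype=float)
    p_gap = np.mean(heights < height_threshold)
    # clip to avoid log(0) when every return is a vegetation hit
    p_gap = np.clip(p_gap, 1e-6, 1.0)
    return -np.log(p_gap) / k

# toy point cloud: 60 canopy returns, 40 ground returns
rng = np.random.default_rng(0)
heights = np.concatenate([rng.uniform(0.5, 2.5, 60), rng.uniform(0.0, 0.1, 40)])
lai = lai_from_returns(heights)
```

Raising `height_threshold` reclassifies low vegetation returns as gaps, which is one mechanism behind the threshold sensitivity the study reports.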
Phylogenetic effective sample size.
Bartoszek, Krzysztof
2016-10-21
In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or effective number of species. Lastly I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades and deciding on the importance of phylogenetic correlations. Copyright © 2016 Elsevier Ltd. All rights reserved.
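A rough numerical illustration of the effective-sample-size idea (a sketch, not Bartoszek's exact estimator): for a trait with phylogenetic covariance matrix V scaled to unit diagonal, one candidate definition is n_e = 1'V^{-1}1, which equals n for independent tips and shrinks as species share evolutionary history. The covariance values below are hypothetical:

```python
import numpy as np

def regression_ess(V):
    """Effective sample size for a phylogenetic covariance matrix V
    (scaled so diag(V) = 1): n_e = 1' V^{-1} 1.
    With V = I (a star phylogeny) this recovers n_e = n."""
    ones = np.ones(V.shape[0])
    return ones @ np.linalg.solve(V, ones)

# star phylogeny: independent tips, full information
assert np.isclose(regression_ess(np.eye(5)), 5.0)

# two pairs of sister species sharing most of their history
V = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.9, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.9],
              [0.0, 0.0, 0.9, 1.0]])
ess = regression_ess(V)  # well below the nominal n = 4
```

This is the kind of quantity one would substitute for n in an AICc-style correction, per the abstract's suggestion.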
Desu, M M
2012-01-01
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria…
White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E
2018-05-01
Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point (a change class and a no-change class) and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
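A minimal sketch of the broken-stick shape and a grid-search change-point fit (a simplification of the paper's Bayesian MCMC approach; all parameter values below are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

def broken_stick(t, intercept, slope1, slope2, cp):
    """Two joined linear segments: the slope changes from slope1 to slope2 at cp."""
    t = np.asarray(t, dtype=float)
    return intercept + slope1 * np.minimum(t, cp) + slope2 * np.maximum(t - cp, 0.0)

# simulate one 'decline' individual: stable, then accelerated decline after t = 6
t = np.linspace(0, 10, 21)
y_obs = broken_stick(t, intercept=28.0, slope1=-0.1, slope2=-1.5, cp=6.0) \
        + rng.normal(0.0, 0.5, size=t.size)

# grid-search the change-point by least squares
cps = np.linspace(1, 9, 81)
sse = []
for cp in cps:
    X = np.column_stack([np.ones_like(t), np.minimum(t, cp), np.maximum(t - cp, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    sse.append(np.sum((X @ beta - y_obs) ** 2))
cp_hat = cps[int(np.argmin(sse))]
```

With a clear slope change and modest noise the change-point is recovered well, consistent with the paper's finding that modest samples suffice in the fixed-effect setting.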
Sample size determination and power
Ryan, Thomas P, Jr
2013-01-01
THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.
How Sample Size Affects a Sampling Distribution
Mulekar, Madhuri S.; Siegel, Murray H.
2009-01-01
If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
Yu, P.; Sun, J.; Wolz, R.; Stephenson, D.; Brewer, J.; Fox, N.C.; Cole, P.E.; Jack, C.R.; Hill, D.L.G.; Schwarz, A.J.
2014-01-01
Objective To evaluate the effect of computational algorithm, measurement variability and cut-point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). Methods We used normal control and amnestic MCI subjects from ADNI-1 as normative reference and screening cohorts. We evaluated the enrichment performance of four widely-used hippocampal segmentation algorithms (FreeSurfer, HMAPS, LEAP and NeuroQuant) in terms of two-year changes in MMSE, ADAS-Cog and CDR-SB. We modeled the effect of algorithm, test-retest variability and cut-point on sample size, screen fail rates and trial cost and duration. Results HCV-based patient selection yielded not only reduced sample sizes (by ~40–60%) but also lower trial costs (by ~30–40%) across a wide range of cut-points. Overall, the dependence on the cut-point value was similar for the three clinical instruments considered. Conclusion These results provide a guide to the choice of HCV cut-point for aMCI clinical trials, allowing an informed trade-off between statistical and practical considerations. PMID:24211008
Sample Size Estimation: The Easy Way
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
Predicting sample size required for classification performance
Directory of Open Access Journals (Sweden)
Figueroa Rosa L
2012-02-01
Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean absolute error and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p < 0.05). Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
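The fitting procedure can be sketched as follows, assuming (as the abstract describes) an inverse power law curve fitted by weighted nonlinear least squares; the toy accuracy values and weighting scheme below are illustrative, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_power(x, a, b, c):
    """Inverse power law learning curve: performance rises toward asymptote a."""
    return a - b * np.power(x, -c)

# observed points of a learning curve (accuracy vs. annotated sample size)
n = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
acc = inv_power(n, 0.92, 1.1, 0.6) + np.array([0.01, -0.01, 0.005, -0.003, 0.002, -0.001])

# weight later (larger-sample, less noisy) points more heavily
sigma = 1.0 / np.sqrt(n)
params, _ = curve_fit(inv_power, n, acc, p0=[0.9, 1.0, 0.5], sigma=sigma, maxfev=10000)
a_hat = params[0]                          # estimated asymptotic performance
pred_5000 = inv_power(5000.0, *params)     # extrapolated performance at n = 5000
```

The fitted curve is then used exactly as in the paper: extrapolate to larger annotation budgets and read off the sample size needed to hit a performance target.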
Yu, Peng; Sun, Jia; Wolz, Robin; Stephenson, Diane; Brewer, James; Fox, Nick C; Cole, Patricia E; Jack, Clifford R; Hill, Derek L G; Schwarz, Adam J
2014-04-01
The objective of this study was to evaluate the effect of computational algorithm, measurement variability, and cut point on hippocampal volume (HCV)-based patient selection for clinical trials in mild cognitive impairment (MCI). We used normal control and amnestic MCI subjects from the Alzheimer's Disease Neuroimaging Initiative 1 (ADNI-1) as normative reference and screening cohorts. We evaluated the enrichment performance of 4 widely used hippocampal segmentation algorithms (FreeSurfer, Hippocampus Multi-Atlas Propagation and Segmentation (HMAPS), Learning Embeddings Atlas Propagation (LEAP), and NeuroQuant) in terms of 2-year changes in Mini-Mental State Examination (MMSE), Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating Sum of Boxes (CDR-SB). We modeled the implications for sample size, screen fail rates, and trial cost and duration. HCV based patient selection yielded reduced sample sizes (by ∼40%-60%) and lower trial costs (by ∼30%-40%) across a wide range of cut points. These results provide a guide to the choice of HCV cut point for amnestic MCI clinical trials, allowing an informed tradeoff between statistical and practical considerations. Copyright © 2014 Elsevier Inc. All rights reserved.
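The way cut-point enrichment reduces sample size can be illustrated with the standard two-sample formula n = 2((z_{1-a/2}+z_{1-b})*sigma/delta)^2 per arm; the decline and spread values below are invented for illustration, not ADNI estimates:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.8):
    """Two-sample sample size per arm for detecting a mean difference delta."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return ceil(2 * ((za + zb) * sd / delta) ** 2)

# unselected MCI cohort: small mean 2-year decline relative to its spread
n_all = n_per_arm(delta=1.0, sd=4.0)
# HCV-enriched cohort: faster mean decline, similar spread -> smaller trial
n_enriched = n_per_arm(delta=1.6, sd=4.0)
```

With these hypothetical numbers the enriched cohort needs roughly 60% fewer subjects per arm, in the same ballpark as the ~40%-60% reductions the study reports.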
Water Sample Points, Navajo Nation, 2000, USACE
U.S. Environmental Protection Agency — This point shapefile presents the locations and results for water samples collected on the Navajo Nation by the US Army Corps of Engineers (USACE) for the US...
Sample size in qualitative interview studies
DEFF Research Database (Denmark)
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane
2016-01-01
Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning…
Basic Statistical Concepts for Sample Size Estimation
Directory of Open Access Journals (Sweden)
Vithal K Dhulkhed
2008-01-01
For grant proposals the investigator has to include an estimation of sample size. The size of the sample should be large enough that there is sufficient data to reliably answer the research question being addressed by the study. At the very planning stage of the study the investigator has to involve the statistician, and to have a meaningful dialogue with the statistician every research worker should be familiar with the basic concepts of statistics. This paper is concerned with simple principles of sample size calculation. Concepts are explained based on logic rather than rigorous mathematical calculations, to help the reader assimilate the fundamentals.
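As a concrete instance of the principles above, a back-of-envelope calculation for estimating a mean to a given precision (normal approximation; the blood-pressure numbers are hypothetical):

```python
from math import ceil
from statistics import NormalDist

def n_for_mean(sd, margin, confidence=0.95):
    """Sample size to estimate a mean to within +/- margin at the given
    confidence level: n = (z * sd / margin)^2, rounded up."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * sd / margin) ** 2)

# e.g. estimate mean systolic BP (sd ~ 15 mmHg) to within +/- 3 mmHg
n = n_for_mean(sd=15, margin=3)
```

Halving the margin quadruples the required n, which is the kind of trade-off the statistician and investigator negotiate at the planning stage.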
Determining sample size for assessing species composition in ...
African Journals Online (AJOL)
Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...
Research Note Pilot survey to assess sample size for herbaceous ...
African Journals Online (AJOL)
A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...
Sample size tables for clinical studies
National Research Council Canada - National Science Library
Machin, David
2009-01-01
... with sample size software SSS, which we hope will give the user even greater flexibility and easy access to a wide range of designs, and allow design parameters to be tailored more readily to specific problems. Further, as some early phase designs are adaptive in nature and require knowledge of earlier patients' response to determine t...
Sample size for morphological traits of pigeonpea
Directory of Open Access Journals (Sweden)
Giovani Facco
2015-12-01
The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the run test, and descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation error) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
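The semiamplitude-based sample size used in this kind of study reduces, under a normal approximation, to n ≈ (z·CV/E)^2, where CV is the coefficient of variation and E the estimation error as a percentage of the mean. A sketch with a made-up CV (the paper itself used t-based values computed per trait and period):

```python
from math import ceil
from statistics import NormalDist

def n_for_relative_error(cv_percent, error_percent, confidence=0.95):
    """Plants needed so the CI semiamplitude equals error_percent of the
    mean, given a coefficient of variation cv_percent (normal approx.)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * cv_percent / error_percent) ** 2)

# e.g. a trait with CV = 35%: plants needed for a 6% estimation error
n = n_for_relative_error(cv_percent=35, error_percent=6)
```

With these assumed numbers n lands near the ~136 plants reported above; traits or periods with larger CV push n up, which is exactly the between-period variability the study documents.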
Determining sample size when assessing mean equivalence.
Asberg, Arne; Solem, Kristine B; Mikkelsen, Gustav
2014-11-01
When we want to assess whether two analytical methods are equivalent, we could test if the difference between the mean results is within the specification limits of 0 ± an acceptance criterion. Testing the null hypothesis of zero difference is less interesting, and so is the sample size estimation based on testing that hypothesis. Power function curves for equivalence testing experiments are not widely available. In this paper we present power function curves to help decide on the number of measurements when testing equivalence between the means of two analytical methods. Computer simulation was used to calculate the probability that the 90% confidence interval for the difference between the means of two analytical methods would exceed the specification limits of 0 ± 1, 0 ± 2 or 0 ± 3 analytical standard deviations (SDa), respectively. The probability of getting a nonequivalence alarm increases with increasing difference between the means when the difference is well within the specification limits. The probability increases with decreasing sample size and with smaller acceptance criteria. We may need at least 40-50 measurements with each analytical method when the specification limits are 0 ± 1 SDa, and 10-15 and 5-10 when the specification limits are 0 ± 2 and 0 ± 3 SDa, respectively. The power function curves provide information of the probability of false alarm, so that we can decide on the sample size under less uncertainty.
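The simulation described can be sketched as follows (unit analytical SD, 90% CI via a z-approximation rather than the paper's exact procedure; replicate counts are chosen only for speed):

```python
import numpy as np

rng = np.random.default_rng(1)

def p_equivalence_alarm(n, true_diff, limit, sims=4000):
    """Probability that the 90% CI for the difference between two method
    means (each measured n times, unit analytical SD) crosses +/- limit."""
    alarms = 0
    z = 1.645  # two-sided 90% CI
    for _ in range(sims):
        x = rng.normal(0.0, 1.0, n)          # method A
        y = rng.normal(true_diff, 1.0, n)    # method B
        d = y.mean() - x.mean()
        se = np.sqrt(x.var(ddof=1) / n + y.var(ddof=1) / n)
        if d - z * se < -limit or d + z * se > limit:
            alarms += 1
    return alarms / sims

# no true difference, wide limits (+/- 3 SDa): few false alarms even at n = 10
p_wide = p_equivalence_alarm(n=10, true_diff=0.0, limit=3.0)
# same truth, tight limits (+/- 1 SDa): n = 10 gives frequent alarms
p_tight = p_equivalence_alarm(n=10, true_diff=0.0, limit=1.0)
```

Rerunning with larger n reproduces the paper's qualitative message: tighter acceptance criteria demand substantially more measurements per method.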
Correlation of Scan and Sample Measurements Using Appropriate Sample Size
International Nuclear Information System (INIS)
Lux, Jeff
2008-01-01
… gamma count rates were elevated, but samples yielded background concentrations of thorium. Gamma scans tended to correlate with gamma exposure rate measurements. The lack of correlation between scan and sample data threatened to invalidate the characterization methodology, because neither method demonstrated reliability in identifying material which required excavation, shipment, and disposal. The NRC-approved site decommissioning plan required the excavation of any material that exceeded the Criteria based on either measurement. It was necessary to resolve the differences between the various types of measurements. Health Physics technicians selected 27 locations where the relationship between scan measurements and sample counts was highly variable. To determine if 'shine' was creating this data-correlation problem, they returned to those 27 locations to collect gamma count rates with a lead-shielded NaI detector. Figure 2 shows that the shielded and unshielded count rates correlated fairly well for those locations. However, the technicians also noted the presence of 'tar balls' in this area. These small chunks of tarry material typically varied in size from 2-10 mm in diameter. Thorium-contaminated tars had apparently been disked into the soil to biodegrade the tar. The technicians evaluated the samples and determined that the samples yielding higher activity contained more tar balls, while the samples yielding near-background levels had fewer or none. The tar was collected for analysis, and its thorium activity varied from 2-3 Bq/g (60-90 pCi/g) total thorium. Since the sample mass was small, these small tar balls greatly impacted the sample activity. Technicians determined that the maximum particle size was less than 20 mm in diameter. Based on this maximum 'particle size', over one kilogram of sample would be required to minimize the impact of the tar balls on sample results.
They returned to the same 27 locations and collected soil samples containing at least…
Optimal allocation of point-count sampling effort
Barker, Richard J.; Sauer, John R.; Link, William A.
1993-01-01
Both unlimited and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and sampling characteristics of population size. Using information about the relationship between proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data of cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
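A minimal version of the optimization: assume the exponential detection function p(t) = 1 - exp(-t/tau) and maximize expected total count across points given a fixed total survey time. The tau, travel-time and density values below are hypothetical stand-ins for estimates from field data:

```python
import numpy as np

def expected_total_count(t, tau=3.0, travel=5.0, total_time=300.0, density=1.0):
    """Expected total count when a proportion p(t) = 1 - exp(-t/tau) of the
    density birds is detected per point and total_time (same units as t)
    is split between counting and travelling between points."""
    n_points = total_time / (t + travel)
    return n_points * density * (1.0 - np.exp(-t / tau))

# search count durations of 0.5..20 min for the one maximizing total count
ts = np.linspace(0.5, 20.0, 391)
t_opt = ts[int(np.argmax(expected_total_count(ts)))]
```

Short counts waste travel time on partially-detected points; long counts visit too few points. The optimum balances the two, and (as the paper stresses) shifts if a different criterion, such as power for detecting change, is optimized instead.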
Sample size estimation and sampling techniques for selecting a representative sample
Directory of Open Access Journals (Sweden)
Aamir Omair
2014-01-01
Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The more precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied for health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
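The factors listed (expected proportion, confidence level, precision, population size) combine into the standard formula n = z^2·p·(1-p)/d^2, optionally with a finite-population correction; the numbers below are illustrative:

```python
from math import ceil
from statistics import NormalDist

def n_for_proportion(p_expected, margin, confidence=0.95, population=None):
    """Sample size to estimate a proportion to within +/- margin, with an
    optional finite-population correction for small target populations."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = (z ** 2) * p_expected * (1 - p_expected) / margin ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return ceil(n)

n_large = n_for_proportion(0.5, 0.05)                   # large population
n_small = n_for_proportion(0.5, 0.05, population=1000)  # finite population
```

Using p = 0.5 is the conservative default when the true proportion is unknown, since p(1-p) is maximized there.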
Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang
2018-01-01
Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the variety of methods for measuring the PSF, the point source method has been widely used because it is easy to operate and the measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin doped with different sizes of polystyrene microspheres is designed. The PSFs of the CRM with different sizes of microspheres are measured and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
Size structure, not metabolic scaling rules, determines fisheries reference points
DEFF Research Database (Denmark)
Andersen, Ken Haste; Beyer, Jan
2015-01-01
Impact assessments of fishing on a stock require parameterization of vital rates: growth, mortality and recruitment. For 'data-poor' stocks, vital rates may be estimated from empirical size-based relationships or from life-history invariants. However, a theoretical framework to synthesize these empirical relations is lacking. Here, we combine life-history invariants, metabolic scaling and size-spectrum theory to develop a general size- and trait-based theory for demography and recruitment of exploited fish stocks. Important concepts are physiological or metabolic scaled mortalities and flux… We show that even though small species have a higher productivity than large species, their resilience towards fishing is lower than expected from metabolic scaling rules. Further, we show that the fishing mortality leading to maximum yield per recruit is an ill-suited reference point. The theory can be used…
Effects of sample size on the second magnetization peak in ...
Indian Academy of Sciences (India)
…the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size.
Evaluation of the point-centred-quarter method of sampling ...
African Journals Online (AJOL)
…point-centred-quarter method. The parameter which was most efficiently sampled was species composition (relative density), with 90% replicate similarity being achieved with 100 point-centred-quarters. However, this technique cannot be recommended, even ...
Naulin, Paulette I; Valenzuela, Gerardo; Estay, Sergio A
2017-03-01
Stomata distribution is an example of biological patterning. Formal methods used to study stomata patterning are generally based on point-pattern analysis, which assumes that stomata are points and ignores the constraints imposed by size on the placement of neighbors. The inclusion of size in the analysis requires the use of a null model based on finite-size object geometry. In this study, we compare the results obtained by analyzing samples from several species using point and disc null models. The results show that depending on the null model used, there was a 20% reduction in the number of samples classified as uniform; these results suggest that stomata patterning is not as general as currently reported. Some samples changed drastically from being classified as uniform to being classified as clustered. In samples of Arabidopsis thaliana, only the disc model identified clustering at high densities of stomata. This reinforces the importance of selecting an appropriate null model to avoid incorrect inferences about underlying biological mechanisms. Based on the results gathered here, we encourage researchers to abandon point-pattern analysis when studying stomata patterning; more realistic conclusions can be drawn from finite-size object analysis. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
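The difference between the two null models can be demonstrated with a small Monte-Carlo sketch: complete spatial randomness for points versus random sequential placement of non-overlapping discs. The densities and disc radius are stand-ins, not real stomatal measurements:

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_nn_distance(pts):
    """Mean nearest-neighbour distance of a 2-D point set."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def sample_csr(n):
    """Point null model: complete spatial randomness on the unit square."""
    return rng.uniform(0, 1, size=(n, 2))

def sample_discs(n, radius):
    """Disc null model: random sequential placement of non-overlapping discs."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(0, 1, 2)
        if all(np.linalg.norm(p - q) >= 2 * radius for q in pts):
            pts.append(p)
    return np.array(pts)

# finite-size discs push neighbours apart even with no biological ordering
nn_point = np.mean([mean_nn_distance(sample_csr(50)) for _ in range(20)])
nn_disc = np.mean([mean_nn_distance(sample_discs(50, radius=0.03)) for _ in range(20)])
```

Because the disc null already produces larger spacing than the point null, testing observed stomata against the point null can mislabel mere geometric exclusion as biological uniformity, which is the paper's core argument.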
Sample size determination in clinical trials with multiple endpoints
Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R
2015-01-01
This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is also essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the distribution of the product method is recommended for use in practice because it requires less computation time than the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
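Sobel's method, the least powerful of the three tests compared, is also the simplest to write down: the mediated effect a·b is divided by its delta-method standard error. The path estimates below are hypothetical:

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel z for the mediated effect a*b, given the X->M path estimate a
    and the M->Y (given X) path estimate b with their standard errors.
    Returns (z, two-sided p) using the delta-method SE."""
    se_ab = math.sqrt(a ** 2 * se_b ** 2 + b ** 2 * se_a ** 2)
    z = (a * b) / se_ab
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# hypothetical path estimates from a mediation model
z, p = sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.12)
```

The normality assumption on a·b is what costs Sobel's method power; the distribution-of-the-product and bootstrap methods avoid it, which is why the study finds they need smaller samples.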
Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas
2014-01-01
Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
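A quick simulation shows why a significance filter alone produces such a negative correlation: only small studies with inflated observed effects pass p < .05. Everything below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulated_published_studies(true_d=0.3, n_studies=2000):
    """Simulate two-group studies with varying n, 'publish' only significant
    results, and return (per-group sample sizes, observed effect sizes d)."""
    ns, ds = [], []
    while len(ds) < n_studies:
        n = int(rng.integers(10, 200))            # per-group sample size
        g1 = rng.normal(0.0, 1.0, n)
        g2 = rng.normal(true_d, 1.0, n)
        d = (g2.mean() - g1.mean()) / np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
        z = d * np.sqrt(n / 2)                    # approximate test statistic
        if abs(z) > 1.96:                         # significance filter
            ns.append(n)
            ds.append(abs(d))
    return np.array(ns), np.array(ds)

ns, ds = simulated_published_studies()
r = np.corrcoef(ns, ds)[0, 1]   # clearly negative, as in the empirical finding
```

Without the `abs(z) > 1.96` filter the same simulation yields r near zero, which is the benchmark against which the observed r = -.45 looks like publication bias.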
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
The influence of sample size on the determination of population ...
African Journals Online (AJOL)
Reliable measures of population sizes of endangered and vulnerable species are difficult to achieve because of high variability in population sizes and logistic constraints on sample sizes, yet such measures are crucial for the determination of the success of conservation and management strategies aimed at curbing ...
Spiraling Edge: Fast Surface Reconstruction from Partially Organized Sample Points
Energy Technology Data Exchange (ETDEWEB)
Angel, E.; Crossno, P.
1999-01-06
Many applications produce three-dimensional points that must be further processed to generate a surface. Surface reconstruction algorithms that start with a set of unorganized points are extremely time-consuming. Often, however, points are generated such that there is additional information available to the reconstruction algorithm. We present a specialized algorithm for surface reconstruction that is three orders of magnitude faster than algorithms for the general case. In addition to sample point locations, our algorithm starts with normal information and knowledge of each point's neighbors. Our algorithm produces a localized approximation to the surface by creating a star-shaped triangulation between a point and a subset of its nearest neighbors. This surface patch is extended by locally triangulating each of the points along the edge of the patch. As each edge point is triangulated, it is removed from the edge and new edge points along the patch's edge are inserted in its place. The updated edge spirals out over the surface until the edge encounters a surface boundary and stops growing in that direction, or until the edge reduces to a small hole that fills itself in.
Spiraling Edge: Fast Surface Reconstruction from Partially Organized Sample Points
Energy Technology Data Exchange (ETDEWEB)
Angel, Edward; Crossno, Patricia
1999-07-12
Many applications produce three-dimensional points that must be further processed to generate a surface. Surface reconstruction algorithms that start with a set of unorganized points are extremely time-consuming. Sometimes, however, points are generated such that there is additional information available to the reconstruction algorithm. We present Spiraling Edge, a specialized algorithm for surface reconstruction that is three orders of magnitude faster than algorithms for the general case. In addition to sample point locations, our algorithm starts with normal information and knowledge of each point's neighbors. Our algorithm produces a localized approximation to the surface by creating a star-shaped triangulation between a point and a subset of its nearest neighbors. This surface patch is extended by locally triangulating each of the points along the edge of the patch. As each edge point is triangulated, it is removed from the edge and new edge points along the patch's edge are inserted in its place. The updated edge spirals out over the surface until the edge encounters a surface boundary and stops growing in that direction, or until the edge reduces to a small hole that is filled by the final triangle.
Estimating population size with correlated sampling unit estimates
David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey
2003-01-01
Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...
Approaches to sample size determination for multivariate data
Saccenti, Edoardo; Timmerman, Marieke E.
2016-01-01
Sample size determination is a fundamental step in the design of experiments. Methods for sample size determination are abundant for univariate analysis methods, but scarce in the multivariate case. Omics data are multivariate in nature and are commonly investigated using multivariate statistical
Sample size computation for association studies using case–parents ...
Indian Academy of Sciences (India)
sample size for case–control association studies is discussed. Materials and methods — parameter settings: we consider a candidate locus with two alleles A and a, where A is putatively associated with the disease status (increasing …). Keywords: sample size; association tests; genotype relative risk; power; autism. Journal of ...
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition or learning, effect sizes were relatively large although sample sizes were small; at the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even practically meaningless effects could be detected. This implies that researchers who cannot obtain large effect sizes tend to use larger samples to obtain significant results.
PointSampler: A GIS Tool for Point Intercept Sampling of Digital Images
Directory of Open Access Journals (Sweden)
David Lyon Gobbett
2014-06-01
Close-range digital photography to assess vegetation cover is useful in disciplines ranging from ecological monitoring to agricultural research. An on-screen point intercept sampling method, analogous to the equivalent field-based method, can be used to manually derive the percentage occurrence of multiple cover classes within an image. PointSampler is a GIS-embedded tool that provides a semi-automated approach to point intercept sampling of digital images and integrates with existing GIS functionality and workflows. We describe and illustrate two general applications of this tool: efficiently deriving primary ecological data from digital photographs, and generating validation data to complement automated image classification of a time series of groundcover images. The flexible design and GIS integration of PointSampler allow it to be put to a wide range of similar uses.
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim was to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
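The estimator described above is N̂ = M / P̂. A delta-method sketch of its standard error, with the survey variance of P̂ inflated by an assumed design effect for the respondent-driven sampling survey (all inputs are hypothetical, not the Harare figures):

```python
import math

def multiplier_estimate(M, p_hat, n, deff=2.0, z=1.96):
    """Population size N = M / p_hat with a delta-method 95% CI.

    M     : count of unique objects distributed (treated as known here)
    p_hat : proportion in the RDS survey reporting receipt of the object
    n     : RDS survey sample size
    deff  : assumed design effect of the RDS survey
    """
    N = M / p_hat
    var_p = deff * p_hat * (1 - p_hat) / n      # design-effect-inflated variance of p_hat
    se_N = M * math.sqrt(var_p) / p_hat**2      # delta method: |dN/dp| * se(p_hat)
    return N, (N - z * se_N, N + z * se_N)

# Hypothetical: 500 objects distributed, 25% of 400 respondents report receipt
N, (lo, hi) = multiplier_estimate(M=500, p_hat=0.25, n=400, deff=2.0)
```

Note that this sketch propagates only the survey uncertainty in P; the paper also accounts for error in M. The interval widens rapidly as p_hat shrinks, which mirrors the abstract's advice to push P upward by distributing more objects or lengthening the reference period.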
Sampling strategies for estimating brook trout effective population size
Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher
2012-01-01
The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...
Efficient triangulation of Poisson-disk sampled point sets
Guo, Jianwei
2014-05-06
In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable at relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sample size calculation for comparing two negative binomial rates.
Zhu, Haiyuan; Lakkis, Hassan
2014-02-10
The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
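The paper's exact formula and its three null-variance variants are not reproduced here, but a commonly used asymptotic version based on the variance of the log rate ratio illustrates how the dispersion parameter k and exposure time t enter the calculation. This is a sketch of the generic approach, not the authors' specific formula:

```python
import math
from scipy.stats import norm

def nb_sample_size(lam0, lam1, k, t=1.0, alpha=0.05, power=0.8):
    """Per-group n for comparing two negative binomial event rates.

    Uses the asymptotic variance of the log rate ratio,
    Var(log RR) ~ (1/n) * (1/(t*lam0) + 1/(t*lam1) + 2*k);
    published formulas differ in how the null variance is estimated.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = 1.0 / (t * lam0) + 1.0 / (t * lam1) + 2.0 * k
    return math.ceil(z**2 * var / math.log(lam1 / lam0) ** 2)

# Hypothetical design: control rate 1.0/yr, treatment rate 0.7/yr,
# dispersion k = 0.5, one year of exposure per patient
n = nb_sample_size(lam0=1.0, lam1=0.7, k=0.5, t=1.0)
```

Setting k = 0 recovers the Poisson case, which makes the overdispersion penalty (the 2k term) explicit: larger dispersion directly inflates the required sample size.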
Sample size for estimating average productive traits of pigeon pea
Directory of Open Access Journals (Sweden)
Giovani Facco
2016-04-01
ABSTRACT: The objectives of this study were to determine the sample size, in terms of number of plants, needed to estimate the average values of productive traits of the pigeon pea, and to determine whether the required sample size varies between traits and between crop years. Separate uniformity trials were conducted in 2011/2012 and 2012/2013. In each trial, 360 plants were demarcated, and the fresh and dry masses of roots, stems, leaves, shoots and the total plant were evaluated during blossoming for 10 productive traits. Descriptive statistics were calculated, normality and randomness were checked, and the sample size was calculated. There was variability in sample size between the productive traits and crop years of the pigeon pea culture. To estimate the averages of the productive traits with a 20% maximum estimation error and a 95% confidence level, 70 plants are sufficient.
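The "maximum estimation error" criterion used above corresponds to the classical iterative solution of n = (t · CV / E)², where CV is the coefficient of variation and E the maximum acceptable relative error. A sketch with a hypothetical CV (the pigeon pea data themselves are not reproduced here):

```python
import math
from scipy.stats import t as tdist

def sample_size_for_mean(cv, rel_error=0.20, conf=0.95, n_start=30):
    """Iteratively solve n = (t * CV / E)^2 for the sample size needed to
    estimate a mean within a maximum relative error E at confidence `conf`.
    The t quantile depends on n, hence the fixed-point iteration."""
    n = n_start
    for _ in range(100):
        t = tdist.ppf(1 - (1 - conf) / 2, df=n - 1)
        n_new = math.ceil((t * cv / rel_error) ** 2)
        if n_new == n:
            return n
        n = n_new
    return n

# Hypothetical trait with an 80% coefficient of variation
n = sample_size_for_mean(cv=0.80)
```

Traits with larger coefficients of variation demand larger samples under the same error bound, which is why the required n varies between traits and crop years in the study above.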
Effects of sample size on the second magnetization peak in ...
Indian Academy of Sciences (India)
8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...
Sample size computation for association studies using case–parents ...
Indian Academy of Sciences (India)
…sample size needed to reach a given power (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...
Accurate microfour-point probe sheet resistance measurements on small samples
DEFF Research Database (Denmark)
Thorsteinsson, Sune; Wang, Fei; Petersen, Dirch Hjorth
2009-01-01
We show that accurate sheet resistance measurements on small samples may be performed using microfour-point probes without applying correction factors. Using dual configuration measurements, the sheet resistance may be extracted with high accuracy when the microfour-point probes are in proximity of a mirror plane on small samples with dimensions of a few times the probe pitch. We calculate theoretically the size of the "sweet spot," where sufficiently accurate sheet resistances result, and show that even for very small samples it is feasible to do correction-free extraction of the sheet resistance …
Sample size re-estimation in a breast cancer trial.
Hade, Erinn M; Jarjoura, David; Wei, Lai
2010-06-01
During the recruitment phase of a randomized breast cancer trial, investigating the time to recurrence, we found a strong suggestion that the failure probabilities used at the design stage were too high. Since most of the methodological research involving sample size re-estimation has focused on normal or binary outcomes, we developed a method which preserves blinding to re-estimate sample size in our time to event trial. A mistakenly high estimate of the failure rate at the design stage may reduce the power unacceptably for a clinically important hazard ratio. We describe an ongoing trial and an application of a sample size re-estimation method that combines current trial data with prior trial data or assumes a parametric model to re-estimate failure probabilities in a blinded fashion. Using our current blinded trial data and additional information from prior studies, we re-estimate the failure probabilities to be used in sample size re-calculation. We employ bootstrap re-sampling to quantify uncertainty in the re-estimated sample sizes. At the time of re-estimation data from 278 patients were available, averaging 1.2 years of follow up. Using either method, we estimated a sample size increase of zero for the hazard ratio because the estimated failure probabilities at the time of re-estimation differed little from what was expected. We show that our method of blinded sample size re-estimation preserves the type I error rate. We show that when the initial guess of the failure probabilities are correct, the median increase in sample size is zero. Either some prior knowledge of an appropriate survival distribution shape or prior data is needed for re-estimation. In trials when the accrual period is lengthy, blinded sample size re-estimation near the end of the planned accrual period should be considered. In our examples, when assumptions about failure probabilities and HRs are correct the methods usually do not increase sample size or otherwise increase it by very
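For time-to-event trials like the one above, the link between failure probabilities and power usually runs through the required number of events. A sketch using the standard Schoenfeld approximation — this is the conventional formula, not the authors' blinded re-estimation procedure, and the hazard ratio and failure probability below are hypothetical:

```python
import math
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.8, alloc=0.5):
    """Required number of events for a two-arm logrank test
    (Schoenfeld approximation); `alloc` is the allocation fraction."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z**2 / (alloc * (1 - alloc) * math.log(hr) ** 2))

def total_n(events, p_fail):
    """Translate required events into subjects, given the overall
    probability of failure during follow-up."""
    return math.ceil(events / p_fail)

d = schoenfeld_events(hr=0.67)   # events needed to detect HR = 0.67
n = total_n(d, p_fail=0.30)      # if ~30% of patients are expected to fail
```

The dependence of n on p_fail is the crux of the abstract: if the design-stage failure probabilities were set too high, the same number of enrolled patients yields fewer events than planned, and power erodes unless the sample size is re-estimated.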
Sample size determination for equivalence assessment with multiple endpoints.
Sun, Anna; Dong, Xiaoyu; Tsong, Yi
2014-01-01
Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method without correlation adjustment and the correlation-adjusted method, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
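The correlation adjustment matters because requiring both TOSTs to pass is a joint event. A known-variance simulation sketch on the log scale makes the point; the margins, SDs and correlation below are hypothetical, and a z-test stands in for the paper's exact t-based power function:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def tost_power_two_endpoints(n, delta, sd, rho, margin=np.log(1.25),
                             alpha=0.05, sims=50000):
    """Simulated power to pass TOST on BOTH endpoints simultaneously.
    Known-variance sketch: each endpoint passes iff its mean log-ratio
    falls inside (-margin + z*se, margin - z*se)."""
    se = sd / np.sqrt(n)
    z = norm.ppf(1 - alpha)
    cov = se**2 * np.array([[1.0, rho], [rho, 1.0]])   # cov of the two sample means
    means = rng.multivariate_normal([delta, delta], cov, size=sims)
    pass_each = (means > -margin + z * se) & (means < margin - z * se)
    return pass_each.all(axis=1).mean()

# Same marginal design, different endpoint correlations (e.g. AUC vs Cmax)
p_indep = tost_power_two_endpoints(n=24, delta=0.0, sd=0.35, rho=0.0)
p_corr  = tost_power_two_endpoints(n=24, delta=0.0, sd=0.35, rho=0.8)
```

With independent endpoints the joint power is the product of the marginal powers; positive correlation raises it, so the correlation-adjusted sample size can be smaller than the naive per-endpoint maximum.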
Fundamental size limitations of micro four-point probes
DEFF Research Database (Denmark)
Ansbæk, Thor; Petersen, Dirch Hjorth; Hansen, Ole
2009-01-01
The continued down-scaling of integrated circuits and magnetic tunnel junctions (MTJ) for hard disc read heads presents a challenge to current metrology technology. The four-point probes (4PP), currently used for sheet resistance characterization in these applications, therefore must be down-scaled as well in order to correctly characterize the extremely thin films used. This presents a four-point probe design and fabrication challenge. We analyze the fundamental limitation on down-scaling of a generic micro four-point probe (M4PP) in a comprehensive study, where mechanical, thermal, and electrical …
Bayesian Sample Size Determination For The Accurate Identification ...
African Journals Online (AJOL)
Background & Aim: Sample size estimation is a major component of the design of virtually every experiment in biosciences. Microbiologists face a challenge when allocating resources to surveys designed to determine the sampling unit of bacterial strains of interest. In this study we derived a Bayesian approach with a ...
A COMPUTATIONAL TOOL TO EVALUATE THE SAMPLE SIZE IN MAP POSITIONAL ACCURACY
Directory of Open Access Journals (Sweden)
Marcelo Antonio Nero
Abstract: In many countries, positional accuracy control of cartographic or spatial data by points corresponds to comparing sets of coordinates of well-defined points against the same points from a more accurate source. Usually, each country determines a maximum number of points that may present error values above a pre-established threshold. In many cases, the standards fix the sample size at 20 points, without further consideration, and set this threshold at 10% of the sample. However, the sampling dimension (n), considering the statistical risk, especially when the percentage of outliers is around 10%, determines both a producer risk (rejecting a good map) and a user risk (accepting a bad map). This article analyzes this issue and allows the sampling dimension to be defined considering both the producer's and the user's risk. As a tool, a program we developed allows the sample size to be defined according to the risk the producer or user can, or wants to, assume. The analysis uses 600 control points, each with a known error. We performed simulations with a sample size of n = 20 points and calculated the associated risk; we then changed n, using smaller and larger sizes, calculating for each situation the associated risk for both the user and the producer. The program draws the operating (risk) curves from three parameters: the number of control points; the number of iterations used to create the curves; and the percentage of control points above the threshold, which can follow the Brazilian standard or the parameters of other countries. Several graphs and tables created with different parameters are presented, leading to better decisions for both user and producer and opening possibilities for further simulations and research.
Directory of Open Access Journals (Sweden)
R. Eric Heidel
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
Current sample size conventions: Flaws, harms, and alternatives
Directory of Open Access Journals (Sweden)
Bacchetti Peter
2010-03-01
Abstract Background: The belief remains widespread that medical research studies must have statistical power of at least 80% in order to be scientifically sound, and peer reviewers often question whether power is high enough. Discussion: This requirement and the methods for meeting it have severe flaws. Notably, the true nature of how sample size influences a study's projected scientific or practical value precludes any meaningful blanket requirement. Promising alternative approaches include value of information methods, simple choices based on cost or feasibility that have recently been justified, sensitivity analyses that examine a meaningful array of possible findings, and following previous analogous studies. To promote more rational approaches, research training should cover the issues presented here, peer reviewers should be extremely careful before raising issues of "inadequate" sample size, and reports of completed studies should not discuss power. Summary: Common conventions and expectations concerning sample size are deeply flawed, cause serious harm to the research process, and should be replaced by more rational alternatives.
Impact of shoe size in a sample of elderly individuals
Directory of Open Access Journals (Sweden)
Daniel López-López
Summary Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as analyze the scores related to foot health and health overall. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, foot and footwear sizes were measured, and scores were compared, using the Spanish version of the Foot Health Status Questionnaire, between the group wearing the correct shoe size and the group that did not. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups, evaluated using a t-test for independent samples, were statistically significant (p<0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Sampling from Determinantal Point Processes for Scalable Manifold Learning.
Wachinger, Christian; Golland, Polina
2015-01-01
High computational costs of manifold learning prohibit its application for large datasets. A common strategy to overcome this problem is to perform dimensionality reduction on selected landmarks and to successively embed the entire dataset with the Nyström method. The two main challenges that arise are: (i) the landmarks selected in non-Euclidean geometries must result in a low reconstruction error, (ii) the graph constructed from sparsely sampled landmarks must approximate the manifold well. We propose to sample the landmarks from determinantal distributions on non-Euclidean spaces. Since current determinantal sampling algorithms have the same complexity as those for manifold learning, we present an efficient approximation with linear complexity. Further, we recover the local geometry after the sparsification by assigning each landmark a local covariance matrix, estimated from the original point set. The resulting neighborhood selection based on the Bhattacharyya distance improves the embedding of sparsely sampled manifolds. Our experiments show a significant performance improvement compared to state-of-the-art landmark selection techniques on synthetic and medical data.
Optimal sample size for probability of detection curves
International Nuclear Information System (INIS)
Annis, Charles; Gandossi, Luca; Martin, Oliver
2013-01-01
Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes manufacturing test pieces with representative flaws, in numbers sufficient to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing test pieces with defects of the types typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size. …
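One way to see the sample-size/uncertainty trade-off described above is to track how the confidence interval around a hit/miss POD estimate narrows with the number of inspected flaws. A sketch using the Wilson score interval; the 90% detection rate and the flaw counts are illustrative, not from the ENIQ study:

```python
import math

def wilson_interval(hits, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion
    (here: POD at a single flaw size, from hit/miss data)."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Width of the interval around an observed POD of 0.9, for growing sample sizes
widths = {}
for n in (10, 30, 60, 120):
    lo, hi = wilson_interval(round(0.9 * n), n)
    widths[n] = hi - lo
```

The width shrinks roughly as 1/sqrt(n), so halving the uncertainty requires about four times as many test pieces — exactly the cost pressure the abstract describes for nuclear components.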
National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna in the vicinity of the Barbers Point (Honouliuli) ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment grain size...
Point source atom interferometry with a cloud of finite size
Energy Technology Data Exchange (ETDEWEB)
Hoth, Gregory W., E-mail: gregory.hoth@nist.gov; Pelle, Bruno; Riedl, Stefan; Kitching, John; Donley, Elizabeth A. [National Institute of Standards and Technology, Boulder, Colorado 80305 (United States)
2016-08-15
We demonstrate a two-axis gyroscope by the use of light-pulse atom interferometry with an expanding cloud of atoms in the regime where the cloud has expanded by 1.1–5 times its initial size during the interrogation. Rotations are measured by analyzing spatial fringe patterns in the atom population obtained by imaging the final cloud. The fringes arise from a correlation between an atom's initial velocity and its final position. This correlation is naturally created by the expansion of the cloud, but it also depends on the initial atomic distribution. We show that the frequency and contrast of these spatial fringes depend on the details of the initial distribution and develop an analytical model to explain this dependence. We also discuss several challenges that must be overcome to realize a high-performance gyroscope with this technique.
Approximate sample size calculations with microarray data: an illustration
Ferreira, José A.; Zwinderman, Aeilko
2006-01-01
We outline a method of sample size calculation in microarray experiments on the basis of pilot data and illustrate its practical application with both simulated and real data. The method was shown to be consistent (as the number of 'probed genes' tends to infinity) under general conditions in an
Sample size for collecting germplasms – a polyploid model with ...
Indian Academy of Sciences (India)
Unknown
Conservation; diploid; exploration; germplasm; inbreeding; polyploid; seeds ... A seed factor which influences the plant sample size has also been isolated to aid the collectors in selecting the appropriate combination of number of plants and seeds per plant. ..... able saving of resources during collection and storage of.
Sample size computation for association studies using case–parents ...
Indian Academy of Sciences (India)
jgen/085/03/0187-0191. Keywords. sample size; association tests; genotype relative risk; power; autism. Author Affiliations. Najla Kharrat1 Imen Ayadi1 Ahmed Rebaï1. Unité de Biostatistique et de Bioinformatique, Centre de Biotechnologie de ...
Estimating wildlife activity curves: comparison of methods and sample size.
Lashley, Marcus A; Cove, Michael V; Chitwood, M Colter; Penido, Gabriel; Gardner, Beth; DePerno, Chris S; Moorman, Chris E
2018-03-08
Camera traps and radiotags commonly are used to estimate animal activity curves. However, little empirical evidence has been provided to validate whether they produce similar results. We compared activity curves from two common camera trapping techniques to those from radiotags with four species that varied substantially in size (~1 kg to ~50 kg), diet (herbivore, omnivore, carnivore), and mode of activity (diurnal and crepuscular). Also, we sub-sampled photographs of each species with each camera trapping technique to determine the minimum sample size needed to maintain accuracy and precision of estimates. Camera trapping estimated greater activity during feeding times than radiotags in all but the carnivore, likely reflective of the close proximity of foods readily consumed by all species except the carnivore (i.e., corn bait or acorns). However, additional analyses still indicated both camera trapping methods produced relatively high overlap and correlation to radiotags. Regardless of species or camera trapping method, mean overlap increased and overlap error decreased rapidly as sample sizes increased until an asymptote near 100 detections, which we therefore recommend as a minimum sample size. Researchers should acknowledge that camera traps and radiotags may estimate the same mode of activity but differ in their estimation of magnitude in activity peaks.
Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests
Duncanson, L.; Rourke, O.; Dubayah, R.
2015-11-01
Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 and more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from -4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation.
Blinded sample size re-estimation in crossover bioequivalence trials.
Golkowski, Daniel; Friede, Tim; Kieser, Meinhard
2014-01-01
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re-estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re-estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre-specified level. Furthermore, some refinements of the re-estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.
Sample size and power calculation for molecular biology studies.
Jung, Sin-Ho
2010-01-01
Sample size calculation is a critical procedure when designing a new biological study. In this chapter, we consider molecular biology studies generating huge dimensional data. Microarray studies are typical examples, so that we state this chapter in terms of gene microarray data, but the discussed methods can be used for design and analysis of any molecular biology studies involving high-dimensional data. In this chapter, we discuss sample size calculation methods for molecular biology studies when the discovery of prognostic molecular markers is performed by accurately controlling false discovery rate (FDR) or family-wise error rate (FWER) in the final data analysis. We limit our discussion to the two-sample case.
Xie, Mengmin; Chen, Yueping; Zhang, Anshe; Fang, Rui
2018-03-01
The inspection accuracy of free-form surfaces is mainly affected by the processing, the number of sampling points, the distribution of sampling points, the measurement equipment and other factors. This paper focuses on the influence of sampling points on the inspection accuracy of free-form surfaces; an isoparametric distribution was used for sample point placement. Different numbers of sampling points were compared on the same surface with the same probe, the measurement data were analyzed, and the optimal number of sampling points was obtained.
Aerosol Sampling Bias from Differential Electrostatic Charge and Particle Size
Jayjock, Michael Anthony
Lack of reliable epidemiological data on long term health effects of aerosols is due in part to inadequacy of sampling procedures and the attendant doubt regarding the validity of the concentrations measured. Differential particle size has been widely accepted and studied as a major potential biasing effect in the sampling of such aerosols. However, relatively little has been done to study the effect of electrostatic particle charge on aerosol sampling. The objective of this research was to investigate the possible biasing effects of differential electrostatic charge, particle size and their interaction on the sampling accuracy of standard aerosol measuring methodologies. Field studies were first conducted to determine the levels and variability of aerosol particle size and charge at two manufacturing facilities making acrylic powder. The field work showed that the particle mass median aerodynamic diameter (MMAD) varied by almost an order of magnitude (4-34 microns) while the aerosol surface charge was relatively stable (0.6-0.9 µC/m²). The second part of this work was a series of laboratory experiments in which aerosol charge and MMAD were manipulated in a 2^n factorial design with the percentage of sampling bias for various standard methodologies as the dependent variable. The experiments used the same friable acrylic powder studied in the field work plus two size populations of ground quartz as a nonfriable control. Despite some ill conditioning of the independent variables due to experimental difficulties, statistical analysis has shown aerosol charge (at levels comparable to those measured in workroom air) is capable of having a significant biasing effect. Physical models consistent with the sampling data indicate that the level and bipolarity of the aerosol charge are determining factors in the extent and direction of the bias.
Sample size for detecting differentially expressed genes in microarray experiments
Directory of Open Access Journals (Sweden)
Li Jiangning
2004-11-01
Background: Microarray experiments are often performed with a small number of biological replicates, resulting in low statistical power for detecting differentially expressed genes and concomitant high false positive rates. While increasing sample size can increase statistical power and decrease error rates, with too many samples, valuable resources are not used efficiently. The issue of how many replicates are required in a typical experimental system needs to be addressed. Of particular interest is the difference in required sample sizes for similar experiments in inbred vs. outbred populations (e.g. mouse and rat vs. human). Results: We hypothesize that if all other factors (assay protocol, microarray platform, data pre-processing) were equal, fewer individuals would be needed for the same statistical power using inbred animals as opposed to unrelated human subjects, as genetic effects on gene expression will be removed in the inbred populations. We apply the same normalization algorithm and estimate the variance of gene expression for a variety of cDNA data sets (humans, inbred mice and rats) comparing two conditions. Using one-sample, paired-sample or two independent-sample t-tests, we calculate the sample sizes required to detect a 1.5-, 2-, and 4-fold change in expression level as a function of false positive rate, power and percentage of genes that have a standard deviation below a given percentile. Conclusions: Factors that affect power and sample size calculations include variability of the population, the desired detectable differences, the power to detect the differences, and an acceptable error rate. In addition, experimental design, technical variability and data pre-processing play a role in the power of the statistical tests in microarrays. We show that the number of samples required for detecting a 2-fold change with 90% probability and a p-value of 0.01 in humans is much larger than the number of samples commonly used in
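The kind of calculation this abstract describes can be illustrated with the standard normal-approximation sample size formula for a two-sample t-test, n per group ≈ 2(σ/δ)²(z₁₋α/₂ + z₁₋β)², where δ is the detectable difference on the log2-expression scale (a 2-fold change is δ = 1). The numbers below are illustrative assumptions, not values from the study:

```python
import math
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.01, power=0.90):
    """Normal-approximation sample size per group for a two-sample
    t-test detecting a mean difference `delta` given within-group
    standard deviation `sd` (both on the log2-expression scale)."""
    z = NormalDist().inv_cdf
    n = 2.0 * (sd / delta) ** 2 * (z(1 - alpha / 2) + z(power)) ** 2
    return math.ceil(n)

# A 2-fold change is delta = 1 on the log2 scale; sd = 1 is an
# assumed per-gene standard deviation, not a value from the paper.
print(n_per_group(sd=1.0, delta=1.0))    # 30
# A 1.5-fold change (log2(1.5) ~ 0.585) needs far more replicates:
print(n_per_group(sd=1.0, delta=0.585))
```

In a genome-wide setting α would be set much smaller (or replaced by an FDR-controlling criterion), pushing n higher still.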
Development of sample size allocation program using hypergeometric distribution
International Nuclear Information System (INIS)
Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik
1996-01-01
The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, in sample allocation to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material, therefore game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximate distribution, to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for 1. sample approximate-allocation with correctly applied standard binomial approximation, 2. sample approximate-allocation with improved binomial approximation, and 3. sample approximate-allocation with hypergeometric distribution were developed with Visual C++ and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
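The difference between sampling with and without replacement that motivates this work is easy to see numerically: for a finite stratum, the hypergeometric detection probability exceeds the binomial approximation. The stratum sizes below are illustrative, not IAEA figures:

```python
from math import comb

def p_detect_hypergeometric(N, D, n):
    """P(at least one defective item in a sample of n drawn without
    replacement from N items containing D defectives)."""
    return 1.0 - comb(N - D, n) / comb(N, n)

def p_detect_binomial(N, D, n):
    """Binomial approximation: n independent draws at defect rate D/N
    (i.e., sampling with replacement)."""
    return 1.0 - (1.0 - D / N) ** n

# Illustrative stratum: 100 items, 5 defective, sample of 20.
print(round(p_detect_hypergeometric(100, 5, 20), 4))  # 0.6807
print(round(p_detect_binomial(100, 5, 20), 4))        # 0.6415
```

Because the binomial approximation understates the detection probability, allocating sample sizes with it is conservative but wastes inspection effort, which is the gap the hypergeometric allocation program closes.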
Estimation of individual reference intervals in small sample sizes
DEFF Research Database (Denmark)
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz
2007-01-01
of that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women, taking into account biological variation … presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample…
Simple and multiple linear regression: sample size considerations.
Hanley, James A
2016-11-01
The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
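The closed-form variance formulae alluded to above can be turned into quick sample size calculations for simple linear regression, where SE(β̂₁) = σ/(sd(x)·√n). This is a minimal sketch under assumed values, not the article's worked examples:

```python
import math
from statistics import NormalDist

def n_for_slope_precision(sigma, sd_x, target_se):
    """Smallest n with SE(beta1_hat) = sigma / (sd_x * sqrt(n))
    at or below target_se (simple linear regression, fixed-X view)."""
    return math.ceil((sigma / (sd_x * target_se)) ** 2)

def n_for_power(sigma, sd_x, beta1, alpha=0.05, power=0.80):
    """n so a Wald test of beta1 = 0 on a slope of true size beta1
    has the requested power (normal approximation)."""
    z = NormalDist().inv_cdf
    return math.ceil(((z(1 - alpha / 2) + z(power)) * sigma
                      / (beta1 * sd_x)) ** 2)

# Illustrative values (residual sd 2, predictor sd 1):
print(n_for_slope_precision(sigma=2.0, sd_x=1.0, target_se=0.25))  # 64
print(n_for_power(sigma=2.0, sd_x=1.0, beta1=0.5))                 # 126
```

Note how n depends on the residual-to-predictor spread ratio σ/sd(x) rather than on the number of covariates, which is exactly why a flat "two subjects per variable" rule can mislead.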
Sample size for equivalence trials: a case study from a vaccine lot consistency trial.
Ganju, Jitendra; Izu, Allen; Anemona, Alessandra
2008-08-30
For some trials, simple but subtle assumptions can have a profound impact on the size of the trial. A case in point is a vaccine lot consistency (or equivalence) trial. Standard sample size formulas used for designing lot consistency trials rely on only one component of variation, namely, the variation in antibody titers within lots. The other component, the variation in the means of titers between lots, is assumed to be equal to zero. In reality, some amount of variation between lots, however small, will be present even under the best manufacturing practices. Using data from a published lot consistency trial, we demonstrate that when the between-lot variation is only 0.5 per cent of the total variation, the increase in the sample size is nearly 300 per cent when compared with the size assuming that the lots are identical. The increase in the sample size is so pronounced that in order to maintain power one is led to consider a less stringent criterion for demonstration of lot consistency. The appropriate sample size formula that is a function of both components of variation is provided. We also discuss the increase in the sample size due to correlated comparisons arising from three pairs of lots as a function of the between-lot variance.
Sample size of the reference sample in a case-augmented study.
Ghosh, Palash; Dewanji, Anup
2017-05-01
The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.
Vermeer, Willemijn M; Steenhuis, Ingrid H M; Seidell, Jacob C
2010-02-01
This qualitative study assessed consumers' opinions of food portion sizes and their attitudes toward portion-size interventions located in various point-of-purchase settings targeting overweight and obese people. Eight semi-structured focus group discussions were conducted with 49 participants. Constructs from the diffusion of innovations theory were included in the interview guide. Each focus group was recorded and transcribed verbatim. Data were coded and analyzed with Atlas.ti 5.2 using the framework approach. Results showed that many participants thought that portion sizes of various products have increased during the past decades and are larger than acceptable. The majority also indicated that value for money is important when purchasing and that large portion sizes offer more value for money than small portion sizes. Furthermore, many experienced difficulties with self-regulating the consumption of large portion sizes. Among the portion-size interventions that were discussed, participants had most positive attitudes toward a larger availability of portion sizes and pricing strategies, followed by serving-size labeling. In general, reducing package serving sizes as an intervention strategy to control food intake met resistance. The study concludes that consumers consider interventions consisting of a larger variety of available portion sizes, pricing strategies and serving-size labeling as most acceptable to implement.
Sample size for monitoring sirex populations and their natural enemies
Directory of Open Access Journals (Sweden)
Susete do Rocio Chiarello Penteado
2016-09-01
The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced in Brazil in 1988 and became the main pest in pine plantations. It has spread to about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size needed to monitor the S. noctilio population and the efficiency of its natural enemies, which was found to be perfectly adequate.
Luo, Maoyi; Xing, Shan; Yang, Yonggang; Song, Lijuan; Ma, Yan; Wang, Yadong; Dai, Xiongxin; Happel, Steffen
2018-07-01
There is a growing demand for the determination of actinides in soil and sediment samples for environmental monitoring and tracing, radiological protection, and nuclear forensic reasons. A total sample dissolution method based on lithium metaborate fusion, followed by sequential column chromatography separation, was developed for simultaneous determination of Pu, Am and Cm isotopes in large-size environmental samples by alpha spectrometry and mass spectrometric techniques. The overall recoveries of both Pu and Am for the entire procedure were higher than 70% for large-size soil samples. The method was validated using 20 g of soil samples spiked with known amounts of 239Pu and 241Am as well as the certified reference materials IAEA-384 (Fangataufa Lagoon sediment) and IAEA-385 (Irish Sea sediment). All the measured results agreed very well with the expected values. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimal Sample Size for Probability of Detection Curves
International Nuclear Information System (INIS)
Annis, Charles; Gandossi, Luca; Martin, Oliver
2012-01-01
The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
Distance of Sample Measurement Points to Prototype Catalog Curve
DEFF Research Database (Denmark)
Hjorth, Poul G.; Karamehmedovic, Mirza; Perram, John
2006-01-01
We discuss strategies for comparing discrete data points to a catalog (reference) curve by means of the Euclidean distance from each point to the curve in a pump's head H vs. flow Q diagram. In particular, we find that a method currently in use is inaccurate. We propose several alternatives that are…
Uncertainty analysis of point by point sampling complex surfaces using touch probe CMMs
DEFF Research Database (Denmark)
Barini, Emanuele; Tosello, Guido; De Chiffre, Leonardo
2007-01-01
The paper describes a study concerning point-by-point scanning of complex surfaces using tactile CMMs. A four-factor, two-level full factorial experiment was carried out, involving measurements on a complex surface configuration item comprising a sphere, a cylinder and a cone, combined in a single…
Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model
Musekiwa, Alfred; Manda, Samuel O. M.; Mwambi, Henry G.; Chen, Ding-Geng
2016-01-01
Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect size and utilize a practical example involving meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results. PMID:27798661
Sample size re-estimation in paired comparative diagnostic accuracy studies with a binary response.
McCray, Gareth P J; Titman, Andrew C; Ghaneh, Paula; Lancaster, Gillian A
2017-07-14
The sample size required to power a study to a nominal level in a paired comparative diagnostic accuracy study, i.e. studies in which the diagnostic accuracy of two testing procedures is compared relative to a gold standard, depends on the conditional dependence between the two tests - the lower the dependence the greater the sample size required. A priori, we usually do not know the dependence between the two tests and thus cannot determine the exact sample size required. One option is to use the implied sample size for the maximal negative dependence, giving the largest possible sample size. However, this is potentially wasteful of resources and unnecessarily burdensome on study participants as the study is likely to be overpowered. A more accurate estimate of the sample size can be determined at a planned interim analysis point where the sample size is re-estimated. This paper discusses a sample size estimation and re-estimation method based on the maximum likelihood estimates, under an implied multinomial model, of the observed values of conditional dependence between the two tests and, if required, prevalence, at a planned interim. The method is illustrated by comparing the accuracy of two procedures for the detection of pancreatic cancer, one procedure using the standard battery of tests, and the other using the standard battery with the addition of a PET/CT scan all relative to the gold standard of a cell biopsy. Simulation of the proposed method illustrates its robustness under various conditions. The results show that the type I error rate of the overall experiment is stable using our suggested method and that the type II error rate is close to or above nominal. Furthermore, the instances in which the type II error rate is above nominal are in the situations where the lowest sample size is required, meaning a lower impact on the actual number of participants recruited. We recommend multinomial model maximum likelihood estimation of the conditional
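The dependence-driven sample size issue described above can be illustrated with Connor's normal approximation for McNemar's test, n = (z₁₋α/₂√ψ + z₁₋β√(ψ − δ²))²/δ², where δ = |p₁ − p₂| and ψ is the probability of a discordant pair; weaker positive dependence between the two tests means a larger ψ and hence a larger n. This is a sketch of the underlying phenomenon, not the authors' multinomial maximum-likelihood re-estimation procedure, and all numbers are illustrative assumptions:

```python
import math
from statistics import NormalDist

def n_mcnemar(p1, p2, psi, alpha=0.05, power=0.80):
    """Approximate number of pairs for McNemar's test comparing
    paired sensitivities p1 and p2, where psi is the total
    probability of a discordant pair (Connor's approximation)."""
    delta = abs(p1 - p2)
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    n = (za * math.sqrt(psi)
         + zb * math.sqrt(psi - delta ** 2)) ** 2 / delta ** 2
    return math.ceil(n)

# Sensitivities 0.80 vs 0.90; strongly dependent tests (psi = 0.12)
# versus weakly dependent tests (psi = 0.26):
print(n_mcnemar(0.80, 0.90, psi=0.12))  # 92
print(n_mcnemar(0.80, 0.90, psi=0.26))  # 202
```

The spread between these two answers is exactly why an interim re-estimate of the observed dependence, as proposed in the paper, can rescue a trial sized under the wrong assumption.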
Overestimation of test performance by ROC analysis: Effect of small sample size
International Nuclear Information System (INIS)
Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.
1984-01-01
New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample sizes systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P < 0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described.
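A toy version of this Monte Carlo setup (illustrative only, not the authors' exact simulation) draws signal and noise decision variables from a binormal model, discretizes them onto a 6-point rating scale, and computes the empirical Mann-Whitney AUC; repeating this shows how strongly AUC estimates scatter at small sample sizes:

```python
import random
from statistics import pstdev

def empirical_auc(sn, n):
    """Mann-Whitney estimate of the area under the ROC curve:
    P(signal rating > noise rating) + 0.5 * P(tie)."""
    pairs = [(s > x) + 0.5 * (s == x) for s in sn for x in n]
    return sum(pairs) / len(pairs)

def rate(z, cuts=(-1.5, -0.5, 0.5, 1.5, 2.5)):
    """Discretize a latent decision variable onto a 6-point scale."""
    return sum(z > c for c in cuts)

def auc_spread(sample_size, reps=500, d_prime=1.0, seed=1):
    """Mean and spread of the empirical AUC over many simulated
    rating studies with `sample_size` each of SN and N images."""
    rng = random.Random(seed)
    aucs = []
    for _ in range(reps):
        noise = [rate(rng.gauss(0, 1)) for _ in range(sample_size)]
        signal = [rate(rng.gauss(d_prime, 1)) for _ in range(sample_size)]
        aucs.append(empirical_auc(signal, noise))
    return sum(aucs) / reps, pstdev(aucs)

mean15, sd15 = auc_spread(15)
mean100, sd100 = auc_spread(100)
print(f"n=15:  AUC ~ {mean15:.3f} +/- {sd15:.3f}")
print(f"n=100: AUC ~ {mean100:.3f} +/- {sd100:.3f}")
```

Note that the empirical (trapezoidal) AUC used here is nearly unbiased; the overestimation the abstract reports concerns AUC from fitted ROC curves, so this sketch only shows how estimate scatter grows as sample size shrinks.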
The effects of size, clutter, and complexity on vanishing-point distances in visual imagery.
Hubbard, T L; Baird, J C
1993-01-01
The portrayal of vanishing-point distances in visual imagery was examined in six experiments. In all experiments, subjects formed visual images of squares, and the squares were to be oriented orthogonally to subjects' line of sight. The squares differed in their level of surface complexity, and were either undivided, divided into 4 equally sized smaller squares, or divided into 16 equally sized smaller squares. Squares also differed in stated referent size, and ranged from 3 in. to 128 ft along each side. After subjects had formed an image of a specified square, they transformed their image so that the square was portrayed to move away from them. Eventually, the imaged square was portrayed to be so far away that if it were any further away, it could not be identified. Subjects estimated the distance to the square that was portrayed in their image at that time, the vanishing-point distance, and the relationship between stated referent size and imaged vanishing-point distance was best described by a power function with an exponent less than 1. In general, there were trends for exponents (slopes on log axes) to increase slightly and for multiplicative constants (y intercepts on log axes) to decrease as surface complexity increased. No differences in exponents or in multiplicative constants were found when the vanishing-point was approached from either subthreshold or suprathreshold directions. When clutter in the form of additional imaged objects located to either side of the primary imaged object was added to the image, the exponent of the vanishing-point function increased slightly and the multiplicative constant decreased. The success of a power function (and the failure of the size-distance invariance hypothesis) in describing the vanishing-point distance function calls into question the notions (a) that a constant grain size exists in the imaginal visual field at a given location and (b) that grain size specifies a lower limit in the storage of information in
Sample Size of One: Operational Qualitative Analysis in the Classroom
Directory of Open Access Journals (Sweden)
John Hoven
2015-10-01
Qualitative analysis has two extraordinary capabilities: first, finding answers to questions we are too clueless to ask; and second, causal inference – hypothesis testing and assessment – within a single unique context (sample size of one). These capabilities are broadly useful, and they are critically important in village-level civil-military operations. Company commanders need to learn quickly, "What are the problems and possibilities here and now, in this specific village? What happens if we do A, B, and C?" – and that is an ill-defined, one-of-a-kind problem. The U.S. Army's Eighty-Third Civil Affairs Battalion is our "first user" innovation partner in a new project to adapt qualitative research methods to an operational tempo and purpose. Our aim is to develop a simple, low-cost methodology and training program for local civil-military operations conducted by non-specialist conventional forces. Complementary to that, this paper focuses on some essential basics that can be implemented by college professors without significant cost, effort, or disruption.
Two-sample binary phase 2 trials with low type I error and low sample size.
Litwin, Samuel; Basickes, Stanley; Ross, Eric A
2017-04-30
We address design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p0 and alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ⩾ m, with two-sample rules of the form E − C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
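The combined rejection rule described above (E ⩾ m together with E − C > r) can be evaluated by exact binomial enumeration. The sketch below is a single-stage illustration with made-up design parameters, not the paper's two-stage designs:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def reject_prob(n_e, n_c, p_e, p_c, m, r):
    """P(E >= m and E - C > r) by exact binomial enumeration."""
    total = 0.0
    for e in range(m, n_e + 1):
        pe = binom_pmf(e, n_e, p_e)
        for c in range(n_c + 1):
            if e - c > r:
                total += pe * binom_pmf(c, n_c, p_c)
    return total

# Hypothetical single-stage design with 2:1 randomization, null rate 0.2
alpha = reject_prob(n_e=40, n_c=20, p_e=0.2, p_c=0.2, m=12, r=4)
power = reject_prob(n_e=40, n_c=20, p_e=0.4, p_c=0.2, m=12, r=4)
print(f"type I error = {alpha:.3f}, power = {power:.3f}")
```

Evaluating both operating characteristics this way is how one searches for (m, r) pairs that keep the local type I error low at small total sample size.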
Three-point correlation functions of giant magnons with finite size
International Nuclear Information System (INIS)
Ahn, Changrim; Bozhilov, Plamen
2011-01-01
We compute holographic three-point correlation functions or structure constants of a zero-momentum dilaton operator and two (dyonic) giant magnon string states with a finite-size length in the semiclassical approximation. We show that the semiclassical structure constants match exactly with the three-point functions between two su(2) magnon single trace operators with finite size and the Lagrangian in the large 't Hooft coupling constant limit. A special limit J>>√(λ) of our result is compared with the relevant result based on the Luescher corrections.
Boczkaj, Grzegorz; Przyjazny, Andrzej; Kamiński, Marian
2015-03-01
The paper describes a new procedure for the determination of boiling point distribution of high-boiling petroleum fractions using size-exclusion chromatography with refractive index detection. Thus far, the determination of boiling range distribution by chromatography has been accomplished using simulated distillation with gas chromatography with flame ionization detection. This study revealed that in spite of substantial differences in the separation mechanism and the detection mode, the size-exclusion chromatography technique yields similar results for the determination of boiling point distribution compared with simulated distillation and novel empty column gas chromatography. The developed procedure using size-exclusion chromatography has substantial applicability, especially for the determination of exact final boiling point values for high-boiling mixtures, for which a standard high-temperature simulated distillation would have to be used. In this case, the precision of final boiling point determination is low due to the high final temperatures of the gas chromatograph oven and an insufficient thermal stability of both the gas chromatography stationary phase and the sample. Additionally, the use of high-performance liquid chromatography detectors more sensitive than refractive index detection allows a lower detection limit for high-molar-mass aromatic compounds, and thus increases the sensitivity of final boiling point determination. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin
2014-01-01
This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
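Fitting a three-parameter sigmoid to sampling-efficiency data, as described above, can be sketched with scipy's `curve_fit`; the parameterization and the synthetic efficiencies below are assumptions for illustration, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, a, d50, w):
    # One possible three-parameter sigmoid; the study's exact
    # parameterization is not given in the abstract.
    return a / (1.0 + np.exp((d - d50) / w))

# Synthetic efficiencies at the nine aerodynamic diameters (1-9 um)
diameters = np.arange(1.0, 10.0)
eff_ref = sigmoid(diameters, 0.95, 4.0, 0.8)    # stand-in reference sampler
eff_test = sigmoid(diameters, 0.95, 4.3, 0.8)   # slightly shifted test sampler

params, _ = curve_fit(sigmoid, diameters, eff_test, p0=[1.0, 4.0, 1.0])
bias = (eff_test - eff_ref) / eff_ref           # pointwise bias vs. reference
print("fitted (a, d50, w):", np.round(params, 3))
print("max |bias|:", round(float(np.max(np.abs(bias))), 3))
```

Comparing the fitted curve to the reference curve point by point is the same idea as the bias maps mentioned in the abstract.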
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
Energy Technology Data Exchange (ETDEWEB)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
2014-11-01
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
Shieh, Gwowen
2013-01-01
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
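The classical normal-approximation formula behind such a two-sample t-test power analysis can be sketched as follows (a standard textbook formula, not code from the article):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sample t test via the usual normal
    approximation (an exact t-based calculation adds a few subjects).
    delta: difference in population means, sigma: common SD."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return ceil(2 * ((za + zb) * sigma / delta) ** 2)

# Detect a half-SD mean difference at two-sided alpha = 0.05, 80% power
print(n_per_group(delta=0.5, sigma=1.0))  # 63 per group
```

The quadratic dependence on sigma/delta is why specifying scientifically meaningful values for the mean difference matters so much.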
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
Bhandari, Mohit; Tornetta, Paul; Rampersad, Shelly-Ann; Sprague, Sheila; Heels-Ansdell, Diane; Sanders, David W.; Schemitsch, Emil H.; Swiontkowski, Marc; Walter, Stephen; Guyatt, Gordon; Buckingham, Lisa; Leece, Pamela; Viveiros, Helena; Mignott, Tashay; Ansell, Natalie; Sidorkewicz, Natalie; Agel, Julie; Bombardier, Claire; Berlin, Jesse A.; Bosse, Michael; Browner, Bruce; Gillespie, Brenda; O'Brien, Peter; Poolman, Rudolf; Macleod, Mark D.; Carey, Timothy; Leitch, Kellie; Bailey, Stuart; Gurr, Kevin; Konito, Ken; Bartha, Charlene; Low, Isolina; MacBean, Leila V.; Ramu, Mala; Reiber, Susan; Strapp, Ruth; Tieszer, Christina; Kreder, Hans; Stephen, David J. G.; Axelrod, Terry S.; Yee, Albert J. M.; Richards, Robin R.; Finkelstein, Joel; Holtby, Richard M.; Cameron, Hugh; Cameron, John; Gofton, Wade; Murnaghan, John; Schatztker, Joseph; Bulmer, Beverly; Conlan, Lisa; Laflamme, Yves; Berry, Gregory; Beaumont, Pierre; Ranger, Pierre; Laflamme, Georges-Henri; Jodoin, Alain; Renaud, Eric; Gagnon, Sylvain; Maurais, Gilles; Malo, Michel; Fernandes, Julio; Latendresse, Kim; Poirier, Marie-France; Daigneault, Gina; McKee, Michael M.; Waddell, James P.; Bogoch, Earl R.; Daniels, Timothy R.; McBroom, Robert R.; Vicente, Milena R.; Storey, Wendy; Wild, Lisa M.; McCormack, Robert; Perey, Bertrand; Goetz, Thomas J.; Pate, Graham; Penner, Murray J.; Panagiotopoulos, Kostas; Pirani, Shafique; Dommisse, Ian G.; Loomer, Richard L.; Stone, Trevor; Moon, Karyn; Zomar, Mauri; Webb, Lawrence X.; Teasdall, Robert D.; Birkedal, John Peter; Martin, David Franklin; Ruch, David S.; Kilgus, Douglas J.; Pollock, David C.; Harris, Mitchel Brion; Wiesler, Ethan Ron; Ward, William G.; Shilt, Jeffrey Scott; Koman, Andrew L.; Poehling, Gary G.; Kulp, Brenda; Creevy, William R.; Stein, Andrew B.; Bono, Christopher T.; Einhorn, Thomas A.; Brown, T. 
Desmond; Pacicca, Donna; Sledge, John B.; Foster, Timothy E.; Voloshin, Ilva; Bolton, Jill; Carlisle, Hope; Shaughnessy, Lisa; Ombremsky, William T.; LeCroy, C. Michael; Meinberg, Eric G.; Messer, Terry M.; Craig, William L.; Dirschl, Douglas R.; Caudle, Robert; Harris, Tim; Elhert, Kurt; Hage, William; Jones, Robert; Piedrahita, Luis; Schricker, Paul O.; Driver, Robin; Godwin, Jean; Hansley, Gloria; Obremskey, William Todd; Kregor, Philip James; Tennent, Gregory; Truchan, Lisa M.; Sciadini, Marcus; Shuler, Franklin D.; Driver, Robin E.; Nading, Mary Alice; Neiderstadt, Jacky; Vap, Alexander R.; Vallier, Heather A.; Patterson, Brendan M.; Wilber, John H.; Wilber, Roger G.; Sontich, John K.; Moore, Timothy Alan; Brady, Drew; Cooperman, Daniel R.; Davis, John A.; Cureton, Beth Ann; Mandel, Scott; Orr, R. Douglas; Sadler, John T. S.; Hussain, Tousief; Rajaratnam, Krishan; Petrisor, Bradley; Drew, Brian; Bednar, Drew A.; Kwok, Desmond C. H.; Pettit, Shirley; Hancock, Jill; Cole, Peter A.; Smith, Joel J.; Brown, Gregory A.; Lange, Thomas A.; Stark, John G.; Levy, Bruce; Swiontkowski, Marc F.; Garaghty, Mary J.; Salzman, Joshua G.; Schutte, Carol A.; Tastad, Linda Toddie; Vang, Sandy; Seligson, David; Roberts, Craig S.; Malkani, Arthur L.; Sanders, Laura; Gregory, Sharon Allen; Dyer, Carmen; Heinsen, Jessica; Smith, Langan; Madanagopal, Sudhakar; Coupe, Kevin J.; Tucker, Jeffrey J.; Criswell, Allen R.; Buckle, Rosemary; Rechter, Alan Jeffrey; Sheth, Dhiren Shaskikant; Urquart, Brad; Trotscher, Thea; Anders, Mark J.; Kowalski, Joseph M.; Fineberg, Marc S.; Bone, Lawrence B.; Phillips, Matthew J.; Rohrbacher, Bernard; Stegemann, Philip; Mihalko, William M.; Buyea, Cathy; Augustine, Stephen J.; Jackson, William Thomas; Solis, Gregory; Ero, Sunday U.; Segina, Daniel N.; Berrey, Hudson B.; Agnew, Samuel G.; Fitzpatrick, Michael; Campbell, Lakina C.; Derting, Lynn; McAdams, June; Goslings, J. 
Carel; Ponsen, Kees Jan; Luitse, Jan; Kloen, Peter; Joosse, Pieter; Winkelhagen, Jasper; Duivenvoorden, Raphaël; Teague, David C.; Davey, Joseph; Sullivan, J. Andy; Ertl, William J. J.; Puckett, Timothy A.; Pasque, Charles B.; Tompkins, John F.; Gruel, Curtis R.; Kammerlocher, Paul; Lehman, Thomas P.; Puffinbarger, William R.; Carl, Kathy L.; Weber, Donald W.; Jomha, Nadr M.; Goplen, Gordon R.; Masson, Edward; Beaupre, Lauren A.; Greaves, Karen E.; Schaump, Lori N.; Jeray, Kyle J.; Goetz, David R.; Westberry, Davd E.; Broderick, J. Scott; Moon, Bryan S.; Tanner, Stephanie L.; Powell, James N.; Buckley, Richard E.; Elves, Leslie; Connolly, Stephen; Abraham, Edward P.; Eastwood, Donna; Steele, Trudy; Ellis, Thomas; Herzberg, Alex; Brown, George A.; Crawford, Dennis E.; Hart, Robert; Hayden, James; Orfaly, Robert M.; Vigland, Theodore; Vivekaraj, Maharani; Bundy, Gina L.; Miclau, Theodore; Matityahu, Amir; Coughlin, R. Richard; Kandemir, Utku; McClellan, R. Trigg; Lin, Cindy Hsin-Hua; Karges, David; Cramer, Kathryn; Watson, J. Tracy; Moed, Berton; Scott, Barbara; Beck, Dennis J.; Orth, Carolyn; Puskas, David; Clark, Russell; Jones, Jennifer; Egol, Kenneth A.; Paksima, Nader; France, Monet; Wai, Eugene K.; Johnson, Garth; Wilkinson, Ross; Gruszczynski, Adam T.; Vexler, Liisa
2013-01-01
Inadequate sample size and power in randomized trials can result in misleading findings. This study demonstrates the effect of sample size in a large clinical trial by evaluating the results of the Study to Prospectively evaluate Reamed Intramedullary Nails in Patients with Tibial fractures (SPRINT)
Point-Sampling and Line-Sampling Probability Theory, Geometric Implications, Synthesis
L.R. Grosenbaugh
1958-01-01
Foresters concerned with measuring tree populations on definite areas have long employed two well-known methods of representative sampling. In list or enumerative sampling the entire tree population is tallied with a known proportion being randomly selected and measured for volume or other variables. In area sampling all trees on randomly located plots or strips...
Significance, Errors, Power, and Sample Size: The Blocking and Tackling of Statistics.
Mascha, Edward J; Vetter, Thomas R
2018-02-01
Inferential statistics relies heavily on the central limit theorem and the related law of large numbers. According to the central limit theorem, regardless of the distribution of the source population, a sample estimate of that population will have a normal distribution, but only if the sample is large enough. The related law of large numbers holds that the central limit theorem is valid as random samples become large enough, usually defined as an n ≥ 30. In research-related hypothesis testing, the term "statistically significant" is used to describe when an observed difference or association has met a certain threshold. This significance threshold or cut-point is denoted as alpha (α) and is typically set at .05. When the observed P value is less than α, one rejects the null hypothesis (Ho) and accepts the alternative. Clinical significance is even more important than statistical significance, so treatment effect estimates and confidence intervals should be regularly reported. A type I error occurs when the Ho of no difference or no association is rejected, when in fact the Ho is true. A type II error occurs when the Ho is not rejected, when in fact there is a true population effect. Power is the probability of detecting a true difference, effect, or association if it truly exists. Sample size justification and power analysis are key elements of a study design. Ethical concerns arise when studies are poorly planned or underpowered. When calculating sample size for comparing groups, 4 quantities are needed: α, type II error, the difference or effect of interest, and the estimated variability of the outcome variable. Sample size increases for increasing variability and power, and for decreasing α and decreasing difference to detect. Sample size for a given relative reduction in proportions depends heavily on the proportion in the control group itself, and increases as the proportion decreases. Sample size for single-group studies estimating an unknown parameter
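The dependence of sample size on the control-group proportion noted above can be illustrated with a common normal-approximation formula for comparing two proportions (one of several conventions in use):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for comparing two proportions (a common
    normal-approximation formula; other conventions exist)."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (za * sqrt(2 * pbar * (1 - pbar))
           + zb * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# The same 25% relative reduction needs far more subjects as the
# control-group proportion falls:
for p_control in (0.40, 0.20, 0.10):
    print(p_control, n_two_proportions(p_control, 0.75 * p_control))
```

This reproduces the text's point: for a fixed relative reduction, the required n grows sharply as the control proportion decreases.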
Directory of Open Access Journals (Sweden)
Kaisheng Zhang
2014-07-01
Full Text Available According to the characteristics of the photovoltaic cell output power curve, this paper analyzes the principle of Maximum Power Point Tracking (MPPT) and the advantages and disadvantages of the constant voltage tracking method and the perturbation observation method. Building on the advantages of existing maximum power tracking methods, this paper proposes an improved tracking method that combines constant voltage tracking with variable step-size perturbation observation. The Simulink simulation results show that this enhanced tracking method achieves better performance in system response and steady-state characteristics.
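A minimal perturb-and-observe loop with a variable step can be sketched as follows; the P-V curve, step bounds, and gain are invented for illustration and are not from the paper's Simulink model:

```python
# Toy P-V curve with its maximum power point (MPP) at 17 V; the curve,
# step bounds and gain below are invented for illustration.
def pv_power(v):
    return max(0.0, 100.0 - (v - 17.0) ** 2)

v_prev, v = 10.0, 10.5
p_prev = pv_power(v_prev)
for _ in range(200):
    p = pv_power(v)
    dp, dv = p - p_prev, v - v_prev
    direction = 1.0 if dp * dv > 0 else -1.0    # keep climbing the P-V curve
    step = min(1.0, max(0.05, 0.2 * abs(dp)))   # variable step: big far from MPP
    v_prev, p_prev = v, p
    v += direction * step
print(f"settles near v = {v:.2f} V (true MPP at 17 V)")
```

The variable step gives fast convergence far from the MPP and small oscillation near it, which is the benefit the abstract claims over a fixed-step perturbation.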
Influence of the size of facets on point focus solar concentrators
Energy Technology Data Exchange (ETDEWEB)
Riveros-Rosas, David [Instituto de Geofisica, Universidad Nacional Autonoma de Mexico, Ciudad Universitaria, Col. Copilco, Coyoacan, CP 04510 DF (Mexico); Sanchez-Gonzalez, Marcelino [Centro Nacional de Energias Renovables, c/Somera 7-9, CP 28026 Madrid (Spain); Arancibia-Bulnes, Camilo A.; Estrada, Claudio A. [Centro de Investigacion en Energia, Universidad Nacional Autonoma de Mexico, Priv. Xochicalco s/n, Morelos (Mexico)
2011-03-15
It is a common practice in the development of point focus solar concentrators to use multiple identical reflecting facets, as a practical and economic alternative for the design and construction of large systems. This kind of systems behaves in a different manner than continuous paraboloidal concentrators. A theoretical study is carried out to understand the effect of the size of facets and of their optical errors in multiple facet point focus solar concentrating systems. For this purpose, a ray tracing program was developed based on the convolution technique, in which the brightness distribution of the sun and the optical errors of the reflecting surfaces are considered. The study shows that both the peak of concentration and the optimal focal distance of the system strongly depend on the size of the facets, and on their optical errors. These results are useful to help concentrator developers to have a better understanding of the relationship between manufacturing design restrictions and final optical behavior. (author)
A comparison of point counts with a new acoustic sampling method ...
African Journals Online (AJOL)
We showed that the estimates of species richness, abundance and community composition based on point counts and post-hoc laboratory listening to acoustic samples are very similar, especially for a distance limited up to 50 m. Species that were frequently missed during both point counts and listening to acoustic samples ...
Sample size and power calculation for univariate case in quantile regression
Yanuar, Ferra
2018-01-01
The purpose of this study is to calculate the statistical power and sample size in a simple linear regression model based on the quantile approach. The statistical theoretical framework is then implemented to generate data using R. For any given covariate and regression coefficient, we generate a random variable and error. Two error distributions are considered: normal and nonnormal. This study found that, for a normal error term, the required sample size is large when the effect size is small. The level of statistical power is also affected by effect size: the larger the effect size, the higher the power. For nonnormal error terms, small effect sizes are not recommended, moderate effect sizes require a sample size of more than 320, and large effect sizes more than 160, because smaller samples result in lower statistical power.
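A simulation-based power calculation for median (quantile) regression can be sketched in Python (the study used R); the check-loss fit, effect sizes, and replication counts below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def fit_quantile(x, y, tau=0.5):
    """Fit y = a + b*x at quantile tau by minimizing the check loss
    (a simple numerical stand-in for the R-based fits in the study)."""
    def loss(theta):
        u = y - theta[0] - theta[1] * x
        return float(np.sum(u * (tau - (u < 0))))
    return minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead",
                    options={"xatol": 1e-3, "fatol": 1e-3}).x

def simulated_power(n, beta, reps=200, tau=0.5):
    # Critical value from the null (beta = 0), then power under the alternative.
    null_b, alt_b = [], []
    for _ in range(reps):
        x = rng.standard_normal(n)
        null_b.append(abs(fit_quantile(x, rng.standard_normal(n), tau)[1]))
        x = rng.standard_normal(n)
        alt_b.append(abs(fit_quantile(x, beta * x + rng.standard_normal(n), tau)[1]))
    crit = np.quantile(null_b, 0.95)
    return float(np.mean(np.array(alt_b) > crit))

power_big = simulated_power(n=100, beta=0.5)    # large effect
power_small = simulated_power(n=100, beta=0.1)  # small effect
print(power_big, power_small)
```

Repeating this over a grid of n values yields the sample-size-versus-power curves the abstract describes.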
Bill, Anthony; Henderson, Sally; Penman, John
2010-01-01
Two test items that examined high school students' beliefs of sample size for large populations using the context of opinion polls conducted prior to national and state elections were developed. A trial of the two items with 21 male and 33 female Year 9 students examined their naive understanding of sample size: over half of students chose a…
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F. T.; Hinde, J.; Grossman, J.N.
2005-01-01
smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
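The point that large samples cause normality tests to reject even mild departures can be demonstrated directly (illustrative synthetic data, not the geochemical database itself):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Mildly non-normal data: Student's t with 20 df looks very close to normal.
population = rng.standard_t(df=20, size=200_000)

# D'Agostino-Pearson omnibus normality test on the full data set
# and on a small subsample of it.
_, p_big = stats.normaltest(population)
_, p_small = stats.normaltest(population[:200])
print(f"n = 200000: p = {p_big:.2e};  n = 200: p = {p_small:.3f}")
```

With the full sample the mild departure is decisively rejected; the small subsample typically is not, which is exactly the sample-size effect described above.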
Determination of boron concentration in biopsy-sized tissue samples
International Nuclear Information System (INIS)
Hou, Yougjin; Fong, Katrina; Edwards, Benjamin; Autry-Conwell, Susan; Boggan, James
2000-01-01
Inductively coupled plasma mass spectrometry (ICP-MS) is the most sensitive analytical method for boron determination. However, because boron is volatile and ubiquitous in nature, low-concentration boron sample measurement remains a challenge. In this study, an improved ICP-MS method was developed for quantitation of tissue samples with low (less than 10 ppb) and high (100 ppb) boron concentrations. The addition of an ammonia-mannitol solution converts volatile boric acid to the non-volatile ammonium borate in the spray chamber and with the formation of a boron-mannitol complex, the boron memory effect and background are greatly reduced. This results in measurements that are more accurate, repeatable, and efficient. This improved analysis method has facilitated rapid and reliable tissue biodistribution analyses of newly developed boronated compounds for potential use in neutron capture therapy. (author)
Sample size reduction in groundwater surveys via sparse data assimilation
Hussain, Z.
2013-04-01
In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates the additional model information to estimate system states from even fewer measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.
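A compressed-sensing reconstruction with a sparse binary measurement matrix can be sketched with plain orthogonal matching pursuit; this is a simple non-Bayesian stand-in for the adaptive framework in the abstract, with invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

n_meas, n_grid, k = 60, 120, 4
# Sparse binary measurement matrix: each measurement reads a small
# random subset of grid cells ("dug wells").
A = (rng.random((n_meas, n_grid)) < 0.1).astype(float)
x_true = np.zeros(n_grid)
x_true[rng.choice(n_grid, k, replace=False)] = rng.normal(5.0, 1.0, k)
y = A @ x_true

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Each row of A touching only a few cells is what makes the scheme readable as "sample a few wells per time step" rather than imaging the whole grid.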
Zeestraten, Eva; Lambert, Christian; Chis Ster, Irina; Williams, Owen A; Lawrence, Andrew J; Patel, Bhavini; MacKinnon, Andrew D; Barrick, Thomas R; Markus, Hugh S
2016-01-01
Detecting treatment efficacy using cognitive change in trials of cerebral small vessel disease (SVD) has been challenging, making the use of surrogate markers such as magnetic resonance imaging (MRI) attractive. We determined the sensitivity of MRI to change in SVD and used this information to calculate sample size estimates for a clinical trial. Data from the prospective SCANS (St George’s Cognition and Neuroimaging in Stroke) study of patients with symptomatic lacunar stroke and confluent leukoaraiosis was used (n = 121). Ninety-nine subjects returned at one or more time points. Multimodal MRI and neuropsychologic testing was performed annually over 3 years. We evaluated the change in brain volume, T2 white matter hyperintensity (WMH) volume, lacunes, and white matter damage on diffusion tensor imaging (DTI). Over 3 years, change was detectable in all MRI markers but not in cognitive measures. WMH volume and DTI parameters were most sensitive to change and therefore had the smallest sample size estimates. MRI markers, particularly WMH volume and DTI parameters, are more sensitive to SVD progression over short time periods than cognition. These markers could significantly reduce the size of trials to screen treatments for efficacy in SVD, although further validation from longitudinal and intervention studies is required. PMID:26036939
Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient !
van Breukelen, Gerard J.P.; Candel, Math J.J.M.
2012-01-01
Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
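The textbook design-effect inflation that underlies such cluster-trial sample size guidelines can be sketched as follows (the paper's optimal-design equations refine this simple version for unknown ICC and varying cluster sizes):

```python
from math import ceil

def cluster_trial_size(n_individual, icc, cluster_size):
    """Inflate an individually randomized sample size by the design
    effect 1 + (m - 1) * rho (standard textbook formula)."""
    deff = 1 + (cluster_size - 1) * icc
    n_total = ceil(n_individual * deff)
    n_clusters = ceil(n_total / cluster_size)
    return n_total, n_clusters

# 128 subjects needed under individual randomization, ICC = 0.05,
# 20 subjects per cluster:
print(cluster_trial_size(128, 0.05, 20))  # (250, 13)
```

Even a small ICC nearly doubles the required total n at this cluster size, which is why the number of clusters, not just the number of subjects, drives power.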
Structured estimation - Sample size reduction for adaptive pattern classification
Morgera, S.; Cooper, D. B.
1977-01-01
The Gaussian two-category classification problem with known category mean value vectors and identical but unknown category covariance matrices is considered. The weight vector depends on the unknown common covariance matrix, so the procedure is to estimate the covariance matrix in order to obtain an estimate of the optimum weight vector. The measure of performance for the adapted classifier is the output signal-to-interference noise ratio (SIR). A simple approximation for the expected SIR is gained by using the general sample covariance matrix estimator; this performance is both signal and true covariance matrix independent. An approximation is also found for the expected SIR obtained by using a Toeplitz form covariance matrix estimator; this performance is found to be dependent on both the signal and the true covariance matrix.
Determination of the freezing point in cow milk samples preserved with azidiol
Directory of Open Access Journals (Sweden)
Nataša Pintić-Pukec
2011-12-01
Full Text Available The study involved determination of the freezing point of cow milk by a reference (thermistor cryoscopy and an instrumental (infrared spectrometry method. The aim of the study was to evaluate the possibility of milk freezing point determination in milk samples preserved with azidiol by using a reference and an instrumental method of analysis. Five hundred cow milk samples were analysed during three research periods. Samples were taken at milk collection points in north-western Croatia. Samples preserved with azidiol (0.3 mL azidiol/40 mL; 0.011 g sodium azide/40 mL and without preservatives (control samples were analysed. The freezing point of milk was determined in duplicate. Average freezing point results of azidiol preserved samples were lower compared to control samples. A statistically significant difference between the means of the results obtained for azidiol preserved and control samples was determined (P<0.05; P<0.01 in all research periods. The results revealed a significant influence of the preservative azidiol on milk freezing point determination regardless of the method of analysis applied, which could lead to wrong interpretation of the results.
Sample size reassessment for a two-stage design controlling the false discovery rate.
Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin
2015-11-01
Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
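The interim estimation of the proportion of true null hypotheses, on which the second-stage sample size depends, can be sketched with a Storey-type estimator (the p-values below are constructed for illustration):

```python
def estimate_pi0(p_values, lam=0.5):
    """Storey-type estimate of the proportion of true nulls: null
    p-values are uniform, so the share above `lam`, rescaled, is ~pi0."""
    m = len(p_values)
    return min(1.0, sum(p > lam for p in p_values) / ((1 - lam) * m))

# Constructed interim data: 80 null-like p-values spread over (0, 1)
# plus 20 strong signals.
nulls = [0.005 + i * (0.99 / 79) for i in range(80)]
signals = [1e-4] * 20
pi0 = estimate_pi0(nulls + signals)
print(pi0)  # 0.8 for this constructed example
```

With pi0 and interim effect-size estimates in hand, the second-stage n can be tuned to hit the target overall power while FDR control is preserved.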
Directory of Open Access Journals (Sweden)
Elsa Tavernier
Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power was <90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
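The retro-fitting idea for a continuous outcome can be sketched as follows: compute N under an assumed SD, then evaluate the power actually achieved under the true SD (standard normal-approximation formulas, with illustrative numbers):

```python
from math import ceil, sqrt
from statistics import NormalDist

z = NormalDist()

def planned_n(delta, sd, alpha=0.05, power=0.80):
    """Per-group n from the standard normal-approximation formula."""
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * ((za + zb) * sd / delta) ** 2)

def real_power(n, delta, true_sd, alpha=0.05):
    """Power actually achieved by n per group if the true SD differs."""
    za = z.inv_cdf(1 - alpha / 2)
    return z.cdf(delta * sqrt(n / 2) / true_sd - za)

n = planned_n(delta=0.5, sd=1.0)           # planned assuming SD = 1.0
print(n, round(real_power(n, 0.5, 1.0), 3),
      round(real_power(n, 0.5, 1.25), 3))  # SD underestimated by 25%
```

Drawing the relative SD error from an empirical distribution and repeating this calculation is the simulation loop the abstract describes.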
Evaluation of design flood estimates with respect to sample size
Kobierska, Florian; Engeland, Kolbjorn
2016-04-01
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use dependent on length of the local record. If less than 30 years of local data is available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2 parameter distribution is recommended, and for more than 50 years of data, a 3 parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary moments, linear moments (L-moments), maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
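A minimal local flood-frequency step, fitting a 3-parameter GEV distribution to an annual-maximum series and reading off a design flood, can be sketched with scipy (the record below is synthetic, with arbitrary parameters):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic annual-maximum discharges (m^3/s) drawn from a known GEV,
# standing in for a local flood record; parameters are arbitrary.
ams = stats.genextreme.rvs(c=-0.1, loc=300, scale=80, size=60, random_state=rng)

# Fit a 3-parameter GEV by maximum likelihood and read off the 100-year
# design flood (the 0.99 annual non-exceedance quantile).
c, loc, scale = stats.genextreme.fit(ams)
q100 = stats.genextreme.ppf(0.99, c, loc=loc, scale=scale)
print(f"100-year flood estimate: {q100:.0f} m^3/s")
```

Refitting on cross-validation splits and comparing the resulting q100 values is one way to score the stability and reliability criteria described above. (Note scipy's sign convention: its shape c is the negative of the usual GEV shape parameter.)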
Sample Size Requirements for Discrete-Choice Experiments in Healthcare: a Practical Guide.
de Bekker-Grob, Esther W; Donkers, Bas; Jonker, Marcel F; Stolk, Elly A
2015-10-01
Discrete-choice experiments (DCEs) have become a commonly used instrument in health economics and patient-preference analysis, addressing a wide range of policy questions. An important question when setting up a DCE is the size of the sample needed to answer the research question of interest. Although theory exists as to the calculation of sample size requirements for stated choice data, it does not address the issue of minimum sample size requirements in terms of the statistical power of hypothesis tests on the estimated coefficients. The purpose of this paper is threefold: (1) to provide insight into whether and how researchers have dealt with sample size calculations for healthcare-related DCE studies; (2) to introduce and explain the required sample size for parameter estimates in DCEs; and (3) to provide a step-by-step guide for the calculation of the minimum sample size requirements for DCEs in health care.
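A common starting point for such calculations, sketched here under the simplifying assumption that the standard error of a coefficient estimate scales as gamma/sqrt(N), is the minimum N for a two-sided Wald test on that coefficient to reach a target power; delta and gamma below are illustrative values, not taken from the paper:

```python
# Minimum number of DCE respondents N so that a two-sided Wald test on a
# coefficient of size delta reaches the desired power, assuming
# se(beta_hat) = gamma / sqrt(N). delta and gamma are illustrative.
from math import ceil
from scipy.stats import norm

def min_sample_size(delta, gamma, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil((z * gamma / delta) ** 2)

print(min_sample_size(delta=0.3, gamma=2.0))  # -> 349
```

Smaller effects or noisier designs (larger gamma) drive N up quadratically, which is why the paper's step-by-step guide stresses obtaining a realistic gamma from the experimental design before fieldwork.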
GPU-accelerated Direct Sampling method for multiple-point statistical simulation
Huang, Tao; Li, Xue; Zhang, Ting; Lu, De-Tang
2013-08-01
Geostatistical simulation techniques have become a widely used tool for the modeling of oil and gas reservoirs and the assessment of uncertainty. The Direct Sampling (DS) algorithm is a recent multiple-point statistical simulation technique. It directly samples the training image (TI) during the simulation process by calculating distances between the TI patterns and the given data events found in the simulation grid (SG). Omitting the prior storage of all the TI patterns in a database, the DS algorithm can be used to simulate categorical, continuous and multivariate variables. Three fundamental input parameters are required for the definition of DS applications: the number of neighbors n, the acceptance threshold t and the fraction of the TI to scan f. For very large grids and complex spatial models with more severe parameter restrictions, the computational costs in terms of simulation time often become the bottleneck of practical applications. This paper focuses on an innovative implementation of the Direct Sampling method which exploits the benefits of graphics processing units (GPUs) to improve computational performance. Parallel schemes are applied to deal with two of the DS input parameters, n and f. Performance tests are carried out with large 3D grid size and the results are compared with those obtained based on the simulations with central processing units (CPU). The comparison indicates that the use of GPUs reduces the computation time by a factor of 10X-100X depending on the input parameters. Moreover, the concept of the search ellipsoid can be conveniently combined with the flexible data template of the DS method, and our experimental results of sand channels reconstruction show that it can improve the reproduction of the long-range connectivity patterns.
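The role of the three DS parameters can be illustrated with a toy one-dimensional sequential sketch (not the GPU implementation described above; the function and variable names are invented for illustration):

```python
# Toy 1D Direct Sampling sketch: for each unknown node, scan a random
# fraction f of the training image (TI) and paste the value whose
# neighbourhood pattern lies within distance t of the data event;
# n is the maximum neighbourhood size.
import numpy as np

def ds_1d(ti, sg, n=3, t=0.05, f=0.5, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    sg = sg.copy()
    for i in np.flatnonzero(np.isnan(sg)):
        known = sg[max(0, i - n):i]
        known = known[~np.isnan(known)]          # data event (left neighbours)
        m = len(known)
        best_val, best_d = ti[int(rng.integers(len(ti)))], np.inf
        for j in rng.permutation(len(ti) - m - 1)[: int(f * len(ti))]:
            d = np.abs(ti[j:j + m] - known).mean() if m else 0.0
            if d < best_d:
                best_val, best_d = ti[j + m], d
            if best_d <= t:                      # acceptance threshold reached
                break
        sg[i] = best_val
    return sg

ti = np.sin(np.linspace(0.0, 12.0, 400))         # training image
sg = np.full(60, np.nan)                         # simulation grid
sg[:4] = ti[:4]                                  # conditioning data
sim = ds_1d(ti, sg)
```

The inner scan over TI candidates is the part the paper parallelizes on the GPU: larger n and f increase pattern fidelity but multiply exactly this per-node cost.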
Sample to answer visualization pipeline for low-cost point-of-care blood cell counting
CSIR Research Space (South Africa)
Smith, S
2015-02-01
Full Text Available We present a visualization pipeline from sample to answer for point-of-care blood cell counting applications. Effective and low-cost point-of-care medical diagnostic tests provide developing countries and rural communities with accessible healthcare...
Issues of sample size in sensitivity and specificity analysis with special reference to oncology
Directory of Open Access Journals (Sweden)
Atul Juneja
2015-01-01
Full Text Available Sample size is one of the basic issues that any medical researcher, including oncologists, faces in any research program. The current communication discusses the computation of sample size when sensitivity and specificity are being evaluated. The article presents situations the researcher can easily visualize for the appropriate use of sample size techniques for sensitivity and specificity when any screening method for the early detection of cancer is in question. Moreover, the researcher will be in a position to communicate efficiently with a statistician for sample size computation and, most importantly, on the applicability of the results under the conditions of the negotiated precision.
An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians
International Nuclear Information System (INIS)
Hughes, Ciaran; Mehta, Dhagash; Wales, David J.
2014-01-01
Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems
CT dose survey in adults: what sample size for what precision?
Energy Technology Data Exchange (ETDEWEB)
Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)
2017-01-15
To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that yields a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times their actual values extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
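The survey's precision-versus-sample-size logic can be sketched with a synthetic, skewed dose population (values and distribution are invented; only the resampling idea follows the abstract):

```python
# Draw repeated samples of size n from a skewed "DLP-like" population and
# measure the width of the 95% interval of the sample mean as a percentage
# of the population median (CI95/med).
import numpy as np

rng = np.random.default_rng(1)
population = rng.lognormal(mean=6.0, sigma=0.5, size=20_000)  # synthetic DLP
median = np.median(population)

def ci95_over_median(n, reps=2000):
    means = np.array([rng.choice(population, n).mean() for _ in range(reps)])
    lo, hi = np.percentile(means, [2.5, 97.5])
    return 100.0 * (hi - lo) / median

for n in (10, 100, 900):
    print(n, round(ci95_over_median(n), 1))  # precision improves ~ 1/sqrt(n)
```

As in the survey, small samples (10-20 patients) leave a wide interval around the true mean, and roughly a hundredfold increase in n is needed for a tenfold gain in precision.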
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
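A minimal sketch of the underlying idea, assuming a normal approximation in place of the exact t-test distribution derived in the paper (variable names and values are illustrative):

```python
# Blinded re-estimation: at the interim, the pooled one-sample variance is
# computed ignoring treatment labels, and the per-group sample size for a
# two-sample comparison is recalculated from it (normal approximation).
from math import ceil
import numpy as np
from scipy.stats import norm

def blinded_reestimate(interim_values, delta, alpha=0.025, power=0.9):
    s2 = np.var(interim_values, ddof=1)        # blinded one-sample variance
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil(2 * s2 * z**2 / delta**2)      # n per group

interim = np.random.default_rng(2).normal(loc=0.0, scale=1.0, size=40)
n_per_group = blinded_reestimate(interim, delta=0.5)
```

Because the one-sample variance absorbs part of the treatment effect, the blinded estimate is slightly conservative; the paper's contribution is the exact final-analysis distribution and the adjusted significance level that controls type I error.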
A new model to describe the relationship between species richness and sample size
Directory of Open Access Journals (Sweden)
WenJun Zhang
2017-03-01
Full Text Available In the sampling of species richness, the number of newly found species declines as sample size increases, and the number of distinct species tends to an upper asymptote as sample size tends to infinity. This leads to a curve of species richness vs. sample size. In the present study, I follow the principle proposed earlier (Zhang, 2016) and re-develop the model y = K(1 - e^(-rx/K)) for describing the relationship between species richness (y) and sample size (x), where K is the expected total number of distinct species and r is the maximum variation of species richness per unit sample size (i.e., max dy/dx). Computer software and codes are given.
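The model can be evaluated directly; the parameter values below are illustrative, not taken from the study:

```python
# y = K * (1 - exp(-r*x/K)): K is the asymptotic total number of distinct
# species, and r is the maximum rate of richness increase (dy/dx at x = 0).
import numpy as np

def richness(x, K, r):
    return K * (1.0 - np.exp(-r * x / K))

x = np.array([0.0, 10.0, 50.0, 200.0, 1000.0])
y = richness(x, K=120.0, r=3.0)
# y rises steeply at first and saturates toward K as sample size grows
```

Note that the derivative at x = 0 is exactly r, and y approaches the asymptote K as x grows, matching the two parameter interpretations in the abstract.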
A Maximum Power Point Tracker with Automatic Step Size Tuning Scheme for Photovoltaic Systems
Directory of Open Access Journals (Sweden)
Kuei-Hsiang Chao
2012-01-01
Full Text Available The purpose of this paper is to study a novel maximum power point tracking (MPPT) method for photovoltaic (PV) systems. First, the simulation environment for PV systems is constructed using the PSIM software package. A 516 W PV system built with Kyocera KC40T photovoltaic modules is used as an example to carry out the simulation of the proposed MPPT method. The conventional incremental conductance (INC) MPPT method usually involves a tradeoff between dynamic response and steady-state oscillation, whereas the proposed modified incremental conductance method based on extension theory can automatically adjust the step size to track the maximum power point (MPP) of the PV array and effectively improve both the dynamic response and the steady-state performance of the PV system. Simulation and experimental results are presented to verify that the proposed extension maximum power point tracking method provides good dynamic response and steady-state performance for a photovoltaic power generation system.
Pacific Northwest National Laboratory Facility Radionuclide Emission Points and Sampling Systems
Energy Technology Data Exchange (ETDEWEB)
Barfuss, Brad C.; Barnett, J. Matthew; Ballinger, Marcel Y.
2009-04-08
Battelle—Pacific Northwest Division operates numerous research and development laboratories in Richland, Washington, including those associated with the Pacific Northwest National Laboratory (PNNL) on the Department of Energy’s Hanford Site that have the potential for radionuclide air emissions. The National Emission Standard for Hazardous Air Pollutants (NESHAP 40 CFR 61, Subparts H and I) requires an assessment of all effluent release points that have the potential for radionuclide emissions. Potential emissions are assessed annually. Sampling, monitoring, and other regulatory compliance requirements are designated based upon the potential-to-emit dose criteria found in the regulations. The purpose of this document is to describe the facility radionuclide air emission sampling program and provide current and historical facility emission point system performance, operation, and design information. A description of the buildings, exhaust points, control technologies, and sample extraction details is provided for each registered or deregistered facility emission point. Additionally, applicable stack sampler configuration drawings, figures, and photographs are provided.
Usami, Satoshi
2014-12-01
Recent years have shown increased awareness of the importance of sample size determination in experimental research. Yet effective and convenient methods for sample size determination, especially in longitudinal experimental design, are still under development, and application of power analysis in applied research remains limited. This article presents a convenient method for sample size determination in longitudinal experimental research using a multilevel model. A fundamental idea of this method is the transformation of model parameters (level 1 error variance [σ²], level 2 error variances [τ_00, τ_11] and their covariance [τ_01 = τ_10], and a parameter representing the experimental effect [δ]) into indices (reliability of measurement at the first time point [ρ_1], effect size at the last time point [Δ_T], proportion of variance of outcomes between the first and the last time points [k], and level 2 error correlation [r]) that are intuitively understandable and easily specified. To foster more convenient use of power analysis, numerical tables are constructed that refer to ANOVA results to investigate the influence of the respective indices on statistical power.
Ruf, B.; Erdnuess, B.; Weinmann, M.
2017-08-01
With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative
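The contrast between equidistant and inverse sampling of the scene space can be sketched as follows (a generic inverse-depth scheme for illustration, not the cross-ratio construction of the paper):

```python
# Sampling sweep planes uniformly in inverse depth (1/d) concentrates
# hypotheses near the camera, while uniform sampling in depth wastes
# planes in distant, ambiguous regions.
import numpy as np

def linear_planes(d_min, d_max, n):
    return np.linspace(d_min, d_max, n)

def inverse_planes(d_min, d_max, n):
    return 1.0 / np.linspace(1.0 / d_min, 1.0 / d_max, n)

lin = linear_planes(1.0, 100.0, 32)
inv = inverse_planes(1.0, 100.0, 32)
# spacing near the camera: inverse sampling is much finer there
print(lin[1] - lin[0], inv[1] - inv[0])
```

For the same plane budget, the inverse scheme yields sub-pixel disparity steps near the camera and coarse steps far away, which is why the paper reports equal or better depth maps with fewer sampling planes.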
Impact of Sample Size on the Performance of Multiple-Model Pharmacokinetic Simulations▿
Tam, Vincent H.; Kabbara, Samer; Yeh, Rosa F.; Leary, Robert H.
2006-01-01
Monte Carlo simulations are increasingly used to predict pharmacokinetic variability of antimicrobials in a population. We investigated the sample size necessary to provide robust pharmacokinetic predictions. To obtain reasonably robust predictions, a nonparametric model derived from a sample population size of ≥50 appears to be necessary as the input information.
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
2010-10-01
Appendix C to 45 CFR Part 1356 (Public Welfare Regulations Relating to Public Welfare, requirements applicable to Title IV-E) describes how to calculate the sample size for NYTD follow-up populations.
Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.
Algina, James; Moulder, Bradley C.; Moser, Barry K.
2002-01-01
Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (p squared); (2) the population increase in p squared; and (3) the…
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
A test of alternative estimators for volume at time 1 from remeasured point samples
Francis A. Roesch; Edwin J. Green; Charles T. Scott
1993-01-01
Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293), takes advantage of additional sample...
Implications of sampling design and sample size for national carbon accounting systems.
Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel
2011-11-08
Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, and 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability, the percent standard error relative to total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A
2015-01-01
Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
Directory of Open Access Journals (Sweden)
Young-Doo Kwon
2013-01-01
Full Text Available This study examined the characteristics of a variable three-point Gauss quadrature using a variable set of weighting factors and corresponding optimal sampling points. The major findings were as follows. The one-point, two-point, and three-point Gauss quadratures that adopt the Legendre sampling points and the well-known Simpson’s 1/3 rule were found to be special cases of the variable three-point Gauss quadrature. In addition, the three-point Gauss quadrature may have out-of-domain sampling points beyond the domain end points. By applying the quadratically extrapolated integrals and nonlinearity index, the accuracy of the integration could be increased significantly for evenly acquired data, which is popular with modern sophisticated digital data acquisition systems, without using higher-order extrapolation polynomials.
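For reference, the fixed three-point Gauss-Legendre rule that the abstract identifies as a special case of the variable quadrature can be sketched as follows (standard points and weights, not the variable scheme itself):

```python
# Three-point Gauss-Legendre rule, exact for polynomials up to degree 5.
import numpy as np

def gauss3(f, a=-1.0, b=1.0):
    x = np.array([-np.sqrt(3.0 / 5.0), 0.0, np.sqrt(3.0 / 5.0)])  # Legendre points
    w = np.array([5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0])               # weights
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # affine map [-1,1] -> [a,b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# exact for x^5 on [0, 1]: the integral is 1/6
print(gauss3(lambda t: t**5, 0.0, 1.0))
```

The variable quadrature of the paper generalizes this by letting the weights (and hence the optimal sampling points, possibly outside [a, b]) vary, recovering the one-, two- and three-point Legendre rules and Simpson's 1/3 rule as special cases.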
Sexual maturity, fecundity and egg size of wild and cultured samples of Bagrus bayad macropterus
Tsadu, S.M.; Lamai, S.L.; Oladimeji, A.A.
2003-01-01
Twenty-four mature samples of Bagrus bayad macropterus from the wild (Shiroro Lake, Nigeria) and from captivity, ranging in size from 412.69 to 3300.00 g total body weight, were analysed for sexual maturity, fecundity and egg size. The average fecundities obtained were 53352.59 and 21028.32 eggs for the wild and cultured fish, respectively. A positive relationship was observed between fecundity, body size and gonad weight; fecundity increased as body size increased. A more positive and linear relatio...
Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point
Energy Technology Data Exchange (ETDEWEB)
Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)
2016-12-15
We study the scaling properties of Higgs-Yukawa models. Using the technique of Finite-Size Scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us an advantage to fit the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.
Energy Technology Data Exchange (ETDEWEB)
Pettersen, Sigurd R., E-mail: sigurd.r.pettersen@ntnu.no; Stokkeland, August Emil; Zhang, Zhiliang; He, Jianying, E-mail: jianying.he@ntnu.no [NTNU Nanomechanical Lab, Department of Structural Engineering, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim (Norway)]; Kristiansen, Helge [NTNU Nanomechanical Lab, Department of Structural Engineering, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim (Norway); Conpart AS, Dragonveien 54, NO-2013 Skjetten (Norway)]; Njagi, John; Goia, Dan V. [Center for Advanced Materials Processing, Clarkson University, Potsdam, New York 13699-5814 (United States)]; Redford, Keith [Conpart AS, Dragonveien 54, NO-2013 Skjetten (Norway)]
2016-07-25
Micron-sized metal-coated polymer spheres are frequently used as filler particles in conductive composites for electronic interconnects. However, the intrinsic electrical resistivity of the spherical thin films has not been attainable due to deficiency in methods that eliminate the effect of contact resistance. In this work, a four-point probing method using vacuum compatible piezo-actuated micro robots was developed to directly investigate the electric properties of individual silver-coated spheres under real-time observation in a scanning electron microscope. Poly(methyl methacrylate) spheres with a diameter of 30 μm and four different film thicknesses (270 nm, 150 nm, 100 nm, and 60 nm) were investigated. By multiplying the experimental results with geometrical correction factors obtained using finite element models, the resistivities of the thin films were estimated for the four thicknesses. These were higher than the resistivity of bulk silver.
Bouman, A C; ten Cate-Hoek, A J; Ramaekers, B L T; Joore, M A
2015-01-01
Non-inferiority trials are performed when the main therapeutic effect of the new therapy is expected to be not unacceptably worse than that of the standard therapy, and the new therapy is expected to have advantages over the standard therapy in costs or other (health) consequences. These advantages however are not included in the classic frequentist approach of sample size calculation for non-inferiority trials. In contrast, the decision theory approach of sample size calculation does include these factors. The objective of this study is to compare the conceptual and practical aspects of the frequentist approach and decision theory approach of sample size calculation for non-inferiority trials, thereby demonstrating that the decision theory approach is more appropriate for sample size calculation of non-inferiority trials. The frequentist approach and decision theory approach of sample size calculation for non-inferiority trials are compared and applied to a case of a non-inferiority trial on individually tailored duration of elastic compression stocking therapy compared to two years elastic compression stocking therapy for the prevention of post thrombotic syndrome after deep vein thrombosis. The two approaches differ substantially in conceptual background, analytical approach, and input requirements. The sample size calculated according to the frequentist approach yielded 788 patients, using a power of 80% and a one-sided significance level of 5%. The decision theory approach indicated that the optimal sample size was 500 patients, with a net value of €92 million. This study demonstrates and explains the differences between the classic frequentist approach and the decision theory approach of sample size calculation for non-inferiority trials. We argue that the decision theory approach of sample size estimation is most suitable for sample size calculation of non-inferiority trials.
Williams, C. R.; Chandra, C. V.
2017-12-01
The vertical evolution of falling raindrops is a result of evaporation, breakup, and coalescence acting upon those raindrops. Computing these processes using vertically pointing radar observations is a two-step process. First, the raindrop size distribution (DSD) and vertical air motion need to be estimated throughout the rain shaft. Then, the changes in DSD properties need to be quantified as a function of height. The change in liquid water content is a measure of evaporation, and the change in raindrop number concentration and size are indicators of net breakup or coalescence in the vertical column. The DSD and air motion can be retrieved using observations from two vertically pointing radars operating side-by-side and at two different wavelengths. While both radars are observing the same raindrop distribution, they measure different reflectivity and radial velocities due to Rayleigh and Mie scattering properties. As long as raindrops with diameters greater than approximately 2 mm are in the radar pulse volumes, the Rayleigh and Mie scattering signatures are unique enough to estimate DSD parameters using radars operating at 3- and 35-GHz (Williams et al. 2016). Vertical decomposition diagrams (Williams 2016) are used to explore the processes acting on the raindrops. Specifically, changes in liquid water content with height quantify evaporation or accretion. When the raindrops are not evaporating, net raindrop breakup and coalescence are identified by changes in the total number of raindrops and changes in the effective shape of the DSD as the raindrops fall. This presentation will focus on describing the DSD and air motion retrieval method using vertical profiling radar observations from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) central facility in northern Oklahoma.
Placzek, Marius; Friede, Tim
2017-01-01
The importance of subgroup analyses has been increasing due to a growing interest in personalized medicine and targeted therapies. Considering designs with multiple nested subgroups and a continuous endpoint, we develop methods for the analysis and sample size determination. First, we consider the joint distribution of standardized test statistics that correspond to each (sub)population. We derive multivariate exact distributions where possible, providing approximations otherwise. Based on these results, we present sample size calculation procedures. Uncertainties about nuisance parameters which are needed for sample size calculations make the study prone to misspecifications. We discuss how a sample size review can be performed in order to make the study more robust. To this end, we implement an internal pilot study design where the variances and prevalences of the subgroups are reestimated in a blinded fashion and the sample size is recalculated accordingly. Simulations show that the procedures presented here do not inflate the type I error significantly and maintain the prespecified power as long as the sample size of the smallest subgroup is not too small. We pay special attention to the case of small sample sizes and attain a lower boundary for the size of the internal pilot study.
Optimizing the calculation of point source count-centroid in pixel size measurement
International Nuclear Information System (INIS)
Zhou Luyi; Kuang Anren; Su Xianyu
2004-01-01
Purpose: Pixel size is an important parameter of gamma cameras and SPECT, and a number of methods are used for its accurate measurement. In the original count-centroid method, the image of a point source (PS) is acquired and its count-centroid is calculated to represent the PS position in the image; background counts are inevitable. Thus the measured count-centroid X_m is an approximation of the true count-centroid X_p of the PS, i.e. X_m = X_p + (X_b - X_p)/(1 + R_p/R_b), where R_p is the net counting rate of the PS, X_b the background count-centroid and R_b the background counting rate. For an accurate measurement, R_p must be very large, which is impractical and results in variation of the measured pixel size; an R_p-independent calculation of the PS count-centroid is therefore desired. Methods: The proposed method attempts to eliminate the effect of the term (X_b - X_p)/(1 + R_p/R_b) by bringing X_b closer to X_p and by reducing R_b. In the acquired PS image, a circular ROI is generated to enclose the PS, with the pixel of maximum count as the center of the ROI. To choose the diameter D of the ROI, a Gaussian count distribution is assumed for the PS; accordingly, the fraction K = 1 - 0.5^(D/R) of the total PS counts falls in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent X_p. The proposed method was tested by measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm from its mechanical and electronic settings (128 x 128 matrix, 387 mm UFOV, ZOOM = 1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. Twelve PSs were prepared and their images acquired and stored; the net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01 ± 0.00) as R_p increased.
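The ROI-based centroid calculation can be sketched as follows (synthetic image; the D = 6R choice follows the abstract, all other values are illustrative):

```python
# Restrict to a circular ROI of diameter D = 6*R centred on the hottest
# pixel, then take the count-weighted centroid within the ROI.
import numpy as np

def roi_centroid(img, fwhm_px):
    cy, cx = np.unravel_index(np.argmax(img), img.shape)     # hottest pixel
    yy, xx = np.indices(img.shape)
    roi = (yy - cy)**2 + (xx - cx)**2 <= (3.0 * fwhm_px)**2  # radius = D/2 = 3R
    counts = np.where(roi, img, 0.0)
    total = counts.sum()
    return (yy * counts).sum() / total, (xx * counts).sum() / total

# synthetic point source at (40.3, 60.7) on a flat background
yy, xx = np.indices((128, 128))
img = 50.0 * np.exp(-((yy - 40.3)**2 + (xx - 60.7)**2) / (2 * 2.0**2)) + 0.5
cy_est, cx_est = roi_centroid(img, fwhm_px=4.7)
```

Because the ROI is centred on the source, the residual background centroid X_b nearly coincides with X_p, so the bias term shrinks even at low net counting rates, which is the point of the proposed method.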
Optimizing the calculation of point source count-centroid in pixel size measurement
International Nuclear Information System (INIS)
Zhou Luyi; Kuang Anren; Su Xianyu
2004-01-01
Purpose: Pixel size is an important parameter of gamma cameras and SPECT, and a number of methods are used for its accurate measurement. In the original count-centroid method, the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image; background counts are inevitable, so the measured count-centroid (Xm) is only an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb − Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. For an accurate measurement, Rp must be very large, which is impractical and results in variation of the measured pixel size; an Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempts to eliminate the effect of the term (Xb − Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI is generated to enclose the PS, with the maximum-count pixel as the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution is assumed for the PS; accordingly, a fraction K = 1 − (0.5)^(D/R) of the total PS counts falls within the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested by measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm from its mechanical and electronic settings (128 × 128 matrix, 387 mm UFOV, ZOOM = 1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. Twelve PSs were prepared and their images acquired and stored; the net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01 ± 0.00) as Rp increased
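The effect the ROI correction targets can be sketched in a one-dimensional toy simulation (all numbers — grid size, count rates, source position — are hypothetical, not from the study): a whole-image centroid is pulled toward the background's centroid, while a centroid restricted to a D = 6R window around the hottest pixel is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D analogue of the ROI count-centroid method.
n_pix = 128
x = np.arange(n_pix)
x_true = 40.3          # true point-source position (pixels), off-center
fwhm = 4.0             # R: FWHM of the PS count distribution
sigma = fwhm / 2.3548  # Gaussian sigma from the FWHM

# Point-source counts on top of a uniform background.
ps = 500 * np.exp(-0.5 * ((x - x_true) / sigma) ** 2)
bg = np.full(n_pix, 5.0)
img = rng.poisson(ps + bg).astype(float)

def centroid(counts, xs):
    return np.sum(xs * counts) / np.sum(counts)

# Original method: centroid over the whole image, biased toward the
# background centroid (the image center here).
x_full = centroid(img, x)

# Proposed method: ROI of diameter D = 6*R around the max pixel,
# enclosing K = 1 - 0.5**(D/R) = 98.4% of the PS counts per the abstract.
d = int(6 * fwhm)
c = int(np.argmax(img))
lo, hi = max(0, c - d // 2), min(n_pix, c + d // 2 + 1)
x_roi = centroid(img[lo:hi], x[lo:hi])

assert abs(x_roi - x_true) < abs(x_full - x_true)
```

Because the background inside the ROI is roughly symmetric about the source, its pull on the centroid nearly cancels, which is exactly the Xb → Xp effect the method exploits.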
Chaibub Neto, Elias
2015-01-01
In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased cost of generating weight matrices via multinomial sampling.
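The multinomial-weighting idea can be sketched in NumPy (Python rather than R, and with a toy dataset): draw a matrix of multinomial weights, then obtain every bootstrap replicate of Pearson's r from weighted sample moments with a handful of matrix products, with no resampling loop.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy correlated data.
n = 200
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)

def boot_corr_vectorized(x, y, b, rng):
    """Bootstrap Pearson's r via multinomial weights.

    Each row of w holds multinomial counts / n, i.e. the weight every
    observation receives in one bootstrap replicate; all b replicates
    are then computed from weighted first and second moments.
    """
    n = len(x)
    w = rng.multinomial(n, np.full(n, 1.0 / n), size=b) / n   # (b, n)
    ex, ey = w @ x, w @ y
    exx, eyy, exy = w @ (x * x), w @ (y * y), w @ (x * y)
    cov = exy - ex * ey
    return cov / np.sqrt((exx - ex ** 2) * (eyy - ey ** 2))

reps = boot_corr_vectorized(x, y, b=2000, rng=rng)
r_hat = np.corrcoef(x, y)[0, 1]
```

The bootstrap distribution `reps` centers near the plug-in sample correlation `r_hat`, as expected for a moment statistic under multinomial resampling.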
Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R
2008-08-01
The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.
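The article's GLIMMIX-based formulas account for within-subject correlation, random-effect size and attrition; as a point of reference only, the classic normal-approximation calculation for two independent proportions (which those formulas generalize) can be sketched as follows — it omits all of those adjustments:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Per-group sample size for comparing two independent proportions,
    using the standard pooled/unpooled normal approximation. A baseline
    sketch only -- not the article's correlated-outcome GLIMMIX method."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)
```

Within-subject correlation and attrition typically inflate this baseline, which is why the longitudinal formulas are needed in practice.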
On sample size estimation and re-estimation adjusting for variability in confirmatory trials.
Wu, Pei-Shien; Lin, Min; Chow, Shein-Chung
2016-01-01
Sample size estimation (SSE) is an important issue in the planning of clinical studies. While larger studies are likely to have sufficient power, it may be unethical to expose more patients than necessary to answer a scientific question; budget considerations may also limit a study to a size just adequate to answer the question at hand. Typically, a statistically based justification for the sample size is provided at the planning stage. An effective sample size is usually planned under a pre-specified type I error rate, a desired power under a particular alternative, and the variability associated with the recorded observations. Nuisance parameters such as the variance are unknown in practice, so information from a preliminary pilot study is often used to estimate the variance. However, a sample size calculated from an estimated nuisance parameter may not be stable. Sample size re-estimation (SSR) at the interim analysis provides an opportunity to re-evaluate the uncertainties using accrued data and to continue the trial with an updated sample size. This article evaluates a proposed SSR method based on controlling the variability of the nuisance parameter. A numerical study is used to assess the performance of the proposed method with respect to control of the type I error. The proposed method and concepts could be extended to SSR approaches based on other criteria, such as maintaining effect size, achieving conditional power, and reaching a desired reproducibility probability.
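The basic mechanics of variance-driven re-estimation can be sketched with the usual normal-approximation formula for a two-arm comparison of means: plug the interim estimate of the nuisance SD into the planning formula and update n. This is illustrative only — the article's proposed method additionally controls the variability of the nuisance-parameter estimate itself.

```python
from math import ceil
from statistics import NormalDist

def reestimated_n(sd_interim, delta, alpha=0.05, power=0.9):
    """Per-group sample size for a two-arm comparison of means,
    recomputed with the interim SD estimate (a generic SSR sketch)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * (z * sd_interim / delta) ** 2)

# If the interim data suggest more variability than planned, n grows.
n_planned = reestimated_n(sd_interim=10, delta=5)
n_updated = reestimated_n(sd_interim=12, delta=5)
```

Because n scales with the square of the SD estimate, instability in that estimate propagates strongly into the updated sample size — the motivation for controlling its variability.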
Walsh, David I; Murthy, Shashi K; Russom, Aman
2016-10-01
Point-of-care (POC) microfluidic devices often lack the integration of common sample preparation steps, such as preconcentration, which can limit their utility in the field. In this technology brief, we describe a system that combines the necessary sample preparation methods to perform sample-to-result analysis of large-volume (20 mL) biopsy model samples with staining of captured cells. Our platform combines centrifugal-paper microfluidic filtration and an analysis system to process large, dilute biological samples. Utilizing commercialization-friendly manufacturing methods and materials, yielding a sample throughput of 20 mL/min, and allowing for on-chip staining and imaging bring together a practical, yet powerful approach to microfluidic diagnostics of large, dilute samples. © 2016 Society for Laboratory Automation and Screening.
Advances in paper-based sample pretreatment for point-of-care testing.
Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng
2017-06-01
In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, raw sample processing generally requires a high-cost, time-consuming and equipment-dependent sample pretreatment technique, which is impractical for low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique to couple with the commonly used paper-based assays (e.g. lateral flow assays) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial use of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We highlight the working principle and fabrication of each sample pretreatment device, the existing challenges and the future perspectives for developing paper-based sample pretreatment techniques.
Analysis of intraosseous blood samples using an EPOC point of care analyzer during resuscitation.
Tallman, Crystal Ives; Darracq, Michael; Young, Megann
2017-03-01
In the early phases of resuscitation of a critically ill patient, especially one in cardiac arrest, intravenous (IV) access can be difficult to obtain. Intraosseous (IO) access is often used in these critical situations to allow medication administration. When no IV access is available, it is difficult to obtain blood for point-of-care analysis, yet this information can be crucial in directing the resuscitation. We hypothesized that IO samples may be used with a point-of-care device to obtain useful information when seconds really do matter. Patients presenting to the emergency department requiring resuscitation and IO placement were prospectively enrolled as a convenience sample; 17 patients were enrolled. IO and IV samples obtained within five minutes of one another were analyzed using separate EPOC® point-of-care analyzers. Analytes were compared using Bland-Altman plots and intraclass correlation coefficients. In this analysis of convenience-sampled critically ill patients, the EPOC® point-of-care analyzer provided results from IO samples. IO and IV samples were most comparable for pH, bicarbonate, sodium and base excess, and potentially for lactic acid; single outliers for bicarbonate, sodium and base excess were observed. Intraclass correlation coefficients were excellent for sodium and reasonable for pH, pO2, bicarbonate, and glucose. Correlations for other variables measured by the EPOC® analyzer were not as robust. IO samples can be used with a bedside point-of-care analyzer to rapidly obtain certain laboratory information during resuscitations when IV access is difficult. Copyright © 2016 Elsevier Inc. All rights reserved.
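The Bland-Altman comparison used above can be sketched with hypothetical paired data (the values below are invented for illustration, not the study's measurements): compute the mean difference (bias) and the 95% limits of agreement from the SD of the differences.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical paired IO vs IV measurements of one analyte (e.g. sodium, mmol/L).
iv = rng.normal(140, 4, size=17)
io = iv + rng.normal(0.5, 1.5, size=17)   # small bias plus random disagreement

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired methods."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

bias, (lo, hi) = bland_altman(io, iv)
```

If nearly all paired differences fall within (lo, hi) and that span is clinically acceptable, the two sampling routes are judged interchangeable for that analyte.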
The PowerAtlas: a power and sample size atlas for microarray experimental design and research
Directory of Open Access Journals (Sweden)
Wang Jelai
2006-02-01
Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from the Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing
Directory of Open Access Journals (Sweden)
Thomaz C. e C. da Costa
2004-12-01
Full Text Available Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty, which is measured by accuracy indices computed from reference samples. Here the size of the reference sample is defined by a binomial approximation without the use of a pilot sample, so the accuracy is not estimated but fixed a priori. When the a priori accuracy diverges from the estimated accuracy, the sampling error will deviate from the expected error. Sizing from a pilot sample, the theoretically correct procedure, is justified when no accuracy estimate is available for the work area, as is typical for remote sensing products.
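The a-priori binomial sizing discussed above follows the standard normal-approximation formula n = z² p(1−p)/E², where p is the assumed map accuracy and E the tolerable half-width of the confidence interval. A minimal sketch (the 85% / ±5-point figures are example inputs, not from the article):

```python
from math import ceil
from statistics import NormalDist

def reference_sample_size(p_expected, half_width, conf=0.95):
    """Binomial (normal-approximation) size of the reference sample needed
    to estimate map accuracy p within +/- half_width -- fixing p a priori
    instead of estimating it from a pilot sample."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * p_expected * (1 - p_expected) / half_width ** 2)

# An assumed 85% accuracy, estimated to within +/- 5 percentage points:
n = reference_sample_size(0.85, 0.05)
```

If the true accuracy is lower than the assumed p, p(1−p) is larger and this n undershoots — exactly the divergence between fixed and estimated accuracy the abstract warns about.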
Kikuchi, Takashi; Gittins, John
2011-08-01
The behavioural Bayes approach to sample size determination for clinical trials assumes that the number of subsequent patients switching to a new drug from the current drug depends on the strength of the evidence for efficacy and safety that was observed in the clinical trials. The optimal sample size is the one which maximises the expected net benefit of the trial. The approach has been developed in a series of papers by Pezeshk and the present authors (Gittins JC, Pezeshk H. A behavioral Bayes method for determining the size of a clinical trial. Drug Information Journal 2000; 34: 355-63; Gittins JC, Pezeshk H. How Large should a clinical trial be? The Statistician 2000; 49(2): 177-87; Gittins JC, Pezeshk H. A decision theoretic approach to sample size determination in clinical trials. Journal of Biopharmaceutical Statistics 2002; 12(4): 535-51; Gittins JC, Pezeshk H. A fully Bayesian approach to calculating sample sizes for clinical trials with binary responses. Drug Information Journal 2002; 36: 143-50; Kikuchi T, Pezeshk H, Gittins J. A Bayesian cost-benefit approach to the determination of sample size in clinical trials. Statistics in Medicine 2008; 27(1): 68-82; Kikuchi T, Gittins J. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety. Statistics in Medicine 2009; 28(18): 2293-306; Kikuchi T, Gittins J. A Bayesian procedure for cost-benefit evaluation of a new drug in multi-national clinical trials. Statistics in Medicine 2009 (Submitted)). The purpose of this article is to provide a rationale for experimental designs which allocate more patients to the new treatment than to the control group. The model uses a logistic weight function, including an interaction term linking efficacy and safety, which determines the number of patients choosing the new drug, and hence the resulting benefit. A Monte Carlo simulation is employed for the calculation. Having a larger group of patients on the new drug in general
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 with each BioOptics, CSO, Konan, and Topcon corneal specular microscopes. All endothelial image data were analyzed by respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
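A customized sample size of the kind reported above can be sketched with the standard normal-approximation rule for estimating a mean to a given relative error: n = (z · CV / RE)², where CV is the coefficient of variation of the cell measure. The exact criterion implemented in the Cells Analyzer software is an assumption here, as is the example CV.

```python
from math import ceil
from statistics import NormalDist

def cells_needed(cv, rel_error=0.05, reliability=0.95):
    """How many endothelial cells must be counted so the mean cell measure
    is within rel_error of truth at the stated reliability degree.
    Generic normal-approximation sketch: n = (z * CV / RE)^2."""
    z = NormalDist().inv_cdf(1 - (1 - reliability) / 2)
    return ceil((z * cv / rel_error) ** 2)

# An assumed cell-area coefficient of variation of 0.30 (polymegethism):
n = cells_needed(cv=0.30)
```

Because n grows with CV², corneas with more cell-size variability need substantially larger counts — consistent with the customized sample sizes of several hundred cells reported above.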
Directory of Open Access Journals (Sweden)
Lin Jia-Horng
2016-01-01
Full Text Available This study proposes making filter materials from polypropylene (PP) and low-melting-point polyester (LPET) fibers, and examines the influences of heat-treatment temperatures and times on the morphology of the thermal bonding points and the average pore size of the PP/LPET filter materials. The test results indicate that the morphology of the thermal bonding points is highly correlated with the average pore size. When the heat-treatment temperature is increased, the fibers are joined first at thermal bonding points and then over large thermal bonding areas, thereby decreasing the average pore size of the PP/LPET filter materials. A heat treatment at 110 °C for 60 seconds decreases the pore size from 39.6 μm to 12.0 μm.
Sample size adjustment designs with time-to-event outcomes: A caution.
Freidlin, Boris; Korn, Edward L
2017-12-01
Sample size adjustment designs, which allow increasing the study sample size based on interim analysis of outcome data from a randomized clinical trial, have been increasingly promoted in the biostatistical literature. Although it is recognized that group sequential designs can be at least as efficient as sample size adjustment designs, many authors argue that a key advantage of these designs is their flexibility; interim sample size adjustment decisions can incorporate information and business interests external to the trial. Recently, Chen et al. (Clinical Trials 2015) considered sample size adjustment applications in the time-to-event setting using a design (CDL) that limits adjustments to situations where the interim results are promising. The authors demonstrated that while CDL provides little gain in unconditional power (versus fixed-sample-size designs), there is a considerable increase in conditional power for trials in which the sample size is adjusted. In time-to-event settings, sample size adjustment allows an increase in the number of events required for the final analysis. This can be achieved by either (a) following the original study population until the additional events are observed thus focusing on the tail of the survival curves or (b) enrolling a potentially large number of additional patients thus focusing on the early differences in survival curves. We use the CDL approach to investigate performance of sample size adjustment designs in time-to-event trials. Through simulations, we demonstrate that when the magnitude of the true treatment effect changes over time, interim information on the shape of the survival curves can be used to enrich the final analysis with events from the time period with the strongest treatment effect. In particular, interested parties have the ability to make the end-of-trial treatment effect larger (on average) based on decisions using interim outcome data. Furthermore, in "clinical null" cases where there is no
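The "promising zone" decisions discussed above act on conditional power — the probability of final success given the interim result. A textbook sketch under the current-trend assumption (this is the generic quantity, not the CDL rule itself): with interim z-statistic Z_t at information fraction t, CP = Φ((Z_t/√t − z_α)/√(1−t)).

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z_interim, info_frac, z_alpha=1.96):
    """Conditional power under the current-trend assumption: probability
    the final z-statistic exceeds z_alpha, given the interim z-statistic
    observed at information fraction info_frac (0 < info_frac < 1)."""
    nd = NormalDist()
    return nd.cdf((z_interim / sqrt(info_frac) - z_alpha)
                  / sqrt(1 - info_frac))

# A mildly promising interim result halfway through the trial:
cp = conditional_power(z_interim=1.5, info_frac=0.5)
```

Adjustment rules like CDL increase the event goal only when this quantity lands in an intermediate, "promising" band — which is why the unconditional power gain is small even though conditional power rises considerably for the adjusted trials.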
Simulation of size segregation in granular flow with material point method
Directory of Open Access Journals (Sweden)
Fei Minglong
2017-01-01
Full Text Available Segregation is common in granular flows consisting of mixtures of particles differing in size or density. In gravity-driven flows, both gradients in total pressure (induced by gravity) and gradients in velocity-fluctuation fields (often associated with shear-rate gradients) work together to govern the evolution of segregation. Since the local shear rate and velocity fluctuations depend on the local concentration of the components, understanding the co-evolution of segregation and flow is critical for understanding and predicting flows with a variety of particle sizes and densities, as in nature and industry. Kinetic theory has proven to be a robust framework for predicting this simultaneous evolution, but its applicability is limited in dense systems where collisions are highly correlated. In this paper, we introduce a model that captures the co-evolution of these dynamics for dense, gravity-driven granular mixtures. For the segregation dynamics we use a recently developed mixture theory (Fan & Hill 2011, New J. Phys.; Hill & Tan 2014, J. Fluid Mech.), which captures the combined effects of gravity and fluctuation fields on segregation evolution in dense granular flows. For the mixture flow dynamics, we use a recently proposed viscous-elastic-plastic constitutive model that can describe the multi-state behaviors of granular materials, i.e., the granular solid, granular liquid and granular gas mechanical states (Fei et al. 2016, Powder Technol.). The platform we use for implementing this model is a modified Material Point Method (MPM), and we use discrete element method simulations of gravity-driven flow in an inclined channel to demonstrate that this new MPM model predicts the final segregation distribution as well as the flow velocity profile well. We then discuss ongoing work in which we use this platform to test the effectiveness of particular segregation models under different boundary conditions.
Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction
Khalafi, Lida; Doolittle, Pamela; Wright, John
2018-01-01
A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2- pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…
Recovery of putative pathogens from paper point sampling at different depths of periodontal lesions
Directory of Open Access Journals (Sweden)
Nikola Angelov
2009-01-01
Full Text Available Nikola Angelov¹, Raydolfo M Aprecio¹, James Kettering¹, Tord Lundgren², Matt Riggs³, Jan Egelberg¹. ¹School of Dentistry, Loma Linda University, Loma Linda, CA, USA; ²Department of Periodontology, University of Florida, Gainesville, FL, USA; ³Department of Psychology, California State University, San Bernardino, CA, USA. Background: The aim of this study was to compare the recovery of three putative periodontal pathogens from periodontal lesions in samples using paper points inserted to different depths of the lesions. Methods: Twenty 6–8 mm deep periodontal lesions with bleeding on probing were studied. Microbial samples were obtained using paper points inserted to three different depths of the lesions: the orifice of the lesion; 2 mm into the lesion; and the base of the lesion. Culturing was used for recovery and identification of Actinobacillus actinomycetemcomitans, Porphyromonas gingivalis, and Prevotella intermedia. Results: The recovery of each of the three putative periodontal pathogens was similar following sampling at the various depths of the lesions. Conclusions: The findings may be explained by the fact that the paper points become saturated as they pass through the orifice of the lesion; absorption of microorganisms will therefore primarily occur at the orifice. It is also conceivable that the pathogens may be present in similar proportions throughout the various depths of the periodontal lesions. Keywords: paper point sampling, P. gingivalis, P. intermedia, A. actinomycetemcomitans
Bice, K.; Clement, S. C.
1981-01-01
X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology.
Brown, Caleb Marshall; Vavrek, Matthew J
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes.
Sample Size Determination for Estimation of Sensor Detection Probabilities Based on a Test Variable
National Research Council Canada - National Science Library
Oymak, Okan
2007-01-01
.... Army Yuma Proving Ground. Specifically, we evaluate the coverage probabilities and lengths of widely used confidence intervals for a binomial proportion and report the required sample sizes for some specified goals...
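Coverage of a binomial confidence interval, as evaluated in the report above, can be computed exactly rather than by simulation: sum the binomial probabilities of every outcome k whose interval contains the true p. A sketch for the widely used Wald interval (the n and p below are example inputs, not the report's):

```python
from math import comb, sqrt

def wald_coverage(n, p, conf_z=1.96):
    """Exact coverage probability of the Wald interval for a binomial
    proportion: sum P(X=k) over the k whose interval contains p.
    At k = 0 or n the Wald interval degenerates to a point."""
    cover = 0.0
    for k in range(n + 1):
        phat = k / n
        half = conf_z * sqrt(phat * (1 - phat) / n) if 0 < k < n else 0.0
        if phat - half <= p <= phat + half:
            cover += comb(n, k) * p ** k * (1 - p) ** (n - k)
    return cover

# Known weakness: the Wald interval under-covers for small n / extreme p.
cov = wald_coverage(n=30, p=0.9)
```

Computations like this are what motivate recommending Wilson or Clopper-Pearson intervals, and the larger sample sizes required when detection probabilities are near 0 or 1.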
Sample size determination for logistic regression on a logit-normal distribution.
Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance
2017-06-01
Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) a much smaller required sample size.
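When closed-form calculations are in doubt, power for a candidate sample size can always be checked by simulation: generate data under the assumed model, fit the logistic regression, and count Wald-test rejections. This generic sketch (not the article's logit-normal method; the effect size and n are example inputs) uses a small Newton-Raphson fit to stay dependency-free:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(x, y, iters=25):
    """Newton-Raphson fit of logit P(y=1) = b0 + b1*x; returns (b, cov)."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        w = p * (1 - p)                       # IRLS weights
        H = X.T @ (X * w[:, None])            # observed information
        b = b + np.linalg.solve(H, X.T @ (y - p))
    return b, np.linalg.inv(H)

def power_by_simulation(n, beta1, n_sim=300):
    """Empirical power of the two-sided Wald test for beta1 in simple
    logistic regression with a standard-normal covariate."""
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-beta1 * x))
        y = rng.binomial(1, p)
        b, cov = fit_logistic(x, y)
        if abs(b[1]) / np.sqrt(cov[1, 1]) > 1.96:
            hits += 1
    return hits / n_sim

power = power_by_simulation(n=200, beta1=0.5)
```

Repeating this over a grid of n values gives a simulation-based sample-size curve against which any closed-form method can be validated.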
Development of spatial scaling technique of forest health sample point information
Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.
2017-12-01
Most forest health assessments are limited to monitoring sampling sites. Forest health monitoring in Britain has been carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with the data held in an Oracle database. A forest health assessment in Great Bay in the United States characterized the ecosystem populations of each area based on evaluations of forest health by tree species, diameter at breast height, crown and density in summer and fall of 200. In Korea, the first evaluation report on forest health vitality placed 1,000 sample points in forests using a systematic arrangement at regular 4 km × 4 km intervals and assessed 29 indicators in four categories: tree health, vegetation, soil, and atmosphere. Existing research has thus been done through monitoring of survey sample points, which makes it difficult to collect information that supports policies customized to regional sites. Special forests such as urban forests and major forests need policy and management appropriate to their characteristics, so the survey network must be expanded for customized diagnosis and evaluation of forest health. For this reason, we constructed a spatial scaling method using spatial interpolation according to the characteristics of each of the 29 indicators at the main sample points of the first forest health vitality report; PCA and correlation analyses are conducted to select significant indicators, weights are then assigned to each indicator, and forest health is evaluated through statistical grading.
Estimating sample size for a small-quadrat method of botanical ...
African Journals Online (AJOL)
... in eight plant communities in the Nylsvley Nature Reserve. Illustrates with a table. Keywords: Botanical surveys; Grass density; Grasslands; Mixed Bushveld; Nylsvley Nature Reserve; Quadrat size species density; Small-quadrat method; Species density; Species richness; botany; sample size; method; survey; South Africa
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
DEFF Research Database (Denmark)
Wang, Fei; Petersen, Dirch Hjorth; Østerberg, Frederik Westergaard
2009-01-01
In this paper, we discuss a probe spacing dependence study in order to estimate the accuracy of micro four-point probe measurements on inhomogeneous samples. Based on sensitivity calculations, both sheet resistance and Hall effect measurements are studied for samples (e.g. laser annealed samples) with periodic variations of sheet resistance, sheet carrier density, and carrier mobility. With a variation wavelength of λ, probe spacings from 0.001λ to 100λ have been applied to characterize the local variations. The calculations show that the measurement error is highly dependent on the probe spacing. When the probe spacing is smaller than 1/40 of the variation wavelength, micro four-point probes can provide an accurate record of local properties with less than 1% measurement error. All the calculations agree well with previous experimental results.
Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.
2018-04-01
Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.
2011-01-01
The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...
Liu, P T
2001-04-01
The conventional sample-size equations based on either the precision of estimation or the power of testing a hypothesis may not be appropriate to determine sample size for a "diagnostic" testing problem, such as the eye irritant Draize test. When the animals' responses to chemical compounds are relatively uniform and extreme and the objective is to classify a compound as either irritant or nonirritant, the test using just two or three animals may be adequate.
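The classification logic above can be made concrete with a small binomial sketch (the response probability and decision rule below are illustrative assumptions, not taken from the Draize protocol): when a true irritant elicits a response in nearly every animal, the chance that a majority-rule test with only three animals misclassifies it is already small.

```python
from math import comb

def misclassification_prob(p_respond: float, n: int, k_required: int) -> float:
    # Probability that fewer than k_required of n animals respond,
    # i.e. a true irritant (per-animal response prob p_respond) is
    # called non-irritant under an "at least k_required respond" rule.
    return sum(comb(n, x) * p_respond**x * (1 - p_respond)**(n - x)
               for x in range(k_required))

# With 3 animals and an "irritant if at least 2 respond" rule, a uniform,
# extreme irritant with p = 0.9 is missed only about 2.8% of the time.
print(round(misclassification_prob(0.9, 3, 2), 4))  # → 0.028
```

This is why, for uniform and extreme responses, two or three animals can suffice for classification even though they would never suffice for precise estimation.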
Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size
ALWI, IDRUS
2011-01-01
The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch models for detecting differential item functioning (DIF), viewed from the sample size. The two DIF methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...
Sample size re-estimation incorporating prior information on a nuisance parameter.
Mütze, Tobias; Schmidli, Heinz; Friede, Tim
2017-11-27
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach on how to incorporate prior information, such as data from historical clinical trials, into the nuisance parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior based on the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics such as power and sample size distribution of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks. Copyright © 2017 John Wiley & Sons, Ltd.
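As a minimal sketch of the nuisance-parameter idea (not the authors' meta-analytic-predictive machinery): the per-arm sample size of a two-sample comparison depends on the outcome variance, so re-estimation amounts to plugging an updated variance estimate, however it was obtained, back into the standard formula. The effect size and variances below are illustrative.

```python
from math import ceil

# Standard normal quantiles, hard-coded to avoid a stats dependency:
# z_{0.975} for two-sided alpha = 0.05, z_{0.80} for power = 0.80.
Z_ALPHA = 1.959964
Z_BETA = 0.841621

def per_arm_n(sigma: float, delta: float) -> int:
    # Per-arm sample size for a two-sample z-test of mean difference
    # delta with common standard deviation sigma.
    return ceil(2 * (Z_ALPHA + Z_BETA)**2 * sigma**2 / delta**2)

# Planning value sigma = 1.0 gives 63 per arm; if the internal pilot
# (or the posterior of a MAP prior) suggests sigma = 1.2 instead,
# the re-estimated size grows to 91 per arm.
print(per_arm_n(1.0, 0.5), per_arm_n(1.2, 0.5))  # → 63 91
```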
Sample size choices for XRCT scanning of highly unsaturated soil mixtures
Directory of Open Access Journals (Sweden)
Smith Jonathan C.
2016-01-01
Full Text Available Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, X-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures, where there will be a range of particle and void sizes. In this paper we present a discussion, centred around a new set of XRCT scans, on sample sizing for scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high-resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.
Exact Power and Sample Size Calculations for the Two One-Sided Tests of Equivalence.
Directory of Open Access Journals (Sweden)
Gwowen Shieh
Full Text Available Equivalence testing has been strongly recommended for demonstrating the comparability of treatment effects in a wide variety of research fields including medical studies. Although the essential properties of the favorable two one-sided tests of equivalence have been addressed in the literature, the associated power and sample size calculations were illustrated mainly for selecting the most appropriate approximate method. Moreover, conventional power analysis does not consider the allocation restrictions and cost issues of different sample size choices. To extend the practical usefulness of the two one-sided tests procedure, this article describes exact approaches to sample size determinations under various allocation and cost considerations. Because the presented features are not generally available in common software packages, both R and SAS computer codes are presented to implement the suggested power and sample size computations for planning equivalence studies. The exact power function of the TOST procedure is employed to compute optimal sample sizes under four design schemes allowing for different allocation and cost concerns. The proposed power and sample size methodology should be useful for medical sciences to plan equivalence studies.
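The exact computations in the article rely on the noncentral t distribution; the large-sample normal approximation below conveys the structure of the TOST power function. This is a sketch with illustrative numbers (true difference, margin, and standard error are assumptions), not the paper's exact method.

```python
from math import erf, sqrt

def phi(x: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def tost_power_normal(theta: float, margin: float, se: float) -> float:
    # Large-sample approximation to the power of the TOST procedure for
    # equivalence limits (-margin, +margin), true difference theta, and
    # standard error se; one-sided alpha = 0.05 (z = 1.6448536).
    z = 1.6448536
    return max(0.0, phi((margin - theta) / se - z)
                    + phi((margin + theta) / se - z) - 1.0)

# True difference 0, margin 0.5, se 0.2 (e.g. n = 50/group, sd = 1):
print(round(tost_power_normal(0.0, 0.5, 0.2), 3))  # ≈ 0.61
```

The exact version replaces the two normal tail probabilities with noncentral t probabilities (or Owen's Q), which matters at small sample sizes.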
A margin based approach to determining sample sizes via tolerance bounds.
Energy Technology Data Exchange (ETDEWEB)
Newcomer, Justin T.; Freeland, Katherine Elizabeth
2013-09-01
This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
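A sketch of the "sample size from tolerance bounds" idea, using the classical closed-form approximation to the one-sided normal tolerance factor rather than the paper's exact QMU machinery; the margin and standard deviation values are illustrative.

```python
from math import sqrt

Z95 = 1.6448536  # z_{0.95}

def k_one_sided(n: int, zp: float = Z95, zg: float = Z95) -> float:
    # Approximate one-sided normal tolerance factor for 95% coverage at
    # 95% confidence (Natrella/Howe-style closed form). Exact values
    # require the noncentral t distribution; this is close for n >= 5.
    a = 1.0 - zg**2 / (2.0 * (n - 1))
    b = zp**2 - zg**2 / n
    return (zp + sqrt(zp**2 - a * b)) / a

def smallest_n(margin: float, s: float, n_max: int = 1000) -> int:
    # Smallest sample size at which the tolerance-bound uncertainty k*s
    # stays below the estimated margin, i.e. the tolerance ratio
    # margin / (k*s) exceeds one.
    for n in range(3, n_max + 1):
        if k_one_sided(n) * s < margin:
            return n
    raise ValueError("margin not reachable within n_max")

# With estimated margin 3.0 and sd 1.0, nine units suffice;
# the tabulated exact k for n = 10 at 95/95 is about 2.911.
print(smallest_n(3.0, 1.0), round(k_one_sided(10), 3))
```

As in the paper, the factor k shrinks as n grows, so a small margin forces a large n before the uncertainty fits inside it.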
Hu, Youna; Song, Peter X-K
2012-04-13
Quadratic inference functions (QIF) methodology is an important alternative to the generalized estimating equations (GEE) method in the longitudinal marginal model, as it offers higher estimation efficiency than the GEE when correlation structure is misspecified. The focus of this paper is on sample size determination and power calculation for QIF based on the Wald test in a marginal logistic model with covariates of treatment, time, and treatment-time interaction. We have made three contributions in this paper: (i) we derived formulas of sample size and power for QIF and compared their performance with those given by the GEE; (ii) we proposed an optimal scheme of sample size determination to overcome the difficulty of unknown true correlation matrix in the sense of minimal average risk; and (iii) we studied properties of both QIF and GEE sample size formulas in relation to the number of follow-up visits and found that the QIF gave more robust sample sizes than the GEE. Using numerical examples, we illustrated that without sacrificing statistical power, the QIF design leads to sample size saving and hence lower study cost in comparison with the GEE analysis. We conclude that the QIF analysis is appealing for longitudinal studies. Copyright © 2012 John Wiley & Sons, Ltd.
Guo, Jiin-Huarng; Chen, Hubert J; Luh, Wei-Ming
2011-11-01
The allocation of sufficient participants into different experimental groups for various research purposes under given constraints is an important practical problem faced by researchers. We address the problem of sample size determination between two independent groups for unequal and/or unknown variances when both the power and the differential cost are taken into consideration. We apply the well-known Welch approximate test to derive various sample size allocation ratios by minimizing the total cost or, equivalently, maximizing the statistical power. Two types of hypotheses including superiority/non-inferiority and equivalence of two means are each considered in the process of sample size planning. A simulation study is carried out and the proposed method is validated in terms of Type I error rate and statistical power. As a result, the simulation study reveals that the proposed sample size formulas are very satisfactory under various variances and sample size allocation ratios. Finally, a flowchart, tables, and figures of several sample size allocations are presented for practical reference. ©2011 The British Psychological Society.
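In its classical form, the cost-aware allocation the authors derive via the Welch test reduces to the square-root rule: group sizes proportional to the group standard deviation and inversely proportional to the square root of the per-unit cost. A minimal sketch with illustrative values:

```python
from math import sqrt

def optimal_ratio(sd1: float, sd2: float, cost1: float, cost2: float) -> float:
    # Cost-optimal allocation ratio n1/n2 for comparing two means with
    # unequal variances and unequal per-participant costs:
    # n1/n2 = (sd1/sd2) * sqrt(cost2/cost1).
    return (sd1 / sd2) * sqrt(cost2 / cost1)

# Group 1 is twice as variable and a quarter as costly per participant,
# so it should be sampled four times as heavily.
print(optimal_ratio(2.0, 1.0, 1.0, 4.0))  # → 4.0
```

The total n is then scaled until the Welch-test power target is met, which is where the paper's simulation-validated formulas come in.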
Blinded sample size re-estimation for recurrent event data with time trends.
Schneider, S; Schmidli, H; Friede, T
2013-12-30
The use of an internal pilot study for blinded sample size re-estimation (BSSR) makes it possible to reduce uncertainty about the appropriate sample size compared with conventional fixed sample size designs. Recently, BSSR procedures for recurrent event data were proposed and investigated. These approaches assume treatment-specific constant event rates, which might not always be appropriate, as found in relapsing multiple sclerosis. On the basis of a proportional intensity frailty model, we propose methods for BSSR in situations where a time trend in the event rates is present. For the sample size planning and the final analysis, standard negative binomial methods can be used, as long as the patient follow-up time is approximately equal in the treatment groups. To re-estimate the sample size at interim, however, a full likelihood analysis is necessary. Operating characteristics such as rejection probabilities and sample size distribution are evaluated in a simulation study motivated by a systematic review in relapsing multiple sclerosis. The key factors affecting the operating characteristics are the study duration and the length of the recruitment period. The proposed procedure for BSSR controls the type I error rate and maintains the desired power against misspecifications of the nuisance parameters. Copyright © 2013 John Wiley & Sons, Ltd.
A type of sample size design in cancer clinical trials for response rate estimation.
Liu, Junfeng
2011-01-01
During the early stage of cancer clinical trials, when it is not convenient to construct an explicit hypothesis testing, a study on a new therapy often calls for a response rate (p) estimation concurrently with or right before a typical phase II study. We consider a two-stage process, where the acquired information from Stage I (with a small sample size (m)) would be utilized for sample size (n) recommendation for Stage II study aiming for a more accurate estimation. Once a sample size design and a parameter estimation protocol are applied, we study the overall utility (cost-effectiveness) in connection with the cost due to patient recruitment and treatment as well as the loss due to mean squared error from parameter estimation. Two approaches will be investigated including the posterior mixture method (a Bayesian approach) and the empirical variance method (a frequentist approach). We also discuss response rate estimation under truncated parameter space using maximum likelihood estimation with regard to sample size and mean squared error. The profiles of p-specific expected sample size, mean squared error and risk under different approaches motivate us to introduce the concept of "admissible sample size (design)". Copyright © 2010 Elsevier Inc. All rights reserved.
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
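The reliability levels and sample sizes quoted above are consistent with the success-run theorem, n = ln(1 − C)/ln(R) rounded up, at confidence C = 95%:

```python
from math import ceil, log

def success_run_n(confidence: float, reliability: float) -> int:
    # Bayes/binomial success-run theorem: smallest n such that observing
    # n consecutive passing units demonstrates the given reliability at
    # the given confidence level.
    return ceil(log(1.0 - confidence) / log(reliability))

# Reproduces the abstract's risk-based sample sizes at 95% confidence:
for r in (0.99, 0.95, 0.90):
    print(r, success_run_n(0.95, r))  # → 299, 59, and 29 respectively
```

Note how steeply n grows as the demanded reliability approaches one, which is exactly the "low defect rates need larger samples" point the abstract makes.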
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
Optimum sample size to estimate mean parasite abundance in fish parasite surveys
Directory of Open Access Journals (Sweden)
Shvydka S.
2018-03-01
Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and the parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at estimating the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundance of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across Azov-Black Sea localities was subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for searching for the optimum sample size and for understanding the expected precision level of the mean. Given the superior performance of the BLB relative to the formula, with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, corresponding to 1.6× and 1× the mean CI width; and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. For host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed toward low values; a sample size of 10 host individuals yielded unreliable estimates.
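The "mathematical formula with predetermined precision level and negative binomial parameter" is, in its standard aggregation-based form, n = (1/m + 1/k)/D². The sketch below uses hypothetical values for the mean abundance m, aggregation parameter k, and relative precision D; it is not fitted to the Ligophorus data.

```python
from math import ceil

def nb_sample_size(mean_abund: float, k: float, precision: float) -> int:
    # Hosts needed so that the standard error of the mean abundance is
    # within relative precision D of the mean, for counts following a
    # negative binomial with mean m and aggregation parameter k
    # (smaller k = more aggregated = more hosts needed).
    return ceil((1.0 / mean_abund + 1.0 / k) / precision**2)

# Hypothetical survey: mean abundance 10 parasites/host, strong
# aggregation (k = 0.5), relative precision D = 0.5.
print(nb_sample_size(10.0, 0.5, 0.5))  # → 9
```

Tightening D or lowering k inflates n quickly, which is why the authors' bootstrap check against the empirical distribution is a useful complement to the formula.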
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as is expected, because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
Rambo, Robert P
2017-01-01
The success of a SAXS experiment for structural investigations depends on two precise measurements, the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods but in biological SAXS of monodisperse systems, sample preparation is routinely being performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate using example proteins that SEC purification does not always provide for ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small molecule additive and I outline a simple additive screening method for sample preparation.
SMALL SAMPLE SIZE IN 2X2 CROSS OVER DESIGNS: CONDITIONS OF DETERMINATION
Directory of Open Access Journals (Sweden)
B SOLEYMANI
2001-09-01
Full Text Available Introduction. Determination of a small sample size in some clinical trials is a matter of importance, and in cross-over studies, which are one type of clinical trial, the matter is more significant. In this article, the conditions under which determination of a small sample size in cross-over studies is possible were considered, and the effect of deviation from normality on the matter is shown. Methods. The present study considers 2x2 cross-over studies in which the variable of interest is quantitative and measurable on a ratio or interval scale. The method of consideration is based on the distributions of the variable and of the sample mean, the central limit theorem, the method of sample size determination in two groups, and the cumulant or moment generating function. Results. For normal variables, or variables transformable to normal, there are no restricting factors other than the significance level and the power of the test for determining sample size; in the case of non-normal variables, however, the sample size should be made large enough to guarantee the normality of the sample mean's distribution. Discussion. In cross-over studies in which, on theoretical grounds, few samples would suffice, one should not proceed without taking the applied worth of the results into consideration. While determining sample size, in addition to the variance, it is necessary to consider the distribution of the variable, particularly through its skewness and kurtosis coefficients: the greater the deviation from normality, the more samples are needed. Since in medical studies most continuous variables are close to normally distributed, a small number of samples often seems adequate for convergence of the sample mean to the normal distribution.
Page sample size in web accessibility testing: how many pages is enough?
Velleman, Eric Martin; van der Geest, Thea
2013-01-01
Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This
Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests
Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.
2015-01-01
The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…
Sample sizes to control error estimates in determining soil bulk density in California forest soils
Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber
2016-01-01
Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed, along with element concentrations, to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting soil bulk density with a...
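A common way to frame this kind of optimal-n question (a generic sketch, not necessarily the authors' estimator) is to choose n so that the sample mean falls within a target relative error of the true bulk density, given its coefficient of variation; the CV and error target below are illustrative.

```python
from math import ceil

def n_for_relative_error(cv: float, rel_error: float, z: float = 1.96) -> int:
    # Observations needed so the sample mean lies within ±rel_error of
    # the true mean with ~95% confidence (z = 1.96), for a variable with
    # coefficient of variation cv. From the large-sample condition
    # z * cv / sqrt(n) <= rel_error.
    return ceil((z * cv / rel_error)**2)

# A bulk density CV of 20% and a ±5% error target require 62 samples;
# relaxing the target to ±10% drops the requirement fourfold.
print(n_for_relative_error(0.20, 0.05), n_for_relative_error(0.20, 0.10))
```

For small n, the z quantile would be replaced by a t quantile and the formula iterated, since the degrees of freedom depend on n.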
Generating Random Samples of a Given Size Using Social Security Numbers.
Erickson, Richard C.; Brauchle, Paul E.
1984-01-01
The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)
Directory of Open Access Journals (Sweden)
Elijah R Behr
Full Text Available Marked prolongation of the QT interval on the electrocardiogram, associated with the polymorphic ventricular tachycardia Torsades de Pointes, is a serious adverse event during treatment with antiarrhythmic drugs and other culprit medications, and is a common cause of drug relabeling and withdrawal. Although clinical risk factors have been identified, the syndrome remains unpredictable in an individual patient. Here we used genome-wide association analysis to search for common predisposing genetic variants. Cases of drug-induced Torsades de Pointes (diTdP), treatment-tolerant controls, and general population controls were ascertained across multiple sites using common definitions, and genotyped on the Illumina 610k or 1M-Duo BeadChips. Principal components analysis was used to select 216 Northwestern European diTdP cases and 771 ancestry-matched controls, including treatment-tolerant and general population subjects. With these sample sizes, there is 80% power to detect a variant at genome-wide significance with a minor allele frequency of 10% conferring an odds ratio of ≥2.7. Tests of association were carried out for each single nucleotide polymorphism (SNP) by logistic regression adjusting for gender and population structure. No SNP reached genome-wide significance; the variant with the lowest P value was rs2276314, a non-synonymous coding variant in C18orf21 (p = 3×10⁻⁷, odds ratio = 2.0, 95% confidence interval: 1.5-2.6). The haplotype formed by rs2276314 and a second SNP, rs767531, was significantly more frequent in controls than cases (p = 3×10⁻⁹). Expanding the number of controls and a gene-based analysis did not yield significant associations. This study argues that common genomic variants do not contribute importantly to risk for drug-induced Torsades de Pointes across multiple drugs.
A Bayesian predictive sample size selection design for single-arm exploratory clinical trials.
Teramukai, Satoshi; Daimon, Takashi; Zohar, Sarah
2012-12-30
The aim of an exploratory clinical trial is to determine whether a new intervention is promising for further testing in confirmatory clinical trials. Most exploratory clinical trials are designed as single-arm trials using a binary outcome, with or without interim monitoring for early stopping. In this context, we propose a Bayesian adaptive design denoted as the predictive sample size selection design (PSSD). The design allows for sample size selection following any planned interim analyses for early stopping of a trial, together with sample size determination before starting the trial. In the PSSD, we determine the sample size using the method proposed by Sambucini (Statistics in Medicine 2008; 27:1199-1224), which adopts a predictive probability criterion with two kinds of prior distributions, that is, an 'analysis prior' used to compute posterior probabilities and a 'design prior' used to obtain prior predictive distributions. In the sample size determination of the PSSD, we provide two sample sizes, that is, N and N(max), using two types of design priors. At each interim analysis, we calculate the predictive probabilities of achieving a successful result at the end of the trial using the analysis prior in order to stop the trial in case of low or high efficacy (Lee et al., Clinical Trials 2008; 5:93-106), and we select an optimal sample size, that is, either N or N(max) as needed, on the basis of the predictive probabilities. We investigate the operating characteristics through simulation studies, and the PSSD is retrospectively applied to a lung cancer clinical trial. Copyright © 2012 John Wiley & Sons, Ltd.
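A simplified sketch of the predictive-probability calculation driving such designs: given x responders among the first m patients and a Beta analysis prior, the number of responders among the remaining patients follows a beta-binomial distribution. For simplicity, this sketch scores "success" against a fixed responder cutoff at the final analysis rather than Sambucini's posterior-probability criterion, and all numbers are hypothetical.

```python
from math import exp, lgamma

def log_beta(a: float, b: float) -> float:
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binom_pmf(y: int, n: int, a: float, b: float) -> float:
    # P(Y = y) for Y ~ BetaBinomial(n, a, b):
    # C(n, y) * B(a + y, b + n - y) / B(a, b), via log-gammas.
    return exp(lgamma(n + 1) - lgamma(y + 1) - lgamma(n - y + 1)
               + log_beta(a + y, b + n - y) - log_beta(a, b))

def predictive_success_prob(x: int, m: int, n_final: int, cutoff: int,
                            a: float = 1.0, b: float = 1.0) -> float:
    # Predictive probability, under a Beta(a, b) analysis prior, that a
    # trial of n_final patients ends with at least `cutoff` responders,
    # given x responders among the first m patients.
    rem = n_final - m
    post_a, post_b = a + x, b + m - x
    return sum(beta_binom_pmf(y, rem, post_a, post_b)
               for y in range(max(0, cutoff - x), rem + 1))

# 8/10 interim responders makes reaching 15/30 far more likely than 4/10,
# which is the quantity an interim futility/efficacy rule would threshold.
print(round(predictive_success_prob(8, 10, 30, 15), 3),
      round(predictive_success_prob(4, 10, 30, 15), 3))
```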
Constrained statistical inference: sample-size tables for ANOVA and regression.
Vanbrabant, Leonard; Van De Schoot, Rens; Rosseel, Yves
2014-01-01
Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3} and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a pre-specified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30-50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., β1 > β2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., β1 > 0).
Directory of Open Access Journals (Sweden)
Morteza Bahram
2013-01-01
Full Text Available A new and simple method for the preconcentration and spectrophotometric determination of trace amounts of nickel was developed by cloud point extraction (CPE). In the proposed work, dimethylglyoxime (DMG) was used as the chelating agent and Triton X-114 was selected as a non-ionic surfactant for CPE. The parameters affecting the cloud point extraction, including the pH of the sample solution, concentration of the chelating agent and surfactant, and equilibration temperature and time, were optimized. Under the optimum conditions, the calibration graph was linear in the range of 10-150 ng mL-1 with a detection limit of 4 ng mL-1. The relative standard deviation for 9 replicates of 100 ng mL-1 Ni(II) was 1.04%. The interference effect of some anions and cations was studied. The method was applied to the determination of Ni(II) in water samples with satisfactory results.
Sensitivity study of micro four-point probe measurements on small samples
DEFF Research Database (Denmark)
Wang, Fei; Petersen, Dirch Hjorth; Hansen, Torben Mikael
2010-01-01
The authors calculate the sensitivities of micro four-point probe sheet resistance and Hall effect measurements to the local transport properties of nonuniform material samples. With in-line four-point probes, the measured dual configuration sheet resistance is more sensitive near the inner two probes than near the outer ones. The sensitive area is defined for infinite film, circular, square, and rectangular test pads, and convergent sensitivities are observed for small samples. The simulations show that the Hall sheet resistance RH in micro Hall measurements with position error suppression is sensitive to both local carrier density and local carrier mobility, because the position calculation is affected in the two pseudo-sheet-resistance measurements needed for the position error suppression. Furthermore, they have also simulated the sensitivity for the resistance difference Delta...
Directory of Open Access Journals (Sweden)
Stefanović Milena
2013-01-01
Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of the new representative sample on the basis of the variability of chemical content of the initial sample, on the example of a whitebark pine population. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Projects of the Ministry of Science of the Republic of Serbia, nos. OI-173011, TR-37002 and III-43007]
Czech Academy of Sciences Publication Activity Database
Hušková, Marie; Meintanis, S. G.
2008-01-01
Roč. 39, - (2008), s. 235-243 ISSN 1210-3195 Grant - others:GA AV(CZ) GA201/06/0186 Institutional research plan: CEZ:AV0Z10750506 Keywords: goodness-of-fit test * symmetry test * test for independence Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2008/SI/huskova-testing procedures based on the empirical characteristic functions ii k-sample problem change point problem.pdf
Power and sample size calculations for Mendelian randomization studies using one genetic instrument.
Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary
2013-08-01
Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for a Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size.
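The proportionalities stated in this abstract can be sketched directly. The fragment below is an illustrative normal-approximation calculation, not the authors' exact formulas; the function name, parameter names and example values are assumptions made for the sketch:

```python
from statistics import NormalDist

def mr_sample_size(beta, rho2_gx, resid_var_y, var_x, alpha=0.05, power=0.8):
    """Approximate N for a one-instrument Mendelian randomization study.

    Illustrates the proportionalities stated in the abstract: N is
    proportional to the residual outcome variance (resid_var_y) and
    inversely proportional to the squared effect (beta^2) and to the
    squared instrument-exposure correlation (rho2_gx). A hedged
    normal-approximation sketch, not the paper's derivation.
    """
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided significance quantile
    z_b = z(power)           # power quantile
    return (z_a + z_b) ** 2 * resid_var_y / (beta ** 2 * rho2_gx * var_x)

n1 = mr_sample_size(beta=0.3, rho2_gx=0.02, resid_var_y=1.0, var_x=1.0)
n2 = mr_sample_size(beta=0.3, rho2_gx=0.04, resid_var_y=1.0, var_x=1.0)
# Doubling the squared instrument-exposure correlation halves N.
```

Doubling `rho2_gx` exactly halves the required sample size, matching the inverse-square relationship the abstract describes.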
Scott, Neil W; Fayers, Peter M; Aaronson, Neil K; Bottomley, Andrew; de Graeff, Alexander; Groenvold, Mogens; Gundy, Chad; Koller, Michael; Petersen, Morten A; Sprangers, Mirjam A G
2009-03-01
Differential item functioning (DIF) analyses are increasingly used to evaluate health-related quality of life (HRQoL) instruments, which often include relatively short subscales. Computer simulations were used to explore how various factors including scale length affect analysis of DIF by ordinal logistic regression. Simulated data, representative of HRQoL scales with four-category items, were generated. The power and type I error rates of the DIF method were then investigated when, respectively, DIF was deliberately introduced and when no DIF was added. The sample size, scale length, floor effects (FEs) and significance level were varied. When there was no DIF, type I error rates were close to 5%. Detecting moderate uniform DIF in a two-item scale required a sample size of 300 per group for adequate (>80%) power. For longer scales, a sample size of 200 was adequate. Considerably larger sample sizes were required to detect nonuniform DIF, when there were extreme FEs or when a reduced type I error rate was required. The impact of the number of items in the scale was relatively small. Ordinal logistic regression successfully detects DIF for HRQoL instruments with short scales. Sample size guidelines are provided.
A simple nomogram for sample size for estimating sensitivity and specificity of medical tests
Directory of Open Access Journals (Sweden)
Malhotra Rajeev
2010-01-01
Full Text Available Sensitivity and specificity measure inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce the cost, risk, invasiveness, and time. Adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields statistically valid sample size for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and 95% confidence level using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with required precision and 95% confidence level. Sample size at the 90% and 99% confidence levels can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving the ruler and can be repeatedly used without redoing the calculations. This can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard yield dichotomous results.
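The calculation underlying such nomograms is the standard precision-based formula for a proportion, inflated by prevalence because only diseased subjects contribute to sensitivity. The sketch below assumes this standard formula (the abstract does not reproduce it); it also recovers the quoted 0.70 and 1.75 multipliers as squared ratios of normal quantiles:

```python
from math import ceil
from statistics import NormalDist

def n_for_sensitivity(sens, precision, prevalence, conf=0.95):
    """Total subjects needed to estimate sensitivity within +/- precision.

    Sketch of the standard formula: n_diseased = z^2 * Se(1-Se) / d^2,
    then divided by prevalence to get the total sample screened.
    """
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    n_diseased = z ** 2 * sens * (1 - sens) / precision ** 2
    return ceil(n_diseased / prevalence)

# The 0.70 and 1.75 multipliers quoted for 90% and 99% confidence follow
# from squared ratios of normal quantiles (1.75 is a slight round-up):
z95 = NormalDist().inv_cdf(0.975)
z90 = NormalDist().inv_cdf(0.95)
z99 = NormalDist().inv_cdf(0.995)
print(round((z90 / z95) ** 2, 2))  # → 0.7
print(round((z99 / z95) ** 2, 2))  # → 1.73 (quoted as 1.75 in the abstract)
```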
Topkaya, Seda Nur; Kosova, Buket; Ozsoz, Mehmet
2014-02-15
Janus Kinase 2 (JAK2) gene single point mutations, which have been reported to be associated with myeloproliferative disorders, are usually detected through conventional methods such as melting curve assays, allele-specific and quantitative Polymerase Chain Reactions (PCRs). Herein, an electrochemical biosensor for the detection of a Guanine (G) to Thymine (T) transversion at nucleotide position 1849 of the JAK2 gene was reported. Due to the clinical importance of this mutation, easy and sensitive tests need to be developed. Our aim was to design a biosensor system that is capable of detecting the mutation within less than 1 h with high sensitivity. For these purposes, an electrochemical sensing system was developed based on detecting hybridization. Hybridization between the probe and its target and discrimination of the single point mutation were investigated by monitoring guanine oxidation signals observed at +1.0 V with Differential Pulse Voltammetry (DPV), using synthetic oligonucleotides and Polymerase Chain Reaction (PCR) amplicons. Hybridization between the probe and PCR amplicons was also determined with Electrochemical Impedance Spectroscopy (EIS). We successfully detected hybridization, first in synthetic samples and ultimately in real samples involving blood from patients as well as healthy controls. The limit of detection (S/N=3) was calculated as 44 pmol of target sequence in a 40-μl reaction volume in real samples. Copyright © 2013 Elsevier B.V. All rights reserved.
Jousi, Milla; Saikko, Simo; Nurmi, Jouni
2017-09-11
Point-of-care (POC) testing is highly useful when treating critically ill patients. In case of difficult vascular access, the intraosseous (IO) route is commonly used, and blood is aspirated to confirm the correct position of the IO-needle. Thus, IO blood samples could be easily accessed for POC analyses in emergency situations. The aim of this study was to determine whether IO values agree sufficiently with arterial values to be used for clinical decision making. Two samples of IO blood were drawn from 31 healthy volunteers and compared with arterial samples. The samples were analysed for sodium, potassium, ionized calcium, glucose, haemoglobin, haematocrit, pH, blood gases, base excess, bicarbonate, and lactate using the i-STAT® POC device. Agreement and reliability were estimated by using the Bland-Altman method and intraclass correlation coefficient calculations. Good agreement was evident between the IO and arterial samples for pH, glucose, and lactate. Potassium levels were clearly higher in the IO samples than those from arterial blood. Base excess and bicarbonate were slightly higher, and sodium and ionised calcium values were slightly lower, in the IO samples compared with the arterial values. The blood gases in the IO samples were between arterial and venous values. Haemoglobin and haematocrit showed remarkable variation in agreement. POC diagnostics of IO blood can be a useful tool to guide treatment in critical emergency care. Seeking out the reversible causes of cardiac arrest or assessing the severity of shock are examples of situations in which obtaining vascular access and blood samples can be difficult, though information about the electrolytes, acid-base balance, and lactate could guide clinical decision making. The analysis of IO samples should, though, be limited to situations in which no other option is available, and the results should be interpreted with caution, because there is not yet enough scientific evidence regarding the agreement of IO
Sample size for estimating average trunk diameter and plant height in eucalyptus hybrids
Directory of Open Access Journals (Sweden)
Alberto Cargnelutti Filho
2016-01-01
Full Text Available ABSTRACT: In eucalyptus crops, it is important to determine the number of plants that need to be evaluated for a reliable inference of growth. The aim of this study was to determine the sample size needed to estimate average trunk diameter at breast height and plant height of inter-specific eucalyptus hybrids. In 6,694 plants of twelve inter-specific hybrids, trunk diameter at breast height at three (DBH3) and seven years (DBH7) of age and tree height at seven years (H7) of age were evaluated. The statistics minimum, maximum, mean, variance, standard deviation, standard error, and coefficient of variation were calculated, and the hypothesis of variance homogeneity was tested. The sample size was determined by resampling with replacement, using 10,000 resamples. There was an increase in the required sample size from DBH3 to H7 and DBH7. A sample size of 16, 59 and 31 plants is adequate to estimate the DBH3, DBH7 and H7 means, respectively, of inter-specific eucalyptus hybrids, with the amplitude of the 95% confidence interval equal to 20% of the estimated mean.
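The resampling procedure described above can be sketched as follows. The data here are synthetic stand-ins for the DBH measurements (the real values are not reproduced in the abstract); the stopping criterion, a 95% CI amplitude of at most 20% of the mean, follows the abstract:

```python
import random
import statistics

def sample_size_by_bootstrap(data, amplitude_frac=0.20, n_boot=10_000, seed=1):
    """Smallest n whose bootstrap 95% CI amplitude is <= 20% of the mean.

    Mirrors the resampling-with-replacement idea in the abstract:
    for each candidate n, draw n_boot resamples of size n and check the
    percentile CI width against the target amplitude.
    """
    rng = random.Random(seed)
    mean = statistics.fmean(data)
    for n in range(2, len(data) + 1):
        means = sorted(
            statistics.fmean(rng.choices(data, k=n)) for _ in range(n_boot)
        )
        lo = means[int(0.025 * n_boot)]
        hi = means[int(0.975 * n_boot)]
        if (hi - lo) <= amplitude_frac * mean:
            return n
    return None

rng = random.Random(42)
dbh = [rng.gauss(20, 4) for _ in range(500)]  # hypothetical DBH values, cm
n_needed = sample_size_by_bootstrap(dbh)
```

With a coefficient of variation of 0.2, the criterion is met at roughly 15-16 plants, of the same order as the 16 plants reported for DBH3.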
Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride
2015-08-01
GA Gazonas and JW McCauley, Weapons and Materials Research Directorate, ARL; JJ Guo, KM Reddy, A Hirata, T Fujita, and MW Chen. Mechanical properties of materials can depend strongly on their microscopic structure. In this study, we report a size-induced brittle-to-ductile transition in single-crystal aluminum nitride (AlN).
Two-Stage Adaptive Optimal Design with Fixed First-Stage Sample Size
Directory of Open Access Journals (Sweden)
Adam Lane
2012-01-01
Full Text Available In adaptive optimal procedures, the design at each stage is an estimate of the optimal design based on all previous data. Asymptotics for regular models with fixed number of stages are straightforward if one assumes the sample size of each stage goes to infinity with the overall sample size. However, it is not uncommon for a small pilot study of fixed size to be followed by a much larger experiment. We study the large sample behavior of such studies. For simplicity, we assume a nonlinear regression model with normal errors. We show that the distribution of the maximum likelihood estimates converges to a scale mixture family of normal random variables. Then, for a one parameter exponential mean function we derive the asymptotic distribution of the maximum likelihood estimate explicitly and present a simulation to compare the characteristics of this asymptotic distribution with some commonly used alternatives.
[On the impact of sample size calculation and power in clinical research].
Held, Ulrike
2014-10-01
The aim of a clinical trial is to judge the efficacy of a new therapy or drug. In the planning phase of the study, the calculation of the necessary sample size is crucial in order to obtain a meaningful result. The study design, the expected treatment effect in outcome and its variability, power and level of significance are factors which determine the sample size. It is often difficult to fix these parameters prior to the start of the study, but related papers from the literature can be helpful sources for the unknown quantities. For scientific as well as ethical reasons it is necessary to calculate the sample size in advance in order to be able to answer the study question.
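The determinants listed above (expected treatment effect, its variability, power, and significance level) combine in the textbook normal-approximation formula for a two-arm trial with a continuous outcome. A minimal sketch, not specific to any study in this record:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-arm trial, continuous outcome.

    n = 2 * (sd * (z_{1-alpha/2} + z_{1-power'}) / delta)^2, where delta
    is the expected treatment effect and sd its standard deviation.
    """
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)
    z_b = z(power)
    return ceil(2 * (sd * (z_a + z_b) / delta) ** 2)

print(n_per_group(delta=5, sd=10))  # → 63
```

The formula makes the abstract's point concrete: halving the expected effect quadruples the required sample size, and raising the target power or lowering alpha both increase it.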
Species-genetic diversity correlations in habitat fragmentation can be biased by small sample sizes.
Nazareno, Alison G; Jump, Alistair S
2012-06-01
Predicted parallel impacts of habitat fragmentation on genes and species lie at the core of conservation biology, yet tests of this rule are rare. In a recent article in Ecology Letters, Struebig et al. (2011) report that declining genetic diversity accompanies declining species diversity in tropical forest fragments. However, this study estimates diversity in many populations through extrapolation from very small sample sizes. Using the data of this recent work, we show that results estimated from the smallest sample sizes drive the species-genetic diversity correlation (SGDC), owing to a false-positive association between habitat fragmentation and loss of genetic diversity. Small sample sizes are a persistent problem in habitat fragmentation studies, the results of which often do not fit simple theoretical models. It is essential, therefore, that data assessing the proposed SGDC are sufficient in order that conclusions be robust.
Information-based sample size re-estimation in group sequential design for longitudinal trials.
Zhou, Jing; Adewale, Adeniyi; Shentu, Yue; Liu, Jiajun; Anderson, Keaven
2014-09-28
Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well-known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information-based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re-estimation. Copyright © 2014 John Wiley & Sons, Ltd.
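The core idea of information-based re-estimation can be illustrated in a few lines: the target information depends only on the effect to detect, alpha, and power, so when the interim variance estimate exceeds the design assumption, the per-arm sample size is scaled up to preserve power. This is a simplified sketch under a Var(difference) = 2·sigma²/n assumption, not the authors' longitudinal procedure:

```python
from math import ceil
from statistics import NormalDist

def target_information(delta, alpha=0.05, power=0.80):
    """Fisher information needed to detect a mean difference delta."""
    z = NormalDist().inv_cdf
    return ((z(1 - alpha / 2) + z(power)) / delta) ** 2

def reestimated_n(delta, sigma2_hat, alpha=0.05, power=0.80):
    """Per-arm n that reaches the target information given the current
    variance estimate (Var(diff) = 2*sigma^2/n assumed)."""
    return ceil(2 * sigma2_hat * target_information(delta, alpha, power))

n_planned = reestimated_n(delta=4, sigma2_hat=100)  # design-stage variance
n_updated = reestimated_n(delta=4, sigma2_hat=150)  # larger interim estimate
```

Because the target information is fixed, updating only the variance estimate maintains the planned power while the type I error rate is controlled by the group sequential boundaries.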
van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald
2017-12-04
Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for different numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase with additional GPs, but the gain per added GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
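The point that both between-participant fluctuation and within-participant measurement fluctuation drive the CI can be sketched with a simple two-component variance model. The variance values below are hypothetical illustrations, not the study's estimates:

```python
from math import sqrt
from statistics import NormalDist

def ci_halfwidth(n_gps, m_per_gp, between_var, within_var, conf=0.95):
    """CI half-width for mean weekly hours when both GP-to-GP variation
    and measurement-level fluctuation contribute (illustrative
    variance-components sketch)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return z * sqrt(between_var / n_gps + within_var / (n_gps * m_per_gp))

def gps_needed(target_halfwidth, m_per_gp, between_var, within_var):
    """Smallest number of GPs achieving the target precision."""
    n = 1
    while ci_halfwidth(n, m_per_gp, between_var, within_var) > target_halfwidth:
        n += 1
    return n

# More measurements per GP reduce the number of GPs required:
few_meas = gps_needed(1.0, m_per_gp=5, between_var=60, within_var=400)
many_meas = gps_needed(1.0, m_per_gp=40, between_var=60, within_var=400)
```

As in the abstract's 300-versus-100-GP example, increasing the measurement frequency per participant substitutes for participants, but only down to the floor set by the between-participant component.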
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
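A parametric simulation in the spirit of this design can be sketched as follows. The fixed coefficient of variation and the normal approximation to the one-sided test are illustrative assumptions, not the authors' fitted mean-to-variance relationship from the West Virginia data:

```python
import random
import statistics

def power_above_threshold(n_fish, true_mean, threshold, cv=0.4,
                          n_sim=5000, seed=7):
    """Probability of detecting that mean Se exceeds a threshold.

    Fish-tissue Se is drawn from a gamma distribution whose shape and
    scale match the given mean and CV; a one-sided test of
    H0: mean <= threshold is applied to each simulated sample.
    """
    rng = random.Random(seed)
    sd = cv * true_mean
    shape = (true_mean / sd) ** 2   # gamma shape from mean and sd
    scale = sd ** 2 / true_mean     # gamma scale
    z_crit = 1.6449                 # one-sided 5% critical value (normal approx.)
    hits = 0
    for _ in range(n_sim):
        sample = [rng.gammavariate(shape, scale) for _ in range(n_fish)]
        m = statistics.fmean(sample)
        se = statistics.stdev(sample) / n_fish ** 0.5
        if (m - threshold) / se > z_crit:
            hits += 1
    return hits / n_sim

# Power rises with the true excess over the threshold, as in the abstract:
low = power_above_threshold(8, true_mean=4.5, threshold=4)
high = power_above_threshold(8, true_mean=5.5, threshold=4)
```

Raising the tolerated type I error rate in this sketch, like relaxing alpha to 0.2 in the abstract, lowers `z_crit` and so increases the detected fraction at any given excess.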
Mesh-size effects on drift sample composition as determined with a triple net sampler
Slack, K.V.; Tilley, L.J.; Kennelly, S.S.
1991-01-01
Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 µm, 209 µm and 106 µm openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7 and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 µm and 209 µm nets. During a two day period in August 1986, Chironomidae larvae were dominant numerically in all 209 µm and 106 µm samples and midday 425 µm samples. Large drifters (Ephemerellidae) occurred only in 425 µm or 209 µm nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 µm net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. © 1991 Kluwer Academic Publishers.
Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.
Hanel, Paul H P; Haase, Jennifer
2017-01-01
In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered as indicators of the importance of an article and post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.
Richter, Veronika; Muche, Rainer; Mayer, Benjamin
2018-01-26
Statistical sample size calculation is a crucial part of planning nonhuman animal experiments in basic medical research. The 3R principle intends to reduce the number of animals to a sufficient minimum. When planning experiments, one may consider the impact of less rigorous assumptions during sample size determination as it might result in a considerable reduction in the number of required animals. Sample size calculations conducted for 111 biometrical reports were repeated. The original effect size assumptions remained unchanged, but the basic properties (type 1 error 5%, two-sided hypothesis, 80% power) were varied. The analyses showed that a less rigorous assumption on the type 1 error level (one-sided 5% instead of two-sided 5%) was associated with a savings potential of 14% regarding the original number of required animals. Animal experiments are predominantly exploratory studies. In light of the demonstrated potential reduction in the numbers of required animals, researchers should discuss whether less rigorous assumptions during the process of sample size calculation may be reasonable for the purpose of optimizing the number of animals in experiments according to the 3R principle.
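Under a plain normal approximation, the savings from a one-sided instead of a two-sided 5% test can be computed directly. The roughly 21% figure below is the idealized two-group value; it need not coincide with the 14% average the abstract reports, which comes from re-running 111 heterogeneous biometrical reports:

```python
from statistics import NormalDist

def n_ratio_one_vs_two_sided(alpha=0.05, power=0.80):
    """Factor by which required n shrinks when a two-sided test at level
    alpha is replaced by a one-sided test at the same alpha
    (normal-approximation sketch, not the paper's recalculations)."""
    z = NormalDist().inv_cdf
    two_sided = (z(1 - alpha / 2) + z(power)) ** 2
    one_sided = (z(1 - alpha) + z(power)) ** 2
    return one_sided / two_sided

print(round(n_ratio_one_vs_two_sided(), 2))  # → 0.79
```

The direction matches the abstract's conclusion: relaxing the two-sided assumption always reduces the required number of animals, with the exact percentage depending on the design and effect-size assumptions of each experiment.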
Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav
2018-04-01
Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of PM real size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during the PM sample manipulation and also semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) PM fractions were collected by high volume cascade impactor in Prague city center. Particles were examined using electron microscopy and their elemental composition was determined by energy dispersive X-ray spectroscopy. Larger or smaller particles, not corresponding to the impaction cut points, were found in all fractions, as they occur in agglomerates and are impacted according to their aerodynamic diameter. Elemental composition of particles in size-segregated fractions varied significantly. Ns-soot occurred in all size fractions. Metallic nanospheres were found in accumulation fractions, but not in ultrafine fraction where ns-soot, carbonaceous particles, and inorganic salts were identified. Dynamic light scattering was used to measure particle size distribution in water and in cell culture media. PM suspension of lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. Ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum as measured during 24 h.
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates as obtained by a recursive stochastic algorithm of the inverse of the filter input data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
Bayesian sample size determination for cost-effectiveness studies with censored data.
Directory of Open Access Journals (Sweden)
Daniel P Beavers
Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.
Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size.
Directory of Open Access Journals (Sweden)
Nathan J Stevenson
Full Text Available Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy and several randomized control trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of a RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with median differences of 30.7 fold (IQR: 13.7-40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9-11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7-2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4-3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size.
A simulation study of sample size for multilevel logistic regression models
Directory of Open Access Journals (Sweden)
Moineddin Rahim
2007-07-01
Full Text Available Abstract Background Many studies conducted in health and social sciences collect individual level data as outcome measures. Usually, such data have a hierarchical structure, with patients clustered within physicians, and physicians clustered within practices. Large survey data, including national surveys, have a hierarchical or clustered structure; respondents are naturally clustered in geographical units (e.g., health regions) and may be grouped into smaller units. Outcomes of interest in many fields not only reflect continuous measures, but also binary outcomes such as depression, presence or absence of a disease, and self-reported general health. In the framework of multilevel studies an important problem is calculating an adequate sample size that generates unbiased and accurate estimates. Methods In this paper simulation studies are used to assess the effect of varying sample size at both the individual and group level on the accuracy of the estimates of the parameters and variance components of multilevel logistic regression models. In addition, the influence of the prevalence of the outcome and the intra-class correlation coefficient (ICC) is examined. Results The results show that the estimates of the fixed effect parameters are unbiased for 100 groups with group size of 50 or higher. The estimates of the variance covariance components are slightly biased even with 100 groups and group size of 50. The biases for both fixed and random effects are severe for group size of 5. The standard errors for fixed effect parameters are unbiased, while those for variance covariance components are underestimated. Results suggest that low prevalence events require larger sample sizes with at least a minimum of 100 groups and 50 individuals per group. Conclusion We recommend using a minimum group size of 50 with at least 50 groups to produce valid estimates for multilevel logistic regression models. Group size should be adjusted under conditions where the prevalence
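For a random-intercept logistic model, the ICC examined in such simulations is usually defined on the latent scale, where the level-1 residual variance is fixed at π²/3. A minimal sketch, assuming that latent-variable formulation:

```python
import math

def latent_icc(sigma2_u: float) -> float:
    """Intra-class correlation for a random-intercept logistic model on the
    latent (logit) scale: level-2 variance over total latent variance, with
    the level-1 residual variance fixed at pi^2/3."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3)

# a random-intercept variance equal to pi^2/3 gives ICC = 0.5
icc_half = latent_icc(math.pi ** 2 / 3)
```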
DEFF Research Database (Denmark)
Barini, Emanuele Modesto; Tosello, Guido; De Chiffre, Leonardo
2010-01-01
The paper describes a study concerning point-by-point sampling of complex surfaces using tactile CMMs. A four factor, two level completely randomized factorial experiment was carried out, involving measurements on a complex surface configuration item comprising a sphere, a cylinder and a cone, co...
Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty
Energy Technology Data Exchange (ETDEWEB)
Ferson, S. [Applied Biomathematics, Setauket, NY (United States)
1996-12-31
A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
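The classical Fréchet bounds referred to here are easy to state in code. A sketch for interval-valued probabilities, assuming no dependence information (the nested confidence-level structure described in the abstract is omitted):

```python
def frechet_and(p, q):
    """Best-possible bounds on P(E and F) from interval probabilities
    p = (pl, pu), q = (ql, qu), with unknown dependence (Frechet)."""
    (pl, pu), (ql, qu) = p, q
    return (max(0.0, pl + ql - 1.0), min(pu, qu))

def frechet_or(p, q):
    """Best-possible bounds on P(E or F) with unknown dependence."""
    (pl, pu), (ql, qu) = p, q
    return (max(pl, ql), min(1.0, pu + qu))

def frechet_not(p):
    """Complement of an interval probability."""
    pl, pu = p
    return (1.0 - pu, 1.0 - pl)
```

For example, two events with probabilities in [0.2, 0.4] and [0.7, 0.9] have a conjunction anywhere in [0.0, 0.4] when their dependence is unknown.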
Beerli, Peter
2004-04-01
Current estimators of gene flow come in two varieties: those that estimate parameters assuming that the populations investigated are a small random sample of a large number of populations, and those that assume that all populations were sampled. Maximum likelihood or Bayesian approaches that estimate the migration rates and population sizes directly using coalescent theory can easily accommodate datasets that contain a population that has no data, a so-called 'ghost' population. This manipulation allows us to explore the effects of missing populations on the estimation of population sizes and migration rates between two specific populations. The biases of the inferred population parameters depend on the magnitude of the migration rate from the unknown populations. The effects on the population sizes are larger than the effects on the migration rates. The more immigrants from the unknown populations that arrive in the sampled populations, the larger the estimated population sizes. Taking into account a ghost population improves, or at least does not harm, the estimation of population sizes. Estimates of the scaled migration rate M (migration rate per generation divided by the mutation rate per generation) are fairly robust as long as migration rates from the unknown populations are not huge. The inclusion of a ghost population does not improve the estimation of the migration rate M; when the migration rates are estimated as the number of immigrants Nm, then a ghost population improves the estimates because of its effect on population size estimation. It seems that for 'real world' analyses one should carefully choose which populations to sample, but there is no need to sample every population in the neighbourhood of a population of interest.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported results using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
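For the first scenario (minimum, median, maximum and n reported), the estimators attributed to Wan et al. are, as commonly quoted, the ones sketched below; the constants 0.375 and 0.25 are my recollection of the order-statistic approximation and should be checked against the paper:

```python
from statistics import NormalDist

def est_mean(a: float, m: float, b: float) -> float:
    """Mean estimate from minimum a, median m and maximum b."""
    return (a + 2 * m + b) / 4

def est_sd(a: float, b: float, n: int) -> float:
    """SD estimate from the range and sample size, dividing the range by the
    expected normal range 2 * Phi^{-1}((n - 0.375) / (n + 0.25))."""
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    return (b - a) / xi
```

The key improvement over a fixed range/4 rule is visible directly: the divisor grows with n, so the same range implies a smaller standard deviation in larger trials.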
Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem
International Nuclear Information System (INIS)
Reer, B.
2004-01-01
The paper describes a technique denoted Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)
DEFF Research Database (Denmark)
Petersen, Dirch Hjorth; Lin, Rong; Hansen, Torben Mikael
2008-01-01
In this comparative study, the authors demonstrate the relationship between macroscopic and microscopic four-point sheet resistance measurements on laser annealed ultra-shallow junctions (USJs). Microfabricated cantilever four-point probes with probe pitch ranging from 1.5 to 500 μm have been used to characterize the sheet resistance uniformity of millisecond laser annealed USJs. They verify, both experimentally and theoretically, that the probe pitch of a four-point probe can strongly affect the measured sheet resistance. Such an effect arises from the sensitivity (or "spot size") of an in-line four-point probe. Their study shows the benefit of the spatial resolution of the micro four-point probe technique to characterize stitching effects resulting from the laser annealing process.
Communication: Newton homotopies for sampling stationary points of potential energy landscapes
International Nuclear Information System (INIS)
Mehta, Dhagash; Chen, Tianran; Hauenstein, Jonathan D.; Wales, David J.
2014-01-01
One of the most challenging and frequently arising problems in many areas of science is to find solutions of a system of multivariate nonlinear equations. There are several numerical methods that can find many (or all, if the system is small enough) solutions, but they all exhibit characteristic problems. Moreover, traditional methods can break down if the system contains singular solutions. Here, we propose an efficient implementation of Newton homotopies, which can sample a large number of the stationary points of complicated many-body potentials. We demonstrate how the procedure works by applying it to the nearest-neighbor ϕ⁴ model and atomic clusters.
International Nuclear Information System (INIS)
Wenzel, K.B.
1982-01-01
The CPMET method in its iterative version is a prototype for unitarily invariant and size-consistent approximations. It is shown how size consistency can be verified for simpler and therefore less time-consuming iterative methods. The idea of loosely linked reference determinants allows the use of these simple methods as a basis for the definition of the energy and the CI coefficients of systems for which multi-configurational reference functions are required. (author)
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
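The rule described for logistic regression can be turned into a rough power sketch: two equal groups whose logits differ by slope × 2 × SD(x). The symmetric split around the overall logit below is my simplification of the "unchanged overall expected number of events" condition, and the function name is illustrative:

```python
import math
from statistics import NormalDist

def logistic_equiv_two_sample(beta, sd_x, p_event, alpha=0.05, power=0.8):
    """Approximate total sample size for testing slope beta in logistic
    regression via the equivalent two-sample problem: two equal groups whose
    logits differ by beta * 2 * sd_x, centred on the overall event logit."""
    delta = beta * 2 * sd_x                     # logit difference between groups
    logit = math.log(p_event / (1 - p_event))   # overall logit
    p1 = 1 / (1 + math.exp(-(logit - delta / 2)))
    p2 = 1 / (1 + math.exp(-(logit + delta / 2)))
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n_per_group = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(2 * n_per_group)
```

As expected, halving the slope sharply increases the required total sample size.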
Kaisheng Zhang; Pan Cheng; Lifeng He
2014-01-01
According to the characteristics of the photovoltaic cell output power curve, this paper analyzes the principle of Maximum Power Point Tracking (MPPT) and the advantages and disadvantages of the constant voltage tracking method and the perturbation and observation method. Combining the advantages of existing maximum power tracking methods, the paper proposes an improved tracking method that combines maximum power point tracking with constant voltage tra...
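The perturbation and observation method mentioned above can be sketched in a few lines: step the operating voltage, keep the direction while power rises, reverse when it falls. The toy single-peak P-V curve and step size are illustrative; the paper's improved hybrid method is not reproduced here.

```python
def perturb_and_observe(power_at, v0, dv=0.1, steps=200):
    """Perturb-and-observe MPPT sketch: perturb the operating voltage and keep
    the direction while measured power increases; reverse otherwise."""
    v, direction = v0, 1.0
    p_prev = power_at(v)
    for _ in range(steps):
        v += direction * dv
        p = power_at(v)
        if p < p_prev:          # power dropped: we stepped past the peak
            direction = -direction
        p_prev = p
    return v

# toy P-V curve with its maximum power point at v = 17.0
mpp = perturb_and_observe(lambda v: -(v - 17.0) ** 2 + 60.0, v0=12.0)
```

The characteristic drawback is also visible in this sketch: once near the peak, the operating point oscillates within about one step size of the true maximum, which is one motivation for hybrid schemes.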
Sampling and detection of airborne influenza virus towards point-of-care applications.
Directory of Open Access Journals (Sweden)
Laila Ladhani
Full Text Available Airborne transmission of the influenza virus contributes significantly to the spread of this infectious pathogen, particularly over large distances when carried by aerosol droplets with long survival times. Efficient sampling of virus-loaded aerosol in combination with a low limit of detection of the collected virus could enable rapid and early detection of airborne influenza virus at the point-of-care setting. Here, we demonstrate a successful sampling and detection of airborne influenza virus using a system specifically developed for such applications. Our system consists of a custom-made electrostatic precipitation (ESP-based bioaerosol sampler that is coupled with downstream quantitative polymerase chain reaction (qPCR analysis. Aerosolized viruses are sampled directly into a miniaturized collector with liquid volume of 150 μL, which constitutes a simple and direct interface with subsequent biological assays. This approach reduces sample dilution by at least one order of magnitude when compared to other liquid-based aerosol bio-samplers. Performance of our ESP-based sampler was evaluated using influenza virus-loaded sub-micron aerosols generated from both cultured and clinical samples. Despite the miniaturized collection volume, we demonstrate a collection efficiency of at least 10% and sensitive detection of a minimum of 3721 RNA copies. Furthermore, we show that an improved extraction protocol can allow viral recovery of down to 303 RNA copies and a maximum sampler collection efficiency of 47%. A device with such a performance would reduce sampling times dramatically, from a few hours with current sampling methods down to a couple of minutes with our ESP-based bioaerosol sampler.
Study of Radon and Thoron exhalation from soil samples of different grain sizes.
Chitra, N; Danalakshmi, B; Supriya, D; Vijayalakshmi, I; Sundar, S Bala; Sivasubramanian, K; Baskaran, R; Jose, M T
2018-03-01
The exhalation of radon (²²²Rn) and thoron (²²⁰Rn) from a porous matrix depends on their emanation from the grains by the recoil effect. The emanation factor is a quantitative estimate of the emanation phenomenon. The present study investigates the effect of grain size of the soil matrix on the emanation factor. Soil samples from three different locations were fractionated into grain size categories ranging from <0.1 to 2 mm. The emanation factors of each grain size range were estimated by measuring the mass exhalation rates of radon and thoron and the activity concentrations of ²²⁶Ra and ²³²Th. The emanation factor was found to increase with decreasing grain size. This effect was made evident by keeping the parent radium concentration constant for all grain size fractions. The governing factor is the specific surface area of the soil samples, which increases with decreasing grain size. Copyright © 2017 Elsevier Ltd. All rights reserved.
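Deriving the emanation factor from a mass exhalation rate and the parent activity is commonly done via ε = J_m / (λ · C_parent); a sketch under that assumed relation (units must be consistent, here per second and per kilogram):

```python
import math

def emanation_factor(mass_exhalation: float, activity_conc: float,
                     half_life_s: float) -> float:
    """Emanation factor epsilon = J_m / (lambda * C_parent), with J_m the mass
    exhalation rate (Bq kg^-1 s^-1), C_parent the parent activity
    concentration (Bq kg^-1) and lambda the decay constant (s^-1)."""
    lam = math.log(2) / half_life_s
    return mass_exhalation / (lam * activity_conc)

RN222_HALF_LIFE_S = 3.8235 * 24 * 3600   # radon-222 half-life, ~3.82 days
```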
Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.
Energy Technology Data Exchange (ETDEWEB)
Parresol, Bernard, R.
2004-02-01
This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.
Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill
2017-01-01
Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen
2017-01-01
Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying it from small to large. In extensive numerical studies, the authors demonstrate that the required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily cost up to 20% of power, depending on the value of the dispersion parameter.
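The power cost of mis-specifying negative binomial counts as Poisson can be sketched with a Wald-type normal approximation. The variance function Var = μ(1 + κμ) and the two-arm comparison below are generic illustrations, not the authors' three-arm derivation; parameter values are invented.

```python
import math
from statistics import NormalDist

def wald_power(delta, var, n, alpha=0.05):
    """Approximate power of a two-sided Wald test for a mean difference delta,
    with per-arm variance var and n subjects per arm (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    se = math.sqrt(2 * var / n)
    return 1 - NormalDist().cdf(z_a - abs(delta) / se)

mu, kappa, delta, n = 2.0, 0.5, 0.5, 150
p_nominal = wald_power(delta, mu, n)                     # variance mis-specified as mu
p_true = wald_power(delta, mu * (1 + kappa * mu), n)     # true NB variance mu(1+kappa*mu)
```

With these illustrative values the nominal power computed under the Poisson assumption is well above the power actually delivered under the true negative binomial variance.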
Sufficient Sample Sizes for Discrete-Time Survival Analysis Mixture Models
Moerbeek, Mirjam
2014-01-01
Long-term survivors in trials with survival endpoints are subjects who will not experience the event of interest. Membership in the class of long-term survivors is unobserved and should be inferred from the data by means of a mixture model. An important question is how large the sample size should
Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics
Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas
2014-01-01
Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling
Czech Academy of Sciences Publication Activity Database
Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír
2015-01-01
Roč. 49, č. 4 (2015), s. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords : filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.953, year: 2015
B-graph sampling to estimate the size of a hidden population
Spreen, M.; Bogaerts, S.
2015-01-01
Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is
Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions
Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.
2013-01-01
Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of
Bolton tooth size ratio among Sudanese Population sample: A preliminary study.
Abdalla Hashim, Ala'a Hayder; Eldin, Al-Hadi Mohi; Hashim, Hayder Abdalla
2015-01-01
The study of the mesiodistal size and morphology of teeth and the dental arch may play an important role in clinical dentistry, as well as in other sciences such as forensic dentistry and anthropology. The aims of the present study were to establish the tooth-size ratio in a Sudanese sample with Class I normal occlusion, and to compare the tooth-size ratio between the present study and Bolton's study and between genders. The sample consisted of dental casts of 60 subjects (30 males and 30 females). The Bolton formula was used to compute the overall and anterior ratios. The correlation between the anterior ratio and overall ratio was tested, and Student's t-test was used to compare tooth-size ratios between males and females, and between the present study and Bolton's results. The overall and anterior ratios were relatively similar to the mean values reported by Bolton, and there were no statistically significant differences in the mean values of the anterior ratio and the overall ratio between males and females. The correlation coefficient was r = 0.79. The results obtained were similar to those for Caucasians. However, the Sudanese population consists of different racial groups; therefore, a firm conclusion is difficult to draw. Since this sample is not representative of the Sudanese population, a further study with a larger sample collected from different parts of Sudan is required.
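The Bolton ratios themselves are simple sums: mandibular over maxillary mesiodistal widths, times 100. The quoted ideal values (≈91.3 overall, ≈77.2 anterior) are the commonly cited Bolton means and should be checked against the original study.

```python
def bolton_overall_ratio(mandibular_12, maxillary_12):
    """Bolton overall ratio: summed mesiodistal widths of the 12 mandibular
    teeth over the 12 maxillary teeth, x100 (commonly cited ideal ~91.3)."""
    return 100 * sum(mandibular_12) / sum(maxillary_12)

def bolton_anterior_ratio(mandibular_6, maxillary_6):
    """Bolton anterior ratio over the 6 anterior teeth per arch
    (commonly cited ideal ~77.2)."""
    return 100 * sum(mandibular_6) / sum(maxillary_6)
```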
Fan, Xitao; Wang, Lin; Thompson, Bruce
1999-01-01
A Monte Carlo simulation study investigated the effects of sample size, estimation method, and model specification on 10 structural equation modeling fit indexes. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
International Nuclear Information System (INIS)
Sampson, T.E.
1991-01-01
Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts.
Tu, Xinjun; Du, Xiaoxia; Singh, Vijay P.; Chen, Xiaohong; Du, Yiliang; Li, Kun
2017-11-01
Constructing a joint distribution of low flows between the donor and recipient basins and analyzing their joint risk are commonly required for implementing interbasin water transfer. In this study, daily streamflow data of bi-basin low flows were sampled at window sizes from 3 to 183 days by using the annual minimum method. The stationarity of low flows was tested by a change point analysis, and non-stationary low flows were reconstructed by using the moving mean method. Three bivariate Archimedean copulas and five common univariate distributions were applied to fit the joint and marginal distributions of bi-basin low flows. Then, by considering the window size of sampling low flows under environmental change, the change in the joint risk of interbasin water transfer was investigated. Results showed that the non-stationarity of low flows in the recipient basin at all window sizes was significant due to the regulation of water reservoirs. The generalized extreme value (GEV) distribution was found to fit the marginal distributions of bi-basin low flows. Three Archimedean copulas satisfactorily fitted the joint distribution of bi-basin low flows, with the Frank copula found to be comparatively better. The moving mean method differentiated the location parameter of the GEV distribution, but did not differentiate the scale and shape parameters or the copula parameters. Due to environmental change, in particular the regulation of water reservoirs in the recipient basin, the decrease in the joint synchronous risk of bi-basin water shortage was slight, but the decrease in the synchronous assurance of water transfer from the donor was remarkable. With the enlargement of the window size of sampling low flows, both the joint synchronous risk of bi-basin water shortage and the joint synchronous assurance of water transfer from the donor basin when there was a water shortage in the recipient basin exhibited a decreasing trend, but their changes were with a slight fluctuation, in
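Annual-minimum sampling at a given window size amounts to taking, within each year, the minimum of the n-day moving mean of daily flows. A sketch for one year of daily flows (function name is mine):

```python
def annual_min_nday(flows, window):
    """Annual minimum n-day low flow: the minimum of the n-day moving mean of
    a single year's daily flow record."""
    if window > len(flows):
        raise ValueError("window longer than record")
    means = [sum(flows[i:i + window]) / window
             for i in range(len(flows) - window + 1)]
    return min(means)
```

Repeating this over each year of record, for each window size from 3 to 183 days, yields the low-flow series to which the marginal distributions and copulas are then fitted.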
Directory of Open Access Journals (Sweden)
Margaret Penner
2015-11-01
Full Text Available Airborne Laser Scanning (ALS) metrics have been used to develop area-based forest inventories; these metrics generally include estimates of stand-level, per hectare values and mean tree attributes. Tree-based ALS inventories contain desirable information on individual tree dimensions and how much they vary within a stand. Adding size class distribution information to area-based inventories helps to bridge the gap between area- and tree-based inventories. This study examines the potential of ALS and stereo-imagery point clouds to predict size class distributions in a boreal forest. With an accurate digital terrain model, both ALS and imagery point clouds can be used to estimate size class distributions with comparable accuracy. Nonparametric imputations were generally superior to parametric imputations; this may be related to the limitation of using a unimodal Weibull function on a relatively small prediction unit (e.g., 400 m²).
Ashoori, A. R.; Vanini, S. A. Sadough; Salari, E.
2017-04-01
In the present paper, the vibration behavior of size-dependent functionally graded (FG) circular microplates subjected to thermal loading is investigated, in the pre- and post-buckling regimes of bifurcation/limit-load instability, for the first time. Two kinds of frequently used thermal loading, i.e., uniform temperature rise and heat conduction across the thickness direction, are considered. Thermo-mechanical material properties of the FG plate are assumed to vary smoothly and continuously through the thickness according to a power law model. Modified couple stress theory is exploited to describe the size dependency of the microplate. The nonlinear governing equations of motion and associated boundary conditions are derived through the generalized form of Hamilton's principle and von Karman geometric nonlinearity for the vibration analysis of circular FG plates including size effects. The Ritz finite element method is then employed to construct the matrix representation of the governing equations, which are solved by two different strategies: the Newton-Raphson scheme and the cylindrical arc-length method. Moreover, a parametric study examines the effects of several parameters, such as the material length scale parameter, temperature distribution, type of buckling, thickness-to-radius ratio, boundary conditions and power law index, on the dimensionless frequency of post-buckled/snapped size-dependent FG plates in detail. It is found that the material length scale parameter and thermal loading have a significant effect on the vibration characteristics of size-dependent circular FG plates.
Development of a cloud-point extraction method for copper and nickel determination in food samples
International Nuclear Information System (INIS)
Azevedo Lemos, Valfredo; Selis Santos, Moacy; Teixeira David, Graciete; Vasconcelos Maciel, Mardson; Almeida Bezerra, Marcos de
2008-01-01
A new, simple and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of copper and nickel. The metals in the initial aqueous solution were complexed with 2-(2'-benzothiazolylazo)-5-(N,N-diethyl)aminophenol (BDAP), and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified methanol was performed after phase separation, and the copper and nickel contents were measured by flame atomic absorption spectrometry. The variables affecting the cloud-point extraction were optimized using a Box-Behnken design. Under the optimum experimental conditions, enrichment factors of 29 and 25 were achieved for copper and nickel, respectively. The accuracy of the method was evaluated and confirmed by analysis of the following certified reference materials: Apple Leaves, Spinach Leaves and Tomato Leaves. The limits of detection for solid sample analysis were 0.1 μg g⁻¹ (Cu) and 0.4 μg g⁻¹ (Ni). The precision for 10 replicate measurements of 75 μg L⁻¹ Cu or Ni was 6.4 and 1.0, respectively. The method has been successfully applied to the analysis of food samples.
Jarvis, Nicholas; Larsbo, Mats; Koestel, John; Keck, Hannes
2017-04-01
The long-range connectivity of macropore networks may exert a strong control on near-saturated and saturated hydraulic conductivity and the occurrence of preferential flow through soil. It has been suggested that percolation concepts may provide a suitable theoretical framework to characterize and quantify macropore connectivity, although this idea has not yet been thoroughly investigated. We tested the applicability of percolation concepts to describe macropore networks quantified by X-ray scanning at a resolution of 0.24 mm in eighteen cylinders (20 cm diameter and height) sampled from the ploughed layer of four soils of contrasting texture in east-central Sweden. The analyses were performed for sample sizes ("regions of interest", ROIs) varying between 3 and 12 cm in cube side-length and for minimum pore thicknesses ranging between the image resolution and 1 mm. Finite sample size effects were clearly found for ROIs with cube side-lengths smaller than ca. 6 cm. For larger sample sizes, the results showed the relevance of percolation concepts to soil macropore networks, with a close relationship found between imaged porosity and the fraction of the pore space which percolated (i.e. was connected from top to bottom of the ROI). The percolating fraction increased rapidly as a function of porosity above a small percolation threshold (1-4%). This reflects the ordered nature of the pore networks. The percolation relationships were similar for all four soils. Although pores larger than 1 mm appeared to be somewhat better connected, only small effects of minimum pore thickness were noted across the range of tested pore sizes. The utility of percolation concepts to describe the connectivity of more anisotropic macropore networks (e.g. in subsoil horizons) should also be tested, although with current X-ray scanning equipment it may prove difficult in many cases to analyze sufficiently large samples that would avoid finite size effects.
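The percolation criterion used above (pore space connected from top to bottom of the ROI) can be sketched with a breadth-first search on a voxelized pore map. This toy example assumes 6-connectivity and a tiny hypothetical grid, not the authors' image-analysis pipeline:

```python
from collections import deque

def percolates(pore, shape):
    """Check top-to-bottom connectivity of a 3D binary pore map (6-connectivity).

    pore: set of (z, y, x) voxels belonging to pore space; shape: (nz, ny, nx).
    Returns True if any pore voxel in the top layer (z = 0) connects to the
    bottom layer (z = nz - 1).
    """
    nz, ny, nx = shape
    seeds = [v for v in pore if v[0] == 0]
    seen, queue = set(seeds), deque(seeds)
    while queue:
        z, y, x = queue.popleft()
        if z == nz - 1:
            return True
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = (z + dz, y + dy, x + dx)
            if nb in pore and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return False

# A single straight macropore spanning the sample percolates:
column = {(z, 1, 1) for z in range(4)}
```

The percolating fraction of an ROI would then be the porosity contributed by voxels connected to both faces, divided by the total imaged porosity.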
Model choice and sample size in item response theory analysis of aphasia tests.
Hula, William D; Fergadiotis, Gerasimos; Martin, Nadine
2012-05-01
The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from individuals with aphasia were analyzed, and the resulting item and person estimates were used to develop simulated test data for 3 sample size conditions. The simulated data were analyzed using a standard 1-parameter logistic (1-PL) model and 3 models that accounted for the influence of guessing: augmented 1-PL and 2-PL models and a 3-PL model. The model estimates obtained from the simulated data were compared to their known true values. With small and medium sample sizes, an augmented 1-PL model was the most accurate at recovering the known item and person parameters; however, no model performed well at any sample size. Follow-up simulations confirmed that the large influence of guessing and the extreme easiness of the items contributed substantially to the poor estimation of item difficulty and person ability. Incorporating the assumption of guessing into IRT models improves parameter estimation accuracy, even for small samples. However, caution should be exercised in interpreting scores obtained from easy 2-choice tests, regardless of whether IRT modeling or percentage correct scoring is used.
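The guessing-augmented models discussed above share the 3-PL response function; a minimal sketch (the item and person parameters below are hypothetical, chosen to mimic a very easy 2-choice item) is:

```python
import math

def irt_prob(theta, a, b, c):
    """3-PL item response probability: c + (1 - c) / (1 + exp(-a(theta - b))).

    c = 0.5 reflects the guessing floor of a 2-choice test; fixing a = 1
    recovers an (augmented) 1-PL model, and c = 0 a standard 1-PL/2-PL.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# For a very easy 2-choice item (difficulty b = -2), even a low-ability
# examinee responds correctly most of the time:
p_low = irt_prob(theta=-1.0, a=1.0, b=-2.0, c=0.5)
p_high = irt_prob(theta=2.0, a=1.0, b=-2.0, c=0.5)
```

The compressed range between the guessing floor and the ceiling on such items illustrates why item difficulty and person ability are hard to estimate from easy 2-choice data.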
Size selective isocyanate aerosols personal air sampling using porous plastic foams
Khanh Huynh, Cong; Duc, Trinh Vu
2009-02-01
As part of a European project (SMT4-CT96-2137), several European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of a European personal sampler for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF) plugs whose porosity supports both sampling and size separation of the particles. In this study, the authors present an original application of size-selective personal air sampling, using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.
10Be measurements at MALT using reduced-size samples of bulk sediments
Horiuchi, Kazuho; Oniyanagi, Itsumi; Wasada, Hiroshi; Matsuzaki, Hiroyuki
2013-01-01
In order to establish 10Be measurements on reduced-size (1-10 mg) samples of bulk sediments, we investigated four different pretreatment designs using lacustrine and marginal-sea sediments and the AMS system of the Micro Analysis Laboratory, Tandem accelerator (MALT) at The University of Tokyo. The 10Be concentrations obtained from the samples of 1-10 mg agreed within a precision of 3-5% with the values previously determined using corresponding ordinary-size (∼200 mg) samples and the same AMS system. This fact demonstrates reliable determinations of 10Be with milligram levels of recent bulk sediments at MALT. On the other hand, a clear decline of the BeO⁻ beam with tens of micrograms of 9Be carrier suggests that the combination of ten milligrams of sediments and a few hundred micrograms of the 9Be carrier is more convenient at this stage.
Czech Academy of Sciences Publication Activity Database
Netopilík, Miloš
2012-01-01
Roč. 1260, 19 October (2012), s. 97-101 ISSN 0021-9673 R&D Projects: GA ČR GCP205/11/J043 Institutional research plan: CEZ:AV0Z40500505 Institutional support: RVO:61389013 Keywords : size exclusion chromatography * local dispersity * random branching Subject RIV: CD - Macromolecular Chemistry Impact factor: 4.612, year: 2012
Size effect studies on geometrically scaled three point bend type specimens with U-notches
Energy Technology Data Exchange (ETDEWEB)
Krompholz, K.; Kalkhof, D.; Groth, E
2001-02-01
One of the objectives of the REVISA project (REactor Vessel Integrity in Severe Accidents) is to assess size and scale effects in plastic flow and failure. This includes an experimental programme devoted to characterising the influence of specimen size, strain rate, and strain gradients at various temperatures. One of the materials selected was the forged reactor pressure vessel material 20 MnMoNi 55, material number 1.6310 (heat number 69906). Among others, a size effect study of the creep response of this material was performed, using geometrically similar smooth specimens with 5 mm and 20 mm diameter. The tests were done under constant load in an inert atmosphere at 700 °C, 800 °C, and 900 °C, close to and within the phase transformation regime. The mechanical stresses varied from 10 MPa to 30 MPa, depending on temperature. Prior to creep testing, the temperature and time dependence of scale oxidation as well as the temperature regime of the phase transformation was determined. The creep tests were supplemented by metallographical investigations. The test results are presented in the form of creep curves of strain versus time, from which characteristic creep data were determined as a function of the stress level at given temperatures. The characteristic data are the times to 5% and 15% strain and to rupture, the secondary (minimum) creep rate, the elongation at fracture within the gauge length, the type of fracture and the area reduction after fracture. From the metallographical investigations the phase contents at different temperatures could be estimated. From these data the parameters of the regression calculation (e.g. Norton's creep law) were also obtained. The evaluation revealed that the creep curves and characteristic data are size dependent to varying degrees, depending on the stress and temperature level, but the size influence cannot be related to corrosion or orientation effects or to macroscopic heterogeneity (position effect).
Stress state analysis of sub-sized pre-cracked three-point-bend specimen
Czech Academy of Sciences Publication Activity Database
Stratil, Luděk; Kozák, Vladislav; Hadraba, Hynek; Dlouhý, Ivo
2012-01-01
Roč. 19, 2/3 (2012), s. 121-129 ISSN 1802-1484 R&D Projects: GA ČR GD106/09/H035; GA ČR(CZ) GAP107/10/0361 Institutional support: RVO:68081723 Keywords : KLST * three-point bending * side grooving * Eurofer97 * J-integral Subject RIV: JL - Materials Fatigue, Friction Mechanics
PIXE–PIGE analysis of size-segregated aerosol samples from remote areas
Energy Technology Data Exchange (ETDEWEB)
Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)
2014-01-01
The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.
Sample size calculation for microarray experiments with blocked one-way design
Directory of Open Access Journals (Sweden)
Jung Sin-Ho
2009-05-01
Background: One of the main objectives of microarray analysis is to identify differentially expressed genes for different types of cells or treatments. Many statistical methods have been proposed to assess treatment effects in microarray experiments. Results: In this paper, we consider discovery of the genes that are differentially expressed among K (> 2) treatments when each set of K arrays constitutes a block. In this case, the array data among the K treatments tend to be correlated because of the block effect. We propose using the blocked one-way ANOVA F-statistic to test whether each gene is differentially expressed among the K treatments. The marginal p-values are calculated using a permutation method that accounts for the block effect, adjusting for the multiplicity of the testing procedure by controlling the false discovery rate (FDR). We propose a sample size calculation method for microarray experiments with a blocked one-way design. With the FDR level and the effect sizes of genes specified, our formula provides the sample size needed for a given number of true discoveries. Conclusion: The calculated sample size is shown via simulations to provide an accurate number of true discoveries while controlling the FDR at the desired level.
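The blocked one-way F-statistic and within-block permutation scheme described above can be sketched for a single gene as follows (the tiny dataset is hypothetical, and a real analysis would repeat this per gene and add FDR control):

```python
import random

def blocked_f(data):
    """F-statistic of a blocked one-way ANOVA; data[b][k] = block b, treatment k."""
    B, K = len(data), len(data[0])
    gm = sum(sum(row) for row in data) / (B * K)
    tmeans = [sum(data[b][k] for b in range(B)) / B for k in range(K)]
    bmeans = [sum(row) / K for row in data]
    ss_treat = B * sum((m - gm) ** 2 for m in tmeans)
    ss_block = K * sum((m - gm) ** 2 for m in bmeans)
    ss_total = sum((x - gm) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_treat - ss_block  # residual sum of squares
    return (ss_treat / (K - 1)) / (ss_err / ((K - 1) * (B - 1)))

def perm_pvalue(data, n_perm=999, seed=0):
    """Permutation p-value respecting the block structure: treatment labels
    are shuffled only within each block, as the abstract describes."""
    rng = random.Random(seed)
    f_obs = blocked_f(data)
    hits = sum(
        blocked_f([rng.sample(row, len(row)) for row in data]) >= f_obs
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

# Hypothetical expression values: 4 blocks (array sets) x 3 treatments,
# with a strong treatment effect plus a block effect.
data = [[0.1, 5.0, 10.2], [1.0, 6.3, 11.1], [2.2, 7.1, 12.0], [3.0, 8.2, 13.1]]
p = perm_pvalue(data)
```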
Elnoby, Rasha M.; Mourad, M. Hussein; Elnaby, Salah L. Hassab; Abou Kana, Maram T. H.
2018-05-01
Solar cells coated with nanoparticles (NPs) show potential for use in photovoltaic technology. Silicon solar cells (Si-SCs), both uncoated and coated with silver nanoparticles (Ag NPs) of different sizes, were fabricated in our lab. The sizes and optical properties of the prepared NPs were characterized by spectroscopic techniques and Mie theory, respectively. The reflectivity of the Si-SCs decreased as the NP size increased. Electrical properties such as the open-circuit current, fill factor and output power density were assessed and discussed from the point of view of Mie theory for the optical properties of the NPs. The photostabilities of the SCs were also assessed using a diode laser with a wavelength of 450 nm and a power of 300 mW. The SCs coated with the largest Ag NPs showed the highest photostability, owing to their highest scattering efficiency according to Mie theory.
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis by offering a tool based on the framework of Minimum Detectable Effect Size (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
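A common MDES formula in this framework, for an individually randomized two-arm design with no covariates, can be sketched as below. The multiplier 2.8 assumes two-tailed α = 0.05 and 80% power (Bloom's approximation); neither the value nor the specific formula is taken from this paper:

```python
import math

def mdes_individual(n, p_treat=0.5, multiplier=2.8):
    """Minimum detectable effect size (in standard-deviation units) for an
    individually randomized two-arm trial with no covariates:

        MDES = M * sqrt(1 / (P * (1 - P) * n))

    where M ~ 2.8 approximates t_{alpha/2} + t_{beta} for two-tailed
    alpha = 0.05 and 80% power, and P is the treatment fraction.
    """
    return multiplier * math.sqrt(1.0 / (p_treat * (1.0 - p_treat) * n))

mdes_400 = mdes_individual(400)  # 400 subjects split evenly between arms
```

Inverting the same formula for n gives the sample size required to detect a target effect size, which is the other use the paper describes.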
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
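The Monte Carlo check of a power formula can be sketched generically as below. This simplified example uses a paired z-test on per-subject accuracy differences with hypothetical parameters; it is not the authors' segmentation-specific model:

```python
import math
import random

def simulated_power(n, effect, sd=1.0, n_sim=2000, seed=1):
    """Monte Carlo power of a paired z-test on per-subject accuracy
    differences between two algorithms.

    effect: true mean accuracy difference; sd: standard deviation of the
    per-subject differences (both hypothetical). The normal approximation
    stands in for the t-test at moderate n.
    """
    rng = random.Random(seed)
    z_crit = 1.959964  # two-tailed 5% critical value
    rejections = 0
    for _ in range(n_sim):
        diffs = [rng.gauss(effect, sd) for _ in range(n)]
        mean = sum(diffs) / n
        se = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1)) / math.sqrt(n)
        if abs(mean / se) > z_crit:
            rejections += 1
    return rejections / n_sim

# With 44 subjects and a 0.43-SD difference, power should be roughly 80%:
power = simulated_power(n=44, effect=0.43)
```

Comparing such simulated power against a closed-form prediction, across a grid of parameters, is exactly the kind of validation the abstract reports (agreement within a few percent).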
Nili, Samaun; Park, Chanyoung; Haftka, Raphael T.; Kim, Nam H.; Balachandar, S.
2017-11-01
Point particle methods are extensively used in simulating Euler-Lagrange multiphase dispersed flow. When particles are much smaller than the Eulerian grid the point particle model is on firm theoretical ground. However, this standard approach of evaluating the gas-particle coupling at the particle center fails to converge as the Eulerian grid is reduced below particle size. We present an approach to model the interaction between particles and fluid for finite size particles that permits convergence. We use the generalized Faxen form to compute the force on a particle and compare the results against traditional point particle method. We apportion the different force components on the particle to fluid cells based on the fraction of particle volume or surface in the cell. The application is to a one-dimensional model of shock propagation through a particle-laden field at moderate volume fraction, where the convergence is achieved for a well-formulated force model and back coupling for finite size particles. Comparison with 3D direct fully resolved numerical simulations will be used to check if the approach also improves accuracy compared to the point particle model. Work supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
John C. Brissette; Mark J. Ducey; Jeffrey H. Gove
2003-01-01
We field tested a new method for sampling down coarse woody material (CWM) using an angle gauge and compared it with the more traditional line intersect sampling (LIS) method. Permanent sample locations in stands managed with different silvicultural treatments within the Penobscot Experimental Forest (Maine, USA) were used as the sampling locations. Point relascope...
Collery, Olivier; Guyader, Jean-Louis
2010-03-01
In the context of better understanding and predicting sound transmission through heterogeneous fluid-loaded aircraft structures, this paper presents a method for solving the vibroacoustic problem of plates. The present work considers fluid-structure coupling and is applied to simply supported rectangular plates excited mechanically. The proposed method is based on minimizing the error in satisfying the plate vibroacoustic equation of motion on a sample of points. Sampling introduces an aliasing effect; this phenomenon is described and resolved using a wavelet-based filter. The proposed approach is validated by presenting very accurate results for the sound radiation of plates immersed in heavy and light fluids. The fluid-structure interaction appears to be very well described, avoiding time-consuming classical calculations of the modal radiation impedances. The focus is also put on different samplings to observe the aliasing effect. As a perspective, sound radiation from a non-homogeneous plate is solved and compared with reference results, proving the power of this method.
Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G
2010-11-01
Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.
Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.
Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang
2018-02-01
To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test using generalized estimating equations (GEE), and the F-test using a linear mixed effects model (LMM). We compared type I error rates and statistical powers, and demonstrated the analysis approaches by analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of two eyes performs best when the two eyes are in the same comparison group. The study design should be considered when selecting the analysis approach.
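A null-hypothesis simulation of design 1 (one eye of each subject per group) of the kind described above can be sketched as follows; the parameter values and normal data-generating model are assumptions for illustration:

```python
import math
import random

def simulate_type1(n_subjects=30, rho=0.5, t_crit=2.045, n_sim=2000, seed=7):
    """Empirical type I error of a paired t-test under design 1 with
    inter-eye correlation rho, simulated under the null of no group effect.

    t_crit = 2.045 is the two-tailed 5% t critical value for df = 29.
    Each subject contributes a shared component (variance rho) plus
    independent eye-level noise (variance 1 - rho).
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        diffs = []
        for _ in range(n_subjects):
            shared = rng.gauss(0, math.sqrt(rho))            # subject effect
            e1 = shared + rng.gauss(0, math.sqrt(1 - rho))   # eye in group A
            e2 = shared + rng.gauss(0, math.sqrt(1 - rho))   # eye in group B
            diffs.append(e1 - e2)
        m = sum(diffs) / n_subjects
        s = math.sqrt(sum((d - m) ** 2 for d in diffs) / (n_subjects - 1))
        if abs(m / (s / math.sqrt(n_subjects))) > t_crit:
            rejections += 1
    return rejections / n_sim

type1 = simulate_type1()  # should sit near the nominal 5% level
```

The paired t-test absorbs the shared subject effect into the within-pair difference, which is why it keeps the nominal type I error rate in this design.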
Simulations of HXR Foot-point Source Sizes for Modified Thick-target Models
Czech Academy of Sciences Publication Activity Database
Moravec, Z.; Varady, Michal; Karlický, Marian; Kašparová, Jana
2013-01-01
Roč. 37, č. 2 (2013), s. 535-540 ISSN 1845-8319. [Hvar Astrophysical Colloquium /12./. Hvar, 03.09.2012-07.09.2012] R&D Projects: GA ČR GAP209/10/1680; GA ČR GAP209/12/0103 Institutional support: RVO:67985815 Keywords : solar flares * hard X-rays * foot-point sources Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
Goujon, Florent; Ghoufi, Aziz; Malfreyt, Patrice
2018-02-01
We report Monte Carlo (MC) simulations of the Lennard-Jones (LJ) fluid at the liquid-vapor interface in the critical region. A slab-based tail method is combined with the MC simulations to approach the critical point as closely as possible (T∗ = 0.98 TC∗). We then investigate the impact of system size on the surface tension and coexisting densities by considering very large box dimensions, for which the surface tension is independent of system size at low temperatures.
Self-organizing adaptive map: autonomous learning of curves and surfaces from point samples.
Piastra, Marco
2013-05-01
Competitive Hebbian Learning (CHL) (Martinetz, 1993) is a simple and elegant method for estimating the topology of a manifold from point samples. The method has been adopted in a number of self-organizing networks described in the literature and has given rise to related studies in the fields of geometry and computational topology. Recent results from these fields have shown that a faithful reconstruction can be obtained using the CHL method only for curves and surfaces. Within these limitations, these findings constitute a basis for defining a CHL-based, growing self-organizing network that produces a faithful reconstruction of an input manifold. The SOAM (Self-Organizing Adaptive Map) algorithm adapts its local structure autonomously in such a way that it can match the features of the manifold being learned. The adaptation process is driven by the defects arising when the network structure is inadequate, which cause a growth in the density of units. Regions of the network undergo a phase transition and change their behavior whenever a simple, local condition of topological regularity is met. The phase transition is eventually completed across the entire structure and the adaptation process terminates. In specific conditions, the structure thus obtained is homeomorphic to the input manifold. During the adaptation process, the network also has the capability to focus on the acquisition of input point samples in critical regions, with a substantial increase in efficiency. The behavior of the network has been assessed experimentally with typical data sets for surface reconstruction, including suboptimal conditions, e.g. with undersampling and noise. Copyright © 2012 Elsevier Ltd. All rights reserved.
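The core CHL step described above (for each input sample, connect the two nearest reference units) can be sketched as below; the ring of four units is a toy example illustrating topology recovery, not the SOAM algorithm itself:

```python
import math

def chl_edges(units, samples):
    """Competitive Hebbian Learning edge construction (Martinetz, 1993):
    for every input sample, insert an edge between the indices of its two
    nearest reference units."""
    edges = set()
    for s in samples:
        ranked = sorted(range(len(units)),
                        key=lambda i: math.dist(units[i], s))
        edges.add(tuple(sorted(ranked[:2])))  # undirected edge
    return edges

# Four units on a circle; samples spread along the arcs between them
# recover the ring topology of the underlying 1-D manifold.
units = [(1, 0), (0, 1), (-1, 0), (0, -1)]
samples = [(math.cos(t), math.sin(t)) for t in
           [0.4, 1.2, 1.9, 2.8, 3.5, 4.3, 5.0, 5.9]]
edges = chl_edges(units, samples)
```

With dense enough sampling, the resulting graph is a faithful reconstruction of the manifold for curves and surfaces, which is the limitation the abstract cites.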
Crystallite size variation of TiO2 samples depending on heat treatment time
International Nuclear Information System (INIS)
Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.
2016-01-01
Titanium dioxide (TiO2) is an oxide semiconductor that may be found in a mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of the residence time at a given temperature on the physical properties of TiO2 powder was studied. After powder synthesis, the samples were divided, heat treated at 650 °C with a ramp of up to 3 °C/min and a residence time ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onwards, two distinct phases coexisted: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in the average crystallite size with increasing residence time of the heat treatment. (author)
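The abstract does not state how the average crystallite size was computed from the diffraction patterns; a common route is the Scherrer equation, sketched here with hypothetical peak parameters:

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size in nm from XRD peak broadening via the Scherrer
    equation: D = K * lambda / (beta * cos(theta)), with beta the peak
    FWHM in radians. Defaults assume Cu K-alpha radiation and the usual
    shape factor K = 0.9; instrumental broadening is neglected here.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical anatase (101) reflection near 2-theta = 25.3 deg with
# 0.5 deg of size broadening:
size_nm = scherrer_size(fwhm_deg=0.5, two_theta_deg=25.3)
```

Narrowing of the peaks with longer residence time would then translate directly into the growing crystallite sizes the abstract reports.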
A contemporary decennial global Landsat sample of changing agricultural field sizes
White, Emma; Roy, David
2014-05-01
Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by
Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc
2015-07-07
The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient
Evaluating the performance of species richness estimators: sensitivity to sample grain size
DEFF Research Database (Denmark)
Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara
2006-01-01
and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874) and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar ... Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals, to assess the effect of using ...
Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence
International Nuclear Information System (INIS)
Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.
2013-01-01
Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is time-consuming and the chemical analysis of a large number of samples is costly. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique capable of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for the quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. Voltages of 15 kV and 50 kV were used for chemical elements of atomic number lower than 22 and for the others, respectively. Under the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for the subsequent determination of chemical elements in leaves. (author)
Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence
Energy Technology Data Exchange (ETDEWEB)
Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2013-07-01
Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is time-consuming and the chemical analysis of a large number of samples is costly. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique capable of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for the quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. Voltages of 15 kV and 50 kV were used for chemical elements of atomic number lower than 22 and for the others, respectively. Under the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for the subsequent determination of chemical elements in leaves. (author)
Czech Academy of Sciences Publication Activity Database
Duintjer Tebbens, Jurjen; Schlesinger, P.
2007-01-01
Roč. 52, č. 1 (2007), s. 423-437 ISSN 0167-9473 R&D Projects: GA AV ČR 1ET400300415; GA MŠk LC536 Institutional research plan: CEZ:AV0Z10300504 Keywords : linear discriminant analysis * numerical aspects of FLDA * small sample size problem * dimension reduction * sparsity Subject RIV: BA - General Mathematics Impact factor: 1.029, year: 2007
Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.
2012-01-01
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the populatio...
Point of impact: the effect of size and speed on puncture mechanics.
Anderson, P S L; LaCosse, J; Pankow, M
2016-06-06
The use of high-speed puncture mechanics for prey capture has been documented across a wide range of organisms, including vertebrates, arthropods, molluscs and cnidarians. These examples span four phyla and seven orders of magnitude difference in size. The commonality of these puncture systems offers an opportunity to explore how organisms at different scales and with different materials, morphologies and kinematics perform the same basic function. However, there is currently no framework for combining kinematic performance with cutting mechanics in biological puncture systems. Our aim here is to establish this framework by examining the effects of size and velocity in a series of controlled ballistic puncture experiments. Arrows of identical shape but varying in mass and speed were shot into cubes of ballistic gelatine. Results from high-speed videography show that projectile velocity can alter how the target gel responds to cutting. Mixed models comparing kinematic variables and puncture patterns indicate that the kinetic energy of a projectile is a better predictor of penetration than either momentum or velocity. These results form a foundation for studying the effects of impact on biological puncture, opening the door for future work to explore the influence of morphology and material organization on high-speed cutting dynamics.
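The distinction the mixed models draw between kinetic energy, momentum and velocity as predictors of penetration can be made concrete with two hypothetical projectiles; the masses and speeds below are invented for illustration, not taken from the experiments.

```python
# Two hypothetical projectiles (mass in kg, speed in m/s) tuned to carry
# equal kinetic energy but different momentum, so the two candidate
# predictors of penetration depth are decoupled.
m1, v1 = 0.020, 60.0
m2, v2 = 0.045, 40.0

def kinetic_energy(m, v):
    return 0.5 * m * v ** 2   # joules

def momentum(m, v):
    return m * v              # kg*m/s

# Both projectiles carry 36 J, yet their momenta differ (1.2 vs 1.8 kg*m/s):
# only an experiment of the kind described above can say which quantity
# better tracks penetration.
```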
Directory of Open Access Journals (Sweden)
Shengyu eJiang
2016-02-01
Full Text Available Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as by bias and root-mean-square error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, for which 1,000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1,000 did not increase the accuracy of MGRM parameter estimates.
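The recovery criteria named above (correlation between true and estimated parameters, bias, and root-mean-square error) can be computed in a few lines; the toy parameter vectors are illustrative, not simulation output from the study.

```python
import numpy as np

def recovery_stats(true, est):
    """Recovery criteria used in simulation studies: correlation between
    true and estimated parameters, mean bias, and RMSE."""
    true, est = np.asarray(true, float), np.asarray(est, float)
    err = est - true
    return {
        "corr": float(np.corrcoef(true, est)[0, 1]),
        "bias": float(err.mean()),
        "rmse": float(np.sqrt((err ** 2).mean())),
    }

# Toy example: estimates uniformly shifted by +0.1 from the generating values
stats = recovery_stats([1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 3.1, 4.1])
```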
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
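The per-parameter ESS that the ">200" rule of thumb refers to is computed from the chain's autocorrelations. A minimal sketch for a scalar trace is below; the AR(1) test chain, the initial-positive-sequence truncation and the `max_lag` cap are illustrative choices, not the authors' topology-aware method.

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelations of x at lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = np.dot(x, x) / n
    return np.array([np.dot(x[:n - k], x[k:]) / (n * var)
                     for k in range(1, max_lag + 1)])

def effective_sample_size(x, max_lag=200):
    """ESS = N / (1 + 2 * sum of autocorrelations), truncating the sum at
    the first non-positive sample autocorrelation."""
    rho = autocorr(x, max_lag)
    cut = int(np.argmax(rho <= 0)) if np.any(rho <= 0) else len(rho)
    return len(x) / (1.0 + 2.0 * rho[:cut].sum())

# A strongly autocorrelated AR(1) chain mimics raw MCMC output: its ESS
# is far below the nominal number of draws.
rng = np.random.default_rng(0)
n, phi = 10_000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
ess = effective_sample_size(x)   # substantially below 10,000
```

For phi = 0.9 the theoretical ESS is N(1 - phi)/(1 + phi), roughly 5% of the draws, which is why a long chain can still fail the rule of thumb.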
Sample size and power determination when limited preliminary information is available
Directory of Open Access Journals (Sweden)
Christine E. McLaren
2017-04-01
Full Text Available Abstract Background We describe a novel strategy for power and sample size determination developed for studies utilizing investigational technologies with limited available preliminary data, specifically of imaging biomarkers. We evaluated diffuse optical spectroscopic imaging (DOSI), an experimental noninvasive imaging technique that may be capable of assessing changes in mammographic density. Because there is significant evidence that tamoxifen treatment is more effective at reducing breast cancer risk when accompanied by a reduction of breast density, we designed a study to assess the changes from baseline in DOSI imaging biomarkers that may reflect fluctuations in breast density in premenopausal women receiving tamoxifen. Method While preliminary data demonstrate that DOSI is sensitive to mammographic density in women about to receive neoadjuvant chemotherapy for breast cancer, there is no information on DOSI in tamoxifen treatment. Since the relationship between magnetic resonance imaging (MRI) and DOSI has been established in previous studies, we developed a statistical simulation approach utilizing information from an investigation of MRI assessment of breast density in 16 women before and after treatment with tamoxifen to estimate the changes in DOSI biomarkers due to tamoxifen. Results Three sets of 10,000 pairs of MRI breast density data with correlation coefficients of 0.5, 0.8 and 0.9 were simulated and used to generate a corresponding 5,000,000 pairs of DOSI values representing water, ctHHB, and lipid. Minimum sample sizes needed per group for specified clinically relevant effect sizes were obtained. Conclusion The simulation techniques we describe can be applied in studies of other experimental technologies to obtain the important preliminary data to inform the power and sample size calculations.
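The simulate-then-size strategy described above can be sketched generically: draw correlated pre/post biomarker pairs at a given correlation, estimate the power of a paired comparison by Monte Carlo, and scan n upward until a target power is reached. The standardized effect size and the paired t-test below are illustrative stand-ins for the MRI/DOSI specifics.

```python
import numpy as np
from scipy import stats

def paired_power(effect, rho, n, sims=1000, alpha=0.05, seed=1):
    """Monte Carlo power of a paired t-test for a standardized mean change
    `effect`, with pre/post correlation `rho` and n subjects."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    hits = 0
    for _ in range(sims):
        pre, post = rng.multivariate_normal([0.0, effect], cov, size=n).T
        if stats.ttest_rel(post, pre).pvalue < alpha:
            hits += 1
    return hits / sims

def minimum_n(effect, rho, target=0.8, n_max=200):
    """Smallest n reaching the target power (None if n_max is too small)."""
    for n in range(5, n_max):
        if paired_power(effect, rho, n) >= target:
            return n
    return None
```

Higher pre/post correlation shrinks the variance of the change score, so the required n drops as rho rises, which is why the study examined three correlation scenarios.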
Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A
2017-06-30
Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests too conservative and less powerful and thus preferred parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in a nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also performed better than the other alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
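A minimal sketch of a pooled-resampling bootstrap test for two means, in the spirit described above: both bootstrap samples are drawn from the pooled data, which enforces the null hypothesis of a common distribution. The Welch-type statistic and the add-one p-value estimate are common choices, not necessarily the authors' exact implementation.

```python
import numpy as np

def pooled_bootstrap_t(x, y, n_boot=2000, seed=0):
    """Two-sample bootstrap t-test with pooled resampling: resample both
    groups from the pooled data so the null of a common distribution holds."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)

    def t_stat(a, b):   # Welch-type statistic, no equal-variance assumption
        return (a.mean() - b.mean()) / np.sqrt(
            a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

    t_obs = t_stat(x, y)
    pooled = np.concatenate([x, y])
    extreme = 0
    for _ in range(n_boot):
        xs = rng.choice(pooled, size=len(x), replace=True)
        ys = rng.choice(pooled, size=len(y), replace=True)
        if abs(t_stat(xs, ys)) >= abs(t_obs):
            extreme += 1
    return (extreme + 1) / (n_boot + 1)   # add-one p-value estimate
```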
The Effect of Sterilization on Size and Shape of Fat Globules in Model Processed Cheese Samples
Directory of Open Access Journals (Sweden)
B. Tremlová
2006-01-01
Full Text Available Model cheese samples from 4 independent productions were heat sterilized (117 °C, 20 minutes) after the melting process and packing, with the aim of prolonging their durability. The objective of the study was to assess changes in the size and shape of fat globules due to heat sterilization using image analysis methods. The study included the selection of suitable methods for preparing mounts, taking microphotographs and making overlays for automatic processing of the photographs by the image analyser, ascertaining parameters to determine the size and shape of fat globules, and statistical analysis of the results obtained. The results of the experiment suggest that changes in the shape of fat globules due to heat sterilization are not unequivocal. We found that the size of fat globules was significantly increased (p < 0.01) due to heat sterilization (117 °C, 20 min), and the shares of small fat globules (up to 500 μm2, or 100 μm2) in the samples of heat-sterilized processed cheese were decreased. The results imply that the image analysis method is very useful when assessing the effect of a technological process on the quality of processed cheese.
Methodology for sample preparation and size measurement of commercial ZnO nanoparticles
Directory of Open Access Journals (Sweden)
Pei-Jia Lu
2018-04-01
Full Text Available This study discusses strategies for sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscopy (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered form need to first be broken down to nanosized particles through an appropriate process to generate a nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential measurements were utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information and save considerable time and effort in the selection of a suitable substrate on which particles of different properties can be attracted and kept without further aggregation. This simple, low-cost methodology can be generally applied to size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology
International Nuclear Information System (INIS)
Sampson, T.E.
1991-01-01
Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This report presents calculations of the bias resulting from not using this new formalism. The calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts. This paper describes this attenuation-correction-factor formalism in more detail and illustrates the magnitude of the biases that may arise if it is not used. 5 refs., 7 figs
Hemery, Lenaïg G; Politano, Kristin K; Henkel, Sarah K
2017-08-01
With the increasing cascading effects of climate change on the marine environment, as well as pollution and anthropogenic utilization of the seafloor, there is growing interest in tracking changes to benthic communities. Macrofaunal surveys are traditionally conducted as part of pre-incident environmental assessment studies and post-incident monitoring studies when there is a potential impact to the seafloor. These surveys usually characterize the structure and/or spatiotemporal distribution of macrofaunal assemblages collected with sediment cores; however, many different sampling protocols have been used. An assessment of the comparability of past and current survey methods was needed to facilitate future surveys and comparisons. This was the aim of the present study, conducted off the Oregon coast in waters 25-35 m deep. Our results show that the use of a sieve with a 1.0-mm mesh size gives results for community structure comparable to those obtained with a 0.5-mm mesh size, which allows reliable comparisons of recent and past spatiotemporal surveys of macroinfauna. In addition to our primary objective of comparing methods, we also found interacting effects of season and depth of collection. Seasonal differences (summer and fall) were seen in infaunal assemblages in the wave-induced sediment motion zone but not deeper. Thus, studies where wave-induced sediment motion can structure the benthic communities, especially during the winter months, should consider this effect when making temporal comparisons. In addition, some macrofauna taxa like polychaetes and amphipods show high interannual variability, so spatiotemporal studies should make sure to cover several years before drawing any conclusions.
Ulusoy, Halil İbrahim; Gürkan, Ramazan; Ulusoy, Songül
2012-01-15
A new micelle-mediated separation and preconcentration method was developed for ultra-trace quantities of mercury ions prior to spectrophotometric determination. The method is based on cloud point extraction (CPE) of Hg(II) ions with polyethylene glycol tert-octylphenyl ether (Triton X-114) in the presence of chelating agents such as 1-(2-pyridylazo)-2-naphthol (PAN) and 4-(2-thiazolylazo)resorcinol (TAR). Hg(II) ions react with both PAN and TAR in a surfactant solution, yielding a hydrophobic complex at pH 9.0 and 8.0, respectively. The phase separation was accomplished by centrifugation for 5 min at 3500 rpm. The calibration graphs obtained from the Hg(II)-PAN and Hg(II)-TAR complexes were linear in the concentration ranges of 10-1000 μg L(-1) and 50-2500 μg L(-1), with detection limits of 1.65 and 14.5 μg L(-1), respectively. The relative standard deviations (RSDs) were 1.85% and 2.35% in determinations of 25 and 250 μg L(-1) Hg(II), respectively. The interference effects of several ions were studied, and ions commonly present in water samples were found to have no significant effect on the determination of Hg(II). The developed methods were successfully applied to determine mercury concentrations in environmental water samples. The accuracy and validity of the proposed methods were tested by means of five replicate analyses of certified standard materials such as QC Metal LL3 (VWR, drinking water) and IAEA W-4 (NIST, simulated fresh water). Copyright © 2011 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Mathew W. Alldredge
2007-12-01
Full Text Available The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history over the subintervals as each bird sings. The method can be viewed as generating data equivalent to closed capture-recapture information. The method differs from the distance and multiple-observer methods in that it does not require all the birds to sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at high and low homogeneous rates per interval with those singing at high and low heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused by both the very low detection probabilities of all distant
McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S
2016-10-01
The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size is re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and of control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020 HR (95% confidence interval): 0.92 (0.72, 1.2), p = 0.48; n = 2500 HR (95% confidence interval): 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not
Sampling and chemical analysis by TXRF of size-fractionated ambient aerosols and emissions
International Nuclear Information System (INIS)
John, A.C.; Kuhlbusch, T.A.J.; Fissan, H.; Schmidt, K.-G.; Schmidt, F.; Pfeffer, H.-U.; Gladtke, D.
2000-01-01
Results of recent epidemiological studies led to new European air quality standards which require the monitoring of particles with aerodynamic diameters ≤ 10 μm (PM 10) and ≤ 2.5 μm (PM 2.5) instead of TSP (total suspended particulate matter). As these ambient air limit values will most likely be exceeded at several locations in Europe, so-called 'action plans' have to be set up to reduce particle concentrations, which requires information about the sources and processes of PMx aerosols. For chemical characterization of the aerosols, different samplers were used and total reflection X-ray fluorescence analysis (TXRF) was applied alongside other methods (elemental and organic carbon analysis, ion chromatography, atomic absorption spectrometry). For TXRF analysis, a specially designed sampling unit was built in which the particle size classes 10-2.5 μm and 2.5-1.0 μm were impacted directly onto TXRF sample carriers. An electrostatic precipitator (ESP) was used as a back-up filter to collect particles <1 μm directly on a TXRF sample carrier. The sampling unit was calibrated in the laboratory and then used for field measurements to determine the elemental composition of the mentioned particle size fractions. One of the field campaigns was carried out at a measurement site in Duesseldorf, Germany, in November 1999. As the composition of the ambient aerosols may have been influenced by a large construction site directly in the vicinity of the station during the field campaign, not only the aerosol particles but also construction material was sampled and analyzed by TXRF. As air quality is affected by natural and anthropogenic sources, the emissions of particles ≤ 10 μm and ≤ 2.5 μm, respectively, have to be determined to estimate their contributions to the so-called coarse and fine particle modes of ambient air. Therefore, an in-stack particle sampling system was developed according to the new ambient air quality standards. This PM 10/PM 2.5 cascade impactor was
Sample size effect on the determination of the irreversibility line of high-Tc superconductors
International Nuclear Information System (INIS)
Li, Q.; Suenaga, M.; Li, Q.; Freltoft, T.
1994-01-01
The irreversibility lines of a high-J_c superconducting Bi2Sr2Ca2Cu3Ox/Ag tape were systematically measured upon a sequence of subdivisions of the sample. The irreversibility field H_r(T) (parallel to the c axis) was found to change approximately as L^0.13, where L is the effective dimension of the superconducting tape. Furthermore, it was found that the irreversibility line for a grain-aligned Bi2Sr2Ca2Cu3Ox specimen can be approximately reproduced by the extrapolation of this relation down to a grain size of a few tens of micrometers. The observed size effect could significantly obscure the real physical meaning of the irreversibility lines. In addition, this finding surprisingly indicated that the Bi2Sr2Ca2Cu3Ox/Ag tape and the grain-aligned specimen may have similar flux-line pinning strength
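The reported scaling of the irreversibility field with effective sample dimension, H_r proportional to L^0.13, can be checked numerically by recovering the exponent from a log-log fit; the sample dimensions and prefactor below are invented for illustration.

```python
import numpy as np

# Hypothetical effective sample dimensions L (mm) and irreversibility
# fields following the reported H_r ∝ L^0.13 power law (prefactor invented)
L = np.array([0.05, 0.2, 1.0, 5.0, 20.0])
Hr = 2.0 * L ** 0.13

# The exponent is the slope of the log-log regression line
exponent = np.polyfit(np.log(L), np.log(Hr), 1)[0]
```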
Development of a Cloud-Point Extraction Method for Cobalt Determination in Natural Water Samples
Directory of Open Access Journals (Sweden)
Mohammad Reza Jamali
2013-01-01
Full Text Available A new, simple, and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of cobalt. The cobalt ions in the initial aqueous solution were complexed with 4-benzylpiperidinedithiocarbamate, and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the cobalt content was measured by flame atomic absorption spectrometry. The main factors affecting the CPE procedure, such as pH, concentration of ligand, amount of Triton X-114, equilibrium temperature, and incubation time, were investigated and optimized. Under the optimal conditions, the limit of detection (LOD) for cobalt was 0.5 μg L-1, with a sensitivity enhancement factor (EF) of 67. The calibration curve was linear in the range of 2-150 μg L-1, and the relative standard deviation was 3.2% (c = 100 μg L-1; n = 10). The proposed method was applied to the determination of trace cobalt in real water samples with satisfactory analytical results.
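A calibration line of the kind reported above, together with a 3-sigma limit of detection, can be sketched as follows; the absorbance values are invented, and the residual standard deviation stands in for replicate blank noise.

```python
import numpy as np

# Hypothetical calibration data: absorbance vs. cobalt concentration (ug/L),
# invented to mimic a linear 2-150 ug/L working range.
conc = np.array([2.0, 10.0, 25.0, 50.0, 100.0, 150.0])
absorbance = np.array([0.004, 0.021, 0.052, 0.103, 0.208, 0.309])

slope, intercept = np.polyfit(conc, absorbance, 1)
residual_sd = np.std(absorbance - (slope * conc + intercept), ddof=2)

lod = 3 * residual_sd / slope            # 3-sigma limit of detection, ug/L
r = np.corrcoef(conc, absorbance)[0, 1]  # linearity check
```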
Influence of secular trends and sample size on reference equations for lung function tests.
Quanjer, P H; Stocks, J; Cole, T J; Hall, G L; Stanojevic, S
2011-03-01
The aim of our study was to determine the contribution of secular trends and sample size to lung function reference equations, and to establish the number of local subjects required to validate published reference values. 30 spirometry datasets collected between 1978 and 2009 provided data on healthy, white subjects: 19,291 males and 23,741 females aged 2.5-95 yrs. The best fits for forced expiratory volume in 1 s (FEV(1)), forced vital capacity (FVC) and FEV(1)/FVC as functions of age, height and sex were derived from the entire dataset using GAMLSS. Mean z-scores were calculated for individual datasets to determine inter-centre differences. This was repeated by subdividing one large dataset (3,683 males and 4,759 females) into 36 smaller subsets (comprising 18-227 individuals) to preclude differences due to population/technique. No secular trends were observed, and differences between datasets comprising >1,000 subjects were small (maximum difference in FEV(1) and FVC from the overall mean: 0.30 to -0.22 z-scores). Subdividing one large dataset into smaller subsets reproduced the above sample-size-related differences and revealed that at least 150 males and 150 females would be necessary to validate reference values and avoid spurious differences due to sampling error. Use of local controls to validate reference equations will rarely be practical due to the numbers required. Reference equations derived from large or collated datasets are recommended.
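The sampling-error argument above, that mean z-scores of small local samples scatter roughly as 1/sqrt(n) and so produce spurious inter-centre differences, can be reproduced with a quick simulation; the population size and subset counts are arbitrary choices, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
# z-scores of a large healthy reference population (size is arbitrary)
population = rng.normal(0.0, 1.0, size=20_000)

def spread_of_subset_means(n, n_subsets=500):
    """SD of mean z-scores across random local samples of size n."""
    means = [rng.choice(population, size=n, replace=False).mean()
             for _ in range(n_subsets)]
    return float(np.std(means))

s20, s150 = spread_of_subset_means(20), spread_of_subset_means(150)
# The spread shrinks roughly as 1/sqrt(n): tiny local validation samples
# produce large spurious "inter-centre" differences, larger ones do not.
```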
Walton, Emily; Casey, Christy; Mitsch, Jurgen; Vázquez-Diosdado, Jorge A.; Yan, Juan; Dottorini, Tania; Ellis, Keith A.; Winterlich, Anthony
2018-01-01
Automated behavioural classification and identification through sensors has the potential to improve the health and welfare of animals. The position of a sensor, its sampling frequency, and the window size of the segmented signal data have a major impact on classification accuracy in activity recognition and on the energy needs of the sensor; yet there are no studies in precision livestock farming that have evaluated the effect of all these factors simultaneously. The aim of this study was to evaluate the effects of position (ear and collar), sampling frequency (8, 16 and 32 Hz) of a triaxial accelerometer and gyroscope sensor, and window size (3, 5 and 7 s) on the classification of important behaviours in sheep such as lying, standing and walking. Behaviours were classified using a random forest approach with 44 feature characteristics. The best performance for walking, standing and lying classification in sheep (accuracy 95%, F-score 91%-97%) was obtained using combinations of 32 Hz, 7 s and 32 Hz, 5 s for both ear and collar sensors, although results obtained with 16 Hz and a 7 s window were comparable, with accuracy of 91%-93% and F-score of 88%-95%. Energy efficiency was best at a 7 s window. This suggests that sampling at 16 Hz with a 7 s window will offer benefits in a real-time behavioural monitoring system for sheep due to reduced energy needs. PMID:29515862
Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size
Directory of Open Access Journals (Sweden)
Zhihua Wang
2014-01-01
Full Text Available Reasonable prediction makes significant practical sense for stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent derived prediction result while deleting the first value of the formerly used sample data set. This rolling mechanism is efficient: it improves forecasting accuracy, applies in the case of limited and unstable data situations, and requires little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
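The rolling mechanism lends itself to a compact sketch. The following is an illustrative implementation, not the authors' code: it fits an AR(p) equation by ordinary least squares on the current window (the paper constructs its own newly developed AR equation at each step, which may differ), predicts one step ahead, then rolls the window forward by appending the prediction and dropping the oldest value.

```python
import numpy as np

def ar_fit_predict(window, p=2):
    """Fit an AR(p) model with intercept to the window by ordinary
    least squares and return the one-step-ahead prediction."""
    n = len(window)
    # Design matrix: one column per lag (most recent lag first)
    X = np.column_stack([window[p - k - 1:n - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])  # intercept term
    y = window[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    last_lags = window[-1:-p - 1:-1]  # most recent p values, newest first
    return coef[0] + coef[1:] @ last_lags

def rolling_ar_forecast(series, horizon, p=2):
    """Rolling mechanism: after each 1-step prediction, add the newest
    derived prediction to the window and delete its first value, so the
    window size stays fixed while the model is refitted each step."""
    window = list(series)
    preds = []
    for _ in range(horizon):
        yhat = float(ar_fit_predict(np.asarray(window, dtype=float), p))
        preds.append(yhat)
        window.append(yhat)  # roll on: append prediction...
        window.pop(0)        # ...and drop the oldest observation
    return preds
```

For a series that an AR(2) equation fits exactly (e.g. a linear trend), the rolling forecast continues the pattern step by step.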
Sample-size calculations for multi-group comparison in population pharmacokinetic experiments.
Ogungbenro, Kayode; Aarons, Leon
2010-01-01
This paper describes an approach for calculating sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparison detecting the difference in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and nonlinear population pharmacokinetic models that involve only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model and the hypothesis testing involving model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating sample size for hypothesis testing in population pharmacokinetic models. The approach can also handle different design problems such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. The results obtained following application to a one compartment intravenous bolus dose model that involved three different hypotheses under different scenarios showed good agreement between the power obtained from NONMEM simulations and nominal power. Copyright © 2009 John Wiley & Sons, Ltd.
Sample Size Estimation for Detection of Splicing Events in Transcriptome Sequencing Data.
Kaisers, Wolfgang; Schwender, Holger; Schaal, Heiner
2017-09-05
Merging data from multiple samples is required to detect low-expressed transcripts or splicing events that might be present in only a subset of samples. However, the exact number of replicates required to enable the detection of such rare events often remains a mystery but can be approached through probability theory. Here, we describe a probabilistic model relating the number of observed events in a batch of samples to observation probabilities. Therein, samples appear as a heterogeneous collection of events, each observed with some probability. The model is evaluated in a batch of 54 transcriptomes of human dermal fibroblast samples. The majority of putative splice-sites (alignment gap-sites) are detected in (almost) all samples or only sporadically, resulting in a U-shaped pattern for observation probabilities. The probabilistic model systematically underestimates event numbers due to a bias resulting from finite sampling. However, using an additional assumption, the probabilistic model can predict observed event numbers to within a small deviation (mean 7,122 gap-sites in TopHat alignments and 86,215 in STAR alignments). We conclude that the probabilistic model provides an adequate description for the observation of gap-sites in transcriptome data. Thus, the calculation of required sample sizes can be done by applying a simple binomial model to sporadically observed random events. Due to the large number of uniquely observed putative splice-sites and the known stochastic noise in the splicing machinery, it appears advisable to include the observation of rare splicing events in analysis objectives. Therefore, it is beneficial to take scores for the validation of gap-sites into account.
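The concluding remark, that required sample sizes follow from a simple binomial model for sporadically observed events, can be made concrete. A minimal sketch, under the assumption that an event is observed independently in each sample with probability p_obs and that "detection" means seeing it in at least one sample:

```python
import math

def samples_needed(p_obs, confidence=0.95):
    """Smallest n such that an event observed with per-sample
    probability p_obs appears in at least one of n samples with the
    given confidence, i.e. 1 - (1 - p_obs)**n >= confidence."""
    if not 0 < p_obs < 1:
        raise ValueError("p_obs must be in (0, 1)")
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_obs))

def detection_probability(p_obs, n):
    """Probability that the event is seen in at least one of n samples."""
    return 1 - (1 - p_obs) ** n
```

For example, an event seen in 5% of samples requires 59 replicates to be detected at least once with 95% confidence.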
Sample size for estimating average trunk diameter and plant height in eucalyptus hybrids
Alberto Cargnelutti Filho; Rafael Beltrame; Dilson Antônio Bisognin; Marília Lazarotto; Clovis Roberto Haselein; Darci Alberto Gatto; Gleison Augusto dos Santos
2016-01-01
ABSTRACT: In eucalyptus crops, it is important to determine the number of plants that need to be evaluated for a reliable inference of growth. The aim of this study was to determine the sample size needed to estimate average trunk diameter at breast height and plant height of inter-specific eucalyptus hybrids. In 6,694 plants of twelve inter-specific hybrids, trunk diameter at breast height at three (DBH3) and seven years (DBH7) of age and tree height at seven years (H7) were evaluated. The ...
Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples
Energy Technology Data Exchange (ETDEWEB)
Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)
2010-01-15
In this contribution, superconducting specimens of YBa₂Cu₃O₇₋δ were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.
An Updated Survey on Statistical Thresholding and Sample Size of fMRI Studies
Directory of Open Access Journals (Sweden)
Andy W. K. Yeung
2018-01-01
Full Text Available Background: Since the early 2010s, the neuroimaging field has paid more attention to the issue of false positives. Several journals have issued guidelines regarding statistical thresholds. Three papers have reported the statistical analysis of the thresholds used in the fMRI literature, but they were published at least 3 years ago and surveyed papers published during 2007–2012. This study revisited the topic to evaluate changes in the field. Methods: The PubMed database was searched to identify task-based (not resting-state) fMRI papers published in 2017 and record their sample sizes, inferential methods (e.g., voxelwise or clusterwise), theoretical methods (e.g., parametric or non-parametric), significance level, cluster-defining primary threshold (CDT), volume of analysis (whole brain or region of interest) and software used. Results: The majority (95.6%) of the 388 analyzed articles reported statistics corrected for multiple comparisons. A large proportion (69.6%) of the 388 articles reported main results by clusterwise inference. The analyzed articles mostly used the software Statistical Parametric Mapping (SPM), Analysis of Functional NeuroImages (AFNI), or FMRIB Software Library (FSL) to conduct statistical analysis. 70.9%, 37.6%, and 23.1% of SPM, AFNI, and FSL studies, respectively, used a CDT of p ≤ 0.001. The statistical sample size across the articles ranged between 7 and 1,299 with a median of 33. Sample size did not significantly correlate with the level of statistical threshold. Conclusion: There were still around 53% (142/270) of studies using clusterwise inference that chose a more liberal CDT than p = 0.001 (n = 121) or did not report their CDT (n = 21), down from around 61% reported by Woo et al. (2014). For FSL studies, it seemed that CDT practice had not improved since the survey by Woo et al. (2014). A few studies chose an unconventional CDT such as p = 0.0125 or 0.004. Such practice might create an impression that the
International Nuclear Information System (INIS)
Makino, Kenichi; Masuda, Yasuhiko; Gotoh, Satoshi
1998-01-01
The experimental subjects were 189 patients with cerebrovascular disorders. 123I-IMP, 222 MBq, was administered by intravenous infusion. Continuous arterial blood sampling was carried out for 5 minutes, and arterial blood was also sampled once at 5 minutes after 123I-IMP administration. Then the whole blood count of the one-point arterial sampling was compared with the octanol-extracted count of the continuous arterial sampling. A positive correlation was found between the two values. The ratio of the continuous sampling octanol-extracted count (OC) to the one-point sampling whole blood count (TC5) was compared with the whole brain count ratio (5:29 ratio, Cn) using 1-minute planar SPECT images centering on 5 and 29 minutes after 123I-IMP administration. Correlation was found between the two values. The following relationship was derived from the correlation equation: OC/TC5 = 0.390969 × Cn − 0.08924. Based on this correlation equation, we calculated the theoretical continuous arterial sampling octanol-extracted count (COC): COC = TC5 × (0.390969 × Cn − 0.08924). There was good correlation between the value calculated with this equation and the actually measured value. The coefficient improved from the r=0.87 obtained before correction to r=0.94 when using the 5:29 ratio. For 23 of these 189 cases, another one-point arterial sampling was carried out at 6, 7, 8, 9 and 10 minutes after the administration of 123I-IMP. The correlation coefficient was also improved for these other sampling points when this correction method using the 5:29 ratio was applied. It was concluded that it is possible to obtain highly accurate input functions, i.e., calculated continuous arterial sampling octanol-extracted counts, from one-point arterial sampling whole blood counts by performing correction using the 5:29 ratio. (K.H.)
Strategies for informed sample size reduction in adaptive controlled clinical trials
Arandjelović, Ognjen
2017-12-01
Clinical trial adaptation refers to any adjustment of the trial protocol after the onset of the trial. The main goal is to make the process of introducing new medical interventions to patients more efficient. The principal challenge, which remains an open research problem, is how adaptation should be performed so as to minimize the chance of distorting the outcome of the trial. In this paper, we propose a novel method for achieving this. Unlike most previously published work, our approach focuses on trial adaptation by sample size adjustment, i.e. by reducing the number of trial participants in a statistically informed manner. Our key idea is to select the sample subset for removal in a manner which minimizes the associated loss of information. We formalize this notion and describe three algorithms which approach the problem in different ways, using (i) repeated random draws, (ii) a genetic algorithm, and (iii) what we term pair-wise sample compatibilities. Experiments on simulated data demonstrate the effectiveness of all three approaches, with consistently superior performance exhibited by the pair-wise sample compatibilities-based method.
Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden
Energy Technology Data Exchange (ETDEWEB)
Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)
2008-12-15
The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.
Directory of Open Access Journals (Sweden)
Ya Li
2012-12-01
Full Text Available In order to prepare a high capacity packing material for solid-phase extraction with specific recognition ability for trace ractopamine in biological samples, uniformly-sized molecularly imprinted polymers (MIPs) were prepared by a multi-step swelling and polymerization method using methacrylic acid as a functional monomer, ethylene glycol dimethacrylate as a cross-linker, and toluene as a porogen, respectively. Scanning electron microscopy and specific surface area measurements were employed to characterize the MIPs. Ultraviolet spectroscopy, Fourier transform infrared spectroscopy, Scatchard analysis and kinetic study were performed to interpret the specific recognition ability and the binding process of the MIPs. The results showed that, compared with other reports, the MIPs synthesized in this study showed high adsorption capacity besides specific recognition ability. The adsorption capacity of the MIPs was 0.063 mmol/g at 1 mmol/L ractopamine concentration, with a distribution coefficient of 1.70. The resulting MIPs could be used as solid-phase extraction materials for separation and enrichment of trace ractopamine in biological samples. Keywords: Ractopamine, Uniformly-sized molecularly imprinted polymers, Solid-phase extraction, Multi-step swelling and polymerization, Separation and enrichment
Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.
2012-01-01
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or employees within organizations). In this article we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes, because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. PMID:22309956
Dziak, John J; Nahum-Shani, Inbal; Collins, Linda M
2012-06-01
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements-such as the number of clusters, the number of lower-level units, and the intraclass correlation-affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. (c) 2012 APA, all rights reserved
Friede, Tim; Kieser, Meinhard
2013-01-01
The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information, with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, as does, in turn, the sample size resulting from the related re-estimation procedure. This higher variability can lead to lower power, as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
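The recommended one-sample estimator is simple enough to sketch. The following is an illustrative outline, not the authors' procedure: it computes the blinded (pooled, label-free) variance from internal pilot data and plugs it into the standard normal-approximation sample size formula for a two-arm comparison of means; the effect size delta and the formula itself are textbook stand-ins, not taken from the paper.

```python
import math
from statistics import NormalDist

def blinded_variance(pooled):
    """One-sample variance of the pooled data, ignoring arm labels.
    Under a true treatment difference this overstates the within-group
    variance; the abstract discusses this bias."""
    n = len(pooled)
    m = sum(pooled) / n
    return sum((x - m) ** 2 for x in pooled) / (n - 1)

def n_per_arm(sigma2, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for detecting a mean
    difference delta between two arms with common variance sigma2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * sigma2 * (z_a + z_b) ** 2 / delta ** 2)

def reestimate(pooled_pilot, delta):
    """Internal pilot step: re-estimate the per-arm sample size from
    the blinded variance of the pooled pilot data."""
    return n_per_arm(blinded_variance(pooled_pilot), delta)
```

With sigma2 = 1 and delta = 0.5 the formula gives the familiar 63 subjects per arm at 80% power.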
Dong, H.; Zhang, H.; Zuo, Y.; Gao, P.; Ye, G.
2018-01-01
Mercury intrusion porosimetry (MIP) measurements are widely used to determine pore throat size distribution (PSD) curves of porous materials. The pore throat size of porous materials has been used to estimate their compressive strength and air permeability. However, the effect of sample size on
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
Feng, Hao-chuan; Zhang, Wei; Zhu, Yu-liang; Lei, Zhi-yi; Ji, Xiao-mei
2017-06-01
Particle size distributions (PSDs) of bottom sediments in a coastal zone are generally multimodal due to the complexity of the dynamic environment. In this paper, bottom sediments along the deep channel of the Pearl River Estuary (PRE) are used to understand the multimodal PSDs' characteristics and the corresponding depositional environment. The results of curve-fitting analysis indicate that the near-bottom sediments in the deep channel generally have a bimodal distribution with a fine component and a relatively coarse component. The particle size distribution of bimodal sediment samples can be expressed as the sum of two lognormal functions and the parameters for each component can be determined. At each station of the PRE, the fine component makes up less volume of the sediments and is relatively poorly sorted. The relatively coarse component, which is the major component of the sediments, is even more poorly sorted. The interrelations between the dynamics and particle size of the bottom sediment in the deep channel of the PRE have also been investigated by the field measurement and simulated data. The critical shear velocity and the shear velocity are calculated to study the stability of the deep channel. The results indicate that the critical shear velocity has a similar distribution over large part of the deep channel due to the similar particle size distribution of sediments. Based on a comparison between the critical shear velocities derived from sedimentary parameters and the shear velocities obtained by tidal currents, it is likely that the depositional area is mainly distributed in the northern part of the channel, while the southern part of the deep channel has to face higher erosion risk.
Saccenti, Edoardo; Timmerman, Marieke E.
2016-01-01
Sample size determination is a fundamental step in the design of experiments. Methods for sample size determination are abundant for univariate analysis methods, but scarce in the multivariate case. Omics data are multivariate in nature and are commonly investigated using multivariate statistical
Foley, Brett Patrick
2010-01-01
The 3PL model is a flexible and widely used tool in assessment. However, it suffers from limitations due to its need for large sample sizes. This study introduces and evaluates the efficacy of a new sample size augmentation technique called Duplicate, Erase, and Replace (DupER) Augmentation through a simulation study. Data are augmented using…
Energy Technology Data Exchange (ETDEWEB)
Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems
1996-04-01
The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.
Wellek, Stefan
2017-09-10
In clinical trials using lifetime as primary outcome variable, it is more the rule than the exception that even for patients who are failing in the course of the study, survival time does not become known exactly since follow-up takes place according to a restricted schedule with fixed, possibly long intervals between successive visits. In practice, the discreteness of the data obtained under such circumstances is plainly ignored both in data analysis and in sample size planning of survival time studies. As a framework for analyzing the impact of making no difference between continuous and discrete recording of failure times, we use a scenario in which the partially observed times are assigned to the points of the grid of inspection times in the natural way. Evaluating the treatment effect in a two-arm trial fitting into this framework by means of ordinary methods based on Cox's relative risk model is shown to produce biased estimates and/or confidence bounds whose actual coverage exhibits marked discrepancies from the nominal confidence level. Not surprisingly, the amount of these distorting effects turns out to be the larger the coarser the grid of inspection times has been chosen. As a promising approach to correctly analyzing and planning studies generating discretely recorded failure times, we use large-sample likelihood theory for parametric models accommodating the key features of the scenario under consideration. The main result is an easily implementable representation of the expected information and hence of the asymptotic covariance matrix of the maximum likelihood estimators of all parameters contained in such a model. In two real examples of large-scale clinical trials, sample size calculation based on this result is contrasted with the traditional approach, which consists of applying the usual methods for exactly observed failure times. Copyright © 2017 John Wiley & Sons, Ltd.
Effects of sample size on estimation of rainfall extremes at high temperatures
Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik
2017-09-01
High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
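The undersampling effect described above (the largest representable return period of a plotting-position estimate is bounded by the sample size, so small samples underestimate high quantiles) is easy to reproduce by simulation. A minimal sketch with an assumed Exp(1) intensity distribution standing in for rainfall; the paper's remedy, a GPD fitted with L-moments, is not sketched here:

```python
import numpy as np

rng = np.random.default_rng(42)

# True 99.9% quantile of an Exp(1) distribution
true_q = -np.log(1 - 0.999)  # about 6.91

# Empirical 99.9% quantile estimated from many small samples (n = 100):
# order statistics cannot represent return periods beyond ~n events,
# so the estimate is biased low, mirroring the small-sample effect.
est = [np.quantile(rng.exponential(size=100), 0.999) for _ in range(2000)]
print(f"true: {true_q:.2f}  mean small-sample estimate: {np.mean(est):.2f}")
```

The mean small-sample estimate lands well below the true quantile, without any change in the underlying distribution, which is the paper's point about apparent breakdowns of CC scaling at rarely sampled high temperatures.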
Mixed modeling and sample size calculations for identifying housekeeping genes.
Dai, Hongying; Charnigo, Richard; Vyhlidal, Carrie A; Jones, Bridgette L; Bhandary, Madhusudan
2013-08-15
Normalization of gene expression data using internal control genes that have biologically stable expression levels is an important process for analyzing reverse transcription polymerase chain reaction data. We propose a three-way linear mixed-effects model to select optimal housekeeping genes. The mixed-effects model can accommodate multiple continuous and/or categorical variables with sample random effects, gene fixed effects, systematic effects, and gene by systematic effect interactions. We propose using the intraclass correlation coefficient among gene expression levels as the stability measure to select housekeeping genes that have low within-sample variation. Global hypothesis testing is proposed to ensure that selected housekeeping genes are free of systematic effects or gene by systematic effect interactions. A gene combination with the highest lower bound of 95% confidence interval for intraclass correlation coefficient and no significant systematic effects is selected for normalization. Sample size calculation based on the estimation accuracy of the stability measure is offered to help practitioners design experiments to identify housekeeping genes. We compare our methods with geNorm and NormFinder by using three case studies. A free software package written in SAS (Cary, NC, U.S.A.) is available at http://d.web.umkc.edu/daih under software tab. Copyright © 2013 John Wiley & Sons, Ltd.
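The stability measure can be sketched independently of the full three-way mixed model. The following illustrative one-way random-effects ICC treats samples as targets and the candidate housekeeping genes as repeated measurements within each sample (an assumed layout for illustration, not the authors' exact model); gene combinations with a high ICC have low within-sample variation relative to between-sample variation.

```python
import numpy as np

def icc_oneway(x):
    """One-way random-effects ICC(1) for x of shape (n_samples, k_genes).
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB/MSW are the
    between- and within-sample mean squares from one-way ANOVA."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)        # between samples
    msw = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))  # within samples
    return (msb - msw) / (msb + (k - 1) * msw)
```

Genes whose expression levels move in lockstep across samples yield an ICC near 1; unrelated noise yields an ICC near 0.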
Effects of sample size on estimation of rainfall extremes at high temperatures
Directory of Open Access Journals (Sweden)
B. Boessenkool
2017-09-01
Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
Han, Taewon; O'Neal, Dennis L; Ortiz, Carlos A
2007-01-01
The ANSI/HPS-N13.1-1999 standard is based on the concept of obtaining a single-point representative sample from a location where the velocity and contaminant profiles are relatively uniform. It is difficult to predict the level of mixing in an arbitrary stack or duct without experimental data to meet the ANSI/HPS N13.1-1999 requirements. The goal of this study was to develop experimental data for a range of conditions in "S" (S-shaped configuration) duct systems with different mixing elements and "S" systems having one or two mixing elements. Results were presented in terms of the coefficients of variation (COVs) for velocity, tracer gas, and 10-μm aerodynamic diameter (AD) aerosol particle profiles at different downstream locations for each mixing element. Five mixing elements were tested, including a 90° elbow, a commercial static mixer, a Small-Horizontal Generic-Tee-Plenum (SH-GTP), a Small-Vertical Generic-Tee-Plenum (SV-GTP), and a Large-Horizontal Generic-Tee-Plenum (LH-GTP) system. The COVs for velocity, gas concentration, and aerosol particles for the three GTP systems were all determined to be less than 8%. Tests with two different sizes of GTPs were conducted, and the results showed the performance of the GTPs was relatively unaffected by either size or velocity as reflected by the Reynolds number. The pressure coefficients were 0.59, 0.57, and 0.65, respectively, for the SH-GTP, SV-GTP, and LH-GTP. The pressure drop for the GTPs was approximately twice that of the round elbow, but a factor of 5 less than a Type IV Air Blender. The GTP was developed to provide a sampling location less than 4 duct diameters downstream of a mixing element under low-pressure-drop conditions. The object of the developmental effort was to provide a system that could be employed in new stacks; however, the concept of GTPs could also be retrofitted onto existing systems as well. Results from these tests show that the system performance is well within the ANSI
Bayesian assurance and sample size determination in the process validation life-cycle.
Faya, Paul; Seaman, John W; Stamey, James D
2017-01-01
Validation of pharmaceutical manufacturing processes is a regulatory requirement and plays a key role in the assurance of drug quality, safety, and efficacy. The FDA guidance on process validation recommends a life-cycle approach which involves process design, qualification, and verification. The European Medicines Agency makes similar recommendations. The main purpose of process validation is to establish scientific evidence that a process is capable of consistently delivering a quality product. A major challenge faced by manufacturers is the determination of the number of batches to be used for the qualification stage. In this article, we present a Bayesian assurance and sample size determination approach where prior process knowledge and data are used to determine the number of batches. An example is presented in which potency uniformity data is evaluated using a process capability metric. By using the posterior predictive distribution, we simulate qualification data and make a decision on the number of batches required for a desired level of assurance.
A bootstrap test for comparing two variances: simulation of size and power in small samples.
Sun, Jiajing; Chernick, Michael R; LaBudde, Robert A
2011-11-01
An F statistic was proposed by Good and Chernick (1993), in an unpublished paper, to test the hypothesis of equality of variances from two independent groups using the bootstrap; see Hall and Padmanabhan (1997) for a published reference in which Good and Chernick (1993) is discussed. We examine various forms of bootstrap tests that use the F statistic to see whether any or all of them maintain the nominal size of the test over a variety of population distributions when the sample size is small. Chernick and LaBudde (2010) and Schenker (1985) showed that bootstrap confidence intervals for variances tend to provide considerably less coverage than their theoretical asymptotic coverage for skewed population distributions, such as a chi-squared with 10 or fewer degrees of freedom or a log-normal distribution. The same difficulties may also be expected when looking at the ratio of two variances. Since bootstrap tests are related to constructing confidence intervals for the ratio of variances, we simulated the performance of these tests when the population distributions are gamma(2,3), uniform(0,1), Student's t with 10 degrees of freedom (df), normal(0,1), and log-normal(0,1), similar to those used in Chernick and LaBudde (2010). We find, surprisingly, that the size of the tests is valid (reasonably close to the asymptotic value) for all the various bootstrap tests. Hence we also conducted a power comparison, and we find that bootstrap tests appear to have reasonable power for testing equivalence of variances.
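A bootstrap test of this kind can be sketched as follows. This is a generic illustration, not the exact resampling scheme of Good and Chernick (1993): here each sample is standardized to unit variance so that the null hypothesis holds, and the two-sided p-value is computed on the log scale of the F statistic.

```python
import numpy as np

def bootstrap_variance_test(x, y, n_boot=2000, seed=0):
    """Bootstrap test of H0: var(x) == var(y) using the F statistic
    F = s_x^2 / s_y^2 (a sketch of one possible resampling scheme)."""
    rng = np.random.default_rng(seed)
    f_obs = np.var(x, ddof=1) / np.var(y, ddof=1)
    # Impose the null by rescaling each sample to unit variance.
    x0 = (x - x.mean()) / x.std(ddof=1)
    y0 = (y - y.mean()) / y.std(ddof=1)
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(x0, size=len(x), replace=True)
        yb = rng.choice(y0, size=len(y), replace=True)
        fb = np.var(xb, ddof=1) / np.var(yb, ddof=1)
        # Two-sided comparison on the log scale.
        if abs(np.log(fb)) >= abs(np.log(f_obs)):
            count += 1
    return f_obs, count / n_boot
```

Simulating the size of the test, as in the study, means applying this procedure to many samples drawn from a population where H0 is true and checking how often p falls below the nominal level.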
Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua
2017-12-01
In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling; the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe, and mid + toe slope positions. For each sampling strategy, sample sizes were gradually reduced, and each sample size contained 3000 replicates. Under each sample size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and of the correlation coefficients between the SWC and soil/terrain properties were calculated to quantify accuracy and uncertainty. The results showed that the uncertainty of the estimates decreased as the sample size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure that the estimated correlation coefficients had REs and CVs ≤10%. Among all sampling strategies, reducing the number of sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure estimated correlation coefficients with REs and CVs ≤10%. This suggests that when designing the SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.
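The resampling scheme for the hillslope mean can be sketched generically as below. This is an illustration under simplifying assumptions (global random sampling of a single SWC vector); the original study also evaluated correlation coefficients and stratified slope-position designs.

```python
import numpy as np

def subsample_uncertainty(values, sample_size, n_rep=3000, seed=0):
    """Relative error (RE, %) and coefficient of variation (CV, %) of the
    estimated mean when only `sample_size` of the monitored sites are
    used, repeated `n_rep` times (a sketch of the replicate scheme)."""
    rng = np.random.default_rng(seed)
    true_mean = values.mean()
    estimates = np.array([
        rng.choice(values, size=sample_size, replace=False).mean()
        for _ in range(n_rep)
    ])
    re = np.abs(estimates - true_mean).mean() / true_mean * 100.0
    cv = estimates.std(ddof=1) / estimates.mean() * 100.0
    return re, cv
```

Running this over a range of sample sizes reproduces the qualitative finding above: RE and CV shrink as the sample size grows, so one can search for the smallest size meeting an RE and CV ≤10% target.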
Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M
2013-02-01
Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI.
Trefz, Phillip; Rösner, Lisa; Hein, Dietmar; Schubert, Jochen K; Miekisch, Wolfram
2013-04-01
Needle trap devices (NTDs) have shown many advantages, such as improved detection limits, reduced sampling time and volume, improved stability, and better reproducibility, compared with other techniques used in breath analysis such as solid-phase extraction and solid-phase micro-extraction. Effects of sampling flow (2-30 ml/min) and volume (10-100 ml) were investigated in dry gas standards containing hydrocarbons, aldehydes, and aromatic compounds, and in humid breath samples. NTDs contained (single-bed) polymer packing and (triple-bed) combinations of divinylbenzene/Carbopack X/Carboxen 1000. Substances were desorbed from the NTDs by means of thermal expansion and analyzed by gas chromatography-mass spectrometry. An automated CO2-controlled sampling device for direct alveolar sampling at the point-of-care was developed and tested in pilot experiments. Adsorption efficiency for small volatile organic compounds decreased and breakthrough increased when sampling was done with polymer needles from a water-saturated matrix (breath) rather than from dry gas. Humidity did not affect analysis with triple-bed NTDs. These NTDs showed only small dependencies on sampling flow and low breakthrough of 1-5%. The new sampling device was able to control crucial parameters such as sampling flow and volume. With triple-bed NTDs, substance amounts increased linearly with increasing sample volume when alveolar breath was pre-concentrated automatically. Compared with manual sampling, automatic sampling showed comparable or better results. Thorough control of sampling and an adequate choice of adsorption material are mandatory for application of needle trap micro-extraction in vivo. The new CO2-controlled sampling device allows direct alveolar sampling at the point-of-care without the need for any additional sampling, storage, or pre-concentration steps.
Directory of Open Access Journals (Sweden)
Shaukat S. Shahid
2016-06-01
In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40, and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix of water quality variables (p = 22) belonging to a small data set comprising 55 samples (stations) from which water samples were collected. Because data sets in ecology and the environmental sciences are invariably small, owing to the high cost of collecting and analysing samples, we restricted our study to relatively small sample sizes. We focused on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require stringent distributional assumptions.
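The core of such a bootstrap-PCA experiment can be sketched as follows. This is a generic illustration under stated assumptions (resampling stations with replacement and eigendecomposing the correlation matrix); the original study additionally compared eigenvectors via cluster analysis.

```python
import numpy as np

def bootstrap_pca_eigenvalues(data, n_boot=100, n_keep=10, seed=0):
    """Bootstrap the leading PCA eigenvalues to gauge their sampling
    variability. Rows of `data` are stations, columns are variables.
    Returns the mean and standard deviation of the first `n_keep`
    eigenvalues over `n_boot` bootstrap replicates (a sketch)."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    eigs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample stations
        boot = data[idx]
        # PCA via eigendecomposition of the correlation matrix
        corr = np.corrcoef(boot, rowvar=False)
        vals = np.sort(np.linalg.eigvalsh(corr))[::-1]
        eigs.append(vals[:n_keep])
    eigs = np.array(eigs)
    return eigs.mean(axis=0), eigs.std(axis=0, ddof=1)
```

Repeating this at several subsample sizes (N = 20, 30, 40, 50) shows how eigenvalue stability degrades as the sample shrinks.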
Li, Ya; Fu, Qiang; Liu, Meng; Jiao, Yuan-Yuan; Du, Wei; Yu, Chong; Liu, Jing; Chang, Chun; Lu, Jian
2012-01-01
In order to prepare a high-capacity packing material for solid-phase extraction with specific recognition ability for trace ractopamine in biological samples, uniformly sized, molecularly imprinted polymers (MIPs) were prepared by a multi-step swelling and polymerization method using methacrylic acid as the functional monomer, ethylene glycol dimethacrylate as the cross-linker, and toluene as the porogen. Scanning electron microscopy and specific surface area measurements were employed to characterize the MIPs. Ultraviolet spectroscopy, Fourier transform infrared spectroscopy, Scatchard analysis, and a kinetic study were performed to interpret the specific recognition ability and the binding process of the MIPs. The results showed that, compared with other reports, the MIPs synthesized in this study showed high adsorption capacity in addition to specific recognition ability. The adsorption capacity of the MIPs was 0.063 mmol/g at 1 mmol/L ractopamine concentration, with a distribution coefficient of 1.70. The resulting MIPs could be used as solid-phase extraction materials for separation and enrichment of trace ractopamine in biological samples. PMID:29403774
Directory of Open Access Journals (Sweden)
Łącka Katarzyna
2016-03-01
Introduction: The aim of this study was to propose an optimal methodology for stallion semen morphology analysis, taking into consideration the staining method, the microscopic technique, and the workload generated by the number of samples. Material and Methods: Ejaculates from eight pure-bred Arabian horses were tested microscopically for the incidence of morphological defects in the spermatozoa. Two staining methods (eosin-nigrosin and eosin-gentian dye), two techniques of microscopic analysis (1000× and 400× magnification), and two sample sizes (200 and 500 spermatozoa) were used. Results: Well-formed spermatozoa and those with major and minor defects according to Blom's classification were identified. The two staining methods gave similar results and both can be used in stallion sperm morphology analysis. However, the eosin-nigrosin method is more recommendable, because it limits the number of visible artefacts without hindering the identification of protoplasm drops, and it enables differentiation of living and dead spermatozoa. Conclusion: The two microscopic techniques proved equally efficacious, so it is practically possible to opt for the simpler and faster 400× technique for analysing stallion sperm morphology. We also found that the number of spermatozoa clearly affects the results of sperm morphology evaluation: reducing the number of spermatozoa from 500 to 200 causes a decrease in the percentage of spermatozoa identified as normal and an increase in the percentage determined as morphologically defective.
Weighted piecewise LDA for solving the small sample size problem in face verification.
Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis
2007-03-01
A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
DEFF Research Database (Denmark)
Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten
2009-01-01
… if PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study … of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced …
Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine
2002-01-01
New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxiliary methods must be employed. We describe a two-stage procedure where the...
Size-exclusion chromatography-based enrichment of extracellular vesicles from urine samples.
Lozano-Ramos, Inés; Bancu, Ioana; Oliveira-Tercero, Anna; Armengol, María Pilar; Menezes-Neto, Armando; Del Portillo, Hernando A; Lauzurica-Valdemoros, Ricardo; Borràs, Francesc E
2015-01-01
Renal biopsy is the gold-standard procedure to diagnose most renal pathologies. However, this invasive method is of limited repeatability and often describes irreversible renal damage. Urine is an easily accessible fluid, and urinary extracellular vesicles (EVs) may be ideal for describing new biomarkers associated with renal pathologies. Several methods to enrich EVs have been described. Most of them yield a mixture of proteins, lipoproteins and cell debris that may be masking relevant biomarkers. Here, we evaluated size-exclusion chromatography (SEC) as a suitable method to isolate urinary EVs. Following a conventional centrifugation to eliminate cell debris and apoptotic bodies, urine samples were concentrated using ultrafiltration and loaded on a SEC column. Collected fractions were analysed by protein content and flow cytometry to determine the presence of tetraspanin markers (CD63 and CD9). The highest tetraspanin content was routinely detected in fractions well before the bulk of proteins eluted. These tetraspanin-peak fractions were analysed by cryo-electron microscopy (cryo-EM) and nanoparticle tracking analysis, revealing the presence of EVs. When analysed by sodium dodecyl sulphate-polyacrylamide gel electrophoresis, tetraspanin-peak fractions from urine concentrated samples contained multiple bands, but the main urine proteins (such as Tamm-Horsfall protein) were absent. Furthermore, a preliminary proteomic study of these fractions revealed the presence of EV-related proteins, suggesting their enrichment in concentrated samples. In addition, RNA profiling also showed the presence of vesicular small RNA species. To summarize, our results demonstrated that concentrated urine followed by SEC is a suitable option to isolate EVs with a low presence of soluble contaminants. This methodology could permit more accurate analyses of EV-related biomarkers when further characterized by -omics technologies compared with other approaches.
Clark, Timothy; Berger, Ursula; Mansmann, Ulrich
2013-03-21
To assess the completeness of reporting of sample size determinations in unpublished research protocols and to develop guidance for research ethics committees and for statisticians advising these committees. Review of original research protocols. Unpublished research protocols for phase IIb, III, and IV randomised clinical trials of investigational medicinal products submitted to research ethics committees in the United Kingdom during 1 January to 31 December 2009. Completeness of reporting of the sample size determination, including the justification of design assumptions, and disagreement between reported and recalculated sample size. 446 study protocols were reviewed. Of these, 190 (43%) justified the treatment effect and 213 (48%) justified the population variability or survival experience. Only 55 (12%) discussed the clinical importance of the treatment effect sought. Few protocols provided a reasoned explanation as to why the design assumptions were plausible for the planned study. Sensitivity analyses investigating how the sample size changed under different design assumptions were lacking; six (1%) protocols included a re-estimation of the sample size in the study design. Overall, 188 (42%) protocols reported all of the information needed to accurately recalculate the sample size; the assumed withdrawal or dropout rate was not given in 177 (40%) studies. Only 134 of the 446 (30%) sample size calculations could be accurately reproduced. Study size tended to be overestimated rather than underestimated. Studies with non-commercial sponsors justified the design assumptions used in the calculation more often than studies with commercial sponsors, but less often reported all the components needed to reproduce the sample size calculation. Sample sizes for studies with non-commercial sponsors were less often reproduced. Most research protocols did not contain sufficient information to allow the sample size to be reproduced or the plausibility of the design assumptions to be assessed.
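Recalculating a protocol's sample size, as the reviewers did, typically means plugging the reported assumptions into a standard formula. A minimal sketch, assuming the usual two-sided, two-sample normal-approximation comparison of means (treatment effect `delta`, common SD `sigma`, and an assumed withdrawal rate):

```python
from math import ceil
from statistics import NormalDist

def two_sample_size(delta, sigma, alpha=0.05, power=0.80, dropout=0.0):
    """Per-group sample size for detecting a mean difference `delta`
    with common SD `sigma`, two-sided alpha, and given power, inflated
    for an assumed withdrawal/dropout rate (normal approximation)."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    return ceil(n / (1 - dropout))
```

For a standardized effect of 0.5 this gives 63 per group, rising to 70 with a 10% dropout assumption, which illustrates why omitting the dropout rate (as 40% of protocols did) makes the calculation irreproducible.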
Single point aerosol sampling: Evaluation of mixing and probe performance in a nuclear stack
Energy Technology Data Exchange (ETDEWEB)
Rodgers, J.C.; Fairchild, C.I.; Wood, G.O. [Los Alamos National Laboratory, NM (United States)] [and others]
1995-02-01
Alternative Reference Methodologies (ARMs) have been developed for sampling of radionuclides from stacks and ducts that differ from the methods required by the U.S. EPA. The EPA methods are prescriptive in the selection of sampling locations and in the design of sampling probes, whereas the alternative methods are performance driven. Tests were conducted in a stack at Los Alamos National Laboratory to demonstrate the efficacy of the ARMs. Coefficients of variation of the velocity, tracer gas, and aerosol particle profiles were determined at three sampling locations. Results showed that the numerical criteria placed upon the coefficients of variation by the ARMs were met at sampling stations located 9 and 14 stack diameters from the flow entrance, but not at a location 1.5 diameters downstream from the inlet. Experiments were conducted to characterize the transmission of 10-μm aerodynamic equivalent diameter liquid aerosol particles through three types of sampling probes. The transmission ratio (ratio of aerosol concentration at the probe exit plane to the concentration in the free stream) was 107% for a 113 L/min (4-cfm) anisokinetic shrouded probe, but only 20% for an isokinetic probe that follows the EPA requirements. A specially designed isokinetic probe showed a transmission ratio of 63%. The shrouded probe performance would conform to the ARM criteria; however, the isokinetic probes would not.
Lihong, Huang; Jianling, Bai; Hao, Yu; Feng, Chen
2017-06-20
Sample size re-estimation is essential in oncology studies; however, blinded sample size reassessment for survival data has rarely been reported. Based on the density function of the exponential distribution, an expectation-maximization (EM) algorithm for the hazard ratio was derived, and several simulation studies were used to verify its applications. The method showed considerable variation in the hazard ratio estimates and overestimation of relatively small hazard ratios. Our studies showed that the stability of the EM estimates correlated directly with the sample size, that the convergence of the EM algorithm was affected by the initial values, and that a balanced design produced the best estimates. No reliable blinded sample size re-estimation inference could be made in our studies, but the results provide useful information to steer practitioners in this field away from repeating the same endeavor.
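The EM scheme the abstract describes can be sketched as a two-component exponential mixture fitted to the pooled (blinded) survival times, assuming a balanced 50:50 allocation and no censoring. This is an illustrative sketch, not the authors' exact algorithm; as the study notes, the result is sensitive to the initial rates `init`.

```python
import numpy as np

def em_blinded_hazard_ratio(times, n_iter=200, init=(1.0, 2.0)):
    """EM estimate of the hazard ratio from pooled exponential survival
    times under an assumed balanced two-group mixture (a sketch)."""
    lam1, lam2 = init
    t = np.asarray(times, dtype=float)
    for _ in range(n_iter):
        # E-step: posterior probability each observation is from group 1
        f1 = lam1 * np.exp(-lam1 * t)
        f2 = lam2 * np.exp(-lam2 * t)
        w = f1 / (f1 + f2)
        # M-step: exponential rate = weighted events / weighted exposure
        lam1 = w.sum() / (w * t).sum()
        lam2 = (1 - w).sum() / ((1 - w) * t).sum()
    return lam2 / lam1  # hazard ratio estimate
```

Running this on simulated mixtures shows the instability the study reports: estimates vary widely between replicates unless the sample size is large and the design is balanced.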
DEFF Research Database (Denmark)
Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.
2013-01-01
The intraclass correlation coefficient (ICC), a measure of the data heterogeneity, has been used to modify formulae for individual sample size estimation. However, subgroups of animals sharing common characteristics may exhibit considerably less or more heterogeneity. Hence, sample size estimates based on the ICC may not achieve the desired precision … and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity …, thus optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium …
Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar; Ghosh, N C
2015-06-01
The design of a water quality monitoring network (WQMN) is a complicated decision-making process because each sampling station involves high installation, operational, and maintenance costs. Therefore, data with the highest information content should be collected. The effect of seasonal variation in point and diffuse pollution loadings on river water quality may have a significant impact on the optimal selection of sampling locations, but this possible effect has never been addressed in the evaluation and design of monitoring networks. The present study proposes a systematic approach for siting an optimal number and location of river water quality sampling stations based on seasonal, or monsoonal, variations in both point and diffuse pollution loadings. The proposed approach conceptualizes water quality monitoring as a two-stage process; the first stage considers all potential water quality sampling sites, selected based on existing guidelines or frameworks, and the locations of both point and diffuse pollution sources. Monitoring at all sampling sites thus identified should be continued for an adequate period of time to account for the effect of the monsoon season. In the second stage, the monitoring network is designed separately for monsoon and non-monsoon periods by optimizing the number and locations of sampling sites, using a modified Sanders approach. The impacts of human interventions on the design of the sampling network are quantified geospatially by estimating diffuse pollution loads and verified with a land use map. To demonstrate the proposed methodology, the Kali River basin in the western Uttar Pradesh state of India was selected as a study area. The final design suggests consequential pre- and post-monsoonal changes in the location and priority of water quality monitoring stations based on the seasonal variation of point and diffuse pollution loadings.
Sampling Point Compliance Tests for 325 Building at Set-Back Flow Conditions
Energy Technology Data Exchange (ETDEWEB)
Ballinger, Marcel Y.; Glissmeyer, John A.; Barnett, J. Matthew; Recknagle, Kurtis P.; Yokuda, Satoru T.
2011-05-31
The stack sampling system at the 325 Building (Radiochemical Processing Laboratory [RPL]) was constructed to comply with the American National Standards Institute's (ANSI's) Guide to Sampling Airborne Radioactive Materials in Nuclear Facilities (ANSI N13.1-1969). This standard provided prescriptive criteria for the location of radionuclide air-sampling systems. In 1999, the standard was revised (Sampling and Monitoring Releases of Airborne Radioactive Substances From the Stacks and Ducts of Nuclear Facilities [ANSI/Health Physics Society (HPS) 13.1-1999]) to provide performance-based criteria for the location of sampling systems. Testing was conducted for the 325 Building stack to determine whether the sampling system would meet the updated criteria for uniform air velocity and contaminant concentration in the revised ANSI/HPS 13.1-1999 standard under normal operating conditions (Smith et al. 2010). Measurement results were within criteria for all tests. Additional testing and modeling were performed to determine whether the sampling system would meet the criteria under set-back flow conditions. This included measurements taken from a scale model with one-third of the exhaust flow and computer modeling of the system with two-thirds of the exhaust flow. This report documents the results of the set-back flow condition measurements and modeling. Tests performed included flow angularity, uniformity of velocity, gas concentration, and particle concentration across the duct at the sampling location. Results are within ANSI/HPS 13.1-1999 criteria for all tests. These tests are applicable for the 325 Building stack under set-back exhaust flow operating conditions (980 - 45,400 cubic feet per minute [cfm]) with one fan running. The modeling results show that the criteria are met for all tests using a two-fan exhaust configuration (flow modeled at 104,000 cfm). Combined with the results from the earlier normal operating conditions, these results show that the ANSI/HPS 13.1-1999 criteria are met for all tests.
Simple and efficient way of speeding up transmission calculations with k-point sampling
Directory of Open Access Journals (Sweden)
Jesper Toft Falkenberg
2015-07-01
The transmission as a function of energy is central to electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally "cheap" post-processing scheme to interpolate transmission functions over k-points to obtain smooth, well-converged average transmission functions. This is relevant for data obtained using typically "expensive" first-principles calculations where the leads/electrodes are described by periodic boundary conditions. We show examples of transport in graphene structures where a speed-up of an order of magnitude is easily obtained.
International Nuclear Information System (INIS)
Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi
1999-01-01
In our previous paper regarding determination of the regional cerebral blood flow (rCBF) using the ¹²³I-IMP microsphere model, we reported that the accuracy of determination of the integrated value of the input function from one-point arterial blood sampling can be increased by performing a correction using the 5 min:29 min ratio for the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after ¹²³I-IMP injection causes errors with this method, so there is a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time between 5 and 20 minutes after ¹²³I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 experimental subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actually measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF was able to be estimated within an error rate of 13%, while in 95% of the cases estimation was possible within an error rate of 25%. This improved method is a simple technique for determination of the rCBF by the ¹²³I-IMP microsphere model and one-point arterial blood sampling, which no longer has a time limitation and does not require any octanol extraction step. (author)
Curvature computation in volume-of-fluid method based on point-cloud sampling
Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.
2018-01-01
This work proposes a novel approach to computing interface curvature in multiphase flow simulations based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across interfaces. This may degrade the interfacial tension force estimates, often producing inaccurate results for interfacial-tension-dominated flows. Many techniques have been presented over the years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling of the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on representing the interface by a cloud of points. The curvatures and interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data; a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM®, extending its standard VOF implementation, the interFoam solver.
Tang, Yongqiang
2015-01-01
A sample size formula is derived for negative binomial regression in the analysis of recurrent events, in which subjects can have unequal follow-up times. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second depends on two factors that measure, respectively, the extent of between-subject variability in event rates and in follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
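The structure of such a calculation can be illustrated with a standard Wald-type approximation for comparing two negative binomial event rates (a minimal sketch, not the sharp bounds derived in the paper; the function name and the simple variance formula are assumptions for illustration):

```python
from math import log, ceil
from statistics import NormalDist

def nb_sample_size(rate0, rate_ratio, dispersion, followup=1.0,
                   alpha=0.05, power=0.9):
    """Approximate per-group sample size for a two-group comparison of
    negative binomial event rates via the log rate ratio (generic Wald
    approximation; the paper's bounds refine this kind of formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    mu0 = rate0 * followup                  # mean events, control group
    mu1 = rate0 * rate_ratio * followup     # mean events, treated group
    # Var(log rate ratio) ~ [(1/mu0 + k) + (1/mu1 + k)] / n, k = dispersion:
    # the 1/mu terms reflect the mean event counts, the k terms reflect
    # between-subject variability in event rates.
    var_unit = (1 / mu0 + dispersion) + (1 / mu1 + dispersion)
    n = (z_a + z_b) ** 2 * var_unit / log(rate_ratio) ** 2
    return ceil(n)
```

As the abstract's decomposition suggests, the required size grows with the dispersion (between-subject variability) and shrinks as the mean number of events per subject increases.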
rCBF measurement by one-point venous sampling with the ARG method
International Nuclear Information System (INIS)
Yoshida, Nobuhiro; Okamoto, Toshiaki; Takahashi, Hidekado; Hattori, Teruo
1997-01-01
We investigated the possibility of using venous blood sampling instead of arterial blood sampling for the current method of ARG (autoradiography) used to determine regional cerebral blood flow (rCBF) on the basis of one session of arterial blood sampling and SPECT. For this purpose, the ratio of the arterial blood radioactivity count to the venous blood radioactivity count, the coefficient of variation, and the correlation and differences between arterial blood-based rCBF and venous blood-based rCBF were analyzed. The coefficient of variation was lowest (4.1%) 20 minutes after injection into the dorsum manus. When the relationship between venous and arterial blood counts was analyzed, arterial blood counts correlated well with venous blood counts collected at the dorsum manus 20 or 30 minutes after intravenous injection and with venous blood counts collected at the wrist 20 minutes after intravenous injection (r=0.97 or higher). The difference from rCBF determined on the basis of arterial blood was smallest (0.7) for rCBF determined on the basis of venous blood collected at the dorsum manus 20 minutes after intravenous injection. (author)
Directory of Open Access Journals (Sweden)
Christopher Ryan Penton
2016-06-01
We examined the effect of different soil sample sizes, obtained from an agricultural field under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits, and from 10 and 100 g sizes using a bead-beating method (SARDI), served as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions, with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities, retrieving optimal diversity while still capturing rarer taxa and decreasing replicate variation.
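Diversity indices of the kind compared across extraction sizes are computed from OTU count tables; a minimal sketch of the Shannon index (a generic formula, not the authors' exact pipeline; the function name is illustrative):

```python
from math import log

def shannon_diversity(otu_counts):
    """Shannon index H' = -sum(p_i * ln(p_i)) over OTU relative
    abundances p_i; higher values indicate a richer, more even
    community. Zero-count OTUs are skipped (lim p->0 of p*ln(p) = 0)."""
    total = sum(otu_counts)
    return -sum((c / total) * log(c / total) for c in otu_counts if c > 0)
```

A perfectly even table of S OTUs gives the maximum value ln(S); a single dominant OTU gives 0.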
Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws
Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.
2009-04-01
Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow, with each bump of the order of 10² to 10³ m in length and 10¹ m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m, and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length, between 120 and 300 metres in width, and between 1.7 and 4.1 times as long as they are wide. Analysis of these data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to their growing and elongating. A relationship between drumlin width and length is found (with r² = 0.48), approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E-W
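A scaling relationship of this form is recovered by least squares on log-transformed data; a minimal sketch (the function name and the synthetic check data are illustrative assumptions, not the paper's dataset):

```python
from math import log, exp

def fit_power_law(lengths, widths):
    """Least-squares fit of W = a * L**b on log-transformed data, the
    standard way to estimate a width-length scaling law such as the
    reported W ~ 7 * L**0.5 (r^2 = 0.48)."""
    xs = [log(L) for L in lengths]
    ys = [log(W) for W in widths]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope of the log-log regression is the exponent b; the intercept
    # exp(my - b*mx) is the prefactor a.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = exp(my - b * mx)
    return a, b

# Synthetic check: exact data lying on W = 7 * L**0.5 recovers a = 7, b = 0.5.
Ls = [250, 400, 629, 1000]
Ws = [7 * L ** 0.5 for L in Ls]
a, b = fit_power_law(Ls, Ws)
```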
Sample size requirements to detect gene-environment interactions in genome-wide association studies.
Murcray, Cassandra E; Lewinger, Juan Pablo; Conti, David V; Thomas, Duncan C; Gauderman, W James
2011-04-01
Many complex diseases are likely to be a result of the interplay of genes and environmental exposures. The standard analysis in a genome-wide association study (GWAS) scans for main effects and ignores the potentially useful information in the available exposure data. Two recently proposed methods that exploit environmental exposure information involve a two-step analysis aimed at prioritizing the large number of SNPs tested to highlight those most likely to be involved in a G-E interaction. For example, Murcray et al. ([2009] Am J Epidemiol 169:219-226) proposed screening on a test that models the G-E association induced by an interaction in the combined case-control sample. Alternatively, Kooperberg and LeBlanc ([2008] Genet Epidemiol 32:255-263) suggested screening on genetic marginal effects. In both methods, SNPs that pass the respective screening step at a pre-specified significance threshold are followed up with a formal test of interaction in the second step. We propose a hybrid method that combines these two screening approaches by allocating a proportion of the overall genome-wide significance level to each test. We show that the Murcray et al. approach is often the most efficient method, but that the hybrid approach is a powerful and robust method for nearly any underlying model. As an example, for a GWAS of 1 million markers including a single true disease SNP with minor allele frequency of 0.15, and a binary exposure with prevalence 0.3, the Murcray, Kooperberg and hybrid methods are 1.90, 1.27, and 1.87 times as efficient, respectively, as the traditional case-control analysis to detect an interaction effect size of 2.0.
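The two-step logic (screen every SNP at a liberal threshold, then test the survivors for interaction at a corrected level) can be sketched as follows (a simplified illustration; the published methods set the step-2 correction from the number of SNPs actually passing the screen, and the hybrid method splits alpha across two different screening statistics):

```python
def two_step_thresholds(n_snps, alpha=0.05, screen_alpha=5e-4):
    """Illustrative thresholds for a two-step GWAS interaction scan:
    step 1 screens each SNP at screen_alpha; step 2 applies a
    Bonferroni correction over the *expected* number of survivors
    under the null, rather than over all n_snps."""
    expected_pass = max(1, int(n_snps * screen_alpha))
    interaction_alpha = alpha / expected_pass
    return screen_alpha, interaction_alpha
```

For 1 million markers and a screening level of 5e-4, only ~500 SNPs are expected to reach step 2, so the interaction test runs at 1e-4 instead of the genome-wide 5e-8, which is the source of the efficiency gain.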
Huang, Yu
Solar energy is one of the major renewable energy alternatives owing to its abundance and accessibility. Because of its intermittent nature, Maximum Power Point Tracking (MPPT) techniques are in high demand when a photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturb and Observe (P&O) algorithm aimed at practical operating conditions. First, a practical PV system model is studied, including the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, exploiting input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step-size P&O algorithm is proposed, with major modifications for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and detailed analysis of sharp insolation changes, low-insolation conditions and continuous insolation variation.
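A generic P&O iteration with an adaptive duty-ratio step can be sketched as follows (a minimal illustration; the thesis's specific modifications for sharp and low insolation differ in detail, and the step-scaling constant and limits here are assumptions):

```python
def perturb_and_observe(power, prev_power, duty, prev_duty,
                        step=0.01, step_min=0.002, step_max=0.05,
                        adaptive=True):
    """One iteration of the perturb-and-observe MPPT rule acting on the
    boost-converter duty ratio. Returns (next_duty, step_used)."""
    dP = power - prev_power
    dD = duty - prev_duty
    if adaptive:
        # Larger power change -> larger step (clamped), so tracking is
        # fast far from the maximum power point and stable near it.
        step = min(step_max, max(step_min, abs(dP) * 0.001))
    # Keep perturbing in the same direction if power rose with the last
    # perturbation; otherwise reverse direction.
    direction = 1.0 if (dP >= 0) == (dD >= 0) else -1.0
    new_duty = min(0.95, max(0.05, duty + direction * step))
    return new_duty, step
```

Called once per control period with the latest power measurement, this climbs the P-D curve toward the maximum power point; the adaptive step shrinks the steady-state oscillation that a fixed-step P&O exhibits around the peak.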
Duncanson, L.; Dubayah, R.
2015-12-01
Lidar remote sensing is widely applied for mapping forest carbon stocks, and technological advances have improved our ability to capture structural details from forests, even resolving individual trees. Despite these advancements, the accuracy of forest aboveground biomass models remains limited by the quality of field estimates of biomass. The accuracies of field estimates are inherently dependent on the accuracy of the allometric equations used to relate measurable attributes to biomass. These equations are calibrated with relatively small samples of often spatially clustered trees. This research focuses on one of many issues involving allometric equations - understanding how sensitive allometric parameters are to the sample sizes used to fit them. We capitalize on recent advances in lidar remote sensing to extract individual tree structural information from six high-resolution airborne lidar datasets in the United States. We remotely measure millions of tree heights and crown radii, and fit allometric equations to the relationship between tree height and radius at a 'population' level, in each site. We then extract samples from our tree database, and build allometries on these smaller samples of trees, with varying sample sizes. We show that for the allometric relationship between tree height and crown radius, small sample sizes produce biased allometric equations that overestimate height for a given crown radius. We extend this analysis using translations from the literature to address potential implications for biomass, showing that site-level biomass may be greatly overestimated when applying allometric equations developed with the typically small sample sizes used in popular allometric equations for biomass.
Quality assuring HIV point of care testing using whole blood samples.
Dare-Smith, Raellene; Badrick, Tony; Cunningham, Philip; Kesson, Alison; Badman, Susan
2016-08-01
The Royal College of Pathologists Australasia Quality Assurance Programs (RCPAQAP), have offered dedicated external quality assurance (EQA) for HIV point of care testing (PoCT) since 2011. Prior to this, EQA for these tests was available within the comprehensive human immunodeficiency virus (HIV) module. EQA testing for HIV has typically involved the supply of serum or plasma, while in the clinic or community based settings HIV PoCT is generally performed using whole blood obtained by capillary finger-stick collection. RCPAQAP has offered EQA for HIV PoCT using stabilised whole blood since 2014. A total of eight surveys have been undertaken over a period of 2 years from 2014 to 2015. Of the 962 responses received, the overall consensus rate was found to be 98% (941/962). A total of 21 errors were detected. The majority of errors were attributable to false reactive HIV p24 antigen results (9/21, 43%), followed by false reactive HIV antibody results (8/21, 38%). There were 4/21 (19%) false negative HIV antibody results and no false negative HIV p24 antigen results reported. Overall performance was observed to vary minimally between surveys, from a low of 94% up to 99% concordant. Encouraging levels of testing proficiency for HIV PoCT are indicated by these data, but they also confirm the need for HIV PoCT sites to participate in external quality assurance programs to ensure the ongoing provision of high quality patient care. Copyright © 2016 Royal College of Pathologists of Australasia. All rights reserved.
Hoefgen, Barbara; Schulze, Thomas G; Ohlraun, Stephanie; von Widdern, Olrik; Höfels, Susanne; Gross, Magdalena; Heidmann, Vivien; Kovalenko, Svetlana; Eckermann, Anita; Kölsch, Heike; Metten, Martin; Zobel, Astrid; Becker, Tim; Nöthen, Markus M; Propping, Peter; Heun, Reinhard; Maier, Wolfgang; Rietschel, Marcella
2005-02-01
Several lines of evidence indicate that abnormalities in the functioning of the central serotonergic system are involved in the pathogenesis of affective illness. A 44-base-pair insertion/deletion polymorphism in the 5' regulatory region of the serotonin transporter gene (5-HTTLPR), which influences expression of the serotonin transporter, has been the focus of intensive research since an initial report on an association between 5-HTTLPR and depression-related personality traits. Consistently replicated evidence for an involvement of this polymorphism in the etiology of mood disorders, particularly in major depressive disorder (MDD), remains scant. We assessed a potential association between 5-HTTLPR and MDD, using the largest reported sample to date (466 patients, 836 control subjects). Individuals were all of German descent. Patients were systematically recruited from consecutive inpatient admissions. Control subjects were drawn from random lists of the local Census Bureau and screened for psychiatric disorders. The short allele of 5-HTTLPR was significantly more frequent in patients than in control subjects (45.5% vs. 39.9%; p = .006; odds ratio = 1.26). These results support an involvement of 5-HTTLPR in the etiology of MDD. They also demonstrate that the detection of small genetic effects requires very large and homogenous samples.
Meldrum, R J; Ellis, P W; Mannion, P T; Halstead, D; Garside, J
2010-08-01
A survey of Listeria in ready-to-eat food took place in Wales, United Kingdom, between February 2008 and January 2009. In total, 5,840 samples were taken and examined for the presence of Listeria species, including L. monocytogenes. Samples were tested using detection and enumeration methods, and the results were compared with current United Kingdom guidelines for the microbiological quality of ready-to-eat foods. The majority of samples were negative for Listeria by both direct plating and enriched culture. Seventeen samples (0.29%) had countable levels of Listeria species (other than L. monocytogenes), and another 11 samples (0.19%) had countable levels of L. monocytogenes. Nine samples (0.15%) were unsatisfactory or potentially hazardous when compared with United Kingdom guideline limits; six (0.10%) were in the unsatisfactory category (>100 CFU/g) for Listeria species (other than L. monocytogenes), and three (0.05%) were in the unacceptable or potentially hazardous category (>100 CFU/g) for L. monocytogenes. All three of these samples were from sandwiches (two chicken sandwiches and one ham-and-cheese sandwich). The most commonly isolated serotype of L. monocytogenes was 1/2a. This survey was used to determine the current prevalence of Listeria species and L. monocytogenes in ready-to-eat foods sampled from the point of sale in Wales.
International Nuclear Information System (INIS)
Zegarra Pisconti, Marixa; Cjuno Huanca, Jesus
2015-01-01
A methodology was developed for the preconcentration of lead in water samples to which dithizone, previously dissolved in the nonionic surfactant Triton X-114, was added as a complexing agent until formation of the critical micelle concentration and attainment of the cloud point temperature. Centrifugation of the system gave a precipitate with high concentrations of Pb(II), which was measured by flame atomic absorption spectroscopy (EAAF). The method proved feasible as a preconcentration and analysis method for Pb in aqueous samples with concentrations below 1 ppm. Several parameters were evaluated, achieving a recovery of 89.8%. (author)
International Nuclear Information System (INIS)
Durani, Smeer; Mathur, Neerja; Chowdary, G.S.
2007-01-01
The cloud point extraction (CPE) behavior of vanadium(V) using 5,7-dibromo-8-hydroxyquinoline (DBHQ) and Triton X-100 was investigated. Vanadium(V) was extracted with 4 ml of 0.5 mg/ml DBHQ and 6 ml of 8% (v/v) Triton X-100 at pH 3.7. A few hydrogeochemical samples were analysed for vanadium using the above method. (author)
Ash Dieback on Sample Points of the National Forest Inventory in South-Western Germany
Directory of Open Access Journals (Sweden)
Rasmus Enderle
2018-01-01
The alien invasive pathogen Hymenoscyphus fraxineus causes large-scale decline of European ash (Fraxinus excelsior). We assessed ash dieback in Germany and identified factors that were associated with this disease. Our assessment was based on a 2015 sampling of national forest inventory plots that represent a supra-regional area. From 2012 to 2015, the number of regrown ash trees corresponded to only 42% of the number of trees that had been harvested or had died. Severe defoliation was recorded for almost 40% of the living trees in 2015, and more than half of the crowns consisted mainly of epicormic shoots. Necroses were present in 24% of root collars. A total of 14% of the trees were in sound condition, corresponding to only 7% of the timber volume. On average, trees of a higher social status or with a larger diameter at breast height were healthier. Collar necroses were less prevalent at sites with a steeper terrain inclination, but there was no evidence for an influence of climatic variables on collar necroses. The disease was less severe at sites where ash made up a smaller proportion of the total basal area of all trees, and in the north-eastern part of the area of investigation. The regeneration of ash has decreased drastically.
Specific Skin Lesions of Sarcoidosis Located at Venipuncture Points for Blood Sample Collection.
Marcoval, Joaquim; Penín, Rosa M; Mañá, Juan
2017-07-08
It has been suggested that the predilection of sarcoidosis to affect scars is due to the presence of antigens or foreign bodies that can serve as a stimulus for granuloma formation. Several patients with sarcoidosis-specific skin lesions at venous puncture sites have been reported. However, in these patients the pathogenesis of the cutaneous lesions is not clear, because the presence of foreign bodies is not to be expected. Our objective was to describe 3 patients who developed specific lesions of sarcoidosis in areas of venipuncture and to discuss their possible pathogenesis. The database of the Sarcoid Clinic of Bellvitge Hospital (an 800-bed university referral center providing tertiary care to approximately 1 million people in Barcelona, Spain) was reviewed to detect patients with specific cutaneous lesions of systemic sarcoidosis in areas of venipuncture. Three patients with biopsy-proven specific cutaneous lesions of systemic sarcoidosis in areas of venipuncture for blood collection were detected (3 women, mean age 56 years). In one case, the histopathological image showed the hypothetical path of a needle through the skin. In 2 cases, an amorphous birefringent material, consistent with silicone, was detected under polarized light. In patients who are developing sarcoidosis, even the smallest amount of oil used as a lubricant on the needle for blood sample collection may induce the formation of granulomas. In addition to exploring scars, it is advisable to explore the cubital folds to detect specific cutaneous lesions of sarcoidosis.
Uyaguari-Diaz, Miguel I; Slobodan, Jared R; Nesbitt, Matthew J; Croxen, Matthew A; Isaac-Renton, Judith; Prystajecky, Natalie A; Tang, Patrick
2015-04-17
Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments. In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can be applied also to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements to read length and quality of the sequencing reads.
Respondent driven sampling: determinants of recruitment and a method to improve point estimation.
Directory of Open Access Journals (Sweden)
Nicky McCreesh
Respondent-driven sampling (RDS) is a variant of a link-tracing design intended to generate unbiased estimates of the composition of hidden populations; it typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore whether biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview.

Using data from the total population, and from the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods) and of presenting for interview if offered a coupon, by age and socioeconomic status group.

Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to their being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared errors by 19-29%), but had little effect for sexual activity or HIV status.

Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.
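The proposed correction amounts to inverse-probability weighting by both the coupon-offer and the interview-presentation probabilities; a minimal sketch (group-level probabilities are assumed known or estimated elsewhere, and all names are illustrative):

```python
def weighted_proportions(samples, p_offer, p_attend):
    """Estimate population group proportions from an RDS-style sample by
    weighting each recruit by 1 / (P(offered coupon) * P(presents for
    interview | offered)), per group, then normalising."""
    weights = {}
    for group in samples:
        w = 1.0 / (p_offer[group] * p_attend[group])
        weights[group] = weights.get(group, 0.0) + w
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}
```

Groups that are under-offered or under-attending (e.g. younger men in the study) receive larger weights, pulling the estimated composition back toward the population.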
International Nuclear Information System (INIS)
Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.
2010-01-01
Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient and also offers the potential for nanoparticle analysis in liquid environments.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2013-04-15
Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.
Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L
2014-01-01
The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that an analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate a sampling RSD in the same region as the analytical RSD for a typical quantitative method of analysis for the most common powdered illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit
Grain size of loess and paleosol samples: what are we measuring?
Varga, György; Kovács, János; Szalai, Zoltán; Újvári, Gábor
2017-04-01
Particle size falling into a particularly narrow range is among the most important properties of windblown mineral dust deposits. Therefore, various aspects of aeolian sedimentation and post-depositional alteration can be reconstructed only from precise grain size data. The present study aims to (1) review grain size data obtained from different measurements, (2) discuss the major reasons for disagreements between data obtained by frequently applied particle sizing techniques, and (3) assess the importance of particle shape in particle sizing. Grain size data of terrestrial aeolian dust deposits (loess and paleosol) were determined by laser scattering instruments (Fritsch Analysette 22 Microtec Plus, Horiba Partica LA-950 v2 and Malvern Mastersizer 3000 with a Hydro LV unit), while particle size and shape distributions were acquired by a Malvern Morphologi G3-ID. Laser scattering results reveal that the optical parameter settings of the measurements have significant effects on the grain size distributions, especially for the fine-grained fractions. In the image analysis, particles lie on the slide with a consistent orientation, with their largest area facing the camera. However, this is only one of infinitely many possible projections of a three-dimensional object and it cannot be regarded as a representative one. The third (height) dimension of the particles remains unknown, so volume-based weightings are fairly dubious in the case of platy particles. Support of the National Research, Development and Innovation Office (Hungary) under contract NKFI 120620 is gratefully acknowledged. It was additionally supported (for G. Varga) by the Bolyai János Research Scholarship of the Hungarian Academy of Sciences.
DEFF Research Database (Denmark)
Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.
2008-01-01
OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved … in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between … information presented in the protocol and the publication. RESULTS: Only 11/62 trials described existing sample size calculations fully and consistently in both the protocol and the publication. The method of handling protocol deviations was described in 37 protocols and 43 publications. The method …
Ding, Chen-Li; Ma, Yan-Tao; Huang, Qiang-Min; Liu, Qing-Guang; Zhao, Jia-Min
2018-02-25
To attempt to establish an objective quantitative indicator characterizing trigger point activity, so as to evaluate the effect of dry needling on myofascial trigger point activity. Twenty-four male Sprague-Dawley rats were randomly divided into a blank control group, a dry needling (needling) group, a stretching exercise (stretching) group and a needling plus stretching group (n = 6 per group). The chronic myofascial pain (trigger point) model was established by free vertical fall of a wooden striking device onto the mid-point of the gastrocnemius belly of the left hind-limb to induce contusion, followed by forcing the rat to run continuously downhill at a speed of 16 m/min for 90 min on the next day; this was conducted once a week for 8 weeks. Electromyography (EMG) of the regional myofascial injured point was monitored and recorded using an EMG recorder via electrodes. The model was considered successful if spontaneous electrical activity appeared at the injured site. After a 4-week recovery, rats of the needling group were treated by filiform needle stimulation (lifting-thrusting-rotating) of the central part of the injured gastrocnemius belly (about 10 mm deep) for 6 min, and those of the stretching group were treated by holding the rat's limb to extend the hip and knee joints to an angle of about 180°, and the ankle joint to about 90°, for 1 min at a time, 3 times altogether (with an interval of 1 min between every 2 times). The activity of the trigger point was estimated from the sample entropy of the EMG signal sequence, following Richman's and Moorman's methods, to estimate the curative effect of both needling and exercise. After the modeling cycle, the mean sample entropies of EMG signals were significantly decreased in the model groups (needling group [0.034±0.010], stretching group [0.045±0.023], needling plus stretching group [0.047±0.034]) relative to the blank control group (0.985±0.196, P<0.05), suggesting a better efficacy of
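Sample entropy (SampEn), the EMG regularity measure used above, is straightforward to compute. The following is a minimal Python sketch of Richman and Moorman's definition, not the authors' code; the tolerance r is treated here as an absolute value and the test signal is illustrative:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A / B): B counts template matches of length m,
    # A of length m + 1, self-matches excluded (Richman & Moorman, 2000).
    def count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b, a = count(m), count(m + 1)
    if a == 0 or b == 0:
        return float("inf")   # undefined for too-short or highly irregular series
    return -math.log(a / b)

regular = [0.0, 1.0] * 50               # perfectly repeating signal
print(round(sample_entropy(regular), 3))  # near zero: regular signals have low SampEn
```

Lower values indicate more self-similar (regular) signals, which is why the injured-muscle EMG above shows much smaller entropies than the control.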
Bi, Ran; Liu, Peng
2016-03-31
RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
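The voom-based procedure itself is not reproduced here, but the core idea of choosing the smallest sample size whose *average* power over the differentially expressed genes meets a target can be sketched with a normal approximation. In this illustration a fixed per-test alpha stands in for the BH-adjusted threshold that actual FDR control would supply, and the effect sizes are invented:

```python
from statistics import NormalDist

def per_gene_power(effect, n_per_group, alpha):
    # Normal-approximation power of a two-sided two-sample comparison
    # with standardized effect size `effect` and n samples per group.
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    return 1 - z.cdf(za - effect * (n_per_group / 2) ** 0.5)

def smallest_n(effects, target=0.8, alpha=0.001):
    # Smallest per-group replicate number whose average power over the
    # differentially expressed genes reaches the target.
    for n in range(2, 1000):
        avg = sum(per_gene_power(e, n, alpha) for e in effects) / len(effects)
        if avg >= target:
            return n
    return None

effects = [0.5, 1.0, 1.5, 2.0]   # illustrative standardized log-fold-changes
print(smallest_n(effects))
```

Averaging power across genes, rather than requiring it per gene, is what keeps the required replicate number practical.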
Directory of Open Access Journals (Sweden)
Marcos Adami
2010-06-01
The objective of this work was to evaluate the performance of a probabilistic sampling model stratified by points, and to define an appropriate sample size to estimate the soybean-cultivated area in the state of Rio Grande do Sul, Brazil. The area was stratified according to the percentage of soybean cultivated in each municipality of the state: less than 20%, from 20 to 40%, and more than 40%. Estimates obtained with six sample sizes were evaluated, resulting from the combination of three significance levels (10, 5 and 1%) and two sampling errors (5 and 2.5%); 400 random draws were performed for each sample size. The estimates were evaluated against the soybean area obtained from a reference thematic map for the 2000/2001 crop year, derived from a careful automated and visual classification of multitemporal TM/Landsat-5 and ETM+/Landsat-7 satellite images. The soybean area in Rio Grande do Sul can be estimated by means of a stratified probabilistic point-sampling model, with the best estimate obtained for the largest sample size (1,990 points), differing by only -0.14% from the reference-map estimate, with a coefficient of variation of 6.98%.
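A stratified point-sampling estimate of a crop-area fraction can be illustrated in a few lines of Python. The stratum weights and soybean fractions below are invented for the sketch; only the structure (area-weighted stratum proportions, repeated random point draws) mirrors the study:

```python
import random

random.seed(42)

# Illustrative strata: (weight = share of total area, true soybean fraction),
# mirroring the <20%, 20-40% and >40% municipality strata of the study.
strata = [(0.5, 0.10), (0.3, 0.30), (0.2, 0.55)]

def estimate_area_fraction(points_per_stratum):
    # Sample points in each stratum, classify soy / not-soy, and combine
    # the stratum proportions with area weights (stratified estimator).
    est = 0.0
    for weight, p_true in strata:
        hits = sum(random.random() < p_true for _ in range(points_per_stratum))
        est += weight * hits / points_per_stratum
    return est

truth = sum(w * p for w, p in strata)      # 0.25 under these invented strata
estimate = estimate_area_fraction(663)     # ~1,990 points spread over 3 strata
print(truth, round(estimate, 3))
```

With nearly 2,000 points the stratified estimate lands within a fraction of a percentage point of the truth, consistent with the small error the study reports for its largest sample.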
Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.
2014-01-01
Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
Krõlov, Katrin; Uusna, Julia; Grellier, Tiia; Andresen, Liis; Jevtuševskaja, Jekaterina; Tulp, Indrek; Langel, Ülo
2017-12-01
A variety of sample preparation techniques are used prior to nucleic acid amplification. However, their efficiency is not always sufficient and nucleic acid purification remains the preferred method for template preparation. Purification is difficult and costly to apply in point-of-care (POC) settings and there is a strong need for more robust, rapid, and efficient biological sample preparation techniques in molecular diagnostics. Here, the authors applied antimicrobial peptides (AMPs) for urine sample preparation prior to isothermal loop-mediated amplification (LAMP). AMPs bind to many microorganisms such as bacteria, fungi, protozoa and viruses causing disruption of their membrane integrity and facilitate nucleic acid release. The authors show that incubation of E. coli with antimicrobial peptide cecropin P1 for 5 min had a significant effect on the availability of template DNA compared with untreated or even heat treated samples resulting in up to six times increase of the amplification efficiency. These results show that AMPs treatment is a very efficient sample preparation technique that is suitable for application prior to nucleic acid amplification directly within biological samples. Furthermore, the entire process of AMPs treatment was performed at room temperature for 5 min thereby making it a good candidate for use in POC applications.
Directory of Open Access Journals (Sweden)
Lurdes Borges Silva
2017-01-01
Tree density is an important parameter affecting ecosystem functions and management decisions, while tree distribution patterns affect sampling design. Pittosporum undulatum stands in the Azores are being targeted with a biomass valorization program, for which efficient tree density estimators are required. We compared T-Square sampling, the Point Centered Quarter Method (PCQM), and N-tree sampling with benchmark quadrat (QD) sampling in six 900 m2 plots established in P. undulatum stands on São Miguel Island. A total of 15 estimators were tested using a data resampling approach. The estimated density range (344–5056 trees/ha) was found to agree with previous studies using PCQM only. Although with a tendency to underestimate tree density (in comparison with QD), overall, T-Square sampling appeared to be the most accurate and precise method, followed by PCQM. The tree distribution pattern was found to be slightly aggregated in 4 of the 6 stands. Considering (1) the low level of bias and high precision, (2) the consistency among three estimators, (3) the possibility of use with aggregated patterns, and (4) the possibility of obtaining a larger number of independent tree parameter estimates, we recommend the use of T-Square sampling in P. undulatum stands within the framework of a biomass valorization program.
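The benchmark quadrat (QD) estimate and a data-resampling assessment of its precision can be sketched as follows. This is an illustration on simulated random points, not the authors' code, and the T-Square and PCQM estimators are omitted:

```python
import random

random.seed(7)

side, quadrat, true_density = 30.0, 5.0, 0.3   # 900 m^2 plot, 5 m cells, 3000 trees/ha
trees = [(random.uniform(0, side), random.uniform(0, side))
         for _ in range(round(true_density * side * side))]

# Benchmark quadrat (QD) sampling: count trees per cell.
k = int(side / quadrat)
counts = [[0] * k for _ in range(k)]
for x, y in trees:
    counts[min(int(x / quadrat), k - 1)][min(int(y / quadrat), k - 1)] += 1
cells = [c for row in counts for c in row]

density = sum(cells) / len(cells) / quadrat ** 2   # trees per m^2

# Data-resampling over quadrats to gauge the estimator's precision.
boot = []
for _ in range(300):
    draw = [random.choice(cells) for _ in cells]
    boot.append(sum(draw) / len(draw) / quadrat ** 2)
mean_b = sum(boot) / len(boot)
sd_b = (sum((b - mean_b) ** 2 for b in boot) / (len(boot) - 1)) ** 0.5

print(round(density * 10_000), round(sd_b * 10_000))   # trees/ha and its bootstrap SE
```

Distance-based methods such as T-Square replace the exhaustive cell counts with nearest-neighbour measurements, trading a little bias for far less field effort.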
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
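Of the two summary-statistic classes, the folded allele frequency spectrum is particularly simple to compute from unpolarized data, since folding replaces the (unknown) derived-allele count with the minor-allele count. A small Python sketch with made-up genotypes:

```python
def folded_afs(genotypes):
    # genotypes: per-SNP lists of 0/1 allele copies across 2N haploid
    # genomes (unphased data can be collapsed to allele counts first).
    # Folding uses the minor-allele count, so no ancestral allele is needed.
    two_n = len(genotypes[0])
    spectrum = [0] * (two_n // 2 + 1)
    for snp in genotypes:
        count = sum(snp)
        spectrum[min(count, two_n - count)] += 1
    return spectrum

snps = [
    [0, 0, 0, 1],   # singleton -> minor-allele count 1
    [1, 1, 1, 0],   # also folds to 1
    [0, 1, 1, 0],   # minor-allele count 2
    [1, 1, 1, 1],   # monomorphic in the sample -> bin 0
]
print(folded_afs(snps))   # → [1, 2, 1]
```

This robustness to misidentified ancestral alleles is one reason the folded spectrum is preferred when outgroup information is unreliable.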
Yaren, Ozlem; Alto, Barry W; Gangodkar, Priyanka V; Ranade, Shatakshi R; Patil, Kunal N; Bradley, Kevin M; Yang, Zunyi; Phadke, Nikhil; Benner, Steven A
2017-04-20
Zika, dengue, and chikungunya are three mosquito-borne viruses having overlapping transmission vectors. They cause diseases having similar symptoms in human patients, but requiring different immediate management steps. Therefore, rapid detection of these viruses in patient samples and trapped mosquitoes is needed. The need for speed precludes any assay that requires complex up-front sample preparation, such as extraction of nucleic acids from the sample. Also precluded in robust point-of-sampling assays is downstream release of the amplicon mixture, as this risks contamination of future samples that will give false positives. Procedures are reported that directly test urine and plasma (for patient diagnostics) or crushed mosquito carcasses (for environmental surveillance). Carcasses are captured on paper samples carrying quaternary ammonium groups (Q-paper), which may be directly introduced into the assay. To avoid the time and instrumentation requirements of PCR, the procedure uses loop-mediated isothermal amplification (LAMP). Downstream detection is done in sealed tubes, with dTTP-dUTP mixtures in the LAMP with a thermolabile uracil DNA glycosylase (UDG); this offers a second mechanism to prevent forward contamination. Reverse transcription LAMP (RT-LAMP) reagents are distributed dry without requiring a continuous chain of refrigeration. The tests detect viral RNA in unprocessed urine and other biological samples, distinguishing Zika, chikungunya, and dengue in urine and in mosquitoes infected with live Zika and chikungunya viruses. The limits of detection (LODs) are ~0.71 pfu equivalent viral RNAs for Zika, ~1.22 pfu equivalent viral RNAs for dengue, and ~38 copies of chikungunya viral RNA. A handheld, battery-powered device with an orange filter was constructed to visualize the output. Preliminary data showed that this architecture, working with pre-prepared tubes holding lyophilized reagent/enzyme mixtures and shipped without a chain of refrigeration, also worked with human
Hamilton, A J; Waters, E K; Kim, H J; Pak, W S; Furlong, M J
2009-06-01
The combined action of two lepidopteran pests, Plutella xylostella L. (Plutellidae) and Pieris rapae L. (Pieridae), causes significant yield losses in cabbage (Brassica oleracea variety capitata) crops in the Democratic People's Republic of Korea. Integrated pest management (IPM) strategies for these cropping systems are in their infancy, and sampling plans have not yet been developed. We used statistical resampling to assess the performance of fixed sample size plans (ranging from 10 to 50 plants). First, the precision (D = SE/mean) of the plans in estimating the population mean was assessed. There was substantial variation in achieved D for all sample sizes, and sample sizes of at least 20 and 45 plants were required to achieve the acceptable precision level of D ≤ 0.3 at least 50 and 75% of the time, respectively. Second, the performance of the plans in classifying the population density relative to an economic threshold (ET) was assessed. To account for the different damage potentials of the two species, the ETs were defined in terms of standard insects (SIs), where 1 SI = 1 P. rapae = 5 P. xylostella larvae. The plans were implemented using different ETs for the three growth stages of the crop: precupping (1 SI/plant), cupping (0.5 SI/plant), and heading (4 SI/plant). Improvement in classification certainty with increasing sample size could be seen in the increasing steepness of the operating characteristic curves. Rather than prescribe a particular plan, we suggest that the results of these analyses be used to inform practitioners of the relative merits of the different sample sizes.
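The resampling assessment of a fixed-sample-size plan's precision (D = SE/mean) can be sketched as follows. The per-plant count distribution here is invented; only the procedure (draw fixed-size samples repeatedly and record how often D ≤ 0.3 is achieved) follows the study:

```python
import random

random.seed(1)

# Illustrative per-plant infestation in standard insects (SIs); in the study
# 1 SI = 1 P. rapae = 5 P. xylostella larvae. A right-skewed distribution
# stands in for real field counts.
field = [random.gammavariate(0.5, 2.0) for _ in range(500)]

def achieved_precision(sample_size):
    # Draw one fixed-size sample of plants and return D = SE / mean.
    s = random.sample(field, sample_size)
    mean = sum(s) / len(s)
    var = sum((x - mean) ** 2 for x in s) / (len(s) - 1)
    return (var / len(s)) ** 0.5 / mean

def fraction_meeting(sample_size, target=0.3, reps=400):
    # Share of resampled plans achieving D <= target.
    return sum(achieved_precision(sample_size) <= target
               for _ in range(reps)) / reps

for n in (10, 20, 45):
    print(n, fraction_meeting(n))
```

Because D varies from draw to draw, a plan is judged by how *often* it meets the precision target, not by its average precision alone.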
Diel differences in 0+ fish samples: effect of river size and habitat
Czech Academy of Sciences Publication Activity Database
Janáč, Michal; Jurajda, Pavel
2013-01-01
Roč. 29, č. 1 (2013), s. 90-98 ISSN 1535-1459 R&D Projects: GA MŠk LC522 Institutional research plan: CEZ:AV0Z60930519 Keywords : young-of-the-year fish * diurnal * nocturnal * habitat complexity * stream size Subject RIV: EG - Zoology Impact factor: 1.971, year: 2013
General power and sample size calculations for high-dimensional genomic data
van Iterson, M.; van de Wiel, M.; Boer, J.M.; Menezes, R.
2013-01-01
In the design of microarray or next-generation sequencing experiments it is crucial to choose the appropriate number of biological replicates. As often the number of differentially expressed genes and their effect sizes are small and too few replicates will lead to insufficient power to detect
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Family size, birth order, and intelligence in a large South American sample.
Velandia, W; Grandon, G M; Page, E B
1978-01-01
The confluence theory, which hypothesizes a relationship between intellectual development, birth order, and family size, was examined in a Colombian study of more than 36,000 college applicants. The results of the study did not support the confluence theory. The confluence theory states that the intellectual development of a child is related to the average mental age of the members of his family at the time of his birth. The mental age of the parents is always assigned a value of 30, and siblings are given scores equivalent to their chronological age at the birth of the subject. Therefore, the average mental age of family members for a 1st-born child is 30, or 60 divided by 2. If a subject is born into a family consisting of 2 parents and a 6-year-old sibling, the average mental age of family members is 22, or 66 divided by 3. The average mental age of family members tends, therefore, to decrease with each birth order. The hypothesis derived from the confluence theory states that there is a positive relationship between the average mental age of a subject's family and the subject's performance on intelligence tests. In the Colombian study, data on family size, birth order and socioeconomic status were derived from college application forms. Intelligence test scores for each subject were obtained from college entrance exams. The mental age of each applicant's family at the time of the applicant's birth was calculated. Multiple correlation analysis and path analysis were used to assess the relationship. Results were: 1) the test scores of subjects from families with 2, 3, 4, and 5 children were higher than the test scores of 1st-born subjects; 2) the rank order of intelligence by family size was 3, 4, 5, 2, 6, 1 instead of the hypothesized 1, 2, 3, 4, 5, 6; and 3) only 1% of the variability in test scores was explained by the variables of birth order and family size. Further analysis indicated that socioeconomic status was a far more powerful explanatory variable than family size.
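The confluence index described above is simple arithmetic, which a short sketch makes concrete (the even birth spacing is an assumption made for illustration):

```python
def confluence_index(birth_order, spacing=2, parent_mental_age=30):
    # Average mental age of the family at the subject's birth: two parents
    # fixed at mental age 30, and each elder sibling contributing its
    # chronological age (siblings assumed `spacing` years apart).
    members = [parent_mental_age, parent_mental_age]
    members += [spacing * k for k in range(1, birth_order)]
    return sum(members) / len(members)

for order in range(1, 5):
    print(order, round(confluence_index(order), 1))   # decreases with birth order
```

The abstract's worked case is recovered directly: a second child with a 6-year-old sibling gives (30 + 30 + 6) / 3 = 22.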
Directory of Open Access Journals (Sweden)
Cynthia Stretch
Top differentially expressed gene lists are often inconsistent between studies, and it has been suggested that small sample sizes contribute to lack of reproducibility and poor prediction accuracy in discriminative models. We considered sex differences (69♂, 65♀) in 134 human skeletal muscle biopsies using DNA microarray. The full dataset and subsamples thereof, from n = 10 (5♂, 5♀) to n = 120 (60♂, 60♀), were used to assess the effect of sample size on the differential expression of single genes, gene rank order and prediction accuracy. Using our full dataset (n = 134), we identified 717 differentially expressed transcripts (p<0.0001) and we were able to predict sex with ~90% accuracy, both within our dataset and on external datasets. Both p-values and rank order of top differentially expressed genes became more variable using smaller subsamples. For example, at n = 10 (5♂, 5♀), no gene was considered differentially expressed at p<0.0001 and prediction accuracy was ~50% (no better than chance). We found that sample size clearly affects microarray analysis results; small sample sizes result in unstable gene lists and poor prediction accuracy. We anticipate this will apply to other phenotypes, in addition to sex.
Boef, Anna G C; Dekkers, Olaf M; Vandenbroucke, Jan P; le Cessie, Saskia
2014-11-01
Instrumental variable (IV) analysis is promising for estimation of therapeutic effects from observational data as it can circumvent unmeasured confounding. However, even if IV assumptions hold, IV analyses will not necessarily provide an estimate closer to the true effect than conventional analyses as this depends on the estimates' bias and variance. We investigated how estimates from standard regression (ordinary least squares [OLS]) and IV (two-stage least squares) regression compare on mean squared error (MSE). We derived an equation for approximation of the threshold sample size, above which IV estimates have a smaller MSE than OLS estimates. Next, we performed simulations, varying sample size, instrument strength, and level of unmeasured confounding. IV assumptions were fulfilled by design. Although biased, OLS estimates were closer on average to the true effect than IV estimates at small sample sizes because of their smaller variance. The threshold sample size above which IV analysis outperforms OLS regression depends on instrument strength and strength of unmeasured confounding but will usually be large given the typical moderate instrument strength in medical research. IV methods are of most value in large studies if considerable unmeasured confounding is likely and a strong and plausible instrument is available.
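The bias-variance trade-off between OLS and IV estimates can be illustrated by simulation. This is a hedged sketch, not the authors' derivation, and all parameter values are invented; with a strong instrument and a reasonably large sample, the IV estimate is nearly unbiased while OLS is not:

```python
import random

random.seed(3)

def simulate(n, beta=1.0, instrument_strength=1.0, confounding=1.0):
    # One dataset: u confounds both x and y; z is a valid instrument
    # (affects y only through x and is independent of u).
    z = [random.gauss(0, 1) for _ in range(n)]
    u = [random.gauss(0, 1) for _ in range(n)]
    x = [instrument_strength * zi + confounding * ui + random.gauss(0, 1)
         for zi, ui in zip(z, u)]
    y = [beta * xi + confounding * ui + random.gauss(0, 1)
         for xi, ui in zip(x, u)]
    return z, x, y

def ols(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def iv(z, x, y):
    # Single-instrument two-stage least squares = ratio of covariances.
    mz, mx, my = sum(z) / len(z), sum(x) / len(x), sum(y) / len(y)
    czy = sum((a - mz) * (b - my) for a, b in zip(z, y))
    czx = sum((a - mz) * (b - mx) for a, b in zip(z, x))
    return czy / czx

reps, n, beta = 300, 500, 1.0
ols_est, iv_est = [], []
for _ in range(reps):
    z, x, y = simulate(n, beta)
    ols_est.append(ols(x, y))
    iv_est.append(iv(z, x, y))

bias = lambda e: sum(e) / len(e) - beta
mse = lambda e: sum((v - beta) ** 2 for v in e) / len(e)
print(round(bias(ols_est), 2), round(bias(iv_est), 2))
print(round(mse(ols_est), 4), round(mse(iv_est), 4))
```

Shrinking n in this sketch inflates the IV variance until its MSE overtakes the (biased but stable) OLS estimate, which is the threshold-sample-size phenomenon the abstract describes.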
Heymann, D.; Lakatos, S.; Walton, J. R.
1973-01-01
Review of the results of inert gas measurements performed on six grain-size fractions and two single particles from four samples of Luna 20 material. Presented and discussed data include the inert gas contents, element and isotope systematics, radiation ages, and Ar-36/Ar-40 systematics.
Algina, James; Keselman, H. J.
2008-01-01
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection.
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
Hans T. Schreuder; Jin-Mann S. Lin; John Teply
2000-01-01
The Forest Inventory and Analysis units in the USDA Forest Service have been mandated by Congress to go to an annualized inventory where a certain percentage of plots, say 20 percent, will be measured in each State each year. Although this will result in an annual sample size that will be too small for reliable inference for many areas, it is a sufficiently large...
Tang, Yongqiang
2017-05-25
We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
Basic distribution free identification tests for small size samples of environmental data
International Nuclear Information System (INIS)
Federico, A.G.; Musmeci, F.
1998-01-01
Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain a small number of data points, and the assumption of normal distributions is often unrealistic. On the other hand, the spread of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem, introducing two feasible non-parametric approaches based on intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data.
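One distribution-free option of the kind discussed, a full-resampling (exact permutation) test on the difference in means, can be sketched in Python (illustrative data; this is not the program described in the record):

```python
from itertools import combinations

def permutation_test(a, b):
    # Exact two-sample permutation test on the difference in means:
    # enumerate every relabelling of the pooled data and count how often
    # the relabelled difference is at least as extreme as the observed one.
    pooled = a + b
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    extreme = total = 0
    for idx in combinations(range(len(pooled)), len(a)):
        ga = [pooled[i] for i in idx]
        gb = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(ga) / len(ga) - sum(gb) / len(gb))
        extreme += diff >= observed - 1e-12
        total += 1
    return extreme / total

same = permutation_test([1.1, 2.0, 1.4, 1.8], [1.5, 1.2, 1.9, 1.6])
shifted = permutation_test([1.1, 2.0, 1.4, 1.8], [5.5, 5.2, 5.9, 5.6])
print(same, shifted)
```

No normality assumption is needed: the null distribution is built entirely from the observed data, which is what makes such tests attractive for small environmental samples.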
Measuring proteins with greater speed and resolution while reducing sample size
Hsieh, Vincent H.; Wyatt, Philip J.
2017-01-01
A multi-angle light scattering (MALS) system, combined with chromatographic separation, directly measures the absolute molar mass, size and concentration of the eluate species. The measurement of these crucial properties in solution is essential in basic macromolecular characterization and all research and production stages of bio-therapeutic products. We developed a new MALS methodology that has overcome the long-standing, stubborn barrier to microliter-scale peak volumes and achieved the hi...
Flaw-size measurement in a weld samples by ultrasonic frequency analysis
International Nuclear Information System (INIS)
Adler, L.; Cook, K.V.; Whaley, H.L. Jr.; McClung, R.W.
1975-01-01
An ultrasonic frequency-analysis technique was developed and applied to characterize flaws in an 8-in. (203-mm) thick heavy-section steel weld specimen. The technique uses a multitransducer system. The spectrum of the received broad-band signal is frequency-analyzed at two different receivers for each flaw. From the two spectra, the size and orientation of the flaw are determined using an analytic model proposed earlier.
Ulusoy, Halil Ibrahim
2014-01-01
A new micelle-mediated extraction method was developed for preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and Ponpe 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micelle medium. The main factors affecting cloud point extraction efficiency, such as pH of the medium, concentrations of TAC and Ponpe 7.5, and equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 microg/L, and the RSD for five replicate measurements of 100 microg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.
Buss, Daniel F; Borges, Erika L
2008-01-01
This study is part of the effort to test and establish Rapid Bioassessment Protocols (RBP) using benthic macroinvertebrates as indicators of the water quality of wadeable streams in south-east Brazil. We compared the cost-effectiveness of sampling devices frequently used in RBPs, Surber and Kick-net samplers, and of three mesh sizes (125, 250 and 500 μm). A total of 126,815 benthic macroinvertebrates were collected, representing 57 families. Samples collected with the Kick method had significantly higher richness and BMWP scores than Surber samples, with no significant increase in effort, measured as the time needed to process samples. No significant differences were found between samplers in the cost/effectiveness ratio. Considering mesh sizes, finer meshes yielded significantly higher abundance and required more sample-processing time, but no significant difference was found in taxa richness or BMWP scores. As a consequence, the 500 μm mesh had better cost/effectiveness ratios. Therefore, we support the use of a kick-net with a mesh size of 500 μm for macroinvertebrate sampling in RBPs using family level in streams of similar characteristics in Brazil.
International Nuclear Information System (INIS)
John L. Bowen; Rowena Gonzalez; David S. Shafer
2001-01-01
As part of the preliminary site characterization conducted for Project 57, soil samples were collected for separation into several size fractions using the Suspended Soil Particle Sizing System (SSPSS). Soil samples were collected specifically for separation by the SSPSS at three general locations in the deposited Project 57 plume, the projected radioactivity of which ranged from 100 to 600 pCi/g. The primary purpose in focusing on samples with this level of activity is that it would represent anticipated residual soil contamination levels at the site after corrective actions are completed. Consequently, the results of the SSPSS analysis can contribute to dose calculations and corrective action-level determinations for future land-use scenarios at the site.
Olberg, Britta; Perleth, Matthias; Felgentraeger, Katja; Schulz, Sandra; Busse, Reinhard
2017-01-01
The aim of this study was to assess the quality of reporting sample size calculation and underlying design assumptions in pivotal trials of high-risk medical devices (MDs) for neurological conditions. Systematic review of research protocols for publicly registered randomized controlled trials (RCTs). In the absence of a published protocol, principal investigators were contacted for additional data. To be included, trials had to investigate a high-risk MD, registered between 2005 and 2015, with indications stroke, headache disorders, and epilepsy as case samples within central nervous system diseases. Extraction of key methodological parameters for sample size calculation was performed independently and peer-reviewed. In a final sample of seventy-one eligible trials, we collected data from thirty-one trials. Eighteen protocols were obtained from the public domain or principal investigators. Data availability decreased during the extraction process, with almost all data available for stroke-related trials. Of the thirty-one trials with sample size information available, twenty-six reported a predefined calculation and underlying assumptions. Justification was given in twenty and evidence for parameter estimation in sixteen trials. Estimates were most often based on previous research, including RCTs and observational data. Observational data were predominantly represented by retrospective designs. Other references for parameter estimation indicated a lower level of evidence. Our systematic review of trials on high-risk MDs confirms previous research, which has documented deficiencies regarding data availability and a lack of reporting on sample size calculation. More effort is needed to ensure both relevant sources, that is, original research protocols, to be publicly available and reporting requirements to be standardized.
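For context, the predefined sample size calculation these protocols should report often reduces, for a binary endpoint, to the standard two-proportion formula. This is a generic sketch, not taken from any of the reviewed trials, and the stroke example rates are invented:

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    # Standard normal-approximation sample size for comparing two
    # independent proportions with a two-sided test.
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((za + zb) ** 2 * var / (p1 - p2) ** 2)

# E.g., detecting an improvement from 40% to 50% good outcomes after stroke:
print(n_per_arm(0.40, 0.50))   # → 385 patients per arm
```

Reporting the assumed rates, alpha, and power alongside the resulting n is exactly the transparency the review found lacking in many protocols.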
International Nuclear Information System (INIS)
Mandyla, Spyridoula P.; Tsogas, George Z.; Vlessidis, Athanasios G.; Giokas, Dimosthenis L.
2017-01-01
Highlights: • A new method has been developed to determine gold nanoparticles in water samples. • Extraction was achieved by cloud point extraction. • A nano-hybrid assembly between AuNPs and dithiol-coated quantum dots was formulated. • Detection was accomplished at pico-molar levels by second-order light scattering. • The method was selective against ionic gold and other nanoparticle species. - Abstract: This work presents a new method for the sensitive and selective determination of gold nanoparticles in water samples. The method combines a sample preparation and enrichment step based on cloud point extraction with a new detection motif that relies on the optical incoherent light scattering of a nano-hybrid assembly formed by hydrogen-bond interactions between gold nanoparticles and dithiothreitol-functionalized CdS quantum dots. The experimental parameters affecting the extraction and detection of gold nanoparticles were optimized, and the method was evaluated for the analysis of gold nanoparticles of variable size and surface coating. The selectivity of the method against gold ions and other nanoparticle species was also evaluated under conditions reminiscent of those usually found in natural water samples. The developed method was applied to the analysis of gold nanoparticles in natural waters and wastewater with satisfactory results in terms of sensitivity (detection limit at the low pmol L⁻¹ level), recoveries (>80%) and reproducibility (<9%). Compared to other methods employing molecular spectrometry for metal nanoparticle analysis, the developed method offers improved sensitivity and is easy to operate, thus providing an additional tool for the monitoring and assessment of nanoparticle toxicity and hazards in the environment.
Energy Technology Data Exchange (ETDEWEB)
Mandyla, Spyridoula P.; Tsogas, George Z.; Vlessidis, Athanasios G.; Giokas, Dimosthenis L., E-mail: dgiokas@cc.uoi.gr
2017-02-05
Highlights: • A new method has been developed to determine gold nanoparticles in water samples. • Extraction was achieved by cloud point extraction. • A nano-hybrid assembly between AuNPs and dithiol-coated quantum dots was formulated. • Detection was accomplished at pico-molar levels by second-order light scattering. • The method was selective against ionic gold and other nanoparticle species. - Abstract: This work presents a new method for the sensitive and selective determination of gold nanoparticles in water samples. The method combines a sample preparation and enrichment step based on cloud point extraction with a new detection motif that relies on the optical incoherent light scattering of a nano-hybrid assembly formed by hydrogen-bond interactions between gold nanoparticles and dithiothreitol-functionalized CdS quantum dots. The experimental parameters affecting the extraction and detection of gold nanoparticles were optimized, and the method was evaluated for the analysis of gold nanoparticles of variable size and surface coating. The selectivity of the method against gold ions and other nanoparticle species was also evaluated under conditions reminiscent of those usually found in natural water samples. The developed method was applied to the analysis of gold nanoparticles in natural waters and wastewater with satisfactory results in terms of sensitivity (detection limit at the low pmol L⁻¹ level), recoveries (>80%) and reproducibility (<9%). Compared to other methods employing molecular spectrometry for metal nanoparticle analysis, the developed method offers improved sensitivity and is easy to operate, thus providing an additional tool for the monitoring and assessment of nanoparticle toxicity and hazards in the environment.
Kim, Taehong; O'Neal, Dennis L; Ortiz, Carlos
2006-09-01
Air duct systems in nuclear facilities must be monitored with continuous sampling in case of an accidental release of airborne radionuclides. The purpose of this work is to identify the air sampling locations where the velocity and contaminant concentrations fall below the 20% coefficient of variation required by the American National Standards Institute/Health Physics Society N13.1-1999. Experiments on velocity and tracer gas concentration were conducted on a generic "T" mixing system which included combinations of three sub-ducts, one main duct, and air velocities from 0.5 to 2 m s⁻¹ (100 to 400 fpm). The experimental results suggest that turbulent mixing provides the accepted velocity coefficients of variation after 6 hydraulic diameters downstream of the T-junction. About 95% of the cases achieved coefficients of variation below 10% by 6 hydraulic diameters. However, above a velocity ratio (velocity in the sub-duct/velocity in the main duct) of 2, velocity profiles became uniform over a shorter distance downstream of the T-junction as the velocity ratio increased. For the tracer gas concentration, the distance needed for the coefficients of variation to drop below 20% decreased with increasing velocity ratio due to the sub-duct airflow momentum. The results may apply to other duct systems with similar geometries and, ultimately, be a basis for selecting a proper sampling location under the requirements of single-point representative sampling.
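The 20% coefficient-of-variation criterion above reduces to a simple computation over the traverse-point measurements at a candidate cross-section. A minimal sketch, with made-up probe readings rather than data from the study:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Tracer-gas concentrations measured on a traverse grid at one duct
# cross-section (illustrative numbers only).
probe_readings = [101.0, 98.5, 100.2, 99.1, 102.3, 97.8]

cv = coefficient_of_variation(probe_readings)
# ANSI/HPS N13.1-1999 accepts the location if the CV stays below 20%.
acceptable = cv <= 20.0
```

In the experiments above, a location 6 hydraulic diameters downstream of the T-junction would typically pass this check for both velocity and concentration.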
Size selectivity of standardized multimesh gillnets in sampling coarse European species
Czech Academy of Sciences Publication Activity Database
Prchalová, Marie; Kubečka, Jan; Říha, Milan; Mrkvička, Tomáš; Vašek, Mojmír; Jůza, Tomáš; Kratochvíl, Michal; Peterka, Jiří; Draštík, Vladislav; Křížek, J.
2009-01-01
Roč. 96, č. 1 (2009), s. 51-57 ISSN 0165-7836. [Fish Stock Assessment Methods for Lakes and Reservoirs: Towards the true picture of fish stock. České Budějovice, 11.09.2007-15.09.2007] R&D Projects: GA AV ČR(CZ) 1QS600170504; GA ČR(CZ) GA206/07/1392 Institutional research plan: CEZ:AV0Z60170517 Keywords : gillnet * seine * size selectivity * roach * perch * rudd Subject RIV: EH - Ecology, Behaviour Impact factor: 1.531, year: 2009
Energy Technology Data Exchange (ETDEWEB)
Ulusoy, Halil Ibrahim, E-mail: hiulusoy@yahoo.com [University of Cumhuriyet, Faculty of Science, Department of Chemistry, TR-58140, Sivas (Turkey); Akcay, Mehmet; Ulusoy, Songuel; Guerkan, Ramazan [University of Cumhuriyet, Faculty of Science, Department of Chemistry, TR-58140, Sivas (Turkey)
2011-10-10
Graphical abstract: The possible complex formation mechanism for ultra-trace As determination. Highlights: • A CPE/HGAAS system for arsenic determination and speciation in real samples has been applied for the first time. • The proposed method has the lowest detection limit compared with those of similar CPE studies in the literature. • The linear range of the method is wide and suitable for application to real samples. - Abstract: Cloud point extraction (CPE) methodology has been successfully employed for the preconcentration of ultra-trace arsenic species in aqueous samples prior to hydride generation atomic absorption spectrometry (HGAAS). As(III) formed an ion-pairing complex with Pyronine B in the presence of sodium dodecyl sulfate (SDS) at pH 10.0 and was extracted into the non-ionic surfactant polyethylene glycol tert-octylphenyl ether (Triton X-114). After phase separation, the surfactant-rich phase was diluted with 2 mL of 1 M HCl and 0.5 mL of 3.0% (w/v) Antifoam A. Under the optimized conditions, a preconcentration factor of 60 and a detection limit of 0.008 μg L⁻¹ with a correlation coefficient of 0.9918 were obtained with a calibration curve in the range of 0.03-4.00 μg L⁻¹. The proposed preconcentration procedure was successfully applied to the determination of As(III) ions in certified standard water samples (TMDA-53.3 and NIST 1643e, a low-level fortified standard for trace elements) and some real samples, including natural drinking water and tap water.
Sampled-data L-infinity smoothing: fixed-size ARE solution with free hold function
Meinsma, Gjerrit; Mirkin, Leonid
The problem of estimating an analog signal from its noisy sampled measurements is studied in the L-infinity (induced L2-norm) framework. The main emphasis is placed on relaxing causality requirements. Namely, it is assumed that l future measurements are available to the estimator, which corresponds
In situ detection of small-size insect pests sampled on traps using multifractal analysis
Xia, Chunlei; Lee, Jang-Myung; Li, Yan; Chung, Bu-Keun; Chon, Tae-Soo
2012-02-01
We introduce a multifractal analysis for detecting small-size pests (e.g., whiteflies) in images from a sticky trap in situ. An automatic attraction system is utilized for collecting pests from greenhouse plants. We applied multifractal analysis to the segmentation of whitefly images based on local singularity and global image characteristics. According to the theory of multifractal dimension, candidate blobs of whiteflies are initially defined from the sticky-trap image. Two schemes, fixed thresholding and regional minima extraction, were utilized for feature extraction of candidate whitefly image areas. The experiment was conducted with field images in a greenhouse. Detection results were compared with other adaptive segmentation algorithms. Values of the F-measure, combining precision and recall, were higher for the proposed multifractal analysis (96.5%) compared with conventional methods such as Watershed (92.2%) and Otsu (73.1%). The true positive rate of multifractal analysis was 94.3% and the false positive rate was at a minimal level of 1.3%. Detection performance was further tested via human observation. The agreement between manual and automatic counting was remarkably higher with multifractal analysis (R2=0.992) compared with Watershed (R2=0.895) and Otsu (R2=0.353), ensuring overall detection of the small-size pests is most feasible with multifractal analysis in field conditions.
Aznar, Ramón; Barahona, Francisco; Geiss, Otmar; Ponti, Jessica; José Luis, Tadeo; Barrero-Moreno, Josefa
2017-12-01
Single particle-inductively coupled plasma mass spectrometry (SP-ICPMS) is a promising technique able to generate the number-based particle size distribution (PSD) of nanoparticles (NPs) in aqueous suspensions. However, SP-ICPMS analysis is not yet consolidated as a routine technique and is not typically applied to real test samples with unknown composition. This work presents a methodology to detect, quantify and characterise the number-based PSD of Ag-NPs in different environmental aqueous samples (drinking and lake waters), aqueous samples derived from migration tests and consumer products using SP-ICPMS. The procedure is built from a pragmatic view and involves the analysis of serial dilutions of the original sample until no variation in the measured size values is observed while keeping particle counts proportional to the dilution applied. After evaluation of the analytical figures of merit, the SP-ICPMS method exhibited excellent linearity (r² > 0.999) in the range (1-25) × 10⁴ particles mL⁻¹ for 30, 50 and 80 nm nominal size Ag-NP standards. The precision in terms of repeatability was studied according to the RSDs of the measured size and particle number concentration values, and a t-test (p = 95%) at the two intermediate concentration levels was applied to determine the bias of SP-ICPMS size values compared to reference values. The method showed good repeatability and an overall acceptable bias in the studied concentration range. The experimental minimum detectable size for Ag-NPs ranged between 12 and 15 nm. Additionally, results derived from direct SP-ICPMS analysis were compared to the results conducted for fractions collected by asymmetric flow-field flow fractionation and supernatant fractions after centrifugal filtration. The method has been successfully applied to determine the presence of Ag-NPs in: lake water; tap water; tap water filtered by a filter jar; seven different liquid silver-based consumer products; and migration solutions (pure water and
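The stopping rule described above, dilute until the measured size stabilises while particle counts remain proportional to the dilution factor, can be sketched as a simple consistency check. The tolerances and numbers below are illustrative assumptions, not values from the study:

```python
def dilution_converged(dilution_factors, particle_counts, sizes_nm,
                       count_tol=0.15, size_tol=0.05):
    """Counts multiplied by the dilution factor should be roughly constant
    (proportionality), and the measured size should not drift across the
    dilution series (stability). Tolerances are illustrative."""
    normalized = [c * d for c, d in zip(particle_counts, dilution_factors)]
    count_ok = (max(normalized) - min(normalized)) / max(normalized) <= count_tol
    size_ok = (max(sizes_nm) - min(sizes_nm)) / max(sizes_nm) <= size_tol
    return count_ok and size_ok

# A well-behaved series: counts scale with dilution, size is stable.
ok = dilution_converged([10, 100, 1000], [52000, 5100, 498], [49.8, 50.1, 50.3])
# A series where counts stop tracking the dilution factor fails the check.
bad = dilution_converged([10, 100, 1000], [52000, 5100, 5000], [49.8, 50.1, 50.3])
```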
Directory of Open Access Journals (Sweden)
John M Lachin
Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to
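The sample-size logic described above can be sketched with the usual normal-approximation formula applied on the transformed scale. The residual SD and effect size below are placeholders for illustration, not the TrialNet estimates:

```python
import math

# Standard-normal quantiles for two-sided alpha = 0.05 and power = 0.80.
Z_ALPHA = 1.959964   # z_{1 - 0.05/2}
Z_BETA = 0.841621    # z_{0.80}

def n_per_arm(sd, delta):
    """Normal-approximation sample size per arm for detecting a mean
    difference `delta` on the transformed scale with residual SD `sd`:
        n = 2 * ((z_alpha + z_beta) * sd / delta)**2
    """
    return math.ceil(2 * ((Z_ALPHA + Z_BETA) * sd / delta) ** 2)

# Hypothetical values on the log(x + 1) scale: residual SD 0.45 and a
# treatment effect equal to a 25% higher geometric mean, delta = log(1.25).
delta = math.log(1.25)
n = n_per_arm(sd=0.45, delta=delta)  # -> 64 per arm under these assumptions
```

Because residual variation differs by age group and follow-up duration, the same formula gives different n for each stratum, which is exactly why the abstract reports age- and time-specific variance estimates.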
Energy Technology Data Exchange (ETDEWEB)
Li, Shan, E-mail: ls_tuzi@163.com; Wang, Mei, E-mail: wmei02@163.com; Zhong, Yizhou, E-mail: yizhz@21cn.com; Zhang, Zehua, E-mail: kazuki.0101@aliyun.com; Yang, Bingyi, E-mail: e_yby@163.com
2015-09-01
A new cloud point extraction technique was established and used for the determination of trace inorganic arsenic species in water samples combined with hydride generation atomic fluorescence spectrometry (HGAFS). As(III) and As(V) were complexed with ammonium pyrrolidinedithiocarbamate and molybdate, respectively. The complexes were quantitatively extracted with the non-ionic surfactant (Triton X-114) by centrifugation. After addition of antifoam, the surfactant-rich phase containing As(III) was diluted with 5% HCl for HGAFS determination. For As(V) determination, 50% HCl was added to the surfactant-rich phase, and the mixture was placed in an ultrasonic bath at 70 °C for 30 min. As(V) was reduced to As(III) with thiourea–ascorbic acid solution, followed by HGAFS. Under the optimum conditions, limits of detection of 0.009 and 0.012 μg/L were obtained for As(III) and As(V), respectively. Concentration factors of 9.3 and 7.9, respectively, were obtained for a 50 mL sample. The precisions were 2.1% for As(III) and 2.3% for As(V). The proposed method was successfully used for the determination of trace As(III) and As(V) in water samples, with satisfactory recoveries. - Highlights: • Cloud point extraction was firstly established to determine trace inorganic arsenic(As) species combining with HGAFS. • Separate As(III) and As(V) determinations improve the accuracy. • Ultrasonic release of complexed As(V) enables complete As(V) reduction to As(III). • Direct HGAFS analysis can be performed.
Ultrasonic detection and sizing of cracks in cast stainless steel samples
International Nuclear Information System (INIS)
Allidi, F.; Edelmann, X.; Phister, O.; Hoegberg, K.; Pers-Anderson, E.B.
1986-01-01
The test consisted of 15 samples of cast stainless steel, each with a weld. Some of the specimens were provided with artificially made thermal fatigue cracks. The inspection was performed with the P-scan method. The investigations showed an improvement in recognizability relative to earlier investigations. One probe, the dual type with longitudinal waves at 45 degrees and low frequency (0.5-1 MHz), gives the best results. (G.B.)
Sufficient Sample Size and Power in Multilevel Ordinal Logistic Regression Models
Directory of Open Access Journals (Sweden)
Sabz Ali
2016-01-01
Full Text Available For most of the time, biomedical researchers have been dealing with ordinal outcome variable in multilevel models where patients are nested in doctors. We can justifiably apply multilevel cumulative logit model, where the outcome variable represents the mild, severe, and extremely severe intensity of diseases like malaria and typhoid in the form of ordered categories. Based on our simulation conditions, Maximum Likelihood (ML) method is better than Penalized Quasilikelihood (PQL) method in three-category ordinal outcome variable. PQL method, however, performs equally well as ML method where five-category ordinal outcome variable is used. Further, to achieve power more than 0.80, at least 50 groups are required for both ML and PQL methods of estimation. It may be pointed out that, for five-category ordinal response variable model, the power of PQL method is slightly higher than the power of ML method.
Sufficient Sample Size and Power in Multilevel Ordinal Logistic Regression Models.
Ali, Sabz; Ali, Amjad; Khan, Sajjad Ahmad; Hussain, Sundas
2016-01-01
For most of the time, biomedical researchers have been dealing with ordinal outcome variable in multilevel models where patients are nested in doctors. We can justifiably apply multilevel cumulative logit model, where the outcome variable represents the mild, severe, and extremely severe intensity of diseases like malaria and typhoid in the form of ordered categories. Based on our simulation conditions, Maximum Likelihood (ML) method is better than Penalized Quasilikelihood (PQL) method in three-category ordinal outcome variable. PQL method, however, performs equally well as ML method where five-category ordinal outcome variable is used. Further, to achieve power more than 0.80, at least 50 groups are required for both ML and PQL methods of estimation. It may be pointed out that, for five-category ordinal response variable model, the power of PQL method is slightly higher than the power of ML method.
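The multilevel cumulative logit model referred to above maps a linear predictor plus a group-level (doctor) random intercept to ordered-category probabilities. A minimal sketch with hypothetical thresholds and effects (not values from the paper):

```python
import math

def logistic(t):
    return 1.0 / (1.0 + math.exp(-t))

def ordinal_probs(x_beta, u_j, thresholds):
    """Category probabilities under a multilevel cumulative logit model:
        P(Y <= k | x, group j) = logistic(theta_k - x*beta - u_j)
    where u_j is the random intercept of the doctor (group) in which the
    patient is nested, and thresholds theta_k are strictly increasing."""
    cum = [logistic(th - x_beta - u_j) for th in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical three-category severity outcome (mild / severe / extremely
# severe) with thresholds (-0.5, 1.5), covariate effect x*beta = 0.8 and a
# doctor-level random intercept u_j = 0.3.
p = ordinal_probs(x_beta=0.8, u_j=0.3, thresholds=[-0.5, 1.5])
```

ML and PQL differ only in how the random intercepts u_j are integrated out when fitting this model, which is where the simulation comparison in the abstract comes in.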
Energy Technology Data Exchange (ETDEWEB)
Filik, Hayati, E-mail: filik@istanbul.edu.tr [Istanbul University, Faculty of Engineering, Department of Chemistry, Avcilar, 34320 Istanbul (Turkey); Cengel, Tayfun; Apak, Resat [Istanbul University, Faculty of Engineering, Department of Chemistry, Avcilar, 34320 Istanbul (Turkey)
2009-09-30
A cloud point extraction process using the nonionic surfactant Triton X-114 to extract molybdenum from aqueous solutions was investigated. The method is based on the complexation reaction of Mo(VI) with 1,2,5,8-tetrahydroxyanthracene-9,10-dione (quinalizarine: QA) and micelle-mediated extraction of the complex. The enriched analyte in the surfactant-rich phase was determined by graphite furnace atomic absorption spectrometry (GFAAS). The optimal extraction and reaction conditions (e.g. pH, reagent and surfactant concentrations, temperature, incubation and centrifugation times) were evaluated and optimized. Under the optimized experimental conditions, the limit of detection (LOD) for Mo(VI) was 7.0 ng L⁻¹ with a preconcentration factor of ≈25 when 10 mL of sample solution was preconcentrated to 0.4 mL. The proposed method (with extraction) showed linear calibration within the range 0.03-0.6 μg L⁻¹. The relative standard deviation (RSD) was found to be 3.7% (C(Mo(VI)) = 0.05 μg L⁻¹, n = 5) for pure standard solutions, whereas the RSD for the recoveries from real samples ranged between 2 and 8% (mean RSD = 3.9%). The method was applied to the determination of Mo(VI) in seawater and tap water samples with a recovery for the spiked samples in the range of 98-103%. The interference effect of some cations and anions was also studied. In the presence of foreign ions, no significant interference was observed. In order to verify the accuracy of the method, a certified reference water sample was analysed and the results obtained were in good agreement with the certified values.
Yin, Gaohong
2016-05-01
Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of the observed imagery. Due to the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are an inability to describe continuity features such as meandering streams or roads, or to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of the study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The Direct Sampling method was examined across a range of land cover types including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with other gap-filling approaches, which have previously been demonstrated to offer satisfactory results, under both homogeneous-area and heterogeneous-area situations. The studies show that the Direct Sampling method provides sufficiently accurate prediction results for a variety of land cover types, from homogeneous areas to heterogeneous land cover types. Likewise, it exhibits superior performance when used to fill gaps in heterogeneous land cover types without an input image, or with an input image that is temporally far from the target image, in comparison with other gap-filling approaches.
Mandyla, Spyridoula P; Tsogas, George Z; Vlessidis, Athanasios G; Giokas, Dimosthenis L
2017-02-05
This work presents a new method for the sensitive and selective determination of gold nanoparticles in water samples. The method combines a sample preparation and enrichment step based on cloud point extraction with a new detection motif that relies on the optical incoherent light scattering of a nano-hybrid assembly formed by hydrogen-bond interactions between gold nanoparticles and dithiothreitol-functionalized CdS quantum dots. The experimental parameters affecting the extraction and detection of gold nanoparticles were optimized, and the method was evaluated for the analysis of gold nanoparticles of variable size and surface coating. The selectivity of the method against gold ions and other nanoparticle species was also evaluated under conditions reminiscent of those usually found in natural water samples. The developed method was applied to the analysis of gold nanoparticles in natural waters and wastewater with satisfactory results in terms of sensitivity (detection limit at the low pmol L⁻¹ level), recoveries (>80%) and reproducibility (<9%). Compared to other methods employing molecular spectrometry for metal nanoparticle analysis, the developed method offers improved sensitivity and is easy to operate, thus providing an additional tool for the monitoring and assessment of nanoparticle toxicity and hazards in the environment. Copyright © 2016 Elsevier B.V. All rights reserved.
Yuan, Yuan
2017-12-28
Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.
International Nuclear Information System (INIS)
Polach, H.; Robertson, S.; Kaihola, L.
1982-01-01
Radiocarbon dating parameters, such as the instrumental techniques used, dating precision achieved, sample size, cost and availability of equipment and, in more detail, the merits of small gas proportional counting systems, are considered. It is shown that small counters capable of handling 10-100 mg of carbon are a viable proposition in terms of achievable precision and sample turnover, if some 10 mini-counters are operated simultaneously within the same shield. After consideration of the factors affecting the performance of a small gas proportional system, it is concluded that an automatic, labour-saving, cost-effective and efficient carbon dating system, based on some sixteen 10 mL-size counters operating in parallel, could be built using state-of-the-art knowledge and components.
Sex determination by tooth size in a sample of Greek population.
Mitsea, A G; Moraitis, K; Leon, G; Nicopoulou-Karayianni, K; Spiliopoulou, C
2014-08-01
Sex assessment from tooth measurements can be of major importance for forensic and bioarchaeological investigations, especially when only teeth or jaws are available. The purpose of this study is to assess the reliability and applicability of establishing sex identity in a sample of the Greek population using the discriminant function proposed by Rösing et al. (1995). The study comprised 172 dental casts derived from two private orthodontic clinics in Athens. The individuals were randomly selected and all had a clear medical history. The mesiodistal crown diameters of all the teeth were measured apart from those of the 3rd molars. The values quoted for the sample to which the discriminant function was first applied were similar to those obtained for the Greek sample. The results of the preliminary statistical analysis did not support the use of the specific discriminant function for a reliable determination of sex by means of the mesiodistal diameter of the teeth. However, there was considerable variation between different populations, and this might explain the lack of discriminating power of the specific function in the Greek population. In order to investigate whether a better discriminant function could be obtained using the Greek data, separate discriminant function analysis was performed on the same teeth and a different equation emerged without, however, any real improvement in the classification process, with an overall correct classification of 72%. The results showed that a considerably higher percentage of females than males were correctly classified. The results lead to the conclusion that the use of the mesiodistal diameter of teeth is not as reliable a method as one would have expected for determining the sex of human remains from a forensic context. Therefore, this method could be used only in combination with other identification approaches. Copyright © 2014. Published by Elsevier GmbH.
Li, Zipeng; Lai, Kelvin Yi-Tse; Chakrabarty, Krishnendu; Ho, Tsung-Yi; Lee, Chen-Yi
2017-12-01
Sample preparation in digital microfluidics refers to the generation of droplets with target concentrations for on-chip biochemical applications. In recent years, digital microfluidic biochips (DMFBs) have been adopted as a platform for sample preparation. However, there remain two major problems associated with sample preparation on a conventional DMFB. First, only a (1:1) mixing/splitting model can be used, leading to an increase in the number of fluidic operations required for sample preparation. Second, only a limited number of sensors can be integrated on a conventional DMFB; as a result, the latency for error detection during sample preparation is significant. To overcome these drawbacks, we adopt a next generation DMFB platform, referred to as micro-electrode-dot-array (MEDA), for sample preparation. We propose the first sample-preparation method that exploits the MEDA-specific advantages of fine-grained control of droplet sizes and real-time droplet sensing. Experimental demonstration using a fabricated MEDA biochip and simulation results highlight the effectiveness of the proposed sample-preparation method.
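The drawback of the conventional (1:1) model noted above follows from the arithmetic of equal-volume mixing: every reachable concentration is a dyadic fraction k/2ⁿ, so hitting a target requires a whole tree of mix/split operations. A minimal sketch of that model (illustrative only; MEDA's fine-grained droplet sizing relaxes exactly this constraint):

```python
from fractions import Fraction

def mix(c1, c2):
    """A (1:1) mix/split merges two unit droplets and splits them evenly,
    so the resulting concentration is the average of the two inputs."""
    return (Fraction(c1) + Fraction(c2)) / 2

# Reaching target concentration 5/8 from raw sample (1) and buffer (0)
# takes three (1:1) operations:
step1 = mix(1, 0)          # 1/2
step2 = mix(step1, 1)      # 3/4
step3 = mix(step2, step1)  # 5/8
```

A MEDA-style platform that can mix unequal droplet volumes can reach non-dyadic ratios in fewer operations, which is the reduction in fluidic operations the abstract refers to.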
DEFF Research Database (Denmark)
Shetty, Nisha; Min, Tai-Gi; Gislum, René
2011-01-01
The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub...... and radish data. The misclassification rates at optimal sample size were 8%, 6% and 7% for cabbage and 3%, 3% and 2% for radish respectively for random method (averaged for 10 iterations), DUPLEX and CADEX algorithms. This was similar to the misclassification rate of 6% and 2% for cabbage and radish obtained...
Fraley, R. Chris; Vazire, Simine
2014-01-01
The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF): the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely (a) to provide accurate estimates of effects, (b) to produce literatures with low false positive rates, and (c) to lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
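The ~50% power figure for N = 104 can be reproduced with a standard Fisher-z power approximation for a correlation test; taking r = 0.20 as the "typical" effect size is an assumption for illustration, not the paper's exact value:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_correlation(r, n, z_crit=1.959964):
    """Approximate two-sided power for testing H0: rho = 0 when the true
    correlation is `r` and the sample size is `n`, via the Fisher
    z-transform: atanh(r_hat) is ~ Normal(atanh(rho), 1/(n - 3))."""
    ncp = math.atanh(r) * math.sqrt(n - 3)
    return normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)

# With the field's average N of 104 and an assumed typical r = 0.20,
# power comes out near the 50% reported in the abstract.
p = power_correlation(0.20, 104)
```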
Fraley, R Chris; Vazire, Simine
2014-01-01
The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF): the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely (a) to provide accurate estimates of effects, (b) to produce literatures with low false positive rates, and (c) to lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings.
Directory of Open Access Journals (Sweden)
Finch Stephen J
2005-04-01
Full Text Available Abstract Background Phenotype error causes a reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors, and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
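The final step of such a calculation — asymptotic power from the non-centrality parameter of the chi-square distribution — can be sketched as follows. The non-centrality value here is purely illustrative; the paper's formula folds in sample sizes, genotype frequencies, prevalence and misclassification rates:

```python
from scipy.stats import chi2, ncx2

def power_chi2(noncentrality, df=1, alpha=0.05):
    """Asymptotic power of a Pearson chi-square test given the
    non-centrality parameter of its alternative distribution."""
    crit = chi2.ppf(1 - alpha, df)
    return 1 - ncx2.cdf(crit, df, noncentrality)

# Illustrative only: a non-centrality of 7.85 gives ~80% power
# for a 1-df test at alpha = 0.05.
print(round(power_chi2(7.85), 2))
```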
Tai, Bee-Choo; Grundy, Richard; Machin, David
2011-03-15
To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest. Copyright © 2011 Elsevier Inc. All rights reserved.
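A nonparametric cumulative incidence estimate for one event type in the presence of competing events — the quantity that both hazard models ultimately describe — can be sketched with an Aalen-Johansen-style calculation. The data and event codes below are invented, and ties are ignored for simplicity:

```python
import numpy as np

def cumulative_incidence(times, events, cause):
    """Aalen-Johansen estimate of the cumulative incidence of one cause
    in the presence of competing events.  `events` holds 0 for censoring
    and positive integers for cause codes.  Assumes distinct times."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n = len(times)
    surv = 1.0          # all-cause survival just before the current time
    cif = 0.0
    out = []
    for i in range(n):
        at_risk = n - i
        if events[i] == cause:
            cif += surv * (1.0 / at_risk)
        if events[i] != 0:
            surv *= 1.0 - 1.0 / at_risk
        out.append((times[i], cif))
    return out

# Toy data: cause 1 = radiotherapy after progression, cause 2 = competing event.
t = [2, 3, 5, 7, 8, 11, 12, 15]
e = [1, 2, 1, 0, 1, 2, 0, 1]
print(cumulative_incidence(t, e, cause=1)[-1])
```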
RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes
Directory of Open Access Journals (Sweden)
Danny J. Kelly
2005-01-01
Full Text Available We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms) or low (100 nanograms) amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real-time polymerase chain reaction assays. The correlation of low-RNA-derived gene signals to gene signals obtained from standard RNA was poor for less to moderately abundant genes. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference-sample-based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.
Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data
Dong, Kai
2015-09-16
DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
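A minimal sketch of the diagonal Hotelling idea: replace the full covariance matrix (singular when p > n) with per-feature variances, shrunk toward a common target. The shrinkage target and weight here are illustrative placeholders, not the estimator derived in the paper:

```python
import numpy as np

def diag_hotelling_one_sample(X, mu0, shrink=0.2):
    """One-sample diagonal Hotelling-type statistic: per-feature
    variances replace the (singular) covariance matrix and are shrunk
    toward their median.  Target and weight are illustrative only."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    s2 = X.var(axis=0, ddof=1)
    s2_shrunk = (1 - shrink) * s2 + shrink * np.median(s2)
    return n * np.sum((xbar - mu0) ** 2 / s2_shrunk)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 200))          # "large p, small n"
print(round(diag_hotelling_one_sample(X, np.zeros(200)), 1))
```

Shifting the data away from the hypothesized mean inflates the statistic, as it should.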
Sampling to estimate population size and detect trends in Tricolored Blackbirds
Meese, Robert; Yee, Julie L.; Holyoak, Marcel
2015-01-01
The Tricolored Blackbird (Agelaius tricolor) is a medium-sized passerine that nests in the largest colonies of any North American landbird since the extinction of the passenger pigeon (Ectopistes migratorius) over 100 years ago (Beedy and Hamilton 1999). The species has a restricted range that occurs almost exclusively within California, with only a few hundred birds scattered in small groups in Oregon, Washington, Nevada, and northwestern Baja California, Mexico (Beedy and Hamilton 1999). Tricolored Blackbirds are itinerant breeders (i.e., breed more than once per year in different locations) and use a wide variety of nesting substrates (Hamilton 1998), many of which are ephemeral. They are also insect dependent during the breeding season, and reproductive success is strongly correlated with relative insect abundance (Meese 2013). Researchers have noted for decades that Tricolored Blackbird's insect prey are highly variable in space and time; Payne (1969), for example, described the species as a grasshopper follower because grasshoppers are preferred food items, and high grasshopper abundance is often associated with high reproductive success (Payne 1969, Meese 2013). Thus, the species' basic reproductive strategy is tied to rather infrequent periods of relatively high insect abundance in some locations followed by much longer periods of range-wide, relatively low insect abundance and poor reproductive success. Of course, anthropogenic factors such as habitat loss and insecticide use may be at least partly responsible for these patterns (Hallman et al. 2014, Airola et al. 2014).
Kikuchi, Takashi; Gittins, John
2009-08-15
It is necessary for the calculation of sample size to achieve the best balance between the cost of a clinical trial and the possible benefits from a new treatment. Gittins and Pezeshk developed an innovative (behavioral Bayes) approach, which assumes that the number of users is an increasing function of the difference in performance between the new treatment and the standard treatment. The better a new treatment, the more the number of patients who want to switch to it. The optimal sample size is calculated in this framework. This BeBay approach takes account of three decision-makers, a pharmaceutical company, the health authority and medical advisers. Kikuchi, Pezeshk and Gittins generalized this approach by introducing a logistic benefit function, and by extending to the more usual unpaired case, and with unknown variance. The expected net benefit in this model is based on the efficacy of the new drug but does not take account of the incidence of adverse reactions. The present paper extends the model to include the costs of treating adverse reactions and focuses on societal cost-effectiveness as the criterion for determining sample size. The main application is likely to be to phase III clinical trials, for which the primary outcome is to compare the costs and benefits of a new drug with a standard drug in relation to national health-care. Copyright 2009 John Wiley & Sons, Ltd.
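The BeBay idea — choose the sample size that maximizes expected net benefit, with uptake of the new drug growing with the strength of the evidence — can be caricatured with a grid search. All monetary figures, the simple power-based uptake function, and the adverse-reaction cost below are invented for illustration; the paper's logistic benefit function and cost structure differ:

```python
import math
from scipy.stats import norm

def expected_net_benefit(n, delta=0.5, sigma=1.0, horizon=1e5,
                         benefit=100.0, adverse_cost=10.0,
                         cost_per_patient=2000.0, alpha=0.05):
    """Toy net-benefit criterion: future users grow with the evidence
    that the new drug works (here, simple trial power), while trial
    costs grow with n and each treated user also incurs an
    adverse-reaction cost.  All figures are invented."""
    se = sigma * math.sqrt(2.0 / n)                 # unpaired design
    power = norm.cdf(delta / se - norm.ppf(1 - alpha / 2))
    return horizon * power * (benefit - adverse_cost) - cost_per_patient * 2 * n

best_n = max(range(10, 500), key=expected_net_benefit)
print(best_n)   # optimal n per arm under these toy numbers
```

The optimum balances diminishing returns from extra power against linearly growing trial costs.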
Directory of Open Access Journals (Sweden)
Jamshid Jamali
2017-01-01
Full Text Available Evaluating measurement equivalence (also known as differential item functioning, DIF) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.
Dealing with large sample sizes: comparison of a new one spot dot blot method to western blot.
Putra, Sulistyo Emantoko Dwi; Tsuprykov, Oleg; Von Websky, Karoline; Ritter, Teresa; Reichetzeder, Christoph; Hocher, Berthold
2014-01-01
Western blot is the gold standard method to determine individual protein expression levels. However, western blot is technically difficult to perform in large sample sizes because it is a time-consuming and labor-intensive process. Dot blot is often used instead when dealing with large sample sizes, but the main disadvantage of existing dot blot techniques is the absence of signal normalization to a housekeeping protein. In this study we established a one dot two development signals (ODTDS) dot blot method employing two different signal development systems. The first signal, from the protein of interest, was detected by horseradish peroxidase (HRP). The second signal, detecting the housekeeping protein, was obtained by using alkaline phosphatase (AP). Inter-assay variation within the ODTDS dot blot and western blot and intra-assay variation between both methods were low (1.04-5.71%) as assessed by coefficient of variation. The ODTDS dot blot technique can be used instead of western blot when dealing with large sample sizes without a reduction in accuracy.
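The coefficient of variation used to compare the two methods is simply the relative standard deviation of replicate measurements, in percent. A sketch with hypothetical normalized signals (protein of interest over housekeeping protein):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (relative standard deviation) in percent."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical normalized signal ratios from repeated measurements
# of the same sample; the resulting CV (~3.2%) falls inside the
# 1.04-5.71% range reported in the abstract.
replicates = [1.02, 0.98, 1.05, 1.00, 0.97]
print(round(cv_percent(replicates), 2))
```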
Behr, Elijah R.; Ritchie, Marylyn D.; Tanaka, Toshihiro; Kääb, Stefan; Crawford, Dana C.; Nicoletti, Paola; Floratos, Aris; Sinner, Moritz F.; Kannankeril, Prince J.; Wilde, Arthur A. M.; Bezzina, Connie R.; Schulze-Bahr, Eric; Zumhagen, Sven; Guicheney, Pascale; Bishopric, Nanette H.; Marshall, Vanessa; Shakir, Saad; Dalageorgou, Chrysoula; Bevan, Steve; Jamshidi, Yalda; Bastiaenen, Rachel; Myerburg, Robert J.; Schott, Jean-Jacques; Camm, A. John; Steinbeck, Gerhard; Norris, Kris; Altman, Russ B.; Tatonetti, Nicholas P.; Jeffery, Steve; Kubo, Michiaki; Nakamura, Yusuke; Shen, Yufeng; George, Alfred L.; Roden, Dan M.
2013-01-01
Marked prolongation of the QT interval on the electrocardiogram associated with the polymorphic ventricular tachycardia Torsades de Pointes is a serious adverse event during treatment with antiarrhythmic drugs and other culprit medications, and is a common cause for drug relabeling and withdrawal.
International Nuclear Information System (INIS)
Marildes Josefina Lemos Neto; Elizabeth de Souza Nascimento; Mariza Landgraf; Vera Akiko Maihara; Silva, P.S.C.
2014-01-01
Shellfish such as squid and octopus, class Cephalopoda, have high commercial value in restaurants and for export. As, Se and Zn concentrations were determined in 117 octopus samples acquired at different points of the distribution chain in 4 coastal cities of Sao Paulo state, Brazil (Guaruja, Santos, Sao Vicente and Praia Grande). Elemental determination was performed by Instrumental Neutron Activation Analysis (INAA). The element concentrations in the octopus samples (wet weight) ranged from 0.184 to 35.4 mg kg-1 for As, 0.203 to 2.26 mg kg-1 for Se and 4.73 to 37.4 mg kg-1 for Zn. Arsenic and Se levels were above the limit for fish established by Brazilian legislation, while Zn concentrations were in accordance with literature values. (author)
Directory of Open Access Journals (Sweden)
Laureau Axel
2017-01-01
Full Text Available These studies are performed in the general framework of transient coupled calculations with accurate neutron kinetics models. This kind of application requires a modeling of the influence on the neutronics of the macroscopic cross-section evolution. Depending on the targeted accuracy, this feedback can be limited to the reactivity for point kinetics, or can take into account the redistribution of the power in the core for spatial kinetics. The local correlated sampling technique for Monte Carlo calculation presented in this paper has been developed for this purpose, i.e., estimating the influence on the neutron transport of a local variation of different parameters such as sodium density or the fuel Doppler effect. This method is associated with an innovative spatial kinetics model named Transient Fission Matrix, which condenses the time-dependent Monte Carlo neutronic response in Green functions. Finally, an accurate estimation of the feedback effects on these Green functions provides an on-the-fly prediction of the flux redistribution in the core, whatever the actual perturbation shape is during the transient. This approach is also used to estimate local feedback effects for point kinetics resolution.
Directory of Open Access Journals (Sweden)
V. Indira
2015-03-01
Full Text Available Hydraulic brakes are considered one of the most important components in automobile engineering. Condition monitoring and fault diagnosis of such a component is essential for the safety of passengers and vehicles and to minimize unexpected maintenance time. A vibration-based machine learning approach for condition monitoring of the hydraulic brake system is gaining momentum. Training and testing the classifier are two important activities in the process of feature classification. This study proposes a systematic statistical method called power analysis to find the minimum number of samples required to train the classifier with statistical stability so as to obtain good classification accuracy. Descriptive statistical features were used, and the most contributing features were selected by using the C4.5 decision tree algorithm. The results of the power analysis were also verified using a decision tree algorithm, namely C4.5.
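The core of a power analysis for minimum sample size is the standard normal-approximation formula for comparing two group means; a sketch of that generic calculation (not necessarily the exact procedure used in the study):

```python
import math
from scipy.stats import norm

def min_samples(effect_size, power=0.90, alpha=0.05):
    """Minimum n per group for a two-sample comparison of means
    (normal approximation): the generic calculation behind a power
    analysis for choosing a training-set size."""
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    return math.ceil(2 * ((za + zb) / effect_size) ** 2)

# For a large standardized effect (d = 1.0), about two dozen samples
# per class suffice; smaller effects require many more.
print(min_samples(1.0))
```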
Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E.; Dementyev, Maksim N.; Handel, Colleen M.
2012-01-01
The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.
Mikkelsen, Mark; Loo, Rachelle S; Puts, Nicolaas A J; Edden, Richard A E; Harris, Ashley D
2018-02-21
The relationships between scan duration, signal-to-noise ratio (SNR) and sample size must be considered and understood to design optimal GABA-edited magnetic resonance spectroscopy (MRS) studies. Simulations investigated the effects of signal averaging on SNR, measurement error and group-level variance against a known ground truth. Relative root mean square errors (measurement error) and coefficients of variation (group-level variance) were calculated. GABA-edited data from 18 participants acquired from five voxels were used to examine the relationships between scan duration, SNR and quantitative outcomes in vivo. These relationships were then used to determine the sample sizes required to observe different effect sizes. In both simulated and in vivo data, SNR increased with the square root of the number of averages. Both measurement error and group-level variance were shown to follow an inverse-square-root function, indicating no significant impact of cumulative artifacts. Comparisons between the first two-thirds of the data and the full dataset showed no statistical difference in group-level variance. There was, however, some variability across the five voxels depending on SNR, which impacted the sample sizes needed to detect group differences in specific brain regions. Typical scan durations can be reduced if taking into account a statistically acceptable amount of variance and the magnitudes of predicted effects. While scan duration in GABA-edited MRS has typically been considered in terms of SNR, it is more appropriate to think in terms of the amount of measurement error and group-level variance that provides sufficient statistical power. Copyright © 2018 Elsevier B.V. All rights reserved.
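The square-root law the paper reports is easy to confirm empirically: averaging N acquisitions of a fixed signal with independent noise improves SNR by sqrt(N). A simulation sketch (the noise model and values are illustrative, not MRS data):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 1.0

# Average N noisy acquisitions; the std of the averaged estimate falls
# as 1/sqrt(N), so SNR rises as sqrt(N): here roughly 4, 8 and 16.
snrs = []
for n in (16, 64, 256):
    shots = signal + rng.normal(0.0, 1.0, size=(2000, n))
    est = shots.mean(axis=1)               # 2000 averaged estimates
    snrs.append(signal / est.std())
print([round(s, 1) for s in snrs])
```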
International Nuclear Information System (INIS)
Akram, M.; Aftab, F.
2016-01-01
In the present study, fruits (drupes) were collected from Changa Manga Forest Plus Trees (CMF-PT), Changa Manga Forest Teak Stand (CMF-TS) and Punjab University Botanical Gardens (PUBG) and categorized into very large (≥ 17 mm dia.), large (12-16 mm dia.), medium (9-11 mm dia.) or small (6-8 mm dia.) fruit size grades. Fresh water as well as mechanical scarification and stratification were tested for breaking seed dormancy. The viability status of seeds was estimated by cutting test, X-rays and in vitro seed germination. Out of 2595 fruits from CMF-PT, 500 fruits were of the very large grade. This fruit category also had the highest individual fruit weight (0.58 g), more 4-seeded fruits (5.29 percent) and fair germination potential (35.32 percent). Generally, most of the fruits were 1-seeded irrespective of size grades and sampling sites. Fresh water scarification had a strong effect on germination (44.30 percent) as compared to mechanical scarification and cold stratification after 40 days of sowing. Similarly, sampling sites and fruit size grades also had a significant influence on germination. The highest germination (82.33 percent) was obtained on MS (Murashige and Skoog) agar-solidified medium as compared to Woody Plant Medium (WPM) (69.22 percent). Seedlings from all the media were transferred to ex vitro conditions in the greenhouse, and the highest survival after 40 days (28.6 percent) was achieved by seedlings previously raised on MS agar-solidified medium. There was an association between the studied parameters of teak seeds and the sampling sites and fruit size. (author)
Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz
2015-01-01
In this paper, a simple and cost-effective method was developed for extraction and pre-concentration of carmine in food samples by using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as extracting solvent. The effects of the main parameters, such as solution pH, surfactant and salt concentrations, incubation time and temperature, were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification (LOQ) were 0.012 and 0.04 μg mL(-1), respectively. Relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate that the proposed method can be applied satisfactorily to determine carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
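Reported LODs and LOQs of this kind conventionally follow the 3.3·s/m and 10·s/m rules from a linear calibration with slope m and residual standard deviation s. A sketch with made-up calibration points (the paper's own data are not reproduced here):

```python
import numpy as np

def lod_loq(conc, response):
    """LOD = 3.3*s/m and LOQ = 10*s/m from a linear calibration with
    slope m and residual standard deviation s."""
    m, b = np.polyfit(conc, response, 1)
    resid = np.asarray(response) - (m * np.asarray(conc) + b)
    s = resid.std(ddof=2)
    return 3.3 * s / m, 10 * s / m

conc = [0.05, 0.5, 1.0, 2.0, 5.0]            # ug/mL, hypothetical
resp = [0.012, 0.101, 0.198, 0.405, 0.998]   # absorbance, hypothetical
lod, loq = lod_loq(conc, resp)
print(round(lod, 3), round(loq, 3))
```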
Webb, Kristen; Allard, Marc
2010-02-01
Evolutionary and forensic studies commonly choose the mitochondrial control region as the locus for which to evaluate the domestic dog. However, the number of dogs that need to be sampled in order to represent the control region variation present in the worldwide population is yet to be determined. Following the methods of Pereira et al. (2004), we have demonstrated the importance of surveying the complete control region rather than only the popular left domain. We have also evaluated sample saturation in terms of the haplotype number and the number of polymorphisms within the control region. Of the most commonly cited evolutionary research, only a single study has adequately surveyed the domestic dog population, while all forensic studies have failed to meet the minimum values. We recommend that future studies consider dataset size when designing experiments and ideally sample both domains of the control region in an appropriate number of domestic dogs.
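Sample saturation can be assessed with a haplotype accumulation curve: resample increasing numbers of individuals and record how many distinct haplotypes appear, watching for a plateau. A sketch with an invented haplotype pool:

```python
import random

def haplotype_accumulation(haplotypes, reps=100, seed=11):
    """Mean number of distinct haplotypes observed as more individuals
    are sampled; a plateau suggests the survey approaches saturation."""
    rng = random.Random(seed)
    curve = []
    for n in range(1, len(haplotypes) + 1):
        found = [len(set(rng.sample(haplotypes, n))) for _ in range(reps)]
        curve.append(sum(found) / reps)
    return curve

# 40 dogs carrying 8 control-region haplotypes at unequal frequencies
# (invented for illustration).
pool = ['A'] * 12 + ['B'] * 8 + ['C'] * 6 + ['D'] * 5 + \
       ['E'] * 4 + ['F'] * 2 + ['G'] * 2 + ['H'] * 1
curve = haplotype_accumulation(pool)
print(round(curve[9], 2), round(curve[-1], 2))
```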
Neeson, Thomas M; Van Rijn, Itai; Mandelik, Yael
2013-07-01
Ecologists and paleontologists often rely on higher taxon surrogates instead of complete inventories of biological diversity. Despite their intrinsic appeal, the performance of these surrogates has been markedly inconsistent across empirical studies, to the extent that there is no consensus on appropriate taxonomic resolution (i.e., whether genus- or family-level categories are more appropriate) or their overall usefulness. A framework linking the reliability of higher taxon surrogates to biogeographic setting would allow for the interpretation of previously published work and provide some needed guidance regarding the actual application of these surrogates in biodiversity assessments, conservation planning, and the interpretation of the fossil record. We developed a mathematical model to show how taxonomic diversity, community structure, and sampling effort together affect three measures of higher taxon performance: the correlation between species and higher taxon richness, the relative shapes and asymptotes of species and higher taxon accumulation curves, and the efficiency of higher taxa in a complementarity-based reserve-selection algorithm. In our model, higher taxon surrogates performed well in communities in which a few common species were most abundant, and less well in communities with many equally abundant species. Furthermore, higher taxon surrogates performed well when there was a small mean and variance in the number of species per higher taxa. We also show that empirically measured species-higher-taxon correlations can be partly spurious (i.e., a mathematical artifact), except when the species accumulation curve has reached an asymptote. This particular result is of considerable practical interest given the widespread use of rapid survey methods in biodiversity assessment and the application of higher taxon methods to taxa in which species accumulation curves rarely reach an asymptote, e.g., insects.
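The qualitative behavior the model predicts can be probed with a small simulation: draw individuals from a community dominated by a few common species, then compare species richness with higher-taxon (here, genus) richness across draws. The community composition below is invented:

```python
import random

def richness_pairs(abundances, genus_of, sample_size, reps=200, seed=7):
    """Repeatedly draw individuals from a community and record
    (species richness, genus richness) for each draw."""
    rng = random.Random(seed)
    pool = [sp for sp, count in enumerate(abundances) for _ in range(count)]
    pairs = []
    for _ in range(reps):
        species = set(rng.sample(pool, sample_size))
        genera = {genus_of[sp] for sp in species}
        pairs.append((len(species), len(genera)))
    return pairs

# 12 species in 4 genera; a few common species dominate the community.
abund = [50, 30, 20, 10, 5, 5, 3, 3, 2, 2, 1, 1]
genus = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
pairs = richness_pairs(abund, genus, sample_size=30)
print(pairs[:3])
```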
An integrated paper-based sample-to-answer biosensor for nucleic acid testing at the point of care.
Choi, Jane Ru; Hu, Jie; Tang, Ruihua; Gong, Yan; Feng, Shangsheng; Ren, Hui; Wen, Ting; Li, XiuJun; Wan Abas, Wan Abu Bakar; Pingguan-Murphy, Belinda; Xu, Feng
2016-02-07
With advances in point-of-care testing (POCT), lateral flow assays (LFAs) have been explored for nucleic acid detection. However, biological samples generally contain complex compositions and low amounts of target nucleic acids, and currently require laborious off-chip nucleic acid extraction and amplification processes (e.g., tube-based extraction and polymerase chain reaction (PCR)) prior to detection. To the best of our knowledge, even though the integration of DNA extraction and amplification into a paper-based biosensor has been reported, a combination of LFA with the aforementioned steps for simple colorimetric readout has not yet been demonstrated. Here, we demonstrate for the first time an integrated paper-based biosensor incorporating nucleic acid extraction, amplification and visual detection or quantification using a smartphone. A handheld battery-powered heating device was specially developed for nucleic acid amplification in POC settings, which is coupled with this simple assay for rapid target detection. The biosensor can successfully detect Escherichia coli (as a model analyte) in spiked drinking water, milk, blood, and spinach with a detection limit of as low as 10-1000 CFU mL(-1), and Streptococcus pneumoniae in clinical blood samples, highlighting its potential use in medical diagnostics, food safety analysis and environmental monitoring. As compared to the lengthy conventional assay, which requires more than 5 hours for the entire sample-to-answer process, our integrated biosensor takes about 1 hour. The integrated biosensor holds great potential for detection of various target analytes for wide applications in the near future.
Directory of Open Access Journals (Sweden)
Sebastian Wilhelm
2015-12-01
Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as “natural syneresis” and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). “Enforced syneresis” is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Here, two open questions are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample-size-dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30%, in comparison to measured values of 20% for natural syneresis.
Directory of Open Access Journals (Sweden)
Alexandre Gonzales
2013-05-01
Full Text Available Accounting has been undergoing significant changes, among them the creation of an accounting standard for small and medium enterprises, in line with international accounting standards for companies of this size. This rule arose from a pronouncement developed by the Accounting Pronouncements Committee (CPC), which was subsequently approved by the Federal Accounting Council (CFC) through specific resolutions. The present study aimed to analyze the tendencies of accounting professionals and business managers concerning the adoption of the Technical Pronouncement issued by the Accounting Pronouncements Committee for Small and Medium Enterprises. Considering the level of complexity of the operations performed by companies of this size, the lack of oversight by specific entities and the question of enforcement of these pronouncements, game theory was used to determine possible strategies adopted by accountants and business managers regarding the effective adoption of the pronouncement for SMEs. The study is characterized as descriptive research, using bibliographical and field research. Surveys were used to identify the perceptions of accounting professionals regarding the adoption of the pronouncement. It was found that this statement constitutes a valid legal standard, endowed with legal effectiveness and technical efficiency, but lacking social effectiveness, due to the low level of effort toward its adoption, both by accounting professionals and by firms.
Schiemann, Martin; Geier, Manfred; Shaddix, Christopher R; Vorobiev, Nikita; Scherer, Viktor
2014-07-01
In this study, the char burnout characteristics of two German coals (a lignite and a high-volatile bituminous coal) were investigated using two different experimental configurations and optical techniques in two distinct laboratories for measurement of temperature and size of burning particles. The optical diagnostic hardware is quite different in the two systems, but both perform two-color pyrometry and optical sizing measurements on individual particles burning in isolation from each other in high-temperature laminar flows to characterize the char consumption kinetics. The performance of the specialized systems is compared for two different combustion atmospheres (with 6.6 and 12 vol.% O2) and gas temperatures between 1700 and 1800 K. The measured particle temperatures and diameters are converted to char burning rate parameters for several residence times during the course of the particles' burnout. The results confirm that comparable results are obtained with the two configurations, although higher levels of variability in the measured data were observed in the imaging-based pyrometer setup. Corresponding uncertainties in kinetics parameters were larger, and appear to be more sensitive to systematic measurement errors when lower oxygen contents are used in the experiments. Consequently, burnout experiments in environments with sufficiently high O2 contents may be used to measure reliable char burning kinetics rates. Based on simulation results for the two coals, O2 concentrations in the range 10%-30% are recommended for kinetic rate measurements on 100 μm particles.
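Two-color pyrometry recovers particle temperature from the ratio of radiant intensities at two wavelengths, with the (assumed gray) emissivity cancelling out. A round-trip sketch under the Wien approximation; the wavelengths and temperature are illustrative, not the instruments' actual bands:

```python
import math

C2 = 1.4388e-2   # second radiation constant, m*K

def wien_intensity(T, lam):
    """Gray-body spectral intensity under the Wien approximation
    (emissivity and the first radiation constant are omitted; both
    cancel in the two-color ratio)."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(I1, I2, lam1, lam2):
    """Temperature from the two-color intensity ratio."""
    denom = math.log(I1 / I2) - 5.0 * math.log(lam2 / lam1)
    return C2 * (1.0 / lam2 - 1.0 / lam1) / denom

# Round trip at a typical burning-char temperature.
lam1, lam2 = 650e-9, 900e-9
I1, I2 = wien_intensity(1900.0, lam1), wien_intensity(1900.0, lam2)
print(round(ratio_temperature(I1, I2, lam1, lam2)))   # → 1900
```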
Directory of Open Access Journals (Sweden)
Robert eHeise
2015-06-01
Pool size measurements are important for estimating absolute intracellular fluxes in particular scenarios based on data from heavy carbon isotope experiments. Recently, steady-state flux estimates were obtained for central carbon metabolism in an intact illuminated rosette of Arabidopsis thaliana grown photoautotrophically (Szecowka et al., 2013; Heise et al., 2014). Fluxes were estimated therein by integrating mass-spectrometric data on the dynamics of the unlabeled metabolic fraction, data on metabolic pool sizes, the partitioning of metabolic pools between cellular compartments, and estimates of photosynthetically inactive pools with a simplified model of plant central carbon metabolism. However, the fluxes were determined by treating the pool sizes as fixed parameters. Here we investigated whether, and if so to what extent, treating the pool sizes as parameters to be optimized in three scenarios affects the flux estimates. The results are discussed in terms of benchmark values for canonical pathways and reactions, including starch and sucrose synthesis as well as the ribulose-1,5-bisphosphate carboxylation and oxygenation reactions. In addition, we discuss pathways emerging from a divergent branch point for which pool sizes are required for flux estimation, irrespective of the computational approach used to simulate the observable labelling pattern. Our findings therefore indicate the need to develop techniques for accurate pool size measurement to improve the quality of nonstationary flux estimates in intact plant cells in the absence of alternative flux measurements.
Vlašić Tanasković, Jelena; Coucke, Wim; Leniček Krleža, Jasna; Vuković Rodriguez, Jadranka
2017-03-01
Laboratory evaluation through external quality assessment (EQA) schemes is often performed as a 'peer group' comparison, under the assumption that matrix effects influence comparisons between results of different methods for analytes where no commutable materials with reference value assignment are available. In EQA schemes that are not large but include many instrument and reagent options for the same analyte, homogeneous peer groups must be created with an adequate number of results to enable satisfactory statistical evaluation. We proposed a multivariate analysis of variance (MANOVA)-based test to evaluate the heterogeneity of peer groups within the Croatian EQA biochemistry scheme and to identify groups where further splitting might improve laboratory evaluation. EQA biochemistry results were divided according to the instruments used per analyte, and the MANOVA test was used to verify statistically significant differences between subgroups. The number of samples was determined by a sample size calculation ensuring a power of 90% and allowing the false flagging rate to increase by no more than 5%. When statistically significant differences between subgroups were found, clear improvement of laboratory evaluation was assessed before splitting groups. After evaluating 29 peer groups, we found strong evidence for further splitting of six groups. An overall improvement was observed for 6% of reported results, with the percentage as high as 27.4% for one particular method. Defining maximal allowable differences between subgroups based on the change in flagging rate, followed by sample size planning and MANOVA, identifies heterogeneous peer groups where further splitting improves laboratory evaluation and enables continuous monitoring for peer group heterogeneity within EQA schemes.
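The core subgroup comparison can be illustrated with Hotelling's T², the two-group special case of MANOVA, applied to multivariate EQA results from two instrument subgroups. This is a sketch of the multivariate test step only; the study's full procedure (flagging-rate limits, power-based sample size planning) is not reproduced, and the data here are synthetic:

```python
import numpy as np
from scipy import stats

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 test (the two-group case of MANOVA).
    x, y: (n_i, p) arrays of p-variate results from two subgroups."""
    n1, n2 = len(x), len(y)
    p = x.shape[1]
    d = x.mean(axis=0) - y.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2    # exact F transform
    p_value = stats.f.sf(f, p, n1 + n2 - p - 1)
    return t2, p_value

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(40, 3))   # subgroup on instrument A
b = rng.normal(0.8, 1.0, size=(40, 3))   # subgroup on instrument B, shifted
t2, pval = hotelling_t2(a, b)
print(pval < 0.05)   # → True: the subgroups differ and splitting is warranted
```

A non-significant result would argue for keeping the peer group whole, since smaller subgroups weaken the statistical evaluation of each laboratory.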
Canepari, Silvia; Perrino, Cinzia; Olivieri, Fabio; Astolfi, Maria Luisa
A study of the elemental composition and size distribution of atmospheric particulate matter and of its spatial and temporal variability has been conducted at two traffic sites and one urban background site in the area of Rome, Italy. Chemical analysis included the fractionation of 22 elements (Al, As, Ba, Ca, Cd, Co, Cr, Cu, Fe, Mg, Mn, Na, Ni, Pb, S, Sb, Si, Sn, Sr, Ti, Tl, V) into a water-extractable and a residual fraction. Size distribution analysis included measurements of aerosols in twelve size classes in the range 0.03-10 μm. The simultaneous determination of PM10 and PM2.5 at three sites during a 2-week study allowed the necessary evaluation of spatial and temporal concentration variations. The application of a chemical fractionation procedure to size-segregated samples proved to be a valuable approach for characterising PM and for discriminating between emission sources. Extractable and residual fractions of the elements in fact showed different size distributions: for almost all elements the extractable fraction was mainly distributed in the fine size range, while the residual fraction was generally predominant in the coarse size range. For some elements (As, Cd, Sb, Sn, V) the dimensional separation between the extractable fraction, almost quantitatively present in the fine-mode particles, and the residual fraction, mainly distributed in the coarse-mode particles, was almost complete. Under these conditions, the application of the chemical fractionation procedure to PM10 samples allows a clear distinction between contributions originating from fine and coarse particle emission sources. The results for PM10-2.5 and PM2.5 daily samples confirmed that chemical fractionation analysis increases the selectivity of most elements as source tracers. Extractable and residual fractions of As, Mg, Ni, Pb, S, Sn, Tl, Sb, Cd and V showed different time patterns and different spatial and size distributions, clearly indicating that the two
Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.
2017-10-01
The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs span a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, and elongated as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured apparent rotation rate, radar reflectivity, apparent diameter, and radar albedo for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation; otherwise we derive a geometric cross-section from a measured diameter. When radar imaging was available, the diameter was measured from the apparent range depth. When it was not, we used continuous-wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides the apparent rotation rate, which, given an independent rotation measurement, such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW spectrum also provides the polarization ratio, which is the ratio of the SC and OC cross-sections. We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared with those with diameters 50-150 m. We will present additional trends we
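The bandwidth-to-size relation described above can be made concrete. For a rigid rotator the CW echo bandwidth is B = 4πD cos δ / (λP), with D the diameter, P the rotation period, λ the radar wavelength, and δ the sub-radar latitude; assuming an equatorial view (δ = 0) turns a measured bandwidth plus an independent (e.g. lightcurve) period into a lower limit on the diameter. The numbers below are illustrative, with λ taken as the Arecibo S-band wavelength:

```python
import math

WAVELENGTH = 0.126   # Arecibo S-band (2380 MHz) radar wavelength, m

def diameter_lower_limit(bandwidth_hz, period_s, subradar_lat_deg=0.0):
    """Diameter implied by a CW echo bandwidth and an independently measured
    rotation period; delta = 0 (equatorial view) gives a lower limit."""
    return (bandwidth_hz * WAVELENGTH * period_s /
            (4.0 * math.pi * math.cos(math.radians(subradar_lat_deg))))

# a 5 Hz bandwidth and a 10-minute rotation period (invented values)
print(round(diameter_lower_limit(5.0, 600.0), 1))   # → 30.1 (meters)
```

A non-equatorial view (|δ| > 0) narrows the echo for a given D, so dropping the equatorial assumption only increases the inferred diameter.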
International Nuclear Information System (INIS)
Ostrowsky, A.; Bordy, J.M.; Daures, J.; De Carlan, L.; Delaunay, F.
2010-01-01
Solving the problem of traceability of the absorbed dose to the tumour for radiation fields of small and very small dimensions, like those used in new treatment modalities, usually requires dosemeters much smaller than the beam. For the realisation of the reference in primary standards laboratories, the absence of technology capable of producing absolute small-size dosemeters leaves no possibility for direct measurement of the absorbed dose at a point and implies the use of passive or active small-size transfer dosemeters. This report introduces a new kind of dose quantity for radiotherapy, similar to the Dose Area Product concept used in radiology. Such a new concept has to be propagated through the metrology chain, including the TPS, to the calculation of the absorbed dose to the tumour. (authors)
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful for avoiding such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes of around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
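The downward algebraic adjustment can be sketched as rescaling a chi-square fit statistic to a nominal sample size before computing its p-value, so that a very large N does not mechanically inflate significance. This is a simplified illustration of the idea, not RUMM's exact implementation, and the statistic values are invented:

```python
from scipy import stats

def adjusted_chi2_p(chi2_stat, df, n_actual, n_adjusted):
    """Algebraic sample-size adjustment of a fit chi-square: scale the
    statistic by n_adjusted / n_actual, then compute the p-value.
    (A sketch of the principle; RUMM's exact formula may differ.)"""
    scaled = chi2_stat * n_adjusted / n_actual
    return stats.chi2.sf(scaled, df)

# with N = 2500, a modest misfit chi-square looks highly 'significant' ...
print(adjusted_chi2_p(25.0, 8, 2500, 2500) < 0.01)   # → True
# ... downward adjustment to an effective N = 500 tempers the verdict
print(adjusted_chi2_p(25.0, 8, 2500, 500) > 0.05)    # → True
```

The same mechanism run in reverse explains the caution above: upward adjustment multiplies the statistic of a small sample by a factor greater than one, which can falsely signal misfit.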
International Nuclear Information System (INIS)
Blackwood, Daniel J.
2015-01-01
The pitting behaviours of 304L and 316L stainless steels were investigated at 3 °C to 90 °C in 1 M solutions of NaCl, NaBr and NaI by potentiodynamic polarization. The temperature dependence of the pitting potential varied according to the anion, being near linear in bromide but exponential in chloride. As a result, at low temperatures grades 304L and 316L are most susceptible to pitting by bromide ions, while at high temperatures both stainless steels were more susceptible to pitting by the small chloride anions than by the larger bromide and iodide. Thus, increasing temperature appears to favour attack by smaller anions. This paper attempts to rationalise both of the above findings in terms of the point defect model. Initial findings are that this approach can be reasonably successful qualitatively, but not at the quantitative level, possibly due to insufficient data on the mechanical properties of thin passive films.
Directory of Open Access Journals (Sweden)
Daniel Vasiliu
Global gene expression analysis using microarrays and, more recently, RNA-seq has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but is not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and to suggest future directions for investigation. Our method is easily adaptable for the analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
Directory of Open Access Journals (Sweden)
Paillisson J.-M.
2011-05-01
The ecological importance of the red-swamp crayfish (Procambarus clarkii) in the functioning of freshwater aquatic ecosystems is becoming more evident. It is important to know the limitations of sampling methods targeting this species, because accurate determination of population characteristics is required for predicting the ecological success of P. clarkii and its potential impacts on invaded ecosystems. In the current study, we addressed the question of trap efficiency by comparing the population structure indicated by eight trap devices (varying in number and position of entrances, mesh size, trap size and construction materials) in three habitats (a pond, a reed bed and a grassland) in a French marsh in spring 2010. Based on a large collection of P. clarkii (n = 2091, 272 and 213 in the pond, reed bed and grassland habitats, respectively), we found that semi-cylindrical traps made from 5.5 mm mesh galvanized steel wire (SCG) were the most efficient in terms of catch probability (96.7-100%, compared with 15.7-82.8% depending on trap type and habitat) and catch-per-unit effort (CPUE: 15.3, 6.0 and 5.1 crayfish·trap⁻¹·24 h⁻¹, compared with 0.2-4.4, 2.9 and 1.7 crayfish·trap⁻¹·24 h⁻¹ for the other types of fishing gear in the pond, reed bed and grassland, respectively). The SCG trap was also the most effective for sampling all size classes, especially small individuals (carapace length ≤ 30 mm). Sex ratio was balanced in all cases. SCG traps can be considered appropriate trapping gear, likely to give more realistic information about P. clarkii population characteristics than many other trap types. Further investigation is needed to assess the catching effort required before ultimately proposing a standardised sampling method for a large range of habitats.
Energy Technology Data Exchange (ETDEWEB)
Mabille, Mylene [Department of Radiology, Institut Gustave-Roussy, 39 rue Camille Desmoulins 94805 Villejuif (France); Department of Radiology, Hopital Antoine Beclere, 157 rue de la Porte de Trivaux 92140 Clamart (France); Vanel, Daniel [Department of Radiology, Institut Gustave-Roussy, 39 rue Camille Desmoulins 94805 Villejuif (France); Istituti Ortopedici Rizzoli, 1/10 via del Barbiano 40106 Bologna (Italy)], E-mail: dvanel@ior.it; Albiter, Marcela [Department of Radiology, Hopital Saint Louis, 01 Avenue Claude Vellefaux 75175 Paris Cedex 10 (France); Le Cesne, Axel [Department of Medical Oncology, Institut Gustave Roussy, 39 rue Camille Desmoulins 94805 Villejuif (France); Bonvalot, Sylvie [Department of Surgery, Institut Gustave Roussy, 39 rue Camille Desmoulins 94805 Villejuif (France); Le Pechoux, Cecile [Department of Radiotherapy, Institut Gustave Roussy, 39 rue Camille Desmoulins 94805 Villejuif (France); Terrier, Philippe [Department of Pathology, Institut Gustave Roussy, 39 rue Camille Desmoulins 94805 Villejuif (France); Shapeero, Lorraine G. [Department of Radiology, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814 (United States); Bone and Soft Tissue Program, United States Military Cancer Institute, 6900 Georgia Ave, NW, Washington, DC 20307 (United States); Dromain, Clarisse [Department of Radiology, Institut Gustave-Roussy, 39 rue Camille Desmoulins 94805 Villejuif (France)
2009-02-15
Purpose: To define computed tomography (CT) criteria for evaluating the response of patients with gastrointestinal stromal tumors (GIST) receiving Imatinib (tyrosine-kinase inhibitor therapy). Materials and methods: This prospective CT study evaluated 107 consecutive patients with advanced metastatic GIST treated with Imatinib. Results: Seventy patients had total or partial cystic-like transformation of hepatic and/or peritoneal metastases. These pseudocysts remained stable in size on successive CT examinations (stable disease according to RECIST criteria). In 46 patients metastases recurred: 17 showed increasing parietal thickness and 29 showed peripheral enhancing nodules. These CT changes represented local recurrence consistent with GIST resistance to Imatinib treatment. Neither WHO nor RECIST criteria provided a reliable evaluation of disease evolution or recurrence; the development of new enhancement within lesions (parietal thickening or nodules) was the only reliable criterion. Conclusion: The development of peripheral thickening or enhancing nodules within cystic-like metastatic lesions, even without any change in size, represented GIST progressing under Imatinib within a short time and should alert the clinician to the possible need for a change in therapy.
Evaluation of a new handheld point-of-care blood gas analyser using 100 equine blood samples.
Bardell, David; West, Eleanor; Mark Senior, J
2017-02-22
To determine whether the Enterprise point-of-care blood analysis system (EPOC) produces results in agreement with two other blood gas analysers in regular clinical use (i-STAT and Radiometer ABL77) and to investigate the precision of the new machine when used with equine whole blood. Prospective, randomized, non-blinded, comparative laboratory analyser study. Horses admitted to a university teaching hospital requiring arterial or venous blood gas analysis as part of their routine clinical management. One hundred equine blood samples were run immediately, consecutively and in randomized order on three blood gas analysers. Results of variables common to all three analysers were tested for agreement and compared with guidelines used in human medicine. These require 80% of results from the test analyser to fall within a defined range or percentage of results from the comparator devices to achieve acceptability. Additionally, 21 samples were run twice in quick succession on the EPOC analyser to investigate precision. Agreement targets were not met for haematocrit, haemoglobin and base excess for either i-STAT or ABL77 analysers. EPOC precision targets were not met for partial pressure of carbon dioxide, ionized calcium, haematocrit and haemoglobin. Overall comparative performance of the EPOC was good to excellent for pH, oxygen tension, potassium, bicarbonate and oxygen saturation of haemoglobin, but marginal to poor for other parameters. The EPOC may be useful in performing analysis of equine whole blood, but trend analysis of carbon dioxide tension, ionized calcium, haematocrit and haemoglobin should be interpreted with caution. The EPOC should not be used interchangeably with other blood gas analysers. Copyright © 2016 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.
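The agreement analysis described here rests on the differences between paired measurements. A minimal Bland-Altman-style sketch, with synthetic pH data and an assumed allowable difference, shows the bias, the 95% limits of agreement, and the fraction of results meeting an 80%-within-limits target of the kind cited:

```python
import statistics

def bland_altman(test_vals, ref_vals, allowable):
    """Bias, 95% limits of agreement, and the fraction of test-analyser
    results within an allowable difference of the comparator (the guidelines
    cited require >= 80%). 'allowable' is in the measurand's units (assumed)."""
    diffs = [t - r for t, r in zip(test_vals, ref_vals)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    within = sum(abs(d) <= allowable for d in diffs) / len(diffs)
    return bias, loa, within

# synthetic paired pH readings: test device vs comparator (invented data)
ref = [7.32, 7.40, 7.28, 7.45, 7.38, 7.35, 7.41, 7.30, 7.36, 7.44]
dev = [0.01, -0.02, 0.00, 0.02, -0.01, 0.01, 0.00, -0.01, 0.02, 0.01]
test = [r + d for r, d in zip(ref, dev)]
bias, loa, within = bland_altman(test, ref, allowable=0.04)
print(within >= 0.80)   # → True: this variable meets the 80% agreement target
```

Variables such as haematocrit and haemoglobin failed exactly this kind of criterion in the study, which is why trend analysis rather than interchangeable use was recommended.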
Directory of Open Access Journals (Sweden)
Sunil Kumar C
2014-01-01
With the number of students growing each year, there is a strong need for automated systems capable of evaluating descriptive answers. Unfortunately, few systems are capable of performing this task. In this paper, we use a machine learning tool called LightSIDE to accomplish automatic evaluation and scoring of descriptive answers. Our experiments are designed around our primary goal of identifying the optimum training sample size needed for optimum auto-scoring. Besides the technical overview and the experiment design, the paper also covers the challenges and benefits of the system. We also discuss interdisciplinary areas for future research on this topic.
Selleri, Paolo; Di Girolamo, Nicola
2014-01-01
Point-of-care testing is an attractive option in rabbit medicine because it permits rapid analysis of a panel of electrolytes, chemistries, blood gases, hemoglobin, and hematocrit, requiring only 65 μL of blood. The purpose of this study was to evaluate the performance of a portable clinical analyzer (PCA) for measurement of pH, partial pressure of CO2, sodium, chloride, potassium, blood urea nitrogen, glucose, hematocrit, and hemoglobin in healthy and diseased rabbits. Blood samples obtained from 30 pet rabbits were analyzed immediately after collection by the PCA and immediately thereafter (time < 20 sec) by a reference analyzer. Bland-Altman plots and Passing-Bablok regression analysis were used to compare the results. Limits of agreement were wide for all the variables studied, with the exception of pH. Most variables presented significant proportional and/or constant bias. The current study provides sufficient evidence that the PCA is reliable for pH, although its low agreement with a reference analyzer for the other variables does not support their interchangeability. The limits of agreement provided for each variable allow researchers to evaluate whether the PCA is reliable enough for their purposes. To the authors' knowledge, this is the first report evaluating a PCA in the rabbit.
National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna and sediment in the vicinity of the Barbers Point (Honouliuli) ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment...
Inoue, Akiomi; Kawakami, Norito; Tsuchiya, Masao; Sakurai, Keiko; Hashimoto, Hideki
2010-01-01
The purpose of this study was to investigate the cross-sectional association of employment contract, company size, and occupation with psychological distress using a nationally representative sample of the Japanese population. From June through July 2007, a total of 9,461 male and 7,717 female employees living in the community were randomly selected and surveyed using a self-administered questionnaire and interview including questions about occupational class variables, psychological distress (K6 scale), treatment for mental disorders, and other covariates. Among males, part-time workers had a significantly higher prevalence of psychological distress than permanent workers. Among females, temporary/contract workers had a significantly higher prevalence of psychological distress than permanent workers. Among males, those who worked at companies with 300-999 employees had a significantly higher prevalence of psychological distress than those who worked at the smallest companies (with 1-29 employees). Company size was not significantly associated with psychological distress among females. Additionally, occupation was not significantly associated with psychological distress among males or females. Similar patterns were observed when the analyses were conducted for those who had psychological distress and/or received treatment for mental disorders. Working as part-time workers, for males, and as temporary/contract workers, for females, may be associated with poor mental health in Japan. No clear gradient in mental health along company size or occupation was observed in Japan.
International Nuclear Information System (INIS)
Jenkins, M. L.
1998-01-01
We have made an analysis of the conditions necessary for the successful use of the weak-beam technique for identifying and characterizing small point-defect clusters in ion-irradiated copper. The visibility of small defects was found to depend only weakly on the magnitude of the beam convergence. In general, the image sizes of small clusters were most sensitive to the magnitude of the deviation parameter s, with the image sizes of some individual defects changing by large amounts for changes as small as 0.025 nm⁻¹. The most reliable information on the true defect size is likely to be obtained by taking a series of 5-9 micrographs with a systematic variation of the deviation parameter over 0.2-0.3 nm⁻¹. This procedure allows size information to be obtained down to a resolution limit of about 0.5 nm for defects situated throughout a foil thickness of 60 nm. The technique has been applied to the determination of changes in the sizes of small defects produced by a low-temperature in-situ irradiation and annealing experiment.
Venkatesan, Arjun K; Gan, Wenhui; Ashani, Harsh; Herckes, Pierre; Westerhoff, Paul
2018-04-15
Phosphorus (P) is an important and often limiting element in terrestrial and aquatic ecosystems. A lack of understanding of its distribution and structures in the environment limits the design of effective P mitigation and recovery approaches. Here we developed a robust method employing size exclusion chromatography (SEC) coupled to ICP-MS to determine the molecular weight (MW) distribution of P in environmental samples. The most abundant fraction of P varied widely across environmental samples: (i) orthophosphate was the dominant fraction (93-100%) in one lake, two aerosol samples and a DOC isolate, (ii) species in the 400-600 Da range were abundant (74-100%) in two surface waters, and (iii) species in the 150-350 Da range were abundant in wastewater effluents. SEC-DOC analysis of the aqueous samples using a similar SEC column showed overlapping peaks for the 400-600 Da species in the two surface waters, and for the >20 kDa species in the effluents, suggesting that these fractions are likely associated with organic matter. The MW resolution and performance of SEC-ICP-MS agreed well with the time-integrated results obtained using a conventional ultrafiltration method. The results show that SEC in combination with ICP-MS and DOC has the potential to be a powerful and easy-to-use method for identifying unknown fractions of P in the environment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Chen, Xiao; Lu, Bin; Yan, Chao-Gan
2018-01-01
Concerns regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, and 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets. Our findings caution against liberal multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
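Test-retest reliability of this kind is commonly quantified with an intraclass correlation coefficient. The sketch below implements ICC(2,1) in the Shrout and Fleiss convention (two-way random effects, absolute agreement, single measures) on a subjects-by-sessions matrix; it is a generic illustration, not the paper's exact pipeline:

```python
import numpy as np

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    y is an (n subjects x k sessions) array of a metric's values."""
    n, k = y.shape
    grand = y.mean()
    sst = ((y - grand) ** 2).sum()
    ssb = k * ((y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((y.mean(axis=0) - grand) ** 2).sum()   # between sessions
    msr = ssb / (n - 1)
    msc = ssc / (k - 1)
    mse = (sst - ssb - ssc) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# perfectly reproducible measurements across two sessions -> ICC = 1
perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
print(icc_2_1(perfect))   # → 1.0

# session-2 measurement noise pulls the reliability below 1
rng = np.random.default_rng(1)
noisy = perfect + np.column_stack([np.zeros(4), rng.normal(0.0, 0.5, 4)])
print(icc_2_1(noisy) < 1.0)   # → True
```

Values around 0.68, as reported above for ALFF, sit in the conventional "moderate to good" band, while replicability across independent datasets is a separate and evidently stricter hurdle.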
Sorption of water vapour by the Na+-exchanged clay-sized fractions of some tropical soil samples
International Nuclear Information System (INIS)
Yormah, T.B.R.; Hayes, M.H.B.
1993-09-01
Water vapour sorption isotherms at 299 K for the Na+-exchanged clay-sized (≤ 2 μm e.s.d.) fraction of two sets of samples taken at three different depths from a tropical soil profile have been studied. One set of samples was treated (with H2O2) to remove much of the organic matter (OM); the other set (of the same samples) was not so treated. The isotherms obtained were all of type II, and analyses by the BET method yielded values for the specific surface area (SSA) and for the average energy of adsorption of the first layer of adsorbate (Ea). OM content and SSA for the untreated samples were found to decrease with depth. Whereas removal of organic matter made negligible difference to the SSA of the top/surface soil, the same treatment produced a significant increase in the SSA of the samples taken from the middle and lower depths of the profile; the resulting increase was more pronounced for the subsoil. It has been deduced from these results that OM in the surface soil was less involved with the inorganic soil colloids than that in the subsoil. The increase in surface area that resulted from the removal of OM from the subsoil was most probably due to disaggregation. The values of Ea obtained show that for all the samples the adsorption of water vapour became more energetic after the oxidative removal of organic matter; the resulting ΔEa also increased with depth. This suggests that in the dry state, the "cleaned" surface of the inorganic soil colloids was more energetic than the "organic-matter-coated" surface. These data provide strong support for the deduction that OM in the subsoil was in a more "combined" state than that in the surface soil. (author). 21 refs, 4 figs, 2 tabs
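The BET analysis named above reduces to a straight-line fit of the linearized isotherm, x/(v(1-x)) = 1/(vm·c) + ((c-1)/(vm·c))·x with x = p/p0, from which the monolayer capacity vm, the energy constant c (related to the adsorption energy Ea), and the specific surface area follow. The cross-sectional area assumed for adsorbed water and the isotherm values below are illustrative:

```python
import numpy as np

AVOGADRO = 6.022e23
V_MOLAR = 22414.0       # cm^3 (STP) per mole of gas
SIGMA_H2O = 0.106e-18   # m^2 per adsorbed water molecule (assumed value)

def bet_fit(x, v):
    """Fit the linearized BET equation; x = p/p0, v = amount adsorbed in
    cm^3(STP)/g. Returns monolayer capacity vm, energy constant c, and the
    specific surface area in m^2/g."""
    y = x / (v * (1.0 - x))
    slope, intercept = np.polyfit(x, y, 1)
    vm = 1.0 / (slope + intercept)
    c = 1.0 + slope / intercept
    ssa = vm / V_MOLAR * AVOGADRO * SIGMA_H2O
    return vm, c, ssa

# synthetic isotherm generated from the BET equation itself (vm=5, c=40)
x = np.linspace(0.05, 0.35, 10)           # the usual linear BET range
vm_true, c_true = 5.0, 40.0
v = vm_true * c_true * x / ((1.0 - x) * (1.0 + (c_true - 1.0) * x))
vm, c, ssa = bet_fit(x, v)
print(round(vm, 3), round(c, 1))          # → 5.0 40.0
```

A larger fitted c corresponds to more energetic first-layer adsorption, which is the direction of the change the abstract reports after oxidative removal of organic matter.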
Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali; Madry, Henning
2013-11-01
Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that the lower intraindividual variability of the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n = 12 defects), while defects were created bilaterally in 6 animals (bilateral design; n = 12 defects). After 3 weeks, osteochondral repair was evaluated histologically using an established grading system. Based on intra- and interindividual variabilities, the sample sizes required to detect discrete differences in the histological score were determined for both study designs (α = 0.05, β = 0.20). Coefficients of variation (%CV) of the total histological score were 1.9-fold higher for the unilateral design than for the bilateral approach (26 versus 14 %CV). The resulting number of joints needed was always higher for the unilateral design, translating into an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced when detecting small effect sizes and when estimating large standard deviations. The data underline the possible benefit of bilateral study designs in reducing sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering.
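The link between variability and required sample size can be sketched with the standard normal-approximation formula for a two-sided comparison of two means at α = 0.05 and 80% power: required n scales with the square of the coefficient of variation, so a 1.9-fold CV increase alone implies roughly a 3.4-fold increase in n. The detectable difference chosen below is an assumption for illustration:

```python
from scipy import stats

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison of
    means: n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / delta)^2."""
    z = stats.norm.ppf(1.0 - alpha / 2.0) + stats.norm.ppf(power)
    return 2.0 * z ** 2 * (sd / delta) ** 2

# variability expressed as %CV of the histological score, as in the study;
# the detectable difference is set to 20% of the mean score (an assumption)
delta = 20.0
n_bilateral = n_per_group(sd=14.0, delta=delta)    # 14 %CV, bilateral design
n_unilateral = n_per_group(sd=26.0, delta=delta)   # 26 %CV, unilateral design
print(round(n_unilateral / n_bilateral, 2))        # → 3.45, i.e. (26/14)^2
```

The bilateral design also yields two defects per animal, which compounds the savings in experimental animals beyond this variance effect alone.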
Energy Technology Data Exchange (ETDEWEB)
Chang, Ying-jie [Department of Agricultural Chemistry, National Taiwan University, Taipei 106, Taiwan (China); Shih, Yang-hsin, E-mail: yhs@ntu.edu.tw [Department of Agricultural Chemistry, National Taiwan University, Taipei 106, Taiwan (China); Su, Chiu-Hun [Material and Chemical Research Laboratories, Industrial Technology Research Institute, Hsinchu 310, Taiwan (China); Ho, Han-Chen [Department of Anatomy, Tzu-Chi University, Hualien 970, Taiwan (China)
2017-01-15
Highlights: • Three emerging techniques to detect NPs in the aquatic environment were evaluated. • The pretreatment of centrifugation to decrease the interference was established. • Asymmetric flow field flow fractionation has a low recovery of NPs. • Hydrodynamic chromatography is recommended to be a low-cost screening tool. • Single particle ICPMS is recommended to accurately measure trace NPs in water. - Abstract: Due to the widespread application of engineered nanoparticles, their potential risk to ecosystems and human health is of growing concern. Silver nanoparticles (Ag NPs) are one of the most extensively produced NPs. Thus, this study aims to develop a method to detect Ag NPs in different aquatic systems. In complex media, three emerging techniques are compared, including hydrodynamic chromatography (HDC), asymmetric flow field flow fractionation (AF4) and single particle inductively coupled plasma-mass spectrometry (SP-ICP-MS). The pre-treatment procedure of centrifugation is evaluated. HDC can estimate the Ag NP sizes, which were consistent with the results obtained from DLS. AF4 can also determine the size of Ag NPs but with lower recoveries, which could result from the interactions between Ag NPs and the working membrane. For the SP-ICP-MS, both the particle size and concentrations can be determined with high Ag NP recoveries. The particle size resulting from SP-ICP-MS also corresponded to the transmission electron microscopy observation (p > 0.05). Therefore, HDC and SP-ICP-MS are recommended for environmental analysis of the samples after our established pre-treatment process. The findings of this study propose a preliminary technique to more accurately determine the Ag NPs in aquatic environments and to use this knowledge to evaluate the environmental impact of manufactured NPs.
Sankar Sana, Shib
2016-01-01
The paper develops a production-inventory model of a two-stage supply chain, consisting of one manufacturer and one retailer, to study the production lot size/order quantity, the reorder point and the sales team's initiatives, where demand of the end customers depends simultaneously on a random variable and on the sales team's initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of the production lot size. In the chain, the cost of the sales team's initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated such that their optimal profits approach their target profits. This study suggests that the management of firms determine the optimal order quantity/production quantity, reorder point and sales team's initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to provide more insights into the model.
Heidarizadi, Elham; Tabaraki, Reza
2016-01-01
A sensitive cloud point extraction method for the simultaneous determination of trace amounts of sunset yellow (SY), allura red (AR) and brilliant blue (BB) by spectrophotometry was developed. The effects of experimental parameters such as Triton X-100 concentration, KCl concentration and initial pH on the extraction efficiency of the dyes were optimized using response surface methodology (RSM) with a Doehlert design. Experimental data were evaluated by applying RSM integrating a desirability function approach. The optimum conditions for the simultaneous extraction of SY, AR and BB were: Triton X-100 concentration 0.0635 mol L⁻¹, KCl concentration 0.11 mol L⁻¹ and pH 4, with a maximum overall desirability D of 0.95. Correspondingly, the maximum extraction efficiencies of SY, AR and BB were 100%, 92.23% and 95.69%, respectively. At optimal conditions, extraction efficiencies were 99.8%, 92.48% and 95.96% for SY, AR and BB, respectively. These values were only 0.2%, 0.25% and 0.27% different from the predicted values, suggesting that the desirability function approach with RSM was a useful technique for simultaneous dye extraction. Linear calibration curves were obtained in the ranges of 0.02-4 μg mL⁻¹ for SY, 0.025-2.5 μg mL⁻¹ for AR and 0.02-4 μg mL⁻¹ for BB under optimum conditions. Detection limits based on three times the standard deviation of the blank (3Sb) were 0.009, 0.01 and 0.007 μg mL⁻¹ (n=10) for SY, AR and BB, respectively. The method was successfully used for the simultaneous determination of the dyes in different food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
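The 3Sb detection-limit criterion in the abstract is simple to compute; the blank readings and calibration slope below are hypothetical stand-ins, not the study's data:

```python
import statistics

def detection_limit(blank_signals, slope):
    """LOD = 3 * SD(blank) / calibration slope (the 3Sb criterion)."""
    return 3 * statistics.stdev(blank_signals) / slope

# Hypothetical blank absorbances (n = 10) and a slope in absorbance per ug/mL:
blanks = [0.012, 0.010, 0.013, 0.011, 0.012, 0.010, 0.013, 0.011, 0.012, 0.011]
slope = 0.35
print(round(detection_limit(blanks, slope), 4))  # LOD in ug/mL
```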
Directory of Open Access Journals (Sweden)
Fotini Kokou
2016-05-01
Full Text Available One of the main concerns in gene expression studies is the calculation of statistical significance, which in most cases remains low due to limited sample size. Increasing the number of biological replicates translates into more effective gains in power which, especially in nutritional experiments, is of great importance, as individual variation in growth performance parameters and feed conversion is high. The present study investigates the gilthead sea bream Sparus aurata, one of the most important Mediterranean aquaculture species. For 24 gilthead sea bream individuals (biological replicates), the effects of gradual substitution of fish meal by plant ingredients (0% (control), 25%, 50% and 75%) in the diets were studied by looking at expression levels of four immune- and stress-related genes in intestine, head kidney and liver. The present results showed that only the lowest substitution percentage is tolerated and that liver is the most sensitive tissue for detecting gene expression variations in relation to fish meal-substituted diets. Additionally, the usage of three independent biological replicates was evaluated by calculating the averages of all possible triplets, in order to assess the suitability of the selected genes for stress indication as well as the impact of the experimental set-up, here the impact of FM substitution. Gene expression was altered depending on the selected biological triplicate. Only for two genes in liver (hsp70 and tgf) was significant differential expression assured independently of the triplicates used. These results underline the importance of choosing an adequate sample number, especially when significant but minor differences in gene expression levels are observed. Keywords: Sample size, Gene expression, Fish meal replacement, Immune response, Gilthead sea bream
International Nuclear Information System (INIS)
Ito, Hiroshi; Akaizawa, Takashi; Goto, Ryoui
1994-01-01
In a simplified method for the measurement of cerebral blood flow using one ¹²³I-IMP SPECT scan and one-point arterial blood sampling (autoradiography method), the input function is obtained by calibrating a standard input function with one-point arterial blood sampling. The purpose of this study is to validate calibration by one-point venous blood sampling as a substitute for one-point arterial blood sampling. After intravenous infusion of ¹²³I-IMP, frequent arterial and venous blood samplings were simultaneously performed on 12 patients with CNS disease without any heart or lung disease and 5 normal volunteers. The ratios of the radioactivity of venous whole blood obtained from the cutaneous cubital vein to that of arterial whole blood were 0.76±0.08, 0.80±0.05, 0.81±0.06 and 0.83±0.11 at 10, 20, 30 and 50 min after ¹²³I-IMP infusion, respectively; the venous blood radioactivity was always about 20% lower than the arterial blood radioactivity during the 50 min. However, the corresponding ratios for blood obtained from the cutaneous dorsal hand vein were 0.93±0.02, 0.94±0.05, 0.98±0.04 and 0.98±0.03, respectively; this venous blood radioactivity was consistent with that of arterial blood. These results indicate that the arterio-venous difference in radioactivity in a peripheral cutaneous vein such as a dorsal hand vein is minimal, due to arteriovenous shunting in the palm. Therefore, substituting blood sampling from the cutaneous dorsal hand vein for arterial sampling should be possible. The optimal time for venous blood sampling, evaluated by error analysis, was 20 min after ¹²³I-IMP infusion, i.e. 10 min later than that for arterial blood sampling. (author)
Energy Technology Data Exchange (ETDEWEB)
Garino, Terry J.
2007-09-01
The sintering behavior of Sandia chem-prep high field varistor materials was studied using techniques including in situ shrinkage measurements, optical and scanning electron microscopy and X-ray diffraction. A thorough literature review of phase behavior, sintering and microstructure in Bi2O3-ZnO varistor systems is included. The effects of Bi2O3 content (from 0.25 to 0.56 mol%) and of sodium doping level (0 to 600 ppm) on the isothermal densification kinetics were determined between 650 and 825°C. At ≥750°C, samples with ≥0.41 mol% Bi2O3 have very similar densification kinetics, whereas samples with ≤0.33 mol% begin to densify only after a period of hours at low temperatures. The effect of the sodium content was greatest at ~700°C for the standard 0.56 mol% Bi2O3 composition and was greater in samples with 0.30 mol% Bi2O3 than in those with 0.56 mol%. Sintering experiments on samples of differing size and shape found that densification decreases and mass loss increases with increasing surface-area-to-volume ratio. However, these two effects have different causes: the enhancement in densification as samples increase in size appears to be caused by a low-oxygen internal atmosphere that develops, whereas the mass loss is due to the evaporation of bismuth oxide. In situ XRD experiments showed that the bismuth is initially present as an oxycarbonate that transforms to metastable β-Bi2O3 by 400°C. At ~650°C, coincident with the onset of densification, the cubic binary phase Bi38ZnO58 forms and remains stable to >800°C, indicating that a eutectic liquid does not form during normal varistor sintering (~730°C). Finally, the formation and morphology of bismuth oxide phase regions that form on the varistor surfaces during slow cooling were studied.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Heo, Moonseong; Kim, Namhee; Rinke, Michael L; Wylie-Rosett, Judith
2018-02-01
Stepped-wedge (SW) designs have been steadily implemented in a variety of trials. A SW design typically assumes a three-level hierarchical data structure where participants are nested within times or periods which are in turn nested within clusters. Therefore, statistical models for analysis of SW trial data need to consider two correlations, the first and second level correlations. Existing power functions and sample size determination formulas had been derived based on statistical models for two-level data structures. Consequently, the second-level correlation has not been incorporated in conventional power analyses. In this paper, we derived a closed-form explicit power function based on a statistical model for three-level continuous outcome data. The power function is based on a pooled overall estimate of stratified cluster-specific estimates of an intervention effect. The sampling distribution of the pooled estimate is derived by applying a fixed-effect meta-analytic approach. Simulation studies verified that the derived power function is unbiased and can be applicable to varying number of participants per period per cluster. In addition, when data structures are assumed to have two levels, we compare three types of power functions by conducting additional simulation studies under a two-level statistical model. In this case, the power function based on a sampling distribution of a marginal, as opposed to pooled, estimate of the intervention effect performed the best. Extensions of power functions to binary outcomes are also suggested.
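A minimal sketch of the pooling idea this abstract describes: cluster-specific effect estimates are combined with inverse-variance (fixed-effect) weights, and power follows from a two-sided z-test on the pooled estimate. The cluster standard errors and effect size below are hypothetical, and the paper's actual derivation additionally models both levels of correlation:

```python
from statistics import NormalDist

def pooled_power(cluster_ses, delta, alpha=0.05):
    """Power of a two-sided z-test based on an inverse-variance-weighted
    (fixed-effect) pool of stratified cluster-specific effect estimates."""
    se_pooled = sum(1 / s ** 2 for s in cluster_ses) ** -0.5
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    shift = delta / se_pooled
    return nd.cdf(shift - z) + nd.cdf(-shift - z)

# Six clusters with hypothetical standard errors, true effect delta = 0.6:
print(round(pooled_power([0.5, 0.6, 0.5, 0.7, 0.6, 0.5], delta=0.6), 3))
```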
G. J. Jordan; M. J. Ducey; J. H. Gove
2004-01-01
We present the results of a timed field trial comparing the bias characteristics and relative sampling efficiency of line-intersect, fixed-area, and point relascope sampling for downed coarse woody material. Seven stands in a managed northern hardwood forest in New Hampshire were inventoried. Significant differences were found among estimates in some stands, indicating...
Sevelius, Jae M.
2017-01-01
Background. Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. Objectives. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. Search methods. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as “gray” literature, through an Internet search. We limited the search to 2006 through 2016. Selection criteria. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. Data collection and analysis. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Main results. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Authors’ conclusions. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask
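The random-effects pooling used in the abstract can be sketched with the DerSimonian-Laird estimator; the per-survey prevalence estimates (per 100,000) and variances below are hypothetical, not the twelve surveys analyzed:

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate (DerSimonian-Laird method)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)           # between-survey variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se

# Hypothetical survey-level prevalence estimates and sampling variances:
est = [310.0, 360.0, 420.0, 390.0]
var = [400.0, 900.0, 625.0, 2500.0]
pooled, se = dersimonian_laird(est, var)
print(round(pooled, 1), round(se, 1))
```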
Schuler, Michael; Whitsitt, Seth; Henry, Louis-Paul; Sachdev, Subir; Läuchli, Andreas M
2016-11-18
The low-energy spectra of many-body systems on a torus of finite size L are well understood in magnetically ordered and gapped topological phases. However, the spectra at quantum critical points separating such phases are largely unexplored for (2+1)D systems. Using a combination of analytical and numerical techniques, we accurately calculate and analyze the low-energy torus spectrum at an Ising critical point, which provides a universal fingerprint of the underlying quantum field theory, with the energy levels given by universal numbers times 1/L. We highlight the implications of a neighboring topological phase on the spectrum by studying the Ising* transition (i.e. the transition between a Z₂ topological phase and a trivial paramagnet), in the example of the toric code in a longitudinal field, and advocate a phenomenological picture that provides qualitative insight into the operator content of the critical field theory.
DEFF Research Database (Denmark)
Nørgaard, Birgitte; Mogensen, Christian Backer
2012-01-01
Time is a crucial factor in an emergency department, and the effectiveness of diagnosis depends on, among other things, the accessibility of rapidly reported laboratory test results, i.e. a short turnaround time (TAT). Former studies have shown a reduced time to action when point-of-care technologies (POCT) are used in emergency departments. This study assesses the hypothesis that using point-of-care technology for analysing blood samples, versus tube-transporting blood samples for laboratory analyses, results in a shorter time from when the blood sample is collected to when the result is reported...
Majedi, Seyed Mohammad; Lee, Hian Kee; Kelly, Barry C
2012-08-07
Cloud point extraction (CPE) with inductively coupled plasma mass spectrometry (ICPMS) was applied to the analysis of zinc oxide nanoparticles (ZnO NPs, mean diameter ~40 nm) in water and wastewater samples. Five CPE factors, surfactant (Triton X-114 (TX-114)) concentration, pH, ionic strength, incubation temperature, and incubation time, were investigated and optimized by orthogonal array design (OAD). A three-level OAD matrix, OA27 (3^13), was employed in which the effects of the factors and their contributions to the extraction efficiency were quantitatively assessed by the analysis of variance (ANOVA). Based on the analysis, the best extraction efficiency (87.3%) was obtained at 0.25% (w/v) of TX-114, pH = 10, salt content of 15 mM NaCl, incubation temperature of 45 °C, and incubation time of 30 min. The results showed that surfactant concentration, pH, incubation time, and ionic strength exert significant effects on the extraction efficiency. Preconcentration factors of 62 and 220 were obtained with 0.25 and 0.05% (w/v) TX-114, respectively. The relative recoveries of ZnO NPs from different environmental waters were in the range 64-123% at 0.5-100 μg/L spiked levels. The ZnO NPs extracted into the TX-114-rich phase were characterized by transmission electron microscopy (TEM) combined with energy-dispersive X-ray spectroscopy (EDS) and UV-visible spectrometry. Based on the results, no significant changes in size and shape of NPs were observed compared to those in the water before extraction. The extracted ZnO NPs were determined after microwave digestion by ICPMS. A detection limit of 0.05 μg/L was achieved for ZnO NPs. The optimized conditions were successfully applied to the analysis of ZnO NPs in water samples.
Chhikara, R. S.; Odell, P. L.
1973-01-01
A multichannel scanning device may fail to observe objects because of obstructions blocking the view, or different categories of objects may make up a resolution element, giving rise to a single observation. Ground truth will be required on any such categories of objects in order to estimate their expected proportions associated with the various classes represented in the remote sensing data. Considering the classes to be distributed as multivariate normal with different mean vectors and a common covariance, maximum likelihood estimates are given for the expected proportions of objects associated with different classes, using the Bayes procedure for classification of individuals obtained from these classes. An approximate solution for simultaneous confidence intervals on these proportions is given, and thereby the sample size needed to achieve a desired accuracy for the estimates is determined.
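The core estimation problem in the abstract, recovering true category proportions from classifier output, can be illustrated in the two-class case by inverting the classifier's misclassification probabilities (the full maximum likelihood treatment also yields confidence intervals). The probability matrix and observed proportions below are hypothetical:

```python
def correct_proportions(p_matrix, observed):
    """Solve observed_j = sum_i true_i * P[i][j] for the true class
    proportions in the 2-class case, i.e. invert the classifier's
    confusion probabilities, then renormalize."""
    (a, b), (c, d) = p_matrix   # P[i][j]: object of true class i assigned class j
    det = a * d - b * c
    o1, o2 = observed
    t1 = (d * o1 - c * o2) / det
    t2 = (a * o2 - b * o1) / det
    s = t1 + t2
    return t1 / s, t2 / s

# Hypothetical confusion probabilities and observed scene proportions:
P = ((0.9, 0.1),
     (0.2, 0.8))
t1, t2 = correct_proportions(P, (0.44, 0.56))
print(round(t1, 4), round(t2, 4))
```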
Li, Aifeng; Ma, Feifei; Song, Xiuli; Yu, Rencheng
2011-03-18
Solid-phase adsorption toxin tracking (SPATT) technology was developed as an effective passive sampling method for dissolved diarrhetic shellfish poisoning (DSP) toxins in seawater. HP20 and SP700 resins have been reported as preferred adsorption substrates for lipophilic algal toxins and are recommended for use in SPATT testing. However, information on the mechanism of passive adsorption by these polymeric resins is still limited. Described herein is a study on the adsorption of OA and DTX1 toxins extracted from Prorocentrum lima algae by HP20 and SP700 resins. The pore size distribution of the adsorbents was characterized by a nitrogen adsorption method to determine the relationship between adsorption and resin porosity. The Freundlich equation constants showed that the difference in adsorption capacity for the OA and DTX1 toxins was determined not by the specific surface area but by the pore size distribution, with micropores playing an especially important role. Additionally, it was found that the differences in affinity between OA and DTX1 for aromatic resins resulted from polarity discrepancies due to DTX1 having an additional methyl moiety. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
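The Freundlich analysis mentioned in the abstract fits q = K·C^(1/n) by linear regression in log-log space; the sketch below uses synthetic data generated from known constants, not the OA/DTX1 measurements:

```python
import math

def fit_freundlich(c, q):
    """Least-squares fit of the Freundlich isotherm q = K * C**(1/n) in log
    space: log q = log K + (1/n) log C."""
    xs = [math.log(ci) for ci in c]
    ys = [math.log(qi) for qi in q]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
        (x - xbar) ** 2 for x in xs)
    k = math.exp(ybar - slope * xbar)
    return k, slope  # slope is the Freundlich exponent 1/n

# Synthetic data generated from K = 2.0, 1/n = 0.5:
conc = [1.0, 4.0, 9.0, 16.0, 25.0]
qads = [2.0 * ci ** 0.5 for ci in conc]
k, inv_n = fit_freundlich(conc, qads)
print(round(k, 3), round(inv_n, 3))
```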
Woo, Hyun-Kyung; Sunkara, Vijaya; Park, Juhee; Kim, Tae-Hyeong; Han, Ja-Ryoung; Kim, Chi-Ju; Choi, Hyun-Il; Kim, Yoon-Keun; Cho, Yoon-Kyoung
2017-02-28
Extracellular vesicles (EVs) are cell-derived, nanoscale vesicles that carry nucleic acids and proteins from their cells of origin and show great potential as biomarkers for many diseases, including cancer. Efficient isolation and detection methods are prerequisites for exploiting their use in clinical settings and understanding their physiological functions. Here, we presented a rapid, label-free, and highly sensitive method for EV isolation and quantification using a lab-on-a-disc integrated with two nanofilters (Exodisc). Starting from raw biological samples, such as cell-culture supernatant (CCS) or cancer-patient urine, fully automated enrichment of EVs in the size range of 20-600 nm was achieved within 30 min using a tabletop-sized centrifugal microfluidic system. Quantitative tests using nanoparticle-tracking analysis confirmed that the Exodisc enabled >95% recovery of EVs from CCS. Additionally, analysis of mRNA retrieved from EVs revealed that the Exodisc provided >100-fold higher concentration of mRNA as compared with the gold-standard ultracentrifugation method. Furthermore, on-disc enzyme-linked immunosorbent assay using urinary EVs isolated from bladder cancer patients showed high levels of CD9 and CD81 expression, suggesting that this method may be potentially useful in clinical settings to test urinary EV-based biomarkers for cancer diagnostics.
Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.
2013-01-01
Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970
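The trial sample sizes quoted in the abstract come from the standard "n80"-style calculation: the per-arm n needed to detect a 25% slowing of the mean annualized change with 80% power at two-sided α = 0.05. A sketch with hypothetical atrophy-rate numbers (not ADNI values):

```python
from math import ceil
from statistics import NormalDist

def n80(mean_change, sd_change, reduction=0.25, alpha=0.05, power=0.80):
    """Per-arm sample size to detect a fractional `reduction` in the mean
    annualized change, two-sided z-test approximation."""
    z = NormalDist().inv_cdf
    delta = reduction * mean_change
    return ceil(2 * (sd_change * (z(1 - alpha / 2) + z(power)) / delta) ** 2)

# Hypothetical marker: 2.0%/yr mean atrophy with SD 1.0%/yr:
print(n80(mean_change=2.0, sd_change=1.0))
```

A marker with a larger mean-to-SD ratio of change needs fewer subjects, which is why the abstract compares analysis techniques on exactly this statistic.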
Directory of Open Access Journals (Sweden)
Valéria Schimitz Marodim
2000-10-01
Full Text Available This study was carried out to establish the experimental design and sample size for hydroponic lettuce (Lactuca sativa) crop under the nutrient film technique (NFT). The experiment was conducted in the Laboratory of Soilless/Hydroponic Crops of the Plant Science Department of the Federal University of Santa Maria and was based on plant weight data. Under hydroponic conditions on fibre-cement benches with six ducts, the most suitable experimental design for lettuce is randomised blocks when the experimental unit consists of strips transverse to the bench ducts, and completely randomised when the bench is the experimental unit. For plant weight, the sample size should be 40 and 7 plants, respectively, for a confidence-interval half-width, as a percentage of the mean (d), equal to 5% and 20%.
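The sample-size criterion in the abstract (n such that the confidence-interval half-width is d% of the mean) has the closed normal-approximation form n = (z·CV/d)²; the exact calculation iterates with Student's t, and the 16 %CV used below is a hypothetical plant-weight CV, not the study's estimate:

```python
from math import ceil
from statistics import NormalDist

def sample_size_ci(cv_percent, d_percent, alpha=0.05):
    """Sample size so the CI half-width equals d% of the mean:
    n = (z * CV / d)^2, normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil((z * cv_percent / d_percent) ** 2)

# Hypothetical 16 %CV at the two precision targets from the abstract:
print(sample_size_ci(16, 5), sample_size_ci(16, 20))
```

Because n shrinks with 1/d², relaxing the precision target from 5% to 20% of the mean cuts the required sample size by a factor of 16 under this approximation.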
Ting, Tan Xue; Hashim, Rohaidah; Ahmad, Norazah; Abdullah, Khairul Hafizi
2013-01-01
Pertussis, or whooping cough, is a highly infectious respiratory disease caused by Bordetella pertussis. In vaccinating countries, infants, adolescents and adults are the relevant patient groups. A total of 707 clinical specimens were received from major hospitals in Malaysia in the year 2011. These specimens were cultured on Regan-Lowe charcoal agar and subjected to end-point PCR, which amplified the repetitive insertion sequence IS481 and the pertussis toxin promoter gene. Of these specimens, 275 were positive: 4 by culture only, 6 by both end-point PCR and culture, and 265 by end-point PCR only. The majority of the positive cases were from patients ≤3 months old (77.1%) (P 0.05). Our study showed that the end-point PCR technique was able to pick up more positive cases compared to the culture method.
Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G
2017-06-01
Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu in China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM10, and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM2.5. The size distribution of source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the highest to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by the crustal dust from south-east (SE) for PM10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies. Copyright © 2016. Published by Elsevier B.V.
Carr, Greg J; Bailer, A John; Rawlings, Jane M; Belanger, Scott E
2018-01-19
The fish acute toxicity test method is foundational to aquatic toxicity testing strategies, yet the literature lacks a concise sample size assessment. While various sources address sample size, historical precedent seems to play a larger role than objective measures. Here, a novel and comprehensive quantification of the effect of sample size on estimation of the LC50 is presented, covering a wide range of scenarios. The results put into perspective the practical differences across a range of sample sizes, from N = 5/concentration up to N = 23/concentration. This work provides a framework for setting sample size guidance. It illustrates ways to quantify the performance of LC50 estimation, which can be used to set sample size guidance given reasonably difficult, or worst-case, scenarios. There is a clear benefit to larger sample size studies: they reduce error in the determination of LC50s, and lead to more robust safe environmental concentration determinations, particularly in cases likely to be called worst-case (shallow slope and true LC50 near the edges of the concentration range). Given that the use of well-justified sample sizes is crucial to reducing uncertainty in toxicity estimates, these results lead us to recommend a reconsideration of the current de minimis 7/concentration sample size for critical studies (e.g., studies needed for a chemical registration, which are being tested for the first time, or involving difficult test substances). This article is protected by copyright. All rights reserved.
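A minimal sketch of LC50 estimation, the quantity whose sampling behavior the abstract studies: empirical logits from partial-kill concentrations are regressed on log10 dose, and the LC50 is read off where mortality is 50% (a classical approximation to the full likelihood fit). The dose-mortality data below are hypothetical, using the de minimis N = 7/concentration the abstract discusses:

```python
import math

def lc50_logit(doses, n_total, n_dead):
    """Estimate the LC50 by linear regression of empirical logits on log10
    dose, keeping only partial-kill concentrations (0 < dead < total)."""
    pts = [(math.log10(d), math.log((k / n) / (1 - k / n)))
           for d, n, k in zip(doses, n_total, n_dead) if 0 < k < n]
    m = len(pts)
    xbar = sum(x for x, _ in pts) / m
    ybar = sum(y for _, y in pts) / m
    b = sum((x - xbar) * (y - ybar) for x, y in pts) / sum(
        (x - xbar) ** 2 for x, _ in pts)
    a = ybar - b * xbar
    return 10 ** (-a / b)   # dose at logit = 0, i.e. 50% mortality

doses = [1.0, 2.0, 4.0, 8.0]   # mg/L, hypothetical
n_total = [7, 7, 7, 7]         # N = 7 fish per concentration
n_dead = [1, 3, 5, 6]
print(round(lc50_logit(doses, n_total, n_dead), 2))  # mg/L
```

With a shallow slope or an LC50 near the edge of the tested range (the abstract's worst cases), this point estimate becomes much noisier, which is exactly the motivation for larger N per concentration.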
Test results of the first 50 kA NbTi full size sample for ITER
International Nuclear Information System (INIS)
Ciazynski, D.; Zani, L.; Huber, S.; Stepanov, B.; Karlemo, B.
2003-01-01
Within the framework of the research studies for the International Thermonuclear Experimental Reactor (ITER) project, the first full size NbTi conductor sample was fabricated in industry and tested in the SULTAN facility (Villigen, Switzerland). This sample (PF-FSJS), which is relevant to the Poloidal Field coils of ITER, is composed of two parallel straight bars of conductor, connected at the bottom through a joint designed according to the CEA twin-box concept. The two conductor legs are identical except for the use of different strands: a nickel-plated NbTi strand with a pure copper matrix in one leg, and a bare NbTi strand with a copper matrix and internal CuNi barrier in the other leg. The two conductors and the joint were extensively tested regarding DC (direct current) and AC (alternating current) properties. This paper reports on the test results and analysis, stressing the differences between the two conductor legs and discussing the impact of the test results on the ITER design criteria for conductor and joint. While the joint DC resistance and the conductor and joint AC losses fulfilled the ITER requirements, neither conductor could reach its current sharing temperature at relevant ITER currents, due to instabilities. Although the drop in temperature is slight for the CuNi strand cable, it is more significant for the Ni-plated strand cable. (authors)
González, C. M.; Gómez, C. D.; Rojas, N. Y.; Acevedo, H.; Aristizábal, B. H.
2017-03-01
Cities in emerging countries are facing fast growth and urbanization; however, the study of air pollutant emissions and their dynamics is scarce, making their populations vulnerable to the potential effects of air pollution. This situation is critical in medium-sized urban areas built along the tropical Andean mountains. This work assesses the contribution of on-road vehicular and point-source industrial activities in the medium-sized Andean city of Manizales, Colombia. Annual fluxes of criteria pollutants, NMVOC, and greenhouse gases were estimated. Emissions were dominated by vehicular activity, with more than 90% of total estimated releases for the majority of air pollutants. On-road vehicular emissions for CO (43.4 Gg/yr) and NMVOC (9.6 Gg/yr) were mainly associated with the use of motorcycles (50% and 81% of total CO and NMVOC emissions, respectively). Public transit buses were the main source of PM10 (47%) and NOx (48%). The per-capita emission index was significantly higher in Manizales than in other medium-sized cities, especially for NMVOC, CO, NOx and CO2. The unique mountainous terrain of Andean cities suggests that a methodology based on the VSP model could give more realistic emission estimates, with additional model components that include slope and acceleration. Food and beverage facilities were the main contributors of point-source industrial emissions for PM10 (63%), SOx (55%) and NOx (45%), whereas scrap metal recycling had high emissions of CO (73%) and NMVOC (47%). Results provide the baseline for ongoing research in atmospheric modeling and urban air quality, in order to improve the understanding of air pollutant fluxes, transport and transformation in the atmosphere. In addition, this emission inventory could be used as a tool to identify areas of public health exposure and provide information for future decision makers.
Panja, Rajeswar; Roy, Sourav; Jana, Debanjan; Maikap, Siddheswar
2014-12-01
The impact of device size and Al2O3 film thickness on the Cu pillars and resistive switching memory characteristics of Al/Cu/Al2O3/TiN structures has been investigated for the first time. The memory device size and the 18-nm thickness of the Al2O3 film are observed by transmission electron microscope imaging. The 20-nm-thick Al2O3 films have been used for Cu pillar formation (i.e., stronger Cu filaments) in the Al/Cu/Al2O3/TiN structures, which can be used for three-dimensional (3D) cross-point architecture, as reported previously (Nanoscale Res. Lett. 9:366, 2014). Fifty randomly picked devices with sizes ranging from 8 × 8 to 0.4 × 0.4 μm² have been measured. The 8-μm devices show a 100% yield of Cu pillars, whereas a yield of only 74% is observed for the 0.4-μm devices, because smaller devices have a stronger Joule heating effect; the larger devices show long read endurance of 10⁵ cycles at a high read voltage of -1.5 V. On the other hand, the resistive switching memory characteristics of the 0.4-μm devices with a 2-nm-thick Al2O3 film are superior to those of both the larger devices and the thicker (10 nm) Al2O3 film, owing to the higher Cu diffusion rate for the larger size and thicker Al2O3 film. Consequently, higher device-to-device uniformity of 88% and a lower average RESET current of approximately 328 μA are observed for the 0.4-μm devices with a 2-nm-thick Al2O3 film. The data retention capability of our memory device of >48 h makes it promising for future nanoscale nonvolatile applications. This conductive bridging resistive random access memory (CBRAM) device is forming-free at a current compliance (CC) of 30 μA (even at a lowest CC of 0.1 μA) and an operation voltage of ±3 V at a high resistance ratio of >10⁴.
International Nuclear Information System (INIS)
Odano, Ikuo; Takahashi, Naoya; Noguchi, Eikichi; Ohtaki, Hiro; Hatano, Masayoshi; Yamazaki, Yoshihiro; Higuchi, Takeshi; Ohkubo, Masaki.
1994-01-01
We developed a new non-invasive technique, the one-point sampling method, for quantitative measurement of regional cerebral blood flow (rCBF) with N-isopropyl-p-[¹²³I]iodoamphetamine (¹²³I-IMP) and SPECT. Although continuous withdrawal of arterial blood and octanol treatment of the blood are required in the conventional microsphere method, the new technique does not require these two procedures. The total activity of ¹²³I-IMP obtained by continuous withdrawal of arterial blood is inferred from the activity of ¹²³I-IMP in a one-point arterial sample using a regression line. To determine the optimal one-point sampling time for inferring the integral input function of the continuous withdrawal, and whether octanol treatment of the sampled blood was required, we examined the correlation between the total activity of arterial blood withdrawn from 0 to 5 min after the injection and the activity of a one-point sample obtained at time t, and calculated a regression line. As a result, the minimum % error of the inference using the regression line was obtained at 6 min after the ¹²³I-IMP injection; moreover, octanol treatment was not required. Then, examining the effect on the rCBF values when the sampling time deviated from 6 min, we could correct the values to within approximately 3% error when the sample was obtained at 6±1 min after the injection. The one-point sampling method provides accurate and relatively non-invasive measurement of rCBF without octanol extraction of arterial blood. (author)
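The regression step described above, inferring the integrated arterial input (0-5 min continuous withdrawal) from a single sample drawn at 6 min, can be sketched as an ordinary least-squares fit. All values below are synthetic placeholders, not the study's data:

```python
import random

# Synthetic calibration subjects (all numbers hypothetical): for each we
# "know" both the integrated arterial activity (continuous withdrawal)
# and the single-sample activity at 6 min after injection.
random.seed(0)
one_point = [random.uniform(50.0, 150.0) for _ in range(20)]               # activity at 6 min
integrated = [2.5 * x + 10.0 + random.gauss(0.0, 3.0) for x in one_point]  # total input

# Ordinary least-squares regression line: integrated = a * one_point + b
n = len(one_point)
mx = sum(one_point) / n
my = sum(integrated) / n
a = sum((x - mx) * (y - my) for x, y in zip(one_point, integrated)) / \
    sum((x - mx) ** 2 for x in one_point)
b = my - a * mx

# For a new subject, a single 6-min sample then infers the input function
# without continuous withdrawal.
predicted_input = a * 100.0 + b
```

The fitted slope and intercept recover the simulated relationship, illustrating how the one-point activity substitutes for the full withdrawal curve.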
Patel, Nitin R; Ankolekar, Suresh
2007-11-30
Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
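The core trade-off in the abstract above, power rising with sample size while per-patient costs accumulate, can be illustrated with a minimal sketch. All figures (effect size, value of approval, costs) are hypothetical assumptions, not values from the paper:

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical inputs (illustrative only):
delta = 0.25               # assumed standardized treatment effect
profit_if_success = 500e6  # commercial value of a successful trial
cost_per_patient = 20e3    # marginal cost per enrolled patient
z_alpha = 1.96             # approx. critical value, one-sided alpha = 0.025

def expected_profit(n_per_arm: int) -> float:
    # Power of a two-arm z-test with n patients per arm, then the
    # company's expected profit net of enrollment costs.
    power = norm_cdf(delta * math.sqrt(n_per_arm / 2.0) - z_alpha)
    return power * profit_if_success - 2 * n_per_arm * cost_per_patient

# Grid search for the profit-maximizing sample size per arm.
best_n = max(range(10, 2001, 10), key=expected_profit)
```

The optimum typically sits well above the sample size that a pure power calculation at, say, 90% power would give, because the extra power is cheap relative to the value at stake; with costlier patients the optimum moves down.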
Bowden, J; Mander, A
2014-01-01
In this paper, we review the adaptive design methodology of Li et al. (Biostatistics 3:277-287) for two-stage trials with mid-trial sample size adjustment. We argue that it is closer in principle to a group sequential design, in spite of its obvious adaptive element. Several extensions are proposed that aim to make it an even more attractive and transparent alternative to a standard (fixed sample size) trial for funding bodies to consider. These enable a cap to be put on the maximum sample size and allow the trial data to be analysed using standard methods at its conclusion. The regulatory view of trials incorporating unblinded sample size re-estimation is also discussed. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
H Mohamadi Monavar
2017-10-01
Introduction: Precision agriculture (PA) is a technology that measures and manages within-field variability, such as the physical and chemical properties of soil. The nondestructive and rapid VIS-NIR technology has detected significant correlations between reflectance spectra and the physical and chemical properties of soil. Moreover, the quantitative prediction of soil factors such as nitrogen, carbon, cation exchange capacity and clay content is very important in precision farming. The emphasis of this paper is on comparing different techniques for choosing calibration samples, such as random selection, selection based on chemical data, and selection based on PCA. Since increasing the number of samples is usually time-consuming and costly, in this study the best of the available sampling methods was identified for building calibration models. In addition, the effect of sample size on the accuracy of the calibration and validation models was analyzed. Materials and Methods: Two hundred and ten soil samples were collected from a cultivated farm located in Avarzaman in Hamedan province, Iran. The crop rotation was mostly potato and wheat. Samples were collected from a depth of 20 cm, passed through a 2 mm sieve and air dried at room temperature. Chemical analysis was performed in the soil science laboratory, faculty of agriculture engineering, Bu-Ali Sina University, Hamadan, Iran. Two spectrometers (AvaSpec-ULS2048 UV-VIS and FT-NIR100N) were used to measure the spectral bands covering the UV-VIS and NIR regions (220-2200 nm). Each soil sample was uniformly tiled in a petri dish and scanned 20 times. The pre-processing methods of multivariate scatter correction (MSC) and baseline correction (BC) were then applied to the raw signals using the Unscrambler software. The samples were divided into two groups: one group of 105 samples for calibration and the second group for validation. Each time, 15 samples were selected randomly and tested the accuracy of
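The random calibration/validation split described above (210 samples, 105 for calibration, repeated random draws of 15 for testing) can be sketched as follows; the integer indices are placeholders standing in for the measured spectra:

```python
import random

# 210 indexed soil samples (indices stand in for UV-VIS/NIR spectra).
random.seed(42)
samples = list(range(210))

# Randomly assign half the samples to the calibration set; the rest
# form the validation set.
calibration = random.sample(samples, 105)
validation = [s for s in samples if s not in calibration]

# Repeatedly draw 15 validation samples at random to test model accuracy.
test_subset = random.sample(validation, 15)
```

PCA-based or chemistry-based selection would replace the `random.sample` call with a ranking of samples by leverage in score space or by coverage of the measured property range.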
Kristin Bunte; Steven R. Abt
2001-01-01
This document provides guidance for sampling surface and subsurface sediment from wadable gravel- and cobble-bed streams. After a short introduction to stream types and classifications in gravel-bed rivers, the document explains the field and laboratory measurement of particle sizes and the statistical analysis of particle-size distributions. Analysis of particle...
Klaver, M.; Smeets, R.J.; Koornneef, J.M.; Davies, G.R.; Vroon, P.Z.
2016-01-01
The use of the double spike technique to correct for instrumental mass fractionation has yielded high-precision results for lead isotope measurements by thermal ionisation mass spectrometry (TIMS), but the applicability to ng-size Pb samples is hampered by the small size of the
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of ∑(d′)², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
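The sample-size model's invariance prediction can be checked numerically: if a fixed pool of noisy samples is divided evenly among m display items and each item's d′ grows with the square root of its allocated samples, then the sum of squared sensitivities is constant across set sizes. The resource and sensitivity constants below are arbitrary assumptions:

```python
import math

TOTAL_SAMPLES = 100.0  # hypothetical fixed VSTM resource
D_PRIME_UNIT = 0.3     # hypothetical sensitivity per unit sample

def sum_squared_sensitivity(set_size: int) -> float:
    # Each of m items receives TOTAL_SAMPLES / m samples; d' scales with
    # the square root of the samples allocated to an item.
    samples_per_item = TOTAL_SAMPLES / set_size
    d_prime = D_PRIME_UNIT * math.sqrt(samples_per_item)
    return set_size * d_prime ** 2

# The sum of squared d' values is the same for every display size.
invariant = [round(sum_squared_sensitivity(m), 6) for m in (1, 2, 4, 8)]
```

An attention-weighted variant would replace the even split with an unequal allocation (one attended item receiving a larger share), which breaks this exact invariance in the way the abstract describes.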
Atterton, Thomas; De Groote, Isabelle; Eliopoulos, Constantine
2016-10-01
The construction of the biological profile from human skeletal remains is the foundation of anthropological examination. However, remains may be fragmentary, and the elements usually employed, such as the pelvis and skull, may not be available. The clavicle has been successfully used for sex estimation in samples from Iran and Greece. In the present study, the aim was to test the suitability of the measurements used in those previous studies on a British Medieval population. In addition, the project tested whether discrimination between the sexes was due to size or clavicular strength. The sample consisted of 23 females and 25 males of pre-determined sex from two medieval collections: Poulton and Gloucester. Six measurements were taken using an osteometric board, sliding calipers and graduated tape. In addition, putty rings and bi-planar radiographs were made and robusticity measures calculated. The resulting variables were used in stepwise discriminant analyses. The linear measurements allowed correct sex classification in 89.6% of all individuals. This demonstrates the applicability of the clavicle for sex estimation in British populations. The most powerful discriminant factor was maximum clavicular length, and the best combination of factors was maximum clavicular length and circumference. This result is similar to that obtained by other studies. To further investigate the extent of sexual dimorphism of the clavicle, the biomechanical properties of the polar second moment of area J and the ratio of maximum to minimum bending rigidity were included in the analysis. These were found to have little influence when entered into the discriminant function analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
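For a single measurement such as maximum clavicular length, a two-group discriminant reduces to a sectioning point midway between the group means. The sketch below uses invented lengths in mm, not the Poulton or Gloucester data:

```python
# Hypothetical clavicle lengths in mm (illustrative only).
female_lengths = [132.0, 135.5, 138.0, 140.2, 136.8, 134.1]
male_lengths = [148.5, 151.0, 153.7, 149.9, 155.2, 150.4]

mean_f = sum(female_lengths) / len(female_lengths)
mean_m = sum(male_lengths) / len(male_lengths)
cutoff = (mean_f + mean_m) / 2.0  # sectioning point between group means

def classify(length_mm: float) -> str:
    """Assign sex by which side of the sectioning point a clavicle falls."""
    return "male" if length_mm > cutoff else "female"

# Resubstitution accuracy on the same (toy) sample.
correct = sum(classify(x) == "female" for x in female_lengths) + \
          sum(classify(x) == "male" for x in male_lengths)
accuracy = correct / (len(female_lengths) + len(male_lengths))
```

A stepwise discriminant analysis as used in the study generalizes this to weighted combinations of several measurements, with variables entered or removed by their discriminating power.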
Starkey, Lindsay A; Bowles, Joy V; Payton, Mark E; Blagburn, Byron L
2017-11-09
Dirofilaria immitis is a worldwide parasite that is endemic in many parts of the United States. There are many commercial assays available for the detection of D. immitis antigen, one of which was modified and has reentered the market. Our objective was to compare the recently reintroduced Witness® Heartworm (HW) Antigen Test Kit (Zoetis, Florham Park, NJ) and the SNAP® Heartworm RT (IDEXX Laboratories, Inc., Westbrook, ME) to the well-based ELISA DiroChek® Heartworm Antigen Test Kit (Zoetis, Florham Park, NJ). Canine plasma samples were either received at the Auburn Diagnostic Parasitology Laboratory from veterinarians submitting samples for additional heartworm testing (n = 100) from 2008 to 2016 or purchased from purpose-bred beagles (n = 50, presumed negative) in 2016. Samples were categorized as "positive," "borderline" or "negative" using our established spectrophotometric cutoff value with the DiroChek® assay when a sample was initially received and processed. The three commercially available heartworm antigen tests (DiroChek®, Witness® HW, and SNAP® RT) were used for simultaneous testing of the 150 samples in random order as per their package inserts, with the addition of spectrophotometric optical density (OD) readings of the DiroChek® assay. Any samples yielding discordant test results between assays were further evaluated by heat treatment of plasma and retesting. Chi-square tests for the equality of proportions were utilized for statistical analyses. Concordant results occurred in 140/150 (93.3%) samples. Discrepant results occurred in 10/150 samples tested (6.7%): 9/10 in the borderline heartworm (HW) category and 1/10 in the negative HW category. The sensitivity and specificity of each test compared to the DiroChek® read by spectrophotometer were similar to what has been reported previously (Witness®: sensitivity 97.0% [94.1-99.4%], specificity 96.4% [95.5-100.0%]; SNAP® RT: sensitivity 90.9% [78.0-100.0%], specificity
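Sensitivity and specificity against the reference assay are simple proportions from the 2x2 comparison table. The counts below are hypothetical, chosen only to illustrate the calculation, not the study's raw data:

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity and specificity of a test against a reference assay.

    tp/fn: reference-positive samples the test did/did not detect;
    tn/fp: reference-negative samples the test did/did not clear.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative 2x2 counts (hypothetical): 33 reference-positive and
# 117 reference-negative samples.
sens, spec = sens_spec(tp=32, fn=1, tn=113, fp=4)
```

The bracketed ranges reported in the abstract are confidence intervals on these proportions, which shrink as the number of reference-positive and reference-negative samples grows.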
Directory of Open Access Journals (Sweden)
Ming-Yen Tsai
OBJECTIVES: The Meridian Energy Analysis Device is currently a popular tool in the scientific research of meridian electrophysiology. In this field, it is generally believed that measuring the electrical conductivity of meridians provides information about the balance of bioenergy or Qi-blood in the body. METHODS AND RESULTS: This communication draws on original articles in the PubMed database from 1956 to 2014 and the author's clinical experience. In this short communication, we provide clinical examples of Meridian Energy Analysis Device application, especially in the field of traditional Chinese medicine, discuss the reliability of the measurements, and put the values obtained into context by conside